# nLab round sphere
# Contents
## Idea
In Riemannian geometry, the topological n-sphere regarded as a Riemannian manifold in the standard way (i.e. as the submanifold of elements at constant distance from a given point in Euclidean space) is also called the round $n$-sphere, in order to distinguish it from other, non-isometric Riemannian manifold structures that also exist on the underlying n-sphere. These alternatives are then also called squashed spheres.
## Examples of squashed $n$-spheres
coset space structures on n-spheres:

standard:
- $S^{n-1} \simeq_{diff} SO(n)/SO(n-1)$ (this Prop.)
- $S^{2n-1} \simeq_{diff} SU(n)/SU(n-1)$ (this Prop.)
- $S^{4n-1} \simeq_{diff} Sp(n)/Sp(n-1)$ (this Prop.)

exceptional:
- $S^7 \simeq_{diff} Spin(7)/G_2$ (see: Spin(7)/G2 is the 7-sphere)
- $S^7 \simeq_{diff} Spin(6)/SU(3)$ (since Spin(6) $\simeq$ SU(4))
- $S^7 \simeq_{diff} Spin(5)/SU(2)$ (since Sp(2) is Spin(5) and Sp(1) is SU(2); see: Spin(5)/SU(2) is the 7-sphere)
- $S^6 \simeq_{diff} G_2/SU(3)$ (see: G2/SU(3) is the 6-sphere)
- $S^{15} \simeq_{diff} Spin(9)/Spin(7)$ (see: Spin(9)/Spin(7) is the 15-sphere)

(from FSS 19, 3.4)
Find the coordinates of the foci, the vertices, the length of the major axis, the minor axis, the eccentricity and the length of the latus rectum of the ellipse $$4x^2 + 9y^2 = 36$$.
Asked by Pragya Singh | 1 year ago
Solution :-
The equation is $$4x^2 + 9y^2 = 36$$,
or $$\dfrac{x^2}{9} + \dfrac{y^2}{4} = 1$$, i.e. $$\dfrac{x^2}{3^2} + \dfrac{y^2}{2^2} = 1$$.
Here, the denominator of $$\dfrac{x^2}{3^2}$$ is greater than the denominator of $$\dfrac{y^2}{2^2}$$.
So, the major axis is along the x-axis, while the minor axis is along the y-axis.
On comparing the given equation with
$$\dfrac{x^2}{a^2} + \dfrac{y^2}{b^2} = 1$$, we get
a = 3 and b = 2.
c = $$\sqrt{a^2 - b^2}$$ = $$\sqrt{9 - 4}$$ = $$\sqrt{5}$$
Then,
The coordinates of the foci are ($$\sqrt{5}$$, 0) and ($$-\sqrt{5}$$, 0).
The coordinates of the vertices are (3, 0) and (-3, 0)
Length of major axis = 2a = 2 (3) = 6
Length of minor axis = 2b = 2 (2) = 4
Eccentricity, e = $$\dfrac{c}{a}$$ = $$\dfrac{\sqrt{5}}{3}$$
Length of latus rectum = $$\dfrac{2b^2}{a}$$
= $$\dfrac{2 \times 2^2}{3}$$ = $$\dfrac{2 \times 4}{3}$$ = $$\dfrac{8}{3}$$
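These values can also be checked numerically; here is a minimal MATLAB sketch (the variable names are just illustrative):

```matlab
% Ellipse 4x^2 + 9y^2 = 36, i.e. x^2/9 + y^2/4 = 1.
a = 3; b = 2;                   % semi-major and semi-minor axes
c = sqrt(a^2 - b^2);            % distance from the center to each focus

foci          = [c 0; -c 0]
vertices      = [a 0; -a 0]
major_axis    = 2*a             % 6
minor_axis    = 2*b             % 4
eccentricity  = c/a             % sqrt(5)/3
latus_rectum  = 2*b^2/a         % 8/3
```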
Answered by Abhisek | 1 year ago
# Eigenvalue Matlab Code
Once the eigenvalues of a matrix are known, the associated eigenvectors are found by solving for x in the eigenvector equation Ax = λx, i.e. (A − λI)x = 0. Eigenvalues are also termed characteristic roots, characteristic values, proper values, or latent roots, and they are very useful in engineering, for example in the dynamic analysis of large-scale structures such as aerospace structures. Many problems present themselves in terms of an eigenvalue problem A·v = λ·v.

The technical computing software MATLAB stores, processes and analyzes data contained in arrays and matrices, and its basic eigenvalue command is eig: w = eig(A) yields the eigenvalues of the matrix, while [V,D] = eig(A) returns a matrix V containing normalized eigenvectors as its columns and a diagonal matrix D whose entries are the corresponding eigenvalues. Alternatively, the characteristic polynomial can be formed with poly(A) and its roots, which are the eigenvalues, found with roots(). If the matrix is large and sparse, or if only a few eigenvalue/eigenvector pairs are needed (for instance those of smallest magnitude), the function eigs is used instead of eig. As a quick sanity check, the eigenvalues of B = [3 0 7; 0 2 0; 0 0 1] are obviously 1, 2 and 3. For complex and generalized eigenvalue problems MATLAB uses a single-shift complex QZ algorithm, and the extensive list of functions now available with LAPACK means that the older space-saving general-purpose codes from LINPACK and EISPACK have been replaced by faster, more focused routines.

The power method computes the dominant eigenvalue by repeated matrix–vector multiplication; a convenient test case is the matrix A = [6 5; 1 2] with initial vector x0 = [0; 1], while the inverse power method gives the smallest eigenvalue. The QR algorithm is one of the world's most successful algorithms, and Jacobi's iteration for real symmetric matrices is available as JACOBI_EIGENVALUE in C, C++, FORTRAN90, MATLAB and Python versions. The classic eigshow example has been part of MATLAB for a long time. In principal component analysis, what MATLAB labels as latent are the eigenvalues λᵢ of the covariance matrix (obtained with the cov function), reported together with the cumulative proportion of variance they explain.
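As a concrete illustration of these commands, here is a minimal sketch; the matrices are just the small examples mentioned above, and the 'smallestabs' option of eigs assumes a reasonably recent MATLAB release:

```matlab
% Basic eigenvalue computations in MATLAB.
A = [6 5; 1 2];

w = eig(A)                        % eigenvalues only
[V, D] = eig(A)                   % columns of V are eigenvectors, diag(D) the eigenvalues

CharacteristicPoly = poly(A);     % coefficients of det(lambda*I - A)
eigenvalues = roots(CharacteristicPoly)

% For a large sparse matrix, eigs returns only a few eigenvalues,
% here the two of smallest magnitude.
S = sparse(diag(1:100) + diag(ones(99,1), 1));
d = eigs(S, 2, 'smallestabs')
```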
When code is generated from MATLAB, the results of [V,D] = eig(A) are similar to those obtained by [V,D] = eig(A, eye(size(A)), 'qz') in MATLAB itself, except that the columns of V are normalized; more generally, the basis of eigenvectors can be different in generated code than in MATLAB, because eigenvectors are only defined up to a scalar factor. A simple way to get reproducible output is to force the entry of largest magnitude in each eigenvector to have a consistent sign. Complex eigenvalues and eigenvectors require a little extra care because the dot product involves complex conjugation.

The power method readily gives the largest eigenvalue (about 4.73 in the classroom example referred to here) and the inverse power method the smallest, but an eigenvalue in the interior of the spectrum needs a different approach, such as a shifted inverse iteration. The Rayleigh quotient is useful here: when a real vector is an approximate eigenvector of A, the Rayleigh quotient is a very accurate estimate of the corresponding eigenvalue, and Rayleigh quotient iteration delivers a sequence of approximations that converges to a true eigenpair in the limit. Given a real symmetric N-by-N matrix, Jacobi's iteration determines an N-vector D of real eigenvalues and an N-by-N matrix V whose columns are the corresponding eigenvectors. For large sparse Hermitian matrices there are codes such as irbleigs, based on the Implicitly Restarted Arnoldi Method, for computing a few eigenvalues and associated eigenvectors located anywhere in the spectrum. A typical assignment asks to plot the eigenvalues as points in the complex plane; a nonzero imaginary part ±ω of a pair of eigenvalues contributes an oscillatory component sin(ωt) to the solution of the corresponding differential equation.
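A bare-bones version of the power method with this normalization, plus a Rayleigh-quotient estimate, might look as follows (a sketch only; the matrix and iteration count are illustrative):

```matlab
% Power method: normalize so the largest entry has magnitude 1.
A = [6 5; 1 2];
x = [0; 1];                  % initial vector
for k = 1:50
    z = A*x;
    [~, idx] = max(abs(z));  % entry of largest magnitude
    x = z / z(idx);
end
lambda_max = z(idx)          % estimate of the dominant eigenvalue

% The Rayleigh quotient gives an accurate eigenvalue estimate
% from an approximate eigenvector.
rq = (x' * A * x) / (x' * x)
```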
A common application is principal component analysis. A typical PCA routine takes data, an M×N matrix of input data (M dimensions, N trials), and returns signals, an M×N matrix of projected data, PC, whose columns are the principal components, and V, an M×1 vector of variances; the covariance matrix (here called CovX) is computed with the function cov, or directly from its definition, and the code can check that both expressions agree. MATLAB normalizes each computed eigenvector so that the sum of the squares of its components equals one, but this is only a convention: eigenvectors are undefined up to a scalar constant, so if v is an eigenvector with eigenvalue λ, then so is c·v. Note also that, for real input matrices, the eigenvalues returned in the output are not sorted so that complex conjugate pairs are adjacent.

The eigenvalue decomposition is closely related to the singular value decomposition: the right singular vectors of A are eigenvectors of A*A, and the singular values σᵢ of A (which are real and non-negative) are the square roots of the eigenvalues of A*A. The generalized eigenvalue problem asks for the nontrivial solutions of Ax = λBx, where A and B are both n-by-n matrices and λ is a scalar; eigifp is a MATLAB program for computing a few extreme eigenvalues and eigenvectors of the large symmetric generalized problem Ax = λBx. Eigenvalues also govern dynamics and control: when the real part of each eigenvalue is negative, e^{λt} approaches zero as t increases, and in a system–observer configuration the closed-loop eigenvalues separate into λ(A − BF) and λ(A − KC). Other illustrations collected here include eigenfaces for face recognition (Sirovich and Kirby 1987, later used by Turk and Pentland), the Hessian matrix near a stationary point, F ≈ F₀ + ½ xᵀHx, whose eigenvalues distinguish minima, maxima and saddles, and the observation that among random 2×2 orthogonal matrices about half are reflections (eigenvalues 1 and −1) and half are rotations (complex conjugate eigenvalues).
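A small self-contained sketch of that PCA computation (the data are made up, the variable names data, signals, PC and latent just mirror the comment block above, and the mean subtraction assumes a MATLAB version with implicit expansion):

```matlab
% PCA by eigendecomposition of the covariance matrix (illustrative sketch).
data = randn(3, 200);                  % M = 3 dimensions, N = 200 trials
[M, N] = size(data);

data  = data - mean(data, 2);          % remove the mean of each dimension
CovX  = (data * data') / (N - 1);      % sample covariance, same as cov(data')

[PC, V] = eig(CovX);                   % columns of PC are eigenvectors of CovX
[latent, order] = sort(diag(V), 'descend');
PC = PC(:, order);                     % principal components, largest variance first

signals = PC' * data;                  % data projected onto the principal components
```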
For the generalized problem, [V,D,W] = eig(A,B) also returns a full matrix W whose columns are the corresponding left eigenvectors, so that W'*A = D*W'*B. In control design, K = place(A,B,P) finds a matrix K such that the eigenvalues of A − B*K are exactly the entries of P. Eigenvalue problems also arise in analysis: Sturm–Liouville problems such as y'' + λy = 0 with y(0) = 0 and y(π) = 0 have nonzero solutions only for special values of λ, which are called eigenvalues, and the corresponding solutions are called eigenfunctions. In structural mechanics, the first step in a buckling analysis is to find the critical load, which is related to the lowest eigenvalue.

Several classical algorithms and libraries are mentioned in this collection. Subspace iteration determines several (say ℓ) of the largest eigenvalues and the associated eigenvectors at once. The Faddeev–Leverrier method produces the characteristic polynomial, and a standard exercise is to compute the characteristic polynomial of a Hilbert matrix this way and compare with the built-in Maple or MATLAB result. W = wilkinson(n) returns one of J. H. Wilkinson's n-by-n eigenvalue test matrices: a symmetric, tridiagonal matrix with pairs of nearly equal eigenvalues. EISPACK is a collection of Fortran subroutines that compute the eigenvalues and eigenvectors of nine classes of matrices, and ARPACK computes a few eigenvalues/eigenvectors of large sparse or structured matrices. The power-method code sketched above works for a square matrix of any order, and whitening a matrix is a useful preprocessing step in data analysis. One forum question (originally in French) asks: "Moreover, the results (eigenvalues and eigenvectors) do not match what MATLAB gives me with [V,D] = eig(A). Is there a subtlety I am missing?" The usual answer is the scaling and ordering freedom discussed above, together with the fact that a non-symmetric matrix can have complex-valued eigenvalues.
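For instance, a minimal pole-placement sketch (this assumes the Control System Toolbox is available, and the system matrices and desired poles are invented for illustration):

```matlab
% Place the closed-loop eigenvalues of A - B*K at the entries of P.
A = [0 1; -2 -3];
B = [0; 1];
P = [-5, -6];            % desired closed-loop eigenvalues
K = place(A, B, P);

eig(A - B*K)             % should return -5 and -6, up to rounding
```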
There are two ways of calling eig: with a single output it returns a column vector of all the eigenvalues of A, and with two outputs it returns the eigenvectors as well. With the eigenvalues on the diagonal of a diagonal matrix Λ and the corresponding eigenvectors forming the columns of a matrix V, one has AV = VΛ. Equivalently, the characteristic polynomial of A, P_A(x) = det(xI − A), is a degree-n polynomial whose roots are exactly the eigenvalues of A. For the standard problem [V,D] = eig(A) with A Hermitian, code generation uses schur to calculate V and D, and differences in the eigenvectors and in the ordering of the eigenvalues can lead to differences in the condition numbers that are reported. Two further algorithms are useful for invariant-subspace calculations: inverse iteration to compute an eigenvector, and a Sylvester-type iteration for reordering eigenvalues. Two practical warnings recur in the questions collected here: a sparse eigs call can miss 0 as the lowest eigenvalue, and when a matrix A(t) is produced by a recursion its largest eigenvalue generally depends on all of the eigenvalues and eigenvectors of A(t−1), so there is usually no shortcut around calling eig or eigs at every step.

Eigenvalues also drive the stability analysis of time-stepping schemes. Unlike convergence analysis, eigenvalue stability analysis does not consider the limit Δt → 0; instead Δt is assumed to be a finite number, and one asks whether the numerical solution stays bounded.
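A tiny illustration of both kinds of eigenvalue-based stability check (the system matrix and step size are invented for the example):

```matlab
% Stability of dx/dt = A*x and of its forward-Euler discretization.
A = [0 1; -4 -1];
lambda = eig(A);

stable_continuous = all(real(lambda) < 0)           % true => x(t) -> 0 as t grows

dt = 0.1;
stable_forward_euler = all(abs(1 + dt*lambda) < 1)  % |1 + dt*lambda| < 1 for every eigenvalue
```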
Some definitions come up repeatedly: an eigenvector of an n×n matrix A is a nonzero column vector x such that Ax = λx for some number λ, called the eigenvalue corresponding to x, and a generalized eigenvector of a linear map T with eigenvalue c is a vector v satisfying (T − c·id)^ℓ v = 0 for some integer ℓ ≥ 1. If the matrix V returned by [V,D] = eig(A) has the same size as A, then A has a full set of linearly independent eigenvectors that satisfy A*V = V*D. Eigenvalue-decomposition codes return normalized (length-1) eigenvectors, which still leaves a sign freedom, and the eigenvector with the highest eigenvalue of a covariance matrix is the first principal component of the data set.

For the generalized symmetric-definite problem, where A and B are symmetric and B is positive definite, the flag 'chol' tells eig to compute the generalized eigenvalues of A and B using the Cholesky factorization of B; the values of λ that satisfy Ax = λBx are the generalized eigenvalues. Large symmetric problems of this kind can also be attacked with the Lanczos algorithm, or with dedicated software for computing a few selected eigenvalues and associated eigenvectors of a real symmetric matrix.
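A small sketch of that symmetric-definite case (the matrices are invented, and the residual is just a sanity check):

```matlab
% Symmetric-definite generalized eigenproblem A*x = lambda*B*x via Cholesky.
A = [2 1; 1 3];                % symmetric
B = [4 0; 0 1];                % symmetric positive definite
[V, D] = eig(A, B, 'chol');

lambda   = diag(D)
residual = norm(A*V - B*V*D)   % should be near machine precision
```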
Computing every eigenvalue when only a few are needed is a waste of resources, and iterative options such as the 'smallestreal' choice in eigs can struggle to converge when the eigenvalues are clustered near zero and the gaps between them are small. The Symbolic Math Toolbox allows exact computation: for a symbolic square matrix A, the symbolic eigenvalues, or the eigenvalues together with the eigenvectors, are computed with E = eig(A) and [V,E] = eig(A) respectively.

The point of eigenvalue and eigenvector analysis is to find, for a given transformation, a new basis in which the transformation has a simple representation, preferably a diagonal matrix. The practical QR algorithm exploits this: its efficiency for an n×n matrix A is significantly improved by first reducing A to a Hessenberg matrix H, so that only O(n²) operations per iteration are required instead of O(n³), and variants of the algorithm handle nonsymmetric matrices, symmetric matrices, and the singular values of a rectangular matrix. For very large problems, eigifp is a black-box implementation of the inverse-free preconditioned Krylov subspace projection method of Golub and Ye (2002). Typical applications recorded here include vibration problems, where one forms A = inv(M)*K and calls [V,D] = eig(A) to obtain natural frequencies and mode shapes (for example a 7-DOF car model whose body has pitch, roll and vertical displacement and whose four wheels each have a vertical displacement); image compression with eigenfaces, where information is lost by projecting an image on a subset of the eigenvectors but losses are minimized by keeping the eigenfaces with the largest eigenvalues; and the enhancement of vessel/ridge-like structures in 2D/3D images using the eigenvalues of the Hessian.
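An unshifted QR iteration is easy to sketch (this is only a toy version; practical implementations add shifts and deflation, and the test matrix is invented):

```matlab
% Bare-bones QR iteration after Hessenberg reduction.
A = [4 1 0; 1 3 1; 0 1 2];      % symmetric, so the eigenvalues are real
H = hess(A);                    % Hessenberg (here tridiagonal) form of A

for k = 1:200
    [Q, R] = qr(H);
    H = R * Q;                  % similarity transform: eigenvalues are preserved
end

approx_eigs = sort(diag(H))
exact_eigs  = sort(eig(A))      % for comparison
```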
For small square matrices, the most efficient general-purpose eigenvalue algorithm is the QR iteration: at the k-th step (starting with k = 0) one computes the QR decomposition A_k = Q_k R_k, where Q_k is an orthogonal matrix (Q_kᵀ = Q_k⁻¹) and R_k is upper triangular, and then forms the next iterate from the factors in reverse order, as in the sketch above. Analyzing a system in terms of its eigenvalues and eigenvectors greatly simplifies system analysis and gives important insight into system behavior; the matrix D returned by eig is the canonical form of A, a diagonal matrix with A's eigenvalues on the main diagonal. For a symmetric matrix, eig sorts the eigenvalues in ascending order, so the eigenvectors for the two largest eigenvalues are the last two columns of V. (In Mathematica, by comparison, Eigenvalues[m] gives the list of eigenvalues of a square matrix m, Eigenvalues[m, k] gives the first k of them, and Eigenvalues[{m, a}, k] gives the first k generalized eigenvalues.) A small worked example: for A = [3 0; 4 5], the eigenvalue 3 corresponds to the eigenvector (1, −2) and the eigenvalue 5 to (0, 1).

Further exercises collected here include determining the largest eigenvalue of a few magic squares by the power method, solving an eigenvalue problem for wave propagation in a functionally graded rod, and writing a code to determine the energy spectrum and eigenvalues of the stationary Schrödinger equation for the harmonic oscillator. For selected eigenvalues of large sparse, real, unsymmetric matrices there are Arnoldi codes (Scott 1995), and for a quadratic aλ² + bλ + c the roots can be found directly with roots(p), where p = [a b c].
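For the magic-square exercise, the answer can be checked directly: the dominant eigenvalue of magic(n) equals the magic sum n(n²+1)/2, because the all-ones vector is an eigenvector with that eigenvalue. A quick sketch:

```matlab
% Dominant eigenvalue of a few magic squares versus their magic sum.
for n = 3:6
    M = magic(n);
    lambda_max = max(abs(eig(M)));
    magic_sum  = n*(n^2 + 1)/2;
    fprintf('n = %d: dominant eigenvalue %.4f, magic sum %d\n', n, lambda_max, magic_sum);
end
```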
# Question Regarding Riemann-Hurwitz Formula Proof
Does someone know of any good reference for a proof of the Riemann-Hurwitz Formula (for Riemann surfaces) that uses spectral sequences and homology?
[I think I know how to start this kind of proof, but have no idea how to finish it... If no one has a good reference, I might ask for the specialists' help in finishing my idea.]
Seems like overkill for such an elementary fact. – Felipe Voloch Jun 11 '12 at 16:20
Sure, the proof in Hartshorne refers to Riemann-Roch for the fact that the canonical sheaf has degree $2g-2$, and you can prove Riemann-Roch using arbitrarily fancy tools. What sort of enlightenment are you expecting from such a proof? – S. Carnahan Jun 11 '12 at 16:42
Though it's obviously an overkill, I'm currently studying homological algebra from a categorical viewpoint, and I found a remark in one of the online notes that says that such structures have applications to algebraic geometry (including proving the genus formula I stated)... So I thought it might be interesting to find this kind of proof in order to see some concrete applications... Where can I find Hartshorne's proof? Can you help me? Thanks a lot ! – jason mfash Jun 11 '12 at 17:06
I'll try to sketch a proof of Riemann-Hurwitz using the Leray spectral sequence. It has the feel of a fun exercise.
To fix notation, let $X$ and $Y$ be compact Riemann surfaces and let $f : X \to Y$ be a finite surjective morphism of degree $d$. We want to compare the topological Euler numbers of these surfaces; these are the numbers defined as $$\chi(X) = h^0(X,\mathbb C) - h^1(X,\mathbb C) + h^2(X,\mathbb C)$$ and similarly for $Y$. We'll use arbitrarily fancy facts of sheaf cohomology to do this.
If $\mathcal F$ is a sheaf on $X$, then the first terms of the Leray spectral sequence read $$E_2^{p,q} = H^q(Y, \mathcal R^p f_* \mathcal F) \Rightarrow H^{p+q}(X,\mathcal F).$$ As the fibers of $f$ are 0-dimensional, we have $\mathcal R^p f_* \mathcal F = 0$ for any $p \geq 1$. Combined with the annihilation of cohomology on $Y$ for dimension reasons, we find that $E^{p,q}_2 = 0$ for any $p \geq 1$ and $q \geq 3$. The second page of the Leray spectral sequence is thus just $$E_2^{0,0} \qquad E_2^{0,1} \qquad E_2^{0,2}$$ and all other entries are zero, so the sequence degenerates at the $E_2$-level. It follows that $H^k(Y,f_*\mathcal F) = H^k(X,\mathcal F)$ for any $k$.
Consider now a point $y$ on $Y$ that is not in the image of the ramification locus of $f$, in other words the preimage $f^{-1}(y)$ consists of $d$ distinct points. Then we see that $f_{\ast} {\mathbb C} = {\mathbb C}^{\oplus d}$ in a neighborhood of such a point. This line of thought yields a short exact sequence $$0 \longrightarrow f_* \mathbb C \longrightarrow \mathbb C^{\oplus d} \longrightarrow \mathcal G \longrightarrow 0$$ where $\mathcal G$ is a skyscraper sheaf supported on the image of the ramification divisor of the morphism $f$. Taking Euler characteristics we get $$d\,\chi(Y) = \chi(f_*\mathbb C) + \chi(\mathcal G) = \chi(X) + h^0(Y,\mathcal G).$$ Expressing $h^0(Y,\mathcal G)$ in terms of the degrees of $f$ at its ramification points, and thus showing that it has the expected form, should not be a source of great trouble.
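To spell out that last step (writing $e_x$ for the ramification index of $f$ at a point $x \in X$): the stalk of $\mathcal G$ over a point $y \in Y$ has dimension $d - \#f^{-1}(y)$, so $$h^0(Y,\mathcal G) \;=\; \sum_{y \in Y}\big(d - \#f^{-1}(y)\big) \;=\; \sum_{x \in X}(e_x - 1),$$ and the Euler characteristic identity above becomes $$\chi(X) \;=\; d\,\chi(Y) - \sum_{x \in X}(e_x - 1),$$ which is Riemann-Hurwitz in the form $2 - 2g_X = d\,(2 - 2g_Y) - \sum_x (e_x - 1)$.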
The thing that makes this proof relatively painless is that the Leray spectral sequence degenerates straight away (at least without recourse to heavy machinery) and that calculating the cohomology of a sheaf supported on a finite number of points is easy. The spectral sequence will again degenerate at the $E_2$-level in the case of a morphism between surfaces, but there a finer analysis is needed to calculate the cohomology of the corresponding sheaf $\mathcal G$. In any case the proof points the way to a similar statement for finite surjective morphisms between higher dimensional varieties, though it also seems to indicate that this is not a path one wants to take unless one really needs to.
Hi Magnusson, Thanks a lot for your detailed answer, but there are a few things I couldn't understand right at the beginning: First, when you say that the morphism $f$ is of degree $d$, what do you mean by that? Do you mean the number $\#f^{-1}(z)=[m_X:\mathbb{C}(z)]$? (where $m_X$ is the field of all meromorphic functions from X to the Riemann sphere)? Second, what do you mean by the notation $h^i(X,\mathbb{C})$? The last thing I'll be glad to know is what do you mean by "the fibers of f"? I'll try to read your answer again after I understand these notions... Thanks a lot again ! – jason mfash Jun 11 '12 at 18:54
OK... So as far as I can understand, the degree is exactly what I meant, and the $h^i$ are the dimensions of the sheaf cohomology. The only thing I miss is what do you mean by the fibers of $f$? Thanks again, I'll re-read the answer and try to complete the details myself... Hope I'll manage – jason mfash Jun 11 '12 at 19:03
Dear Jason, I usually think of the degree as being the number of points in the preimage of a "generic" point on $Y$ (but I'm not really an algebraic geometer). It is the number of sheets that $f$ would have if it were an honest covering map. The $h^i$ are indeed what you think they are. Finally the fibers of $f$ are the subvarieties $f^{-1}(y)$ of $X$, where $y$ are points on $Y$. In general these consist of $d$ different points, but there are subtleties at some special points (basically one has to count points in the inverse image with multiplicity, but this is not important for the answer). – Gunnar Þór Magnússon Jun 11 '12 at 19:54
Hi again Gunnar, it seems that I have a lot of "holes" in my algebraic geometry knowledge. Can you please explain what you mean by "the skyscraper sheaf supported on the image of the ramification divisor of the morphism f"? In addition, what does the notation $d\,\chi(Y)$ stand for? I really hope you'll be able to explain these notions to me. Thanks a lot again (I'm pretty sure I'll have a few more questions after I reread the answer on the weekend. Hope you'll have enough patience to help me :) ) BTW - do you have a good book that contains this kind of sheaf cohomology? Thanks again! – jason mfash Jun 12 '12 at 14:45
In this case "skyscraper sheaf" means that it is zero except at the images of points where $f$ is ramified (i.e. is not a d-fold covering). The notation is $d$ times the Euler characteristic $\chi(Y)$ (since $\chi(\mathbb C^{\oplus d}) = d \cdot \chi(Y)$). For books you can try Huybrecht's "Complex geometry", or Hartshorne's "Algebraic geometry", or go to the classic Godement's "Topologie algebrique et theorie des faisceaux". – Gunnar Þór Magnússon Jun 12 '12 at 15:15
Just in case you don't know how overkill this machinery really is (especially in the case of Riemann surfaces which is what you asked for) I have posted a sketch of the basic argument. Since it doesn't really answer your question, I have made the answer CW.
Let $f: X \to Y$ be a holomorphic map of Riemann surfaces of degree $d$, i.e. the preimage of all but finitely many points in $Y$ has $d$ points in it. Let $x_i$ be the ramification points of $f$ and $y_j$ be the images of these ramification points (the branch points).
Triangulate $Y$ so that each $y_j$ is a vertex. Lift this triangulation of $Y$ to a triangulation of $X$ via $f$. Let $F,E,V$ be the number of faces, edges, and vertices in the triangulation of $Y$. Then there are $dF$ faces and $dE$ edges in $X$. There are almost $dV$ vertices, but I get fewer vertices because over a branch point in $Y$ there are fewer than the full $d$ preimages. How many fewer? Exactly the sum of one less than the degrees of $f$ at the $x_i$!
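In symbols, writing $e_x$ for the local degree (ramification index) of $f$ at $x$, the count reads $$\chi(X) \;=\; dF - dE + \Big(dV - \sum_{x}(e_x - 1)\Big) \;=\; d\,\chi(Y) - \sum_{x}(e_x - 1),$$ which is exactly the Riemann-Hurwitz formula.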
Feel free to edit this post if it is unclear.
OK, using spectral sequences to prove Riemann-Hurwitz may seem like an overkill (although it is a fun exercise in itself; +1 to the question and to the accepted answer), but proving that every surface can be triangulated is not a piece of cake either (pardon the pun). In my opinion, different approaches just use different definitions of Euler characteristics. – Margaret Friedland Jun 12 '12 at 15:51
@Margaret I agree. It is just that, in this crazy mixed up world of ours, it is entirely possible that one works through all of Hartshorne and sees a proof of Riemann-Hurwitz via sheaf cohomology before you see the very simple geometry at work in my answer (this happened to me). I just wanted to make sure the OP knew this very intuitive way to think about it. I don't even remember RH anymore - it is much easier to take a second to think through the above line of reasoning. – Steven Gubkin Jun 12 '12 at 15:57
Huh. I'm almost in the same position as you Steven, in that I'd only seen cohomological or low-tech residue proofs of RH, and can never remember the statement of the damn thing. That argument however is very nice, very nice indeed. Bravo. – Gunnar Þór Magnússon Jun 12 '12 at 19:58
Thank you Gunnar. I liked your answer a lot too. I have been reading Joseph Taylor's SCV book, and I liked seeing your approach - uses a lot of ideas present in that book. – Steven Gubkin Jun 12 '12 at 20:32
TMF, 2010, Volume 162, Number 1, Pages 41–68 (Mi tmf6454)
Multiexponential models of $(1+1)$-dimensional dilaton gravity and Toda–Liouville integrable models
V. de Alfaroa, A. T. Filippovb
a Dipartimento di Fisica Teorica, INFN, Accademia Scienze, Torino, Italy
b Joint Institute for Nuclear Research, Dubna, Moscow Oblast, Russia
Abstract: We study general properties of a class of two-dimensional dilaton gravity (DG) theories with potentials containing several exponential terms. We isolate and thoroughly study a subclass of such theories in which the equations of motion reduce to Toda and Liouville equations. We show that the equation parameters must satisfy a certain constraint, which we find and solve for the most general multiexponential model. It follows from the constraint that integrable Toda equations in DG theories generally cannot appear without accompanying Liouville equations. The most difficult problem in the two-dimensional Toda–Liouville (TL) DG is to solve the energy and momentum constraints. We discuss this problem using the simplest examples and identify the main obstacles to solving it analytically. We then consider a subclass of integrable two-dimensional theories where scalar matter fields satisfy the Toda equations and the two-dimensional metric is trivial. We consider the simplest case in some detail. In this example, we show how to obtain the general solution. We also show how to simply derive wavelike solutions of general TL systems. In the DG theory, these solutions describe nonlinear waves coupled to gravity and also static states and cosmologies. For static states and cosmologies, we propose and study a more general one-dimensional TL model typically emerging in one-dimensional reductions of higher-dimensional gravity and supergravity theories. We especially attend to making the analytic structure of the solutions of the Toda equations as simple and transparent as possible.
Keywords: dilaton gravity, integrable model, Toda equation, Liouville equation
DOI: https://doi.org/10.4213/tmf6454
English version:
Theoretical and Mathematical Physics, 2010, 162:1, 34–56
Citation: V. de Alfaro, A. T. Filippov, “Multiexponential models of $(1+1)$-dimensional dilaton gravity and Toda–Liouville integrable models”, TMF, 162:1 (2010), 41–68; Theoret. and Math. Phys., 162:1 (2010), 34–56
## Multiplicity modulo two as a metric invariant
Guillaume Valette
The multiplicity of a real analytic hypersurface defined by a reduced analytic equation P = 0 is the lowest homogeneous degree in the Taylor expansion of P. Modulo $$2$$ it is independent of the chosen reduced equation. This talk will address the following question: is multiplicity modulo $$2$$ a metric invariant? I will give some partial answers.
In the evolution of a society, continued investment in complexity as a problem-solving strategy yields a declining marginal return.
Joseph A. Tainter
Someone asked me if from now on my blog will only be about Project_Noumena – on the contrary.
I will be interspersing subject matter within Parts 1 to (N) of Project_Noumena. To be transparent at this juncture i am not sure where it will end or if there is even a logical MVP 1.0. As with open-source systems and frameworks technically one never achieves V1.0 as the systems evolve. i tend to believe this will be the case with Project Noumena. i recently provided a book review on CaTB and have a blog on Recurrent Neural Networks with respect to Multiple Time Scale Prediction in the works so stuff is proceeding.
To that end, i would love comments and suggestions as to anything you would like my opinion on or for me to write about in the comments section. Also feel free to call me out on typos or anything else you see in error.
Further within Project Noumena there are snippets that could be shorter blogs as well. Look at Project Noumena as a fractal-based system.
Now on to the matter at hand.
In the previous blog Computing The Human_Condition – Project Noumena (Part 1) i discussed the initial overview of the model from the book World Dynamics. i will take a part of that model – what i call the main Human_Do_Loop(); – and the main attributes of the model: Birth and Death of Humans. One must ask: if we didn't have humans, would we have to be concerned with such matters as societal collapse? i don't believe animals are concerned with such existential crises, so my answer is a resounding – NO. We will be discussing such existential issues in this blog, although i will address such items in future writings.
Over the years i have been asking myself is this a biological model by definition? Meaning do we have cellular components involved only? Is this biological modeling at the very essence? If we took the cell-based organisms out of the equation what do we still have as far as models on Earth?
While i told myself i wouldn’t get too extensional here and i do want to focus on the models and then codebases i continually check the initial conditions of these systems as they for most systems dictate the response for the rest of the future operations of said systems. Thus for biological systems, are there physical parameters that govern the initial exponential growth rate? Can we model with power laws and logistic curves for coarse-grained behavior? Is Bayesian reasoning biologically plausible at a behavioral level or at a neuronal level? Given that what are the atomic units that govern these models?
These are just a sampling of initial condition questions i ask myself as i evolve through this process.
So with that long-winded introduction and i trust i didn’t lose you oh reader lets hope into some specifics.
The picture from the book depicts basic birth and death loops in the population sector. In the case of these loops, they are generating positive feedback, which causes growth. Thus an increase in population P causes an increase in birthrate BR. This, in turn, causes population P to further increase. The positive feedback loop, if left to its own devices, would create an exponentially growing situation. As i said in the first blog and will continue to say, we seem to have started treating exponential growth as a net positive over the years in the technology industry. In the case of basic population dynamics with no constraints, exponential growth is not a net positive outcome.
Once again, why start with simple models? The human mind is phenomenal at perceiving pressures, fears, greed, homeostasis, and other human aspects and characteristics, and at attempting a structure that gives, say, the best fit to a situation and categorizing these as attributes thereof. However, the human mind is rather poor at predicting dynamical systems behaviors, which is where the models come into play, especially with social interactions and what i am attempting to define from a self-organizing theory standpoint.
The next set of loops with the most effect on behavior is a Pollution loop and a Crowding loop. As pollution POL increases, one can assume, up to a point, that nature absorbs and fixes the pollution; otherwise it is a completely positive feedback loop, and this, in turn, creates over-pollution, the effects of which we are already seeing around the world. One can then couple this with the amount of crowding humans can tolerate.
We see this behavior in urban sprawl areas when we have extreme heat or extreme cold or, let's say, extreme pandemics. If the population rises, the crowding ratio increases, the birth rate multiplier declines, and birth rates reduce. The increasing death rate and the reduced birth rate are powerful system-dynamics stabilizers coupled with pollution. This in turn obviously has an effect on food supplies. One can easily deduce that these seemingly simple coefficients, if you will, within the relative feedback loops create oscillations, exponential growth, or exponential decay. The systems, while they seem large and rather stable, are very sensitive to slight variations. If you are familiar with NetLogo, it is a great agent-based modeling language. I picked a simple pollution model where we can select the number of people, the birthrate, and the tree planting rate.
As you can see, without delving into the specifics, after 77 years it doesn't look too promising. i'll either be using python or netlogo or a combination of both to extend these models as we add other references.
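In the meantime, here is a minimal python sketch of the kind of coupled birth/death/pollution loop described above. Every coefficient is a made-up placeholder chosen only to show the feedback structure – it is not calibrated to World3 or to the netlogo model:

```python
# toy birth/death/pollution loops - all coefficients are illustrative
# placeholders, not values taken from World3 or the netlogo model
dt, years = 1.0, 200
P, POL = 1.0, 0.1                         # population and pollution, arbitrary units

for _ in range(int(years / dt)):
    crowding = 1.0 / (1.0 + 0.05 * P)     # crowding multiplier damps births
    BR = 0.04 * P * crowding              # birth loop (positive feedback)
    DR = 0.02 * P * (1.0 + 0.5 * POL)     # death loop, made worse by pollution
    POL += dt * (0.01 * P - 0.10 * POL)   # generated by P, absorbed by nature
    P += dt * (BR - DR)

print(round(P, 2), round(POL, 2))
```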
Ok enough for now.
Until Then,
#iwishyouwater
@tctjr
“Joy, humor, and playfulness are indeed assets;”
~ Eric S. Raymond
As of late, i’ve been asked by an extreme set of divergent individuals what does “Open Source Software” mean?
That is a good question. While i understand the words and words do have meanings i am not sure its the words that matter here. Many people who ask me that question hear “open source” and hear or think “free’ which is not the case.
Also if you have been on linkedin at all you will see #Linux, #LinuxFoundation and #OpenSource tagged constantly in your feeds.
Which brings me to the current blog and book review.
(CatB), as it is affectionately known in the industry, started out as and still is a manifesto, accessible via the world wide web. It was originally published in 1997 on the world wide wait and then in print form circa 1999. Then in 2001 came a revised edition with a foreword by Bob Young, the founding chairman and ceo of Redhat.
Being i prefer to use plain ole’ books we are reviewing the physical revised and extended paperback edition in this blog circa 2001. Of note for the picture, it has some wear and tear.
To start off as you will see from the cover there is a quote by Guy Kawasaki, Apple’s first Evangelist:
“The most important book about technology today, with implications that go far beyond programming.”
This is completely true. In the same train of thought, it even goes into the aspects of propriety and courtesy within conflict environments and how such environments are of a “merit not inherit” world, and how to properly respond when you are in vehement disagreement.
To relate it to the book review: What is a cathedral development versus a bazaar environment?
Cathedral is a tip of the fedora if you will to the authoritarian view of the world where everything is very structured and there are only a few at most who will approve moving the codebase forward.
Bazaar refers to the many. The many coding and contributing in a swarm like fashion.
In this book, closed source is described as a cathedral development model and open source as a bazaar development model. A cathedral is vertically and centrally controlled and planned. Process and governance rule the project – not coding. The cathedral is homeostatic. If you build or rebuild Basilica Sancti Petri within Roma you will not be picking it up by flatbed truck and moving it to Firenze.
The forward in the 2001 edition is written by Bob Young co-founder and original CEO of RedHat. He writes:
“ There have always been two things that would be required if open-source software was to materially change the world; one was for open-source software to become widely used and the other was the benefits this software development model supplied to its users had to be communicated and understood.”
Users here are an interesting target. Users could be developers and they could be end-users of warez. Nevertheless, i believe both conditions have been met accordingly.
i co-founded a machine learning and nlp service as a company in 2007 wherein i had the epiphany after my “second” read of Catb that the future is in fact open source. i put second in quotes as the first time i read it back in 1998 it wasn’t really a read in depth nor having fully internalized it while i was working at Apple in the CPU software department on OS9/OSX and while at the same time knowing full well that OSX was based on the Mach kernel. The Mach kernel is often mentioned as one of the earliest examples of a microkernel. However, not all versions of Mach are microkernels. Mach’s derivatives are the basis of the operating system kernel in GNU Hurd and of Apple’s XNU kernel used in macOS, iOS, iPadOS, tvOS, and watchOS.
That being said after years of working with mainly closed source systems in 2007 i re-read Catb. i literally had a deep epiphany that the future of all development would be open source distributed machine learning – everywhere.
Then i read it recently – deeply – a third time. This time nearly every line in the book resonates.
The third time with almost anything seems to be the charm. This third time through i realized not only is this a treatise for the open-source movement it is a call to arms if you will for the entire developer community to behave appropriately with propriety and courtesy in a highly matrixed collaborative environment known as the bazaar.
The most obvious question is: Why should you care? i’m glad you asked.
The reason you care is that you are part of the information economy. The top market cap companies are all information-theoretic developer-first companies. This means that these companies build things so others can build things. Software is truly eating the world. Think in terms of the recent pandemic. Work (code) is being created at an amazing rate due to the fact that the information work economy is distributed and essentially schedule free. She who has distributed wins and she who can code anytime wins. This also means that you are interested in building world-class software and the building of this software is now a decentralized peer reviewed transparent process.
The book is organized around Raymond's various essays. It is important to note that just as software is an evolutionary process by definition, so are the essays in this book. They can also be found online. The original collection of essays dates back to 1992 on the internet: "A Brief History Of Hackerdom."
The book is not a “how-to” cookbook but rather what i call a “why to” map of the terrain. While you can learn how to hack and code i believe it must be in your psyche. The book also uses the term “hacker” in a positive sense to mean one who creates software versus one who cracks software or steals information.
While the history and the methodology are amazing to me, the cogent commentary on the types of reasoning behind why hackers go into open source varies as widely as ice cream flavors.
Raymond goes into the theory of incentives with respect to the instinctive wiring of human beings.
“The verdict of history seems to be free-market capitalism is the globally optimal way to cooperate for economic efficiency; perhaps in a similar way to cooperate for generating (and checking!) high-quality creative work.”
He categorizes command hierarchy, exchange economy, and gift culture to address these incentives.
Command hierarchy:
Goods are allocated in a scarce economy model by one central authority.
Exchange Economy:
The allocation of scarce goods is accomplished in a decentralized manner allowing scale through trade and voluntary cooperation.
Gift Culture:
This is very different from the other two methods or cultures. Abundance makes command and control relationships difficult to sustain. In gift cultures, social status is determined not by what you control but by what you give away.
It is clear that if we define the open source hackerdom it would be a gift culture. (It is beyond the current scope of this blog but it would be interesting to do a neuroscience project on the analysis of open source versus closed source hackers brain chemistry as they work throughout the day)
Given these categories, the essays then go on to define the written and many times unwritten rules (read: secrets) that operate within the open-source world via a reputation game. If you are getting the idea it is tribal, you are correct. Interestingly enough the open source world has in many cases very divergent views on all prickly things within the human condition, such as religion and politics, but one thing is a constant – ship high-quality code.
Without a doubt the most glaring cogent commentary comes in a paragraph from the essay "The Magic Cauldron," in the section entitled "Open Source And Strategic Business Risk."
"Ultimately the reasons open source seems destined to become a widespread practice have more to do with customer demand and market pressures than with supply-efficiencies for vendors."
And further:
“Put yourself for the moment in the position of a CTO at a Fortune 500 corporation contemplating a build or upgrade of your firm’s IT infrastructure. Perhaps you need to choose a network operating system to be deployed enterprise-wide; perhaps your concerns involve 24/7 web service and e-commerce, perhaps your business depends on being able to field high-volume, high-reliability transaction databases. Suppose you go the conventional closed-source route. If you do, then you put your firm at the mercy of a supplier monopoly – because by definition there is only one place you can go to for support, bug fixes, and enhancements. If the supplier doesn’t perform, you will have no effective recourse because you are effectively locked by your initial investment.”
FURTHER:
"The truth is this: when your key business processes are executed by opaque blocks of bits that you can't even see inside (let alone modify) you have lost control of your business."
“Contrast this with the open-source choice. If you go this route, you have the source code, and no one can take that away from you. Instead of a supplier monopoly with a choke-hold on your business, you now have multiple service companies bidding for your business – and you not only get to play them against each other, but you also have the option of building your own captive support organization if that looks less expensive than contracting out. The market works for you.”
"The logic is compelling; depending on closed-source code is an unacceptable strategic risk. So much so that I believe it will not be very long until closed-source single-vendor acquisitions, when there is an open source alternative available, will be viewed as a fiduciary irresponsibility, and rightly grounds for a share-holder lawsuit."
THIS WAS WRITTEN IN 1997. LOOK AROUND THE WORLD WIDE WAIT NOW… WHAT DO YOU SEE?
Open Source – full stop.
i will add that there was no technical explanation here, only business incentive and responsibility to the company you are building, rebuilding, or scaling. Further, this allows true software malleability and reach, which is the very reason for software.
i will also go out on a limb here and say if you are a software corporation, one that creates software, you can play the monopoly and open-source models against each other within your corporation. Agility and speed to ship code is the only thing that matters these days. Where is your github? Or why is this not shipping TODAY?
This brings me to yet another amazingly prescient prediction in the book, where Raymond says that applications are ultimately where we will land for monetary scale. Well yes, there is an app for that….
While i have never met Eric S. Raymond he is a legend in the field. We have much to thank him for in the areas of software. If you have not read CatB and work in the information sector do yourself a favor: buy it today.
As a matter of fact here is the link: The Cathedral & the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary
Muzak To Blog To: “Morning Phase” by Beck
Resources:
http://www.opensource.org
https://www.apache.org/foundation/
“I am putting myself to the fullest possible use, which is all I think any conscious entity can ever hope to do.” ~ HAL 9000
“If you want to make the world a better place take a look at yourself and then make a change.” ~ MJ.
First and foremost with this blog i trust everyone is safe. The world is in an interesting place, space, and time both physically and dare i say collectively – mentally.
## Introduction
This past week we celebrated Earth Day. i believe i heard it was the 50th year of Earth Day. While I applaud the efforts and the longevity, rather than a single day we should have Earth Day every day. Further, just "thoughting" about or tweeting about Earth Day – while it may wake up the posterior lobe of your pituitary gland and secrete some oxytocin, creating the warm fuzzies for you – really doesn't create an action for furthering Earth Day. (much like typing /giphy YAY! in Slack).
As such, i decided to embark on a multipart blog about something i have been "thinking" about for a while: what i call an Ecological Computing System. Then the more i thought about it, why stop at Ecology? We are able to model and connect essentially anything; we now have models for the brain that, while coarse-grained, can account for gross behaviors; we have tons of data on buying habits and advertisements; and everything is highly mobile and distributed. Machine learning, which can optimize, classify and predict with extremely high dimensionality, is no longer an academic exercise.
Thus, i suppose taking it one step further from ecology and what would differentiate it from other efforts is that <IT> would actually attempt to provide a compute framework that would compute The Human Condition. I am going to call this effort Project Noumena. Kant the eminent thinker of 18th century Germany defined Noumena as a thing as it is in itself, as distinct from a thing as it is knowable by the senses through phenomenal attributes and proposed that the experience was a product of the mind.
My impetus for this are manifold:
• i love the air, water, trees, and animals,
• i am an active water person,
• i want my children’s children’s children to know the wonder of staring at the azure skies, azure oceans and purple mountains,
• Maybe technology will assist us in saving us from The Human Condition.
## Timing
i have waited probably 15+ years to write about this ideation of such a system, mainly because the technological considerations were nowhere near where they needed to be and, to be extremely transparent, no one seemed to really think it was an issue until recently. The pandemic seems to have been a global wakeup call that, in fact, Humanity is fragile. There are shortages of resources in the most advanced societies. Further, pollution levels appear (as reported) to be subsiding as a function of the reduction in humans' daily involvement within the environment. To that point, over the past two years there appears to be an uptick of awareness in how plastics are destroying our oceans. This has a coupling effect: with the pandemic and other environmental concerns, there could potentially be a food shortage due to these highly nonlinear effects. This uptick in awareness has mainly been due to the usage of mobile computing and social media technology, which in and of itself probably couldn't have existed without plastics and massive natural resource consumption. So i trust the irony is not lost there.
From a technical perspective, Open Source and Open Source Systems have become the way that software is developed. For those that have not read The Cathedral and The Bazaar and In The Beginning Was The Command Line, i urge you to do so; it will change your perspective.
We are no longer hampered by the concept of scale in computing. We can also create a system that behaves at scale with only a few human resources. You can do a lot with few humans now, which has been the promise of computing.
Distributed computing methods are now coming to fruition. We no longer think in terms of a monolithic operating system or in place machine learning. Edge computing and fiber networks are accelerating this at an astonishing rate. Transactions now dictate trust. While we will revisit this during the design chapters of the blog I’ll go out on a limb here and say these three features are cogent to distributed system processing (and possibly the future of computing at scale).
• Incentive models
• Consensus models
• Protocol models
We will definitely be going into the deeper psychological, mathematical, and technical aspects of these items.
Some additional points of interest on timing. Microsoft recently released press about a Planetary Computer and announced the position of Chief Ecology Officer. While i do not consider Project Noumena to be of the same system type, there could be similarities on the ecological aspects, which, just like in open source, creates a more resilient base to work from.
The top market cap companies are all information theoretic-based corporations. Humans that know the science, technology, mathematics and liberal arts are key to their success. All of these companies are woven and interwoven into the very fabric of our physical and psychological lives.
Thus it is with the confluence of these items i believe the time is now to embark on this design journey. We must address the Environment, Societal factors and the model of governance.
A mentor once told me one time in a land far away: “Timing is everything as long as you can execute.” Ergo Timing and Execution Is Everything.
## Goals
It is my goal that i can create a design and, hopefully, an implementation that utilizes computational means to truly assist in building models and sampling the world, where we can adhere to goals in making small but meaningful changes that can be used within what i am calling the 3R's: recycle, redact, reuse. Further, i hope that with the proper dynamic incentive models in place it has a positive feedback effect on mentality. Just as in complexity theory, a small change – a butterfly's wings – can create hurricanes – in this case a positive effect.
Here is my overall plan. i'm not big on the process or gantt charts. I'll be putting all of this in a README.md as well. I may ensconce the feature sets etc. into a trello or some other tracking mechanism to keep me focused – WebSphere feel free to make recommendations in the comments section:
Action Items:
• Create Comparative Models
• Create Coarse-Grained Attributes
• Identify underlying technical attributes
• Attempt to coalesce into an architecture
• Start writing code for the above.
## Preamble
Humanity has come to expect growth as a material extension of human behavior. We equate growth with progress. In fact, we use the term exponential growth as if it were indefinitely positive. In most cases, for a fixed time interval this means a doubling of the relevant system variable or variables. We speak of growth as a function of gross national production. In most cases, exponential growth is treacherous where there are no known or perceived limits. It appears that humanity has only recently become aware that we do not have infinite resources. Psychologically there is a clash between the exponential growth and the psychological or physical limit. The only significance is the relevant (usually local) limit. How does it affect me, us, and them? This can be seen throughout most game theory practices – dominant choice. The pattern of growth is not the surprise; the collision of the ever-increasing growth function with the awareness of the limit is the surprise.
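To pin down the arithmetic behind "doubling every fixed interval": if a quantity doubles every interval $$T$$, then $$x(t) = x_0 \, 2^{t/T} = x_0 \, e^{rt}$$ with $$r = \ln 2 / T$$, so, for example, a growth rate of 7% per year corresponds to a doubling time of roughly $$\ln 2 / 0.07 \approx 10$$ years.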
Q: Are progress (and capacity) and the ever-increasing function a positive and how does it relate to 2nd law of thermodynamics aka Entropy? Must it always expand?
We are starting to see that our world can exert dormant forces that within our life can greatly affect our well being. When we approach the actual or perceived limit the forces which are usually negative begin to gain strength.
So given these aspects of the why, i'll turn now to start the discussion. If we do not understand history we cannot predict the future by inventing it, or in most cases re-inventing it as it were.
I want to start off the history by referencing several books that i have been reading and re-reading on the subjects of modeling the world, complexity, and models for collapse throughout this multipart blog. We will be addressing issues concerning complex dynamics as they are manifested with respect to attributes, model types, economics, equality, and mental concerns.
These core references are located at the end of the blog under references. They are all hot-linked. Please go scroll and check them out. i’ll still be here. i’ll wait.
Checked them out? i know a long list.
As you can see the core is rather extensive due to the nature of the subject matter. The top three books are the main ones that have been the prime movers and guides of my thinking. These three books i will refer to as The Core Trilogy:
World Dynamics
The Collapse of Complex Societies
As i mentioned i have been deeply thinking about all aspects of this system for quite some time. I will be mentioning several other texts and references along the continuum of creation of this design.
We will start by referencing the first book: World Dynamics by J.W. Forrester. World Dynamics came out of several meetings of the Club of Rome, a 75-person invite-only club founded by the President of Fiat. The club set forth the following attributes for a dynamic model that would attempt to predict the future of the world:
• Population Growth
• Capital Investment
• Geographical Space
• Natural Resources
• Pollution
• Food Production
The output of this design was codified in a computer program called World3. It has been running since the 1970s, what was then termed in many cases a golden age of society. All of these variables have been growing at an exponential rate. Here we see the model with the various attributes in action. There have been several criticisms of the models, and also analyses, which i will go into in further blogs. However, in some cases, the variants have been eerily accurate. The following plot is an output of the World3 model:
## Issues Raised By World3 and World Dynamics
The issues raised by World3 and within the book World Dynamics are the following:
• There is a strong undercurrent that technology might not be the savior of humankind
• Industrialism (including medicine and public health) may be a more disturbing force than the population.
• We may face extreme psychological stress and pressures from a four-pronged dilemma via suppression of the modern industrial world.
• We may be living in a “golden age” despite a widely acknowledged feeling of malaise.
• Exhortations and programs directed at population control may be self-defeating. Population control, if it works, would yield excesses, thereby allowing further procreation.
• Pollution and Population seem to oscillate, whereas the high standard of living increases the production of food and material goods, which outrun the population. As agriculture hits a space limit and natural resources reach a pollution limit, the quality of life falls, equalizing the population.
• There may be no realistic hope of underdeveloped countries reaching the same standard and quality of life as developed countries. However, with the decline in developed countries, the underdeveloped countries may be equalized by that decline.
• A society with a high level of industrialization may be unsustainable.
• From a long term 100 years hence it may be unwise for underdeveloped countries to seek the same levels of industrialization. The present underdeveloped nations may be in better conditions for surviving the forthcoming pressures. These underdeveloped countries would suffer far less in a world collapse.
## Fuzzy Human – Fuzzy Model
The human mind is amazing at identifying structures of complex situations. However, our experiences train us poorly for estimating the dynamic consequences of said complexities. Our mind is also not very accurate at estimating ad hoc parts of the complexities and the variational outcomes.
One of the problems with models is, well, that it is just a model. The subject-observer reference could shift, and the context shifts with it. This dynamic aspect needs to be built into the models.
Also while we would like to think that our mental model is accurate it is really quite fuzzy and even irrational in most cases. Also attempting to generalize everything into a singular model parameter is exceedingly difficult. It is very difficult to transfer one industry model onto another.
In general parameterization of most of these systems is based on some perceptual model we have rationally or irrationally invented.
When these models were created there was the consideration of modeling social mechanics of good-evil, greed – altruism, fears, goals, habits, prejudice, homeostasis, and other so-called human characteristics. We are now at a level of science where we can actually model the synaptic impulse and other aspects that come with these perceptions and emotions.
There is a common cross-cutting construct in most complex models within this text that consists of, and is mainly concerned with, the concept of feedback and how the non-linear relationships of these modeled systems feed back into one another. System-wide thinking permeates the text itself. On a related note, in the 1940's Dr Norbert Wiener and others such as Claude Shannon worked on ballistic tracking systems and coupled feedback in both a cybernetic and an information-theoretic fashion, and Wiener regarded the concept of feedback as one of the most fundamental operations in information theory. This led to the extremely famous Wiener Estimation Filters. Also, side note: Dr Wiener was a self-styled pacifist, proving you can hold two very opposing views in the same instance whilst being successful at executing both ideals.
Given that basic function of feedback, let's look at the principal structures. Essentially the model states there will be levels and rates. Rates are flows that cause levels to change. Levels accumulate the net flow – either additions to or subtractions from that level. The various system levels can in aggregate describe the system state at any given time $$(t)$$. Levels exist in all subsystems of existence. These subsystems, as you will see, include but are not limited to financial, psychological, biological, and economic. The reason that i say "not limited to" is because i also believe there are some yet-to-be-identified subsystems at the quantum level. The differential or rate of flow is controlled by one or more systems. All systems that have some spatio-temporal manifestation can be represented by using the two variables levels and rates. Thus with respect to the spatial or temporal variables, we can have a dynamic model.
The below picture is the model that grew out of interest from the initial meetings of the Club of Rome. The inaugural meeting which was the impetus for the model was held in Bern, Switzerland on June 29, 1970. Each of the levels represents a variable in the previously mentioned major structures. System levels appear as right triangles. Each level is increased or decreased by the respective flow. As previously mentioned regarding feedback, any closed path through the diagram is a feedback loop. Some of the closed loops, given certain information-theoretic attributes, will be positive feedback loops that generate growth, and others that seek equilibrium will be negative feedback loops. If you notice something about the diagram, it essentially is a birth and death loop. The population loop if you will. For the benefit of modeling, there are really only two major variables that affect the population: Birth Rate (BR) and Death Rate (DR). They represent the total aggregate rate at which the population is being increased or decreased. The system has coefficients that can initialize them to normal rates. For example, in 1970 BRN is taken as 0.0885 (88.5 per thousand) which is then multiplied by population to determine BR. DRN by the same measure is the outflow or reduction. In 1970 it was 9.5% or 0.095. The difference is the net, and these are called normal rates. The normal rates correspond to a physically normal world, when there are normal levels of food, material standard of living, crowding, and pollution. The influencers are then multipliers that increase or decrease the normal rates.
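To make the level/rate bookkeeping concrete, here is a bare-bones python sketch of just the population level using the normal coefficients quoted above, with every multiplier pinned at 1.0 and a rough starting population chosen only for illustration. It shows the structure (a level accumulating the net of an inflow and an outflow); it is not the World3 model:

```python
# one level (population) driven by two rates (births in, deaths out), using the
# 1970 normal coefficients quoted above; all multipliers are held at 1.0
BRN, DRN = 0.0885, 0.095      # normal birth/death rate coefficients per year
P = 3.6e9                     # rough 1970 world population, for illustration only
dt = 1.0                      # one-year time step

for year in range(1970, 2020):
    BR = BRN * P              # inflow to the level
    DR = DRN * P              # outflow from the level
    P += dt * (BR - DR)       # the level accumulates the net flow

print(f"{P:.3e}")
```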
As a caveat, there have been some detractors of this model. To be sure it is very coarse-grained; however, while i haven't seen the latest runs or outputs, it is my understanding, as i said, that the current outputs are close. The criticisms come in the shape of "Well, it's just modeling everything as $$y = x e^{rt}$$." I will be using this concept and map, if you will, as the basis for Noumena. The concepts and values as i evolve the system will vary greatly from the World3 model, but i believe starting with a minimum viable product is essential here; as i said, humans are not very good at predicting all of the various outcomes in high dimensional space. We can assess situations very quickly but probably outcomes not so much. Next up we will be delving into the loops deeper and getting loopier.
So this is the first draft if you will as everything nowadays can be considered an evolutionary draft.
Then again isn’t really all of this just The_Inifinite_Human_Do_Loop?
until then,
#iwishyouwater
tctjr
## References:
World Dynamics
The Collapse of Complex Societies
Six Sources of Collapse
Beyond The Limits
Designing Distributed Systems Brendan Burns
A Pragmatic Introduction to Secure Multi-Party Computation
Reliable Secure Distributed Programming
Distributed Algorithms
Complexity a Guided Tour
Agent_Zero
Nudge Theory In Action
The Structure of Scientific Revolutions
Agent-Based Modelling In Economics
Cybernetics
Human Use Of Human Beings
The Technological Society
The Origins Of Order
Blog Muzak: Brian and Roger Eno: Mixing Colours
# something
The longer leg of a right triangle is three times as long as the shorter leg. The hypotenuse is $$\sqrt{5}$$. What is the area of this triangle?
Oct 13, 2021
#1
We know that shortleg^2 + longleg^2 = $$\sqrt{5}^2$$ or 5
If the shorter leg is x then the longer leg would be 3x
Therefore we have:
$$x^2 + 9x^2 = 5$$
simplify it to
$$10x^2 = 5$$
divide each side by 10
$$x^2 = \frac{5}{10}$$
square root both side
$$x = \sqrt{\frac{5}{10}}$$
So the shorter leg is $$\sqrt{\frac{5}{10}}$$ and the longer leg is $$3 \sqrt{\frac{5}{10}}$$
Now we need to find the area
Multiply them:
$$3\sqrt{\frac{5}{10}}\cdot \sqrt{\frac{5}{10}} = 3 \cdot \frac{5}{10} = \frac{15}{10} = \frac{3}{2}$$
And don't forget we also have to divide by 2
$$\frac{3}{2} \div 2 = \frac{3}{4}$$
Oct 14, 2021
#2
Try this: $$x^2 + (3x)^2 = 5$$
Guest Oct 14, 2021
# PEP572 -- Assignment Expressions
## The Walrus Operator & Application to Newton's Method
This past week saw the debut of PEP572 in the release of Python 3.8.0. A PEP is a Python Enhancement Proposal, a document that describes a feature and requests its incorporation into the Python language. Now PEP572 in particular was about as controversial as they come, so much so that it caused Guido van Rossum, the original author of Python, to step down from his role as Benevolent Dictator for Life and form a committee to replace him. Yikes!
# What is PEP572?
So why the controversy? Well, the aim of this PEP was to introduce the walrus operator := to perform assignments in ways that were previously not possible, e.g., inside a tuple declaration or in a while-loop condition. Oh, and in case you were wondering, the walrus operator is named so for its resemblance to the eyes and tusks of a walrus. You can see it if you squint. Anyhow, not such a radical idea, is it? Well, there are those with this or that argument against it like “it detracts from readability” or “it is only useful as a shim to poorly implemented libraries” or “it will not be backward compatible with older versions of Python 3.x”. Despite the merits of these arguments, PEP572 was accepted and implemented for better or for worse.
I’ve been tracking PEP572 for a while. Not since its inception, but a while. And now that it’s here in the official Python 3.8 release, I figured it could use a little promotion. Here’s a taste.
z = (x := 'foo', y := 'bar', 'baz')
This example is intuitive, but not very useful in practice. It is inferior to a more readable equivalent that ignores a value using the conventional underscore.
z = (x, y, _) = ('foo', 'bar', 'baz')
These snippets both result in the tuple ('foo', 'bar', 'baz') being assigned to z and the strings 'foo' and 'bar' being assigned to x and y, respectively.
# Newton’s Method
In my opinion, the best use of the walrus operator is to reduce code size in initialize-iterate-reassign patterns. In this section, I illustrate that benefit in the context of an implementation of Newton’s method.
Newton’s method is an iterative numerical method for finding roots of functions. A root of a function $$f(x)$$ is a constant $$a$$ that satisfies $$f(a) = 0$$. That is, plug $$a$$ into the function, and $$0$$ comes out.
Roots are sometimes trivial to find. For example, suppose $$f(x) = x - 1$$. By inspection, we find the root by setting $$x - 1 = 0$$ and solving for $$x$$ to get $$1$$. Newton’s method really shines when the roots are difficult to find analytically or when multiple roots exist.
For the rest of this article, let’s consider the 3rd-order polynomial function $$f(x) = x^3 - 4x^2 + 3$$ with derivative $$f^\prime(x) = 3x^2 - 8x$$. Here is its graph.
In Python, here is how we define this function and its derivative. Later, these functions will be passed as objects to our implementations of Newton’s method. Remember that virtually everything in Python is an object, even a function!
def f(x):
    return x ** 3 - 4 * x ** 2 + 3

def fprime(x):
    return 3 * x ** 2 - 8 * x
A bit of quick guess-and-check or a look at the graph tells us the value $$1$$ is a root of this function. But a 3rd-order polynomial can have as many as 3 roots! As we can see, this one has exactly 3 roots and the remaining two are not nearly as easy to calculate. So we employ a simple form of Newton’s method that is based on the following recurrence relation.
$$x_{n+1} = x_{n} - \frac{f(x_{n})}{f^\prime(x_{n})}$$
Basically, we start with an initial guess $$x_0$$ of a root. Then we progressively improve our guesses $$x_1$$, $$x_2$$, $$\ldots$$ until the difference between consecutive guesses is so small that it can be considered negligible. The relation above is the rule for how we improve our guesses at each iteration. It says that our next guess should be our current guess less the ratio of $$f(x)$$ evaluated at our current guess to its derivative $$f^\prime(x)$$ evaluated at our current guess. The root we obtain depends on our initial guess $$x_0$$.
Below is a code implementation of Newton’s method that does not use the walrus operator.
def old_newtons_method(f, fprime, x_0, tol=1e-6):
    x_curr = x_0
    x_next = x_curr - f(x_curr) / fprime(x_curr)
    while abs(x_curr - x_next) > tol:
        x_curr = x_next
        x_next = x_curr - f(x_curr) / fprime(x_curr)
    return x_curr
This code is functionally correct. It is, however, a little clunky. The logic that is used to initialize x_next is repeated inside the while-loop to iteratively update x_next.
To reduce code size, we might try a slightly different implementation that still does not include the walrus operator.
def bad_newtons_method(f, fprime, x_0, tol=1e-6):
    x_curr = x_0
    while abs(f(x_curr) / fprime(x_curr)) > tol:
        x_curr = x_curr - f(x_curr) / fprime(x_curr)
    return x_curr
This code is also functionally correct and it is more terse than before. Unfortunately, we are now computing $$f$$ and $$f^\prime$$ twice on each iteration which doubles the number of computations required to obtain a root, hence the function name bad_newtons_method. If only there were a happy medium between the first and second implementations – something both terse and computationally efficient.
# The Walrus Operator
def new_newtons_method(f, fprime, x_0, tol=1e-6):
    x_curr = x_0
    while abs(delta_x := f(x_curr) / fprime(x_curr)) > tol:
        x_curr = x_curr - delta_x
    return x_curr
Ah, beautiful! Notice the walrus operator in the middle of the while condition. Not only does this code perform just as well as the first implementation, it is more compactly written than either of the previous two examples. And yes, this code is also functionally correct.
Now, let’s use what we’ve built!
roots = [
    new_newtons_method(f, fprime, -1),
    new_newtons_method(f, fprime, 2),
    new_newtons_method(f, fprime, 4),
]
print(roots)
Using the (carefully handpicked) initial guesses of $$-1$$, $$2$$, and $$4$$, we ascertain that the roots are (roughly) $$-0.791$$, $$1.000$$, and $$3.791$$. If we look back to our graph of this function, we can visually verify these roots. Neat!
Like what you learned? Did I make a mistake? Let me know in the comments! Happy root-finding!
|
{}
|
# Orbit space of real-analytic planar foliations
Consider a foliation of $$\mathbb{R}^2$$, say coming from the trajectories of a vector field $$X$$. Its orbit space (the quotient of $$\mathbb{R}^2$$ by the relation "lying on the same trajectory") is seldom Hausdorff. Such foliated structures have been intensively studied, and a complete $$C^r$$-classification is due to Haefliger and Reeb in the case where $$X$$ is regular on a simply connected region (thanks to the same topological niceness of the plane used in the proof of the Poincaré–Bendixson theorem). Almost any reasonable one-dimensional, simply-connected non-Hausdorff manifold can be realized as the orbit space of a foliation.
The two main sources of non-separability of orbits are:
1. Saddle singularities: the stable and unstable (half-)manifolds cannot be separated.
2. Limit cycles: the limit cycle cannot be separated from the accumulating trajectories.
I believe that orbit spaces coming from real-analytic foliations should have a "nicer" structure. I also expect that the work of Kaplan, Haefliger and Reeb dating back to the '40s and '50s has been generalized to the analytic setting. Is that so? Is there any special structure / characterization of the (non-Hausdorff) analytic orbit space of a real-analytic planar foliation that I should be aware of (and where can I find it)?
A special case of particular interest is where the vector field $$X$$ is the realification of a holomorphic vector field on $$\mathbb C\simeq \mathbb R^2$$. Now there are no limit cycles. The topology of the phase portrait looks simpler, and so does the orbit space. Is there any known characterization of the analytic one-dimensional (non-Hausdorff) manifolds that can arise in this very special case?
You wrote "I believe that orbits space coming from real-analytic foliations should have a "nicer" structure".
I think that this nicer structure arises when we consider a more technical "Leaf space" so called "Groupoid foliation" not mereley the orbit space. The orbit space has a very poor topology. For example it can not distingishe two topological different foliation of the torus with different slopes $$"\sqrt 2"$$ and $$"\pi"$$ see "The" kronecker foliation or "a" kronecker foliation?
Note that the holomorphicity can not do any thing special if we resist on the plain orbit space and do not consider the groupoid foliation. Because the simplest holomorphic map in the world, the constant map $$z'=c$$ produce a complicated kronecker foliation. But the k theory of $$C^*$$ algebra of the corresponding groupoid foliation, is the true useful tool for studing the invisible features of the foliation.
So with consideration of groupoid foliation $$G(M,F)$$, it is always a Haussdorf space if the foliation is real analytic. In general $$G(M,F)$$ is $$k+n$$ (not necessarily Hausdorf) manifold where $$F$$ is a $$k$$ -dimensional foliation of an n manifold
Your question reminds me of an MO question of mine as follows. I have neither deleted that question nor I can find it in my question list.
Is there a non analytic foliation whose groupoid foliation is diffeomorphic to an $$S^3$$ with two north pole, a compact 3 dimensional analogy of a line with two origin?
https://ncatlab.org/nlab/show/line+with+two+origins
We identify two disjoint copies of $$S^3$$ at all points except the north pole:
In the disjoint union $$S^3\times\{0\} \coprod S^3\times \{1\}$$ we identify $$(x,0)$$ with $$(x,1)$$ for all $$x\in S^3\setminus \{N\}$$, so we obtain a 3-sphere with two north poles.
Anyway, if you give a precise reference to the papers of Haefliger et al., one can search for these papers to see which kind of orbit space they are working with.
P.S.: for the holomorphic foliation $$Z'=e^Z$$ there are infinitely many points (in the leaf space, according to your terminology) with the following property: there are infinitely many pairs such that each pair consists of two points which cannot be separated from each other.
But is there a non-vanishing entire holomorphic function for which the corresponding foliation does not admit infinitely many pairs with the above property, yet the leaf space is still non-Hausdorff? At the other extreme, is there a non-vanishing entire function whose corresponding leaf space (according to your terminology) contains infinitely many points which are mutually non-separable?
Note: You wrote: "Limit cycles: the limit cycle cannot be separated from the accumulating trajectories".
But surprisingly, in the foliation groupoid a limit cycle can be separated from the other trajectories! The only case in which it cannot be separated is when it is a center from the exterior and a limit cycle from the interior, or an exterior limit cycle with an interior center. Of course this cannot happen in the real-analytic case.
|
{}
|
### Tkinter and Asyncio
###### Thu 18 February 2021
Asynchronous process results waiting (Photo credit: Wikipedia)
Graphical interfaces are typically the kind of program that can take advantage of asynchronous programming, as a GUI spends a lot of time waiting for user input.
Tkinter (https://docs.python.org/3/library/tkinter.html#module-tkinter) is a kind of standard for graphical interfaces in Python. It was designed long before Python 3.5 and the introduction of asyncio in the standard library. Thus, it heavily relies on threading to allow non-blocking operations.
## Asyncio refresh
This is a quick refresher on how asyncio works in Python. It is not meant to be a tutorial.
Concurrent programming can be handled with threads. This mechanism works well in many compiled languages (such as C). In Python (and in many other languages such as Ruby), only one thread can execute at a time. More precisely, only one thread can hold the GIL and only a thread holding the GIL can execute Python code.
When using threads, the system decides when a thread is interrupted and whether another one is allowed to run some Python code before resuming. Thus, even if you have only one Python thread running at a time, a global variable can be modified between Python instructions (and you will eventually be unlucky if you don't use synchronisation mechanisms such as mutexes, semaphores, ...)
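To make this concrete, here is a small illustration (my own sketch, not from the original article): two threads increment a shared global counter. Without the lock, the read-modify-write steps of counter += 1 can interleave and some increments may be lost.

import threading

counter = 0
lock = threading.Lock()

def work():
    global counter
    for _ in range(100_000):
        with lock:  # comment this out to (sometimes) observe lost updates
            counter += 1

threads = [threading.Thread(target=work) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000 with the lock; possibly less without it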
On the other hand, asyncio runs an event loop into which tasks are put. I won't go into the differences between awaitables, tasks, coroutines and futures. A task explicitly states when it can be interrupted to wait for the result of another task or operation (keyword await). The event loop decides which task is the next to run. To do so, it keeps a list of tasks with their respective states (pending and ready to run, sleeping and waiting for an external resource --usually an IO--, finished with the returned value or an error code, cancelled, and running).
A simple example is provided by the official documentation. It uses asyncio.sleep() to simulate waiting for an external resource that takes at least the indicated delay.
import asyncio
import time

async def say_after(delay, what):
    await asyncio.sleep(delay)  # interrupt the task here, waiting for the result of asyncio.sleep()
    print(what)

async def main():
    print(f"started at {time.strftime('%X')}")
    await say_after(1, "hello")  # interrupt the task here, waiting for the result of "say_after"
    await say_after(2, "world")
    print(f"finished at {time.strftime('%X')}")

asyncio.run(main())  # start the event loop, putting main() inside
To add a bit of complexity and randomness, we can use faker to decide what to say and random.randint to fix delays.
To fix the arguments of say_after:
from random import randint
import faker
word_generator = faker.Faker()
words = word_generator.words(7)
arguments = [(randint(0, 5), word) for word in words]
And now let's create and run tasks:
tasks = [
    asyncio.create_task(say_after(delay, what))  # the call expression was missing from the original snippet
    for delay, what in arguments
]
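Note that asyncio.create_task() only works while an event loop is running, so one way to actually drive these tasks (a sketch of mine, not from the original article) is to build and await them inside a small main() coroutine:

async def main():
    tasks = [
        asyncio.create_task(say_after(delay, what))
        for delay, what in arguments
    ]
    await asyncio.gather(*tasks)  # wait until every task has printed its word

asyncio.run(main())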
## Tkinter refresh
Tkinter uses Frames, and a new window can be built using tk.Toplevel(). I have little experience with this library, thus I decided to use a very limited set of widgets. For instance, I use tk.Button() just to display text on an area:
import tkinter as tk

class EmptyFrame(tk.Frame):
    def __init__(self, parent, message):
        super().__init__(parent)
        self.root = parent
        self.message = message
        self.create_entry()
        self.pack()

    def create_entry(self):
        self.area = tk.Button(self, text=self.message)
        print("Message: ", self.message)
        self.area.pack()
To create a new window with a message:
parent = tk.Toplevel(self.master)
EmptyFrame(parent, msg)
## All together
The solution I found to use asyncio with Tkinter is to start a thread in charge of running the asyncio event loop. The other thread (the first one started by python) is here only to display the first window and start the asyncio thread.
To do so, I added several levels of indirection. I coded the following methods:
• async_process(): the main asyncio process (equivalent to the main() in the example above).
• run(): start the asyncio loop, calling async_process()
• action(): handles the thread running the asyncio loop. Its main job is to call run()
Putting it all together (a file is available here)
import asyncio
import threading
import tkinter as tk
from datetime import datetime
from random import randint

import faker


class Application(tk.Frame):
    """main frame"""

    def __init__(self, master=None):
        super().__init__(master)
        self.master = master
        self.pack()
        word_generator = faker.Faker()
        words = word_generator.words(5)
        self.messages = [(randint(0, 5), word) for word in words]
        print(self.messages)
        # Add button to start the action
        self.btn = tk.Button(self, text="Big button", command=self.action)
        self.btn.pack()

    def action(self):
        """handle the thread running the asyncio loop (body reconstructed from
        the description above: its main job is to call run())"""
        threading.Thread(target=self.run).start()

    def run(self):
        """start asyncio loop"""
        asyncio.run(self.async_process())

    async def async_process(self):
        self.start_time = datetime.now()
        self.new_area = tk.Button(
            self,
            text="\n".join(
                "\t".join((str(delay), message)) for delay, message in self.messages
            ),
        )
        self.new_area.pack()
        # Create one task per message and wait for all of them
        # (the task-creation expression was missing from the original listing).
        tasks = [
            asyncio.create_task(self.display_after(delay, what))
            for delay, what in self.messages
        ]
        await asyncio.gather(*tasks)

    async def display_after(self, delay, what):
        await asyncio.sleep(delay)
        now = datetime.now()
        msg = f"{now} \t {what}"
        parent = tk.Toplevel(self.master)
        EmptyFrame(parent, msg)


class EmptyFrame(tk.Frame):
    def __init__(self, parent, message):
        super().__init__(parent)
        self.root = parent
        self.message = message
        self.create_entry()
        self.pack()

    def create_entry(self):
        self.area = tk.Button(self, text=self.message)
        print("Message: ", self.message)
        self.area.pack()


if __name__ == "__main__":
    root = tk.Tk()
    app = Application(master=root)
    app.mainloop()
This is not the cleanest way of doing it, nor the most minimal one, but it works and covers most of the cases I want.
Category: howto Tagged: python asyncio
|
{}
|
# Working with the Euler high-performance cluster
As an ETH student you get access to the Euler supercomputing cluster. Inaugurated in 2014 and hosted at CSCS in Lugano, Euler is a computer cluster dedicated for use by researchers and students alike. Courses like High-Performance Computing for Science and Engineering use it to allow students to practice the principles of working with a supercomputer. This page will contain a summary of my notes from this course.
## Connecting to Euler
You can only connect to Euler from within the ETHZ network or through a VPN connection. Log in by using the command:
$ ssh username@euler.ethz.ch

You will be prompted for your ETHZ password.

## Modules

The Euler environment is organized in modules, which are conceptually software packages that can be loaded and unloaded as needed. The basic commands for working with modules are:

• module load <modulename>: set environment variables related to modulename.
• module unload <modulename>: unset environment variables related to modulename.
• module list: list loaded modules.
• module avail: list all available modules.

Example: module load gcc. Now we can compile C++ programs!

## Jobs

High-performance software is not run on the login nodes, but submitted to the computing nodes with a job system. To submit a job use a command like:

$ bsub -n 24 -W 08:00 -o output_file ./program_name program_args
This command will submit a job requesting 24 processing cores from a single node and a wall-clock runtime of 8 hours. If the program is still running after 8 hours, it will be terminated. The report of the job, along with the information that would usually appear in the terminal, will be appended to the file output_file, in the folder where the job started.
While one or more jobs are running you can use the command bjobs to get the state and the IDs of submitted jobs.
In order to terminate a job you can use the command bkill <jobID>.
## I/O performance and $SCRATCH

Since your simulations might involve a lot of I/O you must never run your software in the $HOME directory, but set up your runs in the $SCRATCH space. The disks associated with this space are especially designed for heavy loads. However, $SCRATCH is not designed for frequent storing. If you are logging temporary results into a file, you should open it once at the beginning of the run, and only flush it occasionally. Note that std::endl not only appends \n but also flushes the stream.
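As a generic illustration of the "open once, flush occasionally" advice (a sketch in Python rather than C++, not part of the original notes; the file name and flush interval are arbitrary):

# Open the log file once at the beginning of the run, write every step,
# but only flush to disk every 1000 steps.
with open("run.log", "w") as log:
    for step in range(100_000):
        log.write(f"{step}\n")
        if step % 1000 == 0:
            log.flush()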
Furthermore, your quota in $HOME is much smaller compared to $SCRATCH, but any files on $SCRATCH older than 15 days will be automatically deleted (see $SCRATCH/__USAGE_RULES__).
## Starting an interactive session
To start an interactive bash session on a compute node, use the -Is flags and point to the bash installation, for example:
[bones@eu-login-02-ng ~]$ bsub -n 1 -W 1:00 -Is /bin/bash
Generic job.
Job <75404124> is submitted to queue <normal.4h>.
<<Waiting for dispatch ...>>
<<Starting on eu-ms-016-35>>
FILE: /sys/fs/cgroup/cpuset/lsf/euler/job.75404124.19517.1539329284/tasks
[bones@eu-ms-016-35 ~]$
This allows you to submit several jobs without waiting in queue each time. However, any running jobs will be killed if you log out.
## Requesting specific CPUs
Euler has several clusters, and different CPUs are available. To perform benchmarking, it is usually better to use the same one for every run, so we can specify which processor we would like:
bsub -n 24 -R fullnode -R "select[model=XeonE5_2680v3]" -W 00:10 -Is bash
The following processors are available:
• XeonE5_2697v2
• XeonE5_2680v3
• XeonE7_8867v3
• XeonGold_6150
• XeonGold_5118
|
{}
|
Search results
Search: MSC category 32V25 ( Extension of functions and other analytic objects from CR manifolds )
Results 1 - 2 of 2
1. CMB 2008 (vol 51 pp. 21)
Baracco, Luca
A Remark on Extensions of CR Functions from Hyperplanes In the characterization of the range of the Radon transform, one encounters the problem of the holomorphic extension of functions defined on $\R^2\setminus\Delta_\R$ (where $\Delta_\R$ is the diagonal in $\R^2$) and which extend as "separately holomorphic" functions of their two arguments. In particular, these functions extend in fact to $\C^2\setminus \Delta_\C$ where $\Delta_\C$ is the complexification of $\Delta_\R$. We take this theorem from integral geometry and put it in the more natural context of CR geometry, where it admits an easier proof and a more general statement. In this new setting it becomes a variant of the celebrated "edge of the wedge" theorem of Ajrapetyan and Henkin. Categories: 32D10, 32V25
2. CMB 2005 (vol 48 pp. 500)
Baracco, Luca
Extension of Holomorphic Functions From One Side of a Hypersurface We give a new proof of former results by G. Zampieri and the author on extension of holomorphic functions from one side $\Omega$ of a real hypersurface $M$ of $\mathbb{C}^n$ in the presence of an analytic disc tangent to $M$, attached to $\bar\Omega$ but not to $M$. Our method enables us to weaken the regularity assumptions both for the hypersurface and the disc. Keywords: analytic discs, Poisson integral, holomorphic extension. Categories: 32D10, 32V25
|
{}
|
# Maxwell
The concept of field. Quantum physics. The concept of field. Within the framework of modern physics, the concept of field occupies a major place, just as important in its consequences as the quantum of action.
It appears in all the theories of quantum physics as well as in avant-garde theories such as superstring theory, which we will describe a little further on, without forgetting classical physics. Electromagnetic wave equation.
Electric field. Electric field lines emanating from a point positive electric charge suspended over an infinite sheet of conducting material.
Qualitative description An electric field that changes with time, such as due to the motion of charged particles producing the field, influences the local magnetic field. That is: the electric and magnetic fields are not separate phenomena; what one observer perceives as an electric field, another observer in a different frame of reference perceives as a mixture of electric and magnetic fields.
For this reason, one speaks of "electromagnetism" or "electromagnetic fields". In quantum electrodynamics, disturbances in the electromagnetic fields are called photons. Definition Electric Field Consider a point charge q with position (x,y,z). Notice that the magnitude of the electric field has dimensions of Force/Charge. Magnetic flux. This article is about magnetic flux.
For the magnetic fields "B" (magnetic flux density) and "H", see magnetic field. Inductors. John Hutchinson on electromagnetic waves. A Dynamical Theory of the Electromagnetic Field.
"A Dynamical Theory of the Electromagnetic Field" is the third of James Clerk Maxwell's papers regarding electromagnetism, published in 1865.[1] It is the paper in which the original set of four Maxwell's equations first appeared. The concept of displacement current, which he had introduced in his 1861 paper "On Physical Lines of Force", was utilized for the first time, to derive the electromagnetic wave equation.[2] A Dynamical Theory of the Electromagnetic Field. 4'30 3/5 EFT course "psychological inversion". Lenz–Faraday law. An article from Wikipedia, the free encyclopedia.
It is a law of moderation, which means that it describes effects that oppose their causes. This moderation is a relativistic time-dilation effect applied to charged particles in motion. The first.
The Discrete-time Fourier transform is a periodic function, often defined in terms of a Fourier series. And the Z-transform reduces to a Fourier series for the important case |z|=1. Fourier series is also central to the original proof of the Nyquist–Shannon sampling theorem. The study of Fourier series is a branch of Fourier analysis. History The Fourier series is named in honour of Jean-Baptiste Joseph Fourier (1768–1830), who made important contributions to the study of trigonometric series, after preliminary investigations by Leonhard Euler, Jean le Rond d'Alembert, and Daniel Bernoulli.
The heat equation is a partial differential equation. Dirac equation. In particle physics, the Dirac equation is a relativistic wave equation derived by British physicist Paul Dirac in 1928.
In its free form, or including electromagnetic interactions, it describes all spin-½ massive particles, for which parity is a symmetry, such as electrons and quarks, and is consistent with both the principles of quantum mechanics and the theory of special relativity,[1] and was the first theory to account fully for special relativity in the context of quantum mechanics. Although Dirac did not at first fully appreciate the importance of his results, the entailed explanation of spin as a consequence of the union of quantum mechanics and relativity—and the eventual discovery of the positron—represent one of the great triumphs of theoretical physics. Mathematical formulation. The Dirac equation in the form originally proposed by Dirac is:[3] $$\left(\beta mc^2 + c\sum_{n=1}^{3}\alpha_n p_n\right)\psi(x,t) = i\hbar\frac{\partial\psi(x,t)}{\partial t}$$ where ψ = ψ(x, t) is the wave function for the electron of rest mass m with spacetime coordinates x, t.
14. Maxwell's Equations and Electromagnetic Waves I. Maxwell's equations. Maxwell's equations are a set of partial differential equations that, together with the Lorentz force law, form the foundation of classical electrodynamics, classical optics, and electric circuits.
These fields in turn underlie modern electrical and communications technologies. Maxwell's equations describe how electric and magnetic fields are generated and altered by each other and by charges and currents. They are named after the Scottish physicist and mathematician James Clerk Maxwell, who published an early form of those equations between 1861 and 1862. The equations have two major variants. The "microscopic" set of Maxwell's equations uses total charge and total current, including the complicated charges and currents in materials at the atomic scale; it has universal applicability but may be unfeasible to calculate. Maxwell's Equations - Basic derivation. Electricity - A Level Physics. Electromagnetism - Part 1 - A Level Physics.
Maxwell's Equations. Circular Motion - A Level Physics. Ion acoustic wave. An article from Wikipedia, the free encyclopedia.
The ion acoustic wave is a plasma wave. It is one of the three eigenmodes of an unmagnetized plasma, together with the Langmuir wave and the light wave. It is characterized by low frequencies, in contrast to the other eigenmodes. This low frequency makes it possible to take into account the reaction of the ions to the passage of the wave.
Indeed, the angular frequency is close to the plasma frequency of the ions. Its dispersion relation is $$\omega = k\,c_s$$, where $$c_s = \sqrt{\frac{\gamma_e K T_e + \gamma_i K T_i}{M}}$$ is a so-called acoustic velocity, $$T_e$$ and $$T_i$$ are the electron and ion temperatures respectively, k is the wavenumber, K is the Boltzmann constant, M is the mass of the ions, and $$\gamma_e$$, $$\gamma_i$$ are the electron and ion adiabatic indices. Like any acoustic wave, its corpuscular aspect is described by phonons. Course #1: introduction to electromagnetism (L2)
|
{}
|
Project Euclid (Hosted at Cornell University Library) (192,320 resources)
Annales de l'Institut Henri Poincaré, Probabilités et Statistiques
1. Homogenization via sprinkling
Benjamini, Itai; Tassion, Vincent
We show that a superposition of an $\varepsilon$-Bernoulli bond percolation and any everywhere percolating subgraph of $\mathbb{Z}^{d}$, $d\ge2$, results in a connected subgraph, which after a renormalization dominates supercritical Bernoulli percolation. This result, which confirms a conjecture from (J. Math. Phys. 41 (2000) 1294–1297), is mainly motivated by obtaining finite volume characterizations of uniqueness for general percolation processes.
2. Strong stationary times for one-dimensional diffusions
Miclo, Laurent
A necessary and sufficient condition is obtained for the existence of strong stationary times for ergodic one-dimensional diffusions, whatever the initial distribution. The strong stationary times are constructed through intertwinings with dual processes, in the Diaconis–Fill sense, taking values in the set of segments of the extended line $\mathbb{R}\sqcup\{-\infty,+\infty\}$. They can be seen as natural Doob transforms of the extensions to the diffusion framework of the evolving sets of Morris–Peres. Starting from a singleton set, the dual process begins by evolving into true segments in the same way a Bessel process of dimension 3 escapes from 0. The strong stationary...
3. Maximal inequalities for stochastic convolutions driven by compensated Poisson random measures in Banach spaces
Zhu, Jiahui; Brzeźniak, Zdzisław; Hausenblas, Erika
We consider a Banach space $(E,\|\cdot\|)$ such that, for some $q\geq2$, the function $x\mapsto\|x\|^{q}$ is of $C^{2}$ class and its $k$th, $k=1,2$, Fréchet derivatives are bounded by some constant multiples of the $(q-k)$th power of the norm. We also consider a $C_{0}$-semigroup $S$ of contraction type on $(E,\|\cdot\|)$. Finally we consider a compensated Poisson random measure $\tilde{N}$ on a measurable space $(Z,\mathcal{Z})$. ¶ We study the following stochastic convolution process ¶ $u(t)=\int_{0}^{t}\!\int_{Z}S(t-s)\xi(s,z)\tilde{N}(\mathrm{d}s,\mathrm{d} z),\quad t\geq0,$ where $\xi:[0,\infty)\times\Omega\times Z\rightarrow E$ is an $\mathbb{F}\otimes\mathcal{Z}$-predictable function. ¶ We prove that there exists a càdlàg modification $\tilde{u}$ of the process $u$ which satisfies the following maximal type inequality ¶ $\mathbb{E}\sup_{0\leq s\leq t}\|\tilde{u}(s)\|^{q^{\prime}}\leq C\mathbb{E}(\int_{0}^{t}\!\int_{Z}\|\xi(s,z)\|^{p}N(\mathrm{d}s,\mathrm{d}z))^{\frac{q^{\prime}}{p}},$...
Hasebe, Takahiro; Sakuma, Noriyoshi
We will prove that: (1) A symmetric free Lévy process is unimodal if and only if its free Lévy measure is unimodal; (2) Every free Lévy process with boundedly supported Lévy measure is unimodal in sufficiently large time. (2) is completely different property from classical Lévy processes. On the other hand, we find a free Lévy process such that its marginal distribution is not unimodal for any time $s>0$ and its free Lévy measure does not have a bounded support. Therefore, we conclude that the boundedness of the support of free Lévy measure in (2) cannot be dropped. For the...
5. Scaling limits of random outerplanar maps with independent link-weights
Stufler, Benedikt
The scaling limit of large simple outerplanar maps was established by Caraceni using a bijection due to Bonichon, Gavoille and Hanusse. The present paper introduces a new bijection between outerplanar maps and trees decorated with ordered sequences of edge-rooted dissections of polygons. We apply this decomposition in order to provide a new, short proof of the scaling limit that also applies to the general setting of first-passage percolation. We obtain sharp tail-bounds for the diameter and recover the asymptotic enumeration formula for outerplanar maps. Our methods also enable us to treat subclasses such as bipartite outerplanar maps.
6. SPDEs on narrow domains and on graphs: An asymptotic approach
Cerrai, Sandra; Freidlin, Mark
We introduce here a class of stochastic partial differential equations defined on a graph and we show how they are obtained as the limit of suitable stochastic partial differential equations defined in a narrow channel, as the width of the channel goes to zero.
7. Conditional speed of branching Brownian motion, skeleton decomposition and application to random obstacles
Öz, Mehmet; Çağlar, Mine; Engländer, János
We study a branching Brownian motion $Z$ in $\mathbb{R}^{d}$, among obstacles scattered according to a Poisson random measure with a radially decaying intensity. Obstacles are balls with constant radius and each one works as a trap for the whole motion when hit by a particle. Considering a general offspring distribution, we derive the decay rate of the annealed probability that none of the particles of $Z$ hits a trap, asymptotically in time $t$. This proves to be a rich problem motivating the proof of a more general result about the speed of branching Brownian motion conditioned on non-extinction. We provide...
8. Moment asymptotics for parabolic Anderson equation with fractional time-space noise: In Skorokhod regime
Chen, Xia
In this paper, we consider the parabolic Anderson equation that is driven by a Gaussian noise fractional in time and white or fractional in space, and is solved in a mild sense defined by Skorokhod integral. Our objective is the precise moment Lyapunov exponent and high moment asymptotics. As far as the long term asymptotics are concerned, some feature given in our theorems is different from what has been observed in the Stratonovich regime and in the setting of the white time noise, while the difference disappears when it comes to the high moment asymptotics. To achieve our goal, we introduce...
9. Point process convergence for branching random walks with regularly varying steps
Bhattacharya, Ayan; Hazra, Rajat Subhra; Roy, Parthanil
We consider the limiting behaviour of the point processes associated with a branching random walk with supercritical branching mechanism and balanced regularly varying step size. Assuming that the underlying branching process satisfies Kesten–Stigum condition, it is shown that the point process sequence of properly scaled displacements coming from the $n$th generation converges weakly to a Cox cluster process. In particular, we establish that a conjecture of (J. Stat. Phys. 143 (3) (2011) 420–446) remains valid in this setup, investigate various other issues mentioned in their paper and recover the main result of (Z. Wahrsch. Verw. Gebiete 62 (2) (1983) 165–170)...
10. Supercritical behavior of asymmetric zero-range process with sitewise disorder
We establish necessary and sufficient conditions for weak convergence to the upper invariant measure for one-dimensional asymmetric nearest-neighbour zero-range processes with non-homogeneous jump rates. The class of “environments” considered is close to that considered by (Stochastic Process. Appl. 90 (2000) 67–81), while our class of processes is broader. We also give in arbitrary dimension a simpler proof of the result of (In Asymptotics: Particles, Processes and Inverse Problems (2007) 108–120 Inst. Math. Statist.) with weaker assumptions.
11. One-dependent coloring by finitary factors
Holroyd, Alexander E.
Holroyd and Liggett recently proved the existence of a stationary $1$-dependent $4$-coloring of the integers, the first stationary $k$-dependent $q$-coloring for any $k$ and $q$, and arguably the first natural finitely dependent process that is not a block factor of an i.i.d. process. That proof specifies a consistent family of finite-dimensional distributions, but does not yield a probabilistic construction on the whole integer line. Here we prove that the process can be expressed as a finitary factor of an i.i.d. process. The factor is described explicitly, and its coding radius obeys power-law tail bounds.
12. Typical behavior of the harmonic measure in critical Galton–Watson trees
Lin, Shen
We study the typical behavior of the harmonic measure of balls in large critical Galton–Watson trees whose offspring distribution has finite variance. The harmonic measure considered here refers to the hitting distribution of height $n$ by simple random walk on a critical Galton–Watson tree conditioned to have height greater than $n$. We prove that, with high probability, the mass of the harmonic measure carried by a random vertex uniformly chosen from height $n$ is approximately equal to $n^{-\lambda}$, where the constant $\lambda>1$ does not depend on the offspring distribution. This universal constant $\lambda$ is equal to the first moment of...
13. Any orthonormal basis in high dimension is uniformly distributed over the sphere
Goldstein, Sheldon; Lebowitz, Joel L.; Tumulka, Roderich; Zanghî, Nino
Let $\mathbb{X}^{d}$ be a real or complex Hilbert space of finite but large dimension $d$, let $\mathbb{S}(\mathbb{X}^{d})$ denote the unit sphere of $\mathbb{X}^{d}$, and let $u$ denote the normalized uniform measure on $\mathbb{S}(\mathbb{X}^{d})$. For a finite subset $B$ of $\mathbb{S}(\mathbb{X}^{d})$, we may test whether it is approximately uniformly distributed over the sphere by choosing a partition $A_{1},\ldots,A_{m}$ of $\mathbb{S}(\mathbb{X}^{d})$ and checking whether the fraction of points in $B$ that lie in $A_{k}$ is close to $u(A_{k})$ for each $k=1,\ldots,m$. We show that if $B$ is any orthonormal basis of $\mathbb{X}^{d}$ and $m$ is not too large, then, if we randomize...
14. Poisson approximation of point processes with stochastic intensity, and application to nonlinear Hawkes processes
Torrisi, Giovanni Luca
We give a general inequality for the total variation distance between a Poisson distributed random variable and a first order stochastic integral with respect to a point process with stochastic intensity, constructed by embedding in a bivariate Poisson process. We apply this general inequality to first order stochastic integrals with respect to a class of nonlinear Hawkes processes, which is of interest in queueing theory, providing explicit bounds for the Poisson approximation, a quantitative Poisson limit theorem, confidence intervals and asymptotic estimates of the moments.
15. A dynamical Curie–Weiss model of SOC: The Gaussian case
Gorny, Matthias
In this paper, we introduce a Markov process whose unique invariant distribution is the Curie–Weiss model of self-organized criticality (SOC) we designed and studied in (Ann. Probab. 44(1):444-478, 2016). In the Gaussian case, we prove rigorously that it is a dynamical model of SOC: the fluctuations of the sum $S_{n}(\cdot)$ of the process evolve in a time scale of order $\sqrt{n}$ and in a space scale of order $n^{3/4}$ and the limiting process is the solution of a “critical” stochastic differential equation.
16. Path-dependent infinite-dimensional SDE with non-regular drift: An existence result
Dereudre, David; Rœlly, Sylvie
We establish in this paper the existence of weak solutions of infinite-dimensional shift invariant stochastic differential equations driven by a Brownian term. The drift function is very general, in the sense that it is supposed to be neither bounded nor continuous, nor Markov. On the initial law we only assume that it admits a finite specific entropy and a finite second moment. ¶ The originality of our method lies in the use of the specific entropy as a tightness tool and in the description of such infinite-dimensional stochastic process as solution of a variational problem on the path space. Our result clearly...
17. Scaling limits of coalescent processes near time zero
Şengül, Batı
In this paper we obtain scaling limits of $\Lambda$-coalescents near time zero under a regularly varying assumption. In particular this covers the case of Kingman’s coalescent and beta coalescents. The limiting processes are coalescents with infinite mass, obtained geometrically as tangent cones of Evans metric space associated with the coalescent. In the case of Kingman’s coalescent we are able to obtain a simple construction of the limiting space using a two-sided Brownian motion.
18. Simple CLE in doubly connected domains
Sheffield, Scott; Watson, Samuel S.; Wu, Hao
We study the Conformal Loop Ensemble ($\operatorname{CLE}_{\kappa}$) in doubly connected domains: annuli, the punctured disc, and the punctured plane. We restrict attention to $\operatorname{CLE}_{\kappa}$ for which the loops are simple, i.e. $\kappa\in(8/3,4]$. In (Ann. of Math. (2) 176 (2012) 1827–1917), simple $\operatorname{CLE}$ in the unit disc is introduced and constructed as the collection of outer boundaries of outermost clusters of the Brownian loop soup. For simple $\operatorname{CLE}$ in the unit disc, any fixed interior point is almost surely surrounded by some loop of $\operatorname{CLE}$. The gasket of the collection of loops in $\operatorname{CLE}$, i.e. the set of points that are not surrounded by any...
19. Decomposition of Lévy trees along their diameter
Duquesne, Thomas; Wang, Minmin
We study the diameter of Lévy trees that are random compact metric spaces obtained as the scaling limits of Galton–Watson trees. Lévy trees have been introduced by Le Gall & Le Jan (Ann. Probab. 26 (1998) 213–252) and they generalise Aldous’ Continuum Random Tree (1991) that corresponds to the Brownian case. We first characterize the law of the diameter of Lévy trees and we prove that it is realized by a unique pair of points. We prove that the law of Lévy trees conditioned to have a fixed diameter $r\in (0,\infty)$ is obtained by glueing at their respective roots two...
20. Rate of convergence to equilibrium of fractional driven stochastic differential equations with some multiplicative noise
Fontbona, Joaquin; Panloup, Fabien
We investigate the problem of the rate of convergence to equilibrium for ergodic stochastic differential equations driven by fractional Brownian motion with Hurst parameter $H>1/2$ and multiplicative noise component $\sigma$. When $\sigma$ is constant and for every $H\in(0,1)$, it was proved by Hairer that, under some mean-reverting assumptions, such a process converges to its equilibrium at a rate of order $t^{-\alpha}$ where $\alpha\in(0,1)$ (depending on $H$). The aim of this paper is to extend such types of results to some multiplicative noise setting. More precisely, we show that we can recover such convergence rates when $H>1/2$ and the inverse of...
|
{}
|
# How do you divide fractions with whole numbers?
## How do you divide fractions with whole numbers?
We can divide fractions by whole numbers by multiplying the bottom of the fraction by the whole number. For example, 1/2 ÷ 3 = 1/6, since 2 × 3 = 6: we divide the fraction by 3 by making the bottom of the fraction 3 times larger.
What is 3/8th as a fraction?
6/16
Decimal and Fraction Conversion Chart
| Fraction | Equivalent Fractions | Decimal |
|----------|----------------------|---------|
| 1/8      | 2/16                 | .125    |
| 3/8      | 6/16                 | .375    |
| 5/8      | 10/16                | .625    |
| 7/8      | 14/16                | .875    |
### How do you find 1/4 of a fraction?
Answer: To find 1/4 of a number, divide the number by 4; in the example asked about, this gives 80.
What is whole number division?
Division is a mathematical operation, written using the symbol ÷ , that can be thought of in two ways: a÷b is the size of each group when a objects are divided into b groups of equal size, OR a÷b is the number of groups when a objects are divided into groups of b objects each.
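For example: 12 ÷ 3 = 4, read either as 12 objects split into 3 equal groups of 4 each, or as 12 objects split into groups of 3, giving 4 groups.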
#### What is 4 divided by half?
eight
First take the reciprocal of one half, which is 2/1. Next, multiply the two numerators. Then, multiply the two denominators. In other words – four divided by one half = 4 × 2 = eight.
Can 3/8 be simplified?
Explanation: The fraction 3/8 is already in lowest terms, because the numerator is 3, a prime number, and whenever a fraction has a prime number in either its numerator or denominator, it cannot be reduced further unless that prime is a factor of the other term. Hope you understood.
## How do you multiply fractions?
There are 3 simple steps to multiply fractions
1. Multiply the top numbers (the numerators).
2. Multiply the bottom numbers (the denominators).
3. Simplify the fraction if needed.
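For example: 2/3 × 4/5 = (2 × 4)/(3 × 5) = 8/15, which is already in simplest form.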
What is 3/4 of a whole?
You can write it as a decimal: 0.75.
### How do you divide fractions?
The first step to dividing fractions is to find the reciprocal (reverse the numerator and denominator) of the second fraction. Next, multiply the two numerators. Then, multiply the two denominators. Finally, simplify the fractions if needed.
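For example: 3/4 ÷ 2/5 = 3/4 × 5/2 = 15/8.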
|
{}
|
# Spoiler code and text doesn't appear at the same time
Code and text don't appear at the same time.
Second First
It's a bit odd, and Code Golf has a better reason to use code in spoiler text than any site on the SE network :)
EDIT: It appears this is not the first problem with spoiler text and code.
That problem was fixed in April 2014.
I also found spoiler transitions are strange here which describes the same problem with transitions, though it was (incorrectly) closed as a duplicate of another question.
• Nifty! Not that I have ever had any need to put code in a spoiler block. What's the use case? – dmckee --- ex-moderator kitten May 25 '14 at 0:03
• @dmckee, I noticed it most recently in this creative answer to "Write a piece of code that quits itself" – Paul Draper May 25 '14 at 0:13
• I've noticed it before but never really paid much attention to it as I assumed it was my computer & not the site. – Kyle Kanos May 25 '14 at 1:15
• It is true that this happens, but I don't think it affects its functionality that much, I mean, code appears faster than text, but so what? – user12205 May 25 '14 at 2:15
• @ace, so it looks bad. A bug, not a catastrophe. Only matters if you care about UI. – Paul Draper May 25 '14 at 3:14
|
{}
|
# Solving Differential Equation
• February 21st 2011, 03:59 PM
sparky
Solving Differential Equation
Hi,
I am struggling to finish this question and would love some guidance please:
Question: Solve for the following differential equation for the original equation: $y' = \frac{-2x-y^2}{2xy-5}$
$y' = \frac{-2x-y^2}{2xy-5}$
$\frac{dy}{dx}= \frac{-2x-y^2}{2xy-5}$
$(2xy-5)dy = (-2x-y^2)dx$
$2xydy - 5dy = -2xdx -y^2dx$ (assuming that I'm correct so far, what do I do now?)
• February 21st 2011, 04:58 PM
topsquark
Quote:
Originally Posted by sparky
Question: Solve for the following differential equation for the original equation: $y' = \frac{-2x-y^2}{2xy-5}$
Are you asked to solve this or put it into a more standard form?
-Dan
• February 21st 2011, 05:16 PM
sparky
Quote:
Originally Posted by topsquark
Are you asked to solve this or put it into a more standard form?
-Dan
Thanks for the reply. I'm asked to solve it "for the original equation". This sounds a little confusing to me.
• February 21st 2011, 05:28 PM
topsquark
Quote:
Originally Posted by sparky
Hi,
I am struggling to finish this question and would love some guidance please:
Question: Solve for the following differential equation for the original equation: $y' = \frac{-2x-y^2}{2xy-5}$
$y' = \frac{-2x-y^2}{2xy-5}$
$\frac{dy}{dx}= \frac{-2x-y^2}{2xy-5}$
$(2xy-5)dy = (-2x-y^2)dx$
$2xydy - 5dy = -2xdx -y^2dx$ (assuming that I'm correct so far, what do I do now?)
Okay, I saw a similar example in one of your other posts. You are good for as far as you have gotten. Regroup:
$(2xydy + y^2dx) - 5dy + 2xdx = 0$
The $2xy dy + y^2 dx$ is a perfect differential. What is it?
-Dan
• February 21st 2011, 06:47 PM
sparky
Quote:
Originally Posted by topsquark
You are good for as far as you have gotten.
Ok this is very encouraging.
Quote:
The $2xy dy + y^2 dx$ is a perfect differential. What is it?
I am still trying to figure out why you grouped together $(2xydy + y^2dx)$ rather than $(2xydy-5dy)$. Is it because we are differentiating with respect to y (or is it x, I'm a little confused I must admit)?
• February 21st 2011, 07:20 PM
topsquark
Quote:
Originally Posted by sparky
Ok this is very encouraging.
I am still trying to figure out why you grouped together $(2xydy + y^2dx)$ rather than $(2xydy-5dy)$. Is it because we of differentiating with respect to y (or is it x, I'm a little confused I must admit)?
Actually I'm going to integrate over something different. Note that
$(2xydy + y^2dx) = d(xy^2)$
So when it is time to take integrals I will do it like this:
$\displaystyle \int d(xy^2) = xy^2 + C$
The reason why I grouped it the way I did was so I could get rid of the y dx and x dy terms.
-Dan
|
{}
|
Today I got confused about the low-pass filter transfer function. Let's assume we have a simple low-pass RC filter. It is well known that at f=fc we expect a -3dB (0.707) drop relative to Vin.
Everyone knows a LPF transfer function is:
$$H(s) = \frac{1}{1 + sRC}$$
On the other hand we know that:
$$s = 2\pi f \quad\text{and}\quad f_c = \frac{1}{2\pi RC}$$
Which results in:
$$H = \frac{1}{1 + 2\pi f RC} = \frac{1}{1 + f/f_c}$$
If we set f=fc we will have:
$$H = \frac{1}{1 + 1} = 0.5$$
But I expected 1/√2 = 0.707! I am sure I am missing something but cannot find it, because this differs from the formula that I knew:
$$\left|\frac{V_{out}}{V_{in}}\right| = \frac{1}{\sqrt{1 + (f/f_c)^2}}$$
• Your result is correct if the input to the filter is $e^{t/RC}$, a real exponential. – Alfred Centauri Oct 26 '13 at 16:07
The problem has come from the substitution of s for 2*PI*f.
s is a substitution for j*w
thus the equation actually is: 1/(1+j)
If you resolve this back to magnitude: 1+j = sqrt(1^2 + 1^2) = sqrt(2)
resulting in: 1/sqrt(2) = 0.707 as you expect
So take a low-pass filter: RC
simulate this circuit – Schematic created using CircuitLab
$$Vout = Vin\cdot\frac{\frac{1}{j\omega C}}{R + \frac{1}{j\omega C}}$$
$$Vout = Vin\cdot\frac{1}{j\omega RC + 1}$$
$$\frac{Vout}{Vin} = \frac{1}{j\omega RC + 1}$$
$$\left|\frac{Vout}{Vin}\right| = \frac{1}{\sqrt{(\omega RC)^{2} + 1^{2}}}$$
From a magnitude point of view.
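A quick numeric check of the -3 dB point (my addition, not part of the original answer; the component values are arbitrary):

import math

R, C = 1e3, 1e-6                  # example values: 1 kOhm, 1 uF
w_c = 1 / (R * C)                 # cutoff angular frequency
H = 1 / (1 + 1j * w_c * R * C)    # = 1/(1 + j)
print(abs(H))                     # 0.7071..., i.e. about -3 dB
print(20 * math.log10(abs(H)))    # ~ -3.01 dB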
• I thought s=jwf not j*w. Is that? – Aug Oct 26 '13 at 14:09
• w (radians per second) is equal to 2*pi*f – JonRB Oct 26 '13 at 14:13
|
{}
|
# The indecomposable projective $\mathbb{F}_pG$-module with $U/UJ\cong \mathbb{F}_p$
Let:
1. $G$ be a finite group;
2. $p$ be prime;
3. $J$ be the Jacobson radical of $\mathbb{F}_pG$.
A paper I'm trying to read mentions the following object:
The indecomposable projective $\mathbb{F}_pG$-module $U$ with $U/UJ\cong\mathbb{F}_p$
It is then also claimed that $U$ is a direct summand of $\mathbb{F}_pG$.
1. Why does this object exist?
2. Why is it unique?
3. Why is it a direct summand of $\mathbb{F}_pG$.
I know all the definitions of the terms mentioned, but not experienced with some of them.
The paper is "The Presentation Rank of a Direct Product of Finite Groups" / Cossey, Gruenberg, Kovacs (Journal Of Alegebra 28, 597-603 (1974)).
EDIT: If relevant, it might be understood from context that $p$ divides $|G|$, but I'm not sure.
-
Since $\mathbb{F}_p[G]$ is Artinian, $\mathbb{F}_p[G]/J$ is semisimple Artinian. Hence every submodule of $\mathbb{F}_p[G]/J$ is a direct summand. In particular $\mathbb{F}_p\subseteq \mathbb{F}_p[G]/J$ is the image of an idempotent $e\in \mathbb{F}_p[G]/J$. By the descending chain condition and Nakayama's lemma, $J$ is nilpotent. Hence we can lift $e$ to an idempotent $\hat{e}\in \mathbb{F}_p[G]$. The image of $\hat{e}$ is probably the module you want. Unfortunately, it isn't clear how this approach gives uniqueness. – Jeff Tolliver Nov 21 '12 at 1:26
Alternatively: every simple module $S$ has a projective cover $P$ which is a direct summand of $\mathbb{F}_pG$, and is such that $P/PJ \cong S$. Furthermore the multiplicity of $P$ as a direct summand of $\mathbb{F}_pG$ is $\dim S$. I'd recommend you read something like Benson - Representations and Cohomology I or Alperin - Local representation theory or one of the Curtis-Reiner books to learn the relevant theory. – mt_ Nov 21 '12 at 10:29
@mt_: Sorry for the long delay in my reply. Can you point me to the place in one of those books which explains the relevant material to understand your answer? – user3533 May 6 '13 at 8:31
@user3533 Benson chapters 1 (background on the representation theory of algebras, Wedderburn's theorem, existence and uniqueness of projective covers, projectives are summands of free modules,...) and 3 (group representations). – mt_ May 6 '13 at 8:47
@mt_: Thanks. I found the answer in Benson chapter 1, p. 12, in the discussion under Corollary 1.7.4. – user3533 May 11 '13 at 23:10
|
{}
|
# To qualify for federal funding, a local school district must
To qualify for federal funding, a local school district must [#permalink]
07 Mar 2013, 10:02
Difficulty: 15% (low)
Question Stats: 79% (02:18) correct, 21% (01:02) wrong, based on 67 sessions
To qualify for federal funding, a local school district must keep their ratio of certified teachers to non-certified teachers above $$9:2$$. If the school district employs a total of 600 teachers, what is the maximum number of non-certified teachers they can employ and qualify for federal funding?
(A) 99
(B) 109
(C) 111
(D) 116
(E) 133
Magoosh GMAT Instructor
Re: To qualify for federal funding, a local school district must [#permalink]
07 Mar 2013, 10:39
megafan wrote:
To qualify for federal funding, a local school district must keep their ratio of certified teachers to non-certified teachers above $$9:2$$. If the school district employs a total of 600 teachers, what is the maximum number of non-certified teachers they can employ and qualify for federal funding?
(A) 99
(B) 109
(C) 111
(D) 116
(E) 133
Dear megafan:
In all honesty, this is a tricky problem to do without a calculator. Remember, there's no calculator on the GMAT Quant. Dividing 600 by 11 is quite a challenge for folks to do in their heads if they are not mental math pros. It seems to me the ratio could be something like 11/4 and nothing would be lost conceptually in the problem, but it would be a slam-dunk for mental math. That, in my view, is closer to what the GMAT expects.
Mike
Mike McGarry
Magoosh Test Prep
Re: To qualify for federal funding, a local school district must [#permalink]
07 Mar 2013, 10:40
Hello,
Let me try helping you with this one.
This problem states that the ratio of certified teachers to non certified teachers must be kept above 9:2 for the school to qualify for federal funding.
The total number of teachers is 600.
We need to find the maximum number of non certified teachers that can be employed so that the school still qualifies for federal funding.
Now, when we try to increase the ratio above the minimum stated ratio, our split up of teachers into the two categories changes so that the number of non certified teachers reduces.
For example, let us take the ratio to be the minimum ratio, 9:2. Thus the number of certified teachers=9x and the number of non certified teachers is 2x.
9x+2x=600
x=54.54
2x~109
If we increase the ratio to 10:2, our total number of teachers becomes 12x and the number of non-certified teachers becomes 100.
12x=600
x=50
2x=100
Thus, any increase in the ratio will work towards reducing the number of non certified teachers. The minimum ratio of certified to non-certified teachers needs to be used in order to get the maximum number of non-certified teachers.
Hence, the maximum number of non certified teachers that the school can employ and still qualify for federal funding is 109.
Hope this helps! Let me know in case of any further question.
megafan wrote:
To qualify for federal funding, a local school district must keep their ratio of certified teachers to non-certified teachers above $$9:2$$. If the school district employs a total of 600 teachers, what is the maximum number of non-certified teachers they can employ and qualify for federal funding?
(A) 99
(B) 109
(C) 111
(D) 116
(E) 133
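A quick brute-force check of the answer above (my sketch, not part of the original thread):

# Largest number of non-certified teachers n (out of 600 total) that keeps
# certified : non-certified strictly above 9 : 2.
max_n = max(n for n in range(1, 600) if (600 - n) / n > 9 / 2)
print(max_n)  # 109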
Math Expert
Re: To qualify for federal funding, a local school district must [#permalink]
08 Mar 2013, 03:39
megafan wrote:
To qualify for federal funding, a local school district must keep their ratio of certified teachers to non-certified teachers above $$9:2$$. If the school district employs a total of 600 teachers, what is the maximum number of non-certified teachers they can employ and qualify for federal funding?
(A) 99
(B) 109
(C) 111
(D) 116
(E) 133
"Stolen" GMAT Prep question:
Quote:
At a certain university, the ratio of the number of teaching assistants to the number of students in any course must always be greater than 3:80. At this university , what is the maximum number of students possible in a course that has 5 teaching assistants?
A. 130
B. 131
C. 132
D. 133
E. 134
Discussed here: at-a-certain-university-the-ratio-of-the-number-of-teaching-79240.html
|
{}
|
Obtain one vcf file of shared SNPs from input files with different samples using vcf-isec (vcftools)
5.2 years ago
weedy23 ▴ 70
I am new to Linux and programming and am trying to use vcftools. I have 3 vcf files; each one is a different population (i.e. with no shared individuals between the files). I am trying to use vcf-isec to merge the 3 files and end up with one vcf file that contains only the SNPs that are present in all 3 files. I have tried the following code:
vcf-isec -n =3 file1.vcf.gz file2.vcf.gz file3.vcf.gz -f -c > CombinedPops.vcf
and without -c :
vcf-isec -n =3 file1.vcf.gz file2.vcf.gz file3.vcf.gz -f > CombinedPops.vcf
but I keep ending up with one file with only the individuals from the first input file. It also gives me a warning that "the number of sample columns is different", but I read in another post that -f forces vcf-isec to output the file regardless. Could this warning be why I can't get a file with ALL the individuals listed? Can vcf-isec even do this?
Although I have read the vcf-isec documentation, I am still not sure exactly what the difference between the -c and -o commands are, which may be part of my problem.
Any help is greatly appreciated!
vcftools vcf • 5.0k views
5.2 years ago
venu 6.8k
vcf-isec -n +3 A.vcf.gz B.vcf.gz C.vcf.gz | bgzip -c > out.vcf.gz
This gives a vcf file containing only the variants present in all the input vcf files (shared by all 3 VCF files). The -f flag should be included to force the program to run despite the different-column-name errors. On the other hand, if you want to merge 3 vcf files into a single vcf file, use vcf-merge.
I don't think this vcftools program can separate SNPs and indels, but you can use vcf-annotate
zcat file.vcf.gz | vcf-annotate --fill-type | bgzip -c > out.vcf.gz
This program includes variant TYPE field in the last column of your vcf file. Then create a new VCF file with SNPs. And finally I don't find any -o flag with these programs.
Hi venu, thanks for your help. The vcf-isec code you wrote is basically what I did, but I just specified exactly 3 files rather than 3 or more, and an uncompressed output instead. However, this gave me a file with only the individuals in the first file in it (although it did give me the loci found only in all three files). I have looked at vcf-merge but I think this produces a file with ALL loci? I only want the ones common to all the specified files. I don't have indels, my data is simple SNP data, but I will look further at vcf-annotate. The -o flag is mentioned here: http://vcftools.sourceforge.net/perl_module.html#vcf-isec, under "Read More".
My bad. I edited. So you need SNPs shared by all 3 files (same chr#, position, base change ..etc)? but not as vcf-isec do?
Yep exactly.
Hi weedy23, did you finally manage to solve the issue? I am having the same problem as you do...
Hi, sorry I only just saw your comment. I ended up using vcf-merge instead. However, it included ALL loci in the output file, not just loci present in all the files. So I had to go through the output file and delete all the loci that were missing for one or more of the populations. Not ideal but it didn't take too long in the end if you sort the file. Good luck!
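For anyone wanting to script that cleanup step, a possible sketch (added for reference, not what I actually ran; note that --max-missing 1 is stricter than "typed in every population", since it also drops any site where a single individual has a missing genotype):
vcf-merge file1.vcf.gz file2.vcf.gz file3.vcf.gz | bgzip -c > merged.vcf.gz
vcftools --gzvcf merged.vcf.gz --max-missing 1 --recode --recode-INFO-all --out shared_loci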
How do I extract the variants specific to A? Is the following command correct?
vcf-isec -c A.vcf.gz B.vcf.gz C.vcf.gz > specific_for_A.vcf
|
{}
|
## Posts
Showing posts from April, 2013
### Today in History: First Player Out
Jason Collins, an 11-year veteran of the NBA, just became the first openly gay athlete in any of the major US sports. In his self-penned Sports Illustrated article, he writes:
I'm a 34-year-old NBA center. I'm black. And I'm gay. I didn't set out to be the first openly gay athlete playing in a major American team sport. But since I am, I'm happy to start the conversation. I wish I wasn't the kid in the classroom raising his hand and saying, "I'm different." If I had my way, someone else would have already done this. Nobody has, which is why I'm raising my hand.
### MC Monte Carlo - Gridding It Up In the Likeli-Hood
At the end of the term in my Ay117 Astrostats course, the students gave 15-minute oral presentations or poster presentations describing their final projects. The morning of presentations was organized like a Keck Science Meeting, so that students not only learned the primary course material, but also gained valuable practice in giving scientific presentations.
Of the many highlights from this year's session was Scott Barenfeld's performance of his latest single from his upcoming Astrostats hip hop album. It was most certainly the best rap performance of the day.
Griddin' it up in the Likeli-hood From the upcoming debut album Straight Outta Inverse Compton
Scott Barenfeld (CIT 1st year) March 19, 2013
They call me MC Monte Carlo Be runnin' my code 'till tomorrow I take the random walk So don't sit there and squawk If you need my routines, you can borrow. I doin' my Bayesian stat stuff Comin' up with posteriors off the cuff It's Bayes' Theorem yo …
### Minerva update: The eagle has landed
On Fri, Apr 12, 2013 at 4:39 PM, Jon Swift wrote:
Status update:
Our CDK700 is safely in place now! Rick and Kevin are
hooking up all the cables and our camera in prep for
tonight. The telescope fit nicely on the mounting bolts
(after a few strikes with a mallet) and the clearance is to
spec. Skies are clear, and if all goes well we'll have a
pointing solution soon after dark. -----------
Telescope 1 was delivered and installed successfully Friday afternoon! That we were able to get on the sky immediately is a testament to the amazing engineering of Planewave, and the wisdom of going with top-of-the-line, yet off-the-shelf telescopes for our project. Telescope 2 will soon be rolling down the assembly line.
For now, testing of Telescope 1, the science camera and the fiber acquisition unit (FAU) have begun in earnest. Kristina Hogstrom (CIT Aerospace Engineering second-year) will be teaching the telescope to operate robotically, while Phil Muirhead and Mike Bottom (CIT Astro third-y…
### What the IAU should have written
At least to Dr. Wright's mind:
http://www.personal.psu.edu/jtw13/blogs/astrowright/2013/04/what-the-iau-should-have-written.html
An excerpt:
In the light of recent events, where the possibility of buying the rights to ~~name~~ nominating or voting on popular names for exoplanets has been advertised, the International Astronomical Union (IAU) wishes to inform the public that such schemes have no bearing on the official naming process: astronomers do not use such names and the international astronomical community currently has no plans to do so. The IAU wholeheartedly welcomes the public's interest to be involved in recent discoveries, but would like to strongly stress the importance of having a unified naming procedure for official names and designations.
### Is Uwingu fishy? I really don't think so.
I've read/heard a lot of negative comments from astronomers regarding Uwingu. I suspect this is because they have bad associations for any concept involving money for naming rights of astronomical objects. There are a lot of shady sites out there that supposedly let the public buy names for stars, all for profit and with no scientific interest in mind. But I'd like to assure you, dear readers, that Uwingu is no such organization.
The Uwingu website makes their mission and methods abundantly clear. They are compiling a "baby name book" of unofficial designations for exoplanets that may become unofficial monickers, or even eventually official names if the IAU ever gets into the business of officially sanctioning exoplanet names. But nowhere on the site have I seen evidence that they are misleading the public in how all of this will actually work. Under "About us" they state:
Funding great science and science education doesn’t take a lot of money, but it requ…
### How far away is Mars?
This far (2-minute interactive tour, click arrow at bottom of page): http://www.distancetomars.com/
### Something strange going on in the IAU
Prof. Jason Wright has the story over at his blog. If you care about the actions of the IAU---in particular a potential abuse of procedure---please read the post in full and pass it along.
It all starts with a company called Uwingu that wants to sell the ability to propose exoplanet names (think of an exoplanet baby name book) in order to raise funding for exoplanetary science. As stated on the Uwingu site, "We're asking the public to create a vast list of planet names for astronomers to choose from."
In a recent press release, the IAU responds:
In the light of recent events, where the possibility of buying the rights to name exoplanets has been advertised, the International Astronomical Union (IAU) wishes to inform the public that such schemes have no bearing on the official naming process. The IAU wholeheartedly welcomes the public’s interest to be involved in recent discoveries, but would like to strongly stress the importance of having a unified naming procedure. Interes…
### Compilation of mental health posts
Several people have requested this list, so here you go. Enjoy! And point me to other sources, please.
Performance Enhancing Drugs
Impostor Syndrome
Work-life balance through working efficiently
- Part 1
- Part 2
- Part 3
Professing with Depression
- In good company: With tenure but not without troubles
- It's not you, it's a disease
Zen and the Art of Astronomy Research
### Email Charter (NNTR)
Email is eating all of our lives. Let's all agree to a few simple ground rules:
http://emailcharter.org/
### I'm a lucky teacher!
Last term I taught Ay117: Statistics and Data Analysis. I wasn't expecting to have to teach the class in the winter; I thought I was teaching in the Spring. So I scheduled a bunch of travel in the Winter, most of it related to my job decision. Fortunately, I had the best TA on campus last term, Aaron Wolf. Aaron really carried me, subbing for me about 1/4 of the lectures. He also did the grading and office hours, and the students absolutely loved him.
Aaron is my favorite type of human: extraordinarily smart, yet humble and extremely personable. These characteristics shone through in his teaching last term. Oh, did I mention he did all this while writing his thesis?! As a last-year grad student, he didn't even have to TA.
From: Registrar's Office REGIS Sent: Tuesday, April 09, 2013 9:00 AM
To: Wolf, Aaron S. Subject: Aaron, thank you for being an excellent TA!
Dear Aaron,
It has come to our attention that you were one of the highest rated TAs in the TQFR for the past term. I hope…
### Planets and planetesimals around kappa CrB
My collaborators and I have just published a paper announcing a newly detected planet and dust disk around the subgiant kappa Coronae Borealis. I've written about kappa CrB previously here. Back then I only knew of one planet orbiting the star. With additional RV measurements, we discovered a second planet in a long-period orbit. The period of the second planet is so long that we only see a portion of the orbit, which looks like a linear RV "trend," or constant acceleration (scientists: think first-order Taylor expansion of a Keplerian orbit).
Additionally, my collaborators Amy Bonsor and Grant Kennedy used the Herschel space telescope to observe the star in the far infrared. At these long wavelengths, the star is very faint, but any warm material around the star will be bright. Amy detected extended emission around the star consistent with a flattened disk of warm dust grains, similar to our Kuiper belt, only much larger and more massive.
Here's the link to the pre…
### This Week's Astro Nutshell: It's full of stars!
Each week I work with first-year grad students Marta and Becky on "order of magnitude" problems at the blackboard. I put that in quotes because we tend to do many more scaling arguments than true OoM. The idea is for them to draw on what they've picked up in class and apply it to common problems that arrise in astronomy.
Suppose you have a magnitude-limited survey such that all stars have magnitudes $m < m_{\rm max}$. What will be the most common type (mass) of star in your survey?
This question is pretty much the same as "What types of stars visible in the night sky are most numerous?" This type of problem was first addressed by Swedish astronomer Gunnar Malmquist back in the 20's, which led to what we now refer to as the Malmquist Bias.
Initially, one might think: well red dwarfs are the most common stars in the Galaxy, so M dwarfs will be the most common in our survey (or sky). However, M dwarfs are very faint (low luminositi…
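(A hedged sketch of the standard argument, added to fill in where the excerpt cuts off: for stars of absolute magnitude $M$, the survey reaches out to $d_{\rm max} \propto 10^{(m_{\rm max} - M)/5}$, so the volume surveyed scales as $d_{\rm max}^3 \propto 10^{0.6(m_{\rm max} - M)}$. The number of stars of each type in the survey is then roughly $n(M) \times 10^{0.6(m_{\rm max} - M)}$, where $n(M)$ is the true space density, so every magnitude of extra luminosity buys about a factor of four in surveyed volume, which can outweigh the higher space density of intrinsically faint stars.)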
### TESS is GO!
The Transiting Exoplanet Survey Satellite (TESS) has been selected as NASA's next Explorer mission. TESS
is like an all-sky Kepler. While the Kepler telescope stares at hundreds of thousands of faint stars in one patch of the sky, TESS will look at an order-of-magnitude more stars (2.5 million!), and it will focus on those much closer to home. This mission is a big part of my future science plans on a ten-year time scale, so I'm extremely excited that it was selected. Go NASA!
-------
WASHINGTON -- NASA's Astrophysics Explorer Program has selected two missions for launch in 2017: a planet-hunting satellite and an International Space Station instrument to observe X-rays from stars.
The Transiting Exoplanet Survey Satellite (TESS) and Neutron Star Interior Composition Explorer (NICER) were among four concept studies submitted in September 2012. NASA determined these two offer the best scientific value and most feasible development plans.
TESS will use an array of telescopes to…
### Quick-twitch muscles: theory and practice
Here are the muscles that help humans jump. Most important are the quick-twitch muscle cells. This is theory.
The video below shows the theory in practice, as seen in last night's Clippers game. I was in the crowd, up near the rafters, but it was still a plenty good view of DeAndre Jordan tearing up the Suns. Poor Jermaine O'Neal at the 1:50 mark. Not quite Mosgov'd, but close...
NOTE: Cover the kids' ears near the 0:30 mark. We heard it in the stadium, too, when the basket mic picked up O'Neal exclaiming "Oh sh*t!" when he saw the lob going up over his head. Guard yo' grill!
### Kepler meets Einstein when a stellar skeleton bends space-time
Gravity-Bending Find Leads to Kepler Meeting Einstein
This is a press release by my postdoc, Dr. Phil Muirhead. Last summer he compiled a list of all of the planet candidates around the M dwarfs (red dwarfs) targeted by the NASA Kepler mission. One of our summer students, Andrew Vanderburg, noticed that the light curve of one of the candidate transiting Jupiters looked very strange. If a hot Jupiter transits a star, it should take about 20 minutes for the planet to move across the limb of the star, causing the light to go from the full, out-of-transit level, to the minimum level during a full transit (eclipse). Here's what the light curve of Kepler Object of Interest number 256 looks like (KOI-256):
Where the light level first decreases is called "ingress," and for KOI-256 the ingress time is about a minute, instead of 20 minutes. Weird! After pondering this a bit, Andrew and Phil realized that the ingress time implies an Earth-sized object. But why does an Earth-sized ob…
|
{}
|
# Loss factor of steel pipes
The loss in steel pipes is given in the IEC by two empirical formulae, one for cables where the cores are bound in close trefoil formation and the other where the cores are placed in a more open configuration (cradled) on the bottom of the pipe. We calculate the ratio of the pipe inside diameter to the cable outside diameter; if this ratio is larger than 3, the cradled configuration is assumed, otherwise the trefoil (triangular) configuration.
To be noted:
• The loss-factor is per phase.
• The IEC notes that these formulae have been empirically obtained in the United States of America and at present apply only to steel pipe sizes and steel types used in that country.
• The formulae given apply to a frequency of 60 Hz. For lower frequencies, the induced current in the ferromagnetic steel pipe will be lower and for higher frequencies it will be higher. So for other frequencies, each formula is multiplied by a correction factor. For 50 Hz, the factor is 0.760 and for 16.7 Hz it is 0.147.
Symbol
$\lambda_{3}$
Formulae
- $\lambda_{3} = \frac{1.0 \cdot 10^{-5} \left(- 0.001485 Di_{d} + 0.0115 s_{c}\right)}{R_{c}}$ (three single-core cables in trefoil formation in a magnetic steel pipe)
- $\lambda_{3} = \frac{1.0 \cdot 10^{-5} \left(0.00226 Di_{d} + 0.00438 s_{c}\right)}{R_{c}}$ (three single-core cables in cradled formation in a magnetic steel pipe)
- $\frac{\sqrt{15}\, f^{2} \sqrt{\frac{1}{f}}}{1800}$ (multiplication factor for frequencies other than 60 Hz)
- $\lambda_{3} = 0$ (steel pipe is non-magnetic)
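As a minimal sketch of how these formulae combine in a calculation (an added illustration; the function name, the cable outside diameter argument De used for the ratio test, and the unit conventions are assumptions rather than part of the IEC text):

import math

def pipe_loss_factor(Di_d, s_c, R_c, De, f=60.0, magnetic=True):
    # Loss factor lambda_3 per phase for cables in a steel pipe,
    # following the empirical formulae listed above.
    if not magnetic:
        return 0.0
    if Di_d / De > 3:                                    # open (cradled) configuration
        lam = 1.0e-5 * (0.00226 * Di_d + 0.00438 * s_c) / R_c
    else:                                                # close trefoil configuration
        lam = 1.0e-5 * (-0.001485 * Di_d + 0.0115 * s_c) / R_c
    # frequency correction; equals 1.0 at 60 Hz, about 0.76 at 50 Hz and 0.147 at 16.7 Hz
    lam *= math.sqrt(15.0) * f ** 2 * math.sqrt(1.0 / f) / 1800.0
    return lam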
Related
$Di_{d}$
$R_{c}$
$s_{c}$
Used in
$f_{SHF}$
$I_{c}$
$T_{eq}$
$W_{h}$
$W_{p}$
|
{}
|
# Generative Adversarial Networks (GAN) in Pytorch
This week is a really interesting week in the Deep Learning library front. There are two new Deep Learning libraries being open sourced: Pytorch and Minpy.
Those two libraries differ from existing libraries like TensorFlow and Theano in how the computation is expressed. In TensorFlow and Theano, we have to construct our computational graph symbolically before running it. In a sense, it is like writing a whole program before running it. Hence, the degree of freedom that we have in those libraries is limited. For example, to write a loop, one needs to use the tf.while_loop() function in TensorFlow or scan() in Theano. Those approaches are less intuitive compared to imperative programming.
Enter Pytorch. It is a port of Torch for Python. The programming style of Pytorch is imperative, meaning that if we are already familiar with using Numpy to code our algorithm up, then jumping to Pytorch should be a breeze. One does not need to learn symbolic mathematical computation, as in TensorFlow and Theano.
With that being said, let’s try Pytorch by implementing Generative Adversarial Networks (GAN). As a reference point, here is the TensorFlow version.
Let’s start by importing stuffs:
import torch
import torch.nn.functional as nn
import torch.optim as optim
import numpy as np
from torch.autograd import Variable   # needed for the Variable(...) wrappers used below

# mnist is assumed to be an already-loaded MNIST reader exposing train.images,
# train.labels and train.next_batch(), e.g. TensorFlow's input_data helper.
mb_size = 64
Z_dim = 100
X_dim = mnist.train.images.shape[1]
y_dim = mnist.train.labels.shape[1]
h_dim = 128
lr = 1e-3
Now let’s construct our Generative Network $$G(z)$$:
def xavier_init(size):
    in_dim = size[0]
    xavier_stddev = 1. / np.sqrt(in_dim / 2.)
    return Variable(torch.randn(*size) * xavier_stddev, requires_grad=True)

Wzh = xavier_init(size=[Z_dim, h_dim])
bzh = Variable(torch.zeros(h_dim), requires_grad=True)   # biases zero-initialised (one common choice)
Whx = xavier_init(size=[h_dim, X_dim])
bhx = Variable(torch.zeros(X_dim), requires_grad=True)
def G(z):
    h = nn.relu(z @ Wzh + bzh.repeat(z.size(0), 1))
    X = nn.sigmoid(h @ Whx + bhx.repeat(h.size(0), 1))
    return X
It is awfully similar to the TensorFlow version, so what is the difference then? It is subtle without more hints, but basically those variables Wzh, bzh, Whx, bhx are real tensors/ndarrays, just like in Numpy. That means that if we evaluate print(Wzh), the value is immediately shown. Also, the function G(z) is a real function, in the sense that if we input a tensor, we immediately get the return value back. Try doing those things in TensorFlow or Theano.
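For instance (a small added check, not from the original post):

z_try = Variable(torch.randn(mb_size, Z_dim))
print(Wzh)               # the tensor's values are printed immediately
print(G(z_try).size())   # torch.Size([64, 784]) with the MNIST dimensions above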
Next is the Discriminator Network $$D(X)$$:
Wxh = xavier_init(size=[X_dim, h_dim])
bxh = Variable(torch.zeros(h_dim), requires_grad=True)
Why = xavier_init(size=[h_dim, 1])
bhy = Variable(torch.zeros(1), requires_grad=True)

def D(X):
    h = nn.relu(X @ Wxh + bxh.repeat(X.size(0), 1))
    y = nn.sigmoid(h @ Why + bhy.repeat(h.size(0), 1))
    return y
Attentive readers will notice that, unlike in a TensorFlow or Numpy implementation, adding the bias to the equation is non-trivial in Pytorch. The repeat() call is a workaround, since Pytorch has not implemented a Numpy-like broadcasting mechanism yet. If we do not use this workaround, X @ W + b will fail because, while X @ W is an mb_size x h_dim tensor, b is only a 1 x h_dim vector!
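To see the shapes line up (again a small added check, not from the original post):

X_batch = Variable(torch.from_numpy(mnist.train.images[:mb_size]))  # one example batch
print((X_batch @ Wxh).size())                  # mb_size x h_dim
print(bxh.repeat(X_batch.size(0), 1).size())   # also mb_size x h_dim, so the addition lines up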
Now let’s define the optimization procedure:
G_params = [Wzh, bzh, Whx, bhx]
D_params = [Wxh, bxh, Why, bhy]
params = G_params + D_params
G_solver = optim.Adam(G_params, lr=lr)  # Adam as an example optimizer choice
D_solver = optim.Adam(D_params, lr=lr)
While at this point, in TensorFlow we just need to run the graph with G_solver and D_solver as the entry points, in Pytorch we need to tell the program what to do with those instances. So, just like in Numpy, we run the “forward-loss-backward-update” loop:
for it in range(100000):
    # Sample data
    z = Variable(torch.randn(mb_size, Z_dim))
    X, _ = mnist.train.next_batch(mb_size)
    X = Variable(torch.from_numpy(X))

    # Targets for the binary cross-entropy losses used below
    ones_label = Variable(torch.ones(mb_size, 1))
    zeros_label = Variable(torch.zeros(mb_size, 1))

    # Discriminator forward-loss-backward-update
    ## some codes

    # Generator forward-loss-backward-update
    ## some codes
So first, let’s define the $$D(X)$$’s “forward-loss-backward-update” step. First, the forward step:
# D(X) forward and loss
G_sample = G(z)
D_real = D(X)
D_fake = D(G_sample)
D_loss_real = nn.binary_cross_entropy(D_real, ones_label)
D_loss_fake = nn.binary_cross_entropy(D_fake, zeros_label)
D_loss = D_loss_real + D_loss_fake
Nothing fancy, these are just Numpy-like operations. Next, the backward and update step:
D_loss.backward()
D_solver.step()
That is it! Notice, when we were constructing all the Ws and bs, we wrapped them with Variable(..., requires_grad=True). That wrapping is basically telling Pytorch that we care about the gradient of those variables, and consequently the torch.autograd module will calculate their gradients automatically, starting from D_loss. We could inspect those gradients by looking at the grad attribute of the variables, e.g. Wxh.grad.
Of course we could code up our own optimizer. But Pytorch has built-in optimizers ready in the torch.optim module. What it does is abstract the update process: at each iteration, we just need to call D_solver.step() to update our variables, now that the grad attribute of those variables has been computed by the backward() function.
As we have two different optimizers, we need to clear up the computed gradient in our computational graph as we do not need it anymore. Also, it is necessary so that the gradients won’t mix up with the subsequent call of backward() as D_solver shares some subgraphs with G_solver.
def reset_grad():
    for p in params:
        if p.grad is not None:      # parameters untouched so far have no grad yet
            p.grad.data.zero_()
We do similar things to implement the “forward-loss-backward-update” for $$G(z)$$:
# Housekeeping - reset gradient
reset_grad()

# Generator forward-loss-backward-update
z = Variable(torch.randn(mb_size, Z_dim))
G_sample = G(z)
D_fake = D(G_sample)
G_loss = nn.binary_cross_entropy(D_fake, ones_label)

G_loss.backward()
G_solver.step()

# Housekeeping - reset gradient
reset_grad()
In contrast to the symbolic style, in imperative computation we can just use the print() function basically anywhere and anytime we want, and it will immediately display the value. Other "non-trivial" operations like loops and conditionals also become much easier in Pytorch, just like in good old Python. Hence, one could argue that this way of programming is more "natural".
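As a small added illustration of that freedom (a hypothetical helper, not from the original post), a data-dependent branch in the middle of a forward pass is just ordinary Python:

def G_verbose(z):
    h = nn.relu(z @ Wzh + bzh.repeat(z.size(0), 1))
    if h.data.max() > 10:                      # plain Python conditional on a live value
        print('unusually large activation:', h.data.max())
    return nn.sigmoid(h @ Whx + bhx.repeat(h.size(0), 1))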
|
{}
|
# More On Geometric Langlands (a Grand Unified Theory of Math?)
After mentioning in the last posting that Witten is giving talks in Berkeley and Cambridge this week, I found out about various recent developments in Geometric Langlands, some of which Witten presumably will be talking about.
Edward Frenkel has put a draft version of his new book Langlands Correspondence for Loop Groups on his web-site. In the introduction he describes the Langlands Program as “a kind of Grand Unified Theory of Mathematics”, initially linking number theory and representation theory, now expanding into relations with geometry and quantum field theory. The book is nearly 400 pages long, and to be published by Cambridge University Press. Frenkel also notes that recent developments in geometric Langlands have focused on extending the story from the case of flat connections on a Riemann surface to connections with ramification (i.e. certain point singularities are allowed). He has a new paper out on the arXiv about this, entitled Ramifications of the geometric Langlands program, and he writes that:
in a forthcoming paper [by Gukov and Witten] the geometric Langlands correspondence with tame ramification is studied from the point of view of dimensional reduction of four-dimensional supersymmetric Yang-Mills theory.
The title of the forthcoming Gukov-Witten paper is supposedly “Gauge theory, ramification, and the geometric Langlands program.”
Presumably people attending Witten’s talks in Berkeley and Cambridge will get to hear about this new story for the ramified case. For the rest of us, on his web-site David Ben-Zvi has notes from talks this summer by Witten at Luminy where he describes some of this. Ben-Zvi also has an announcement of a series of lectures on geometric Langlands that he’ll be giving at Oxford next April. The summary of the lectures says that he’ll “describe upcoming work of Gukov and Witten which brings together geometric Langlands and link homology theory.” Link homology theory is also known as Khovanov homology, and I wrote about this two years ago here, advertising Atiyah’s speculation that there may be a 4d TQFT story going on, something I always have found very intriguing. Ben-Zvi has recently lectured on Khovanov homology at Austin, and began his lecture by saying that this material relates “themes in 21st century representation theory” to 4d TQFT. He goes on to cover some of the ideas about 4d TQFT and “categorification” that I was very impressed by when I heard about them from a talk by Igor Frenkel a few months ago (described here).
At first I thought Ed Frenkel’s claim that geometric Langlands was going to give a Grand Unified Theory of mathematics was completely over the top, but seeing how some of these very different and fascinating relations between new kinds of mathematics and quantum field theory seem to be coming together, I’m more and more willing to believe that investigating them will come to dominate mathematical physics in the coming years.
Update: Slides from Witten’s Berkeley lectures are here. And many thanks to David Ben-Zvi for the informative comments!
### 50 Responses to More On Geometric Langlands (a Grand Unified Theory of Math?)
1. A.J. says:
Hi Peter,
Witten has only delivered one lecture so far, and it was devoted to reviewing background material: mostly S-duality and a few words about topological twisting, all of which can be found in the Kapustin-Witten paper.
2. Peter Woit says:
Thanks A.J.!
It would be great if you could keep us informed about the rest of the lectures…
3. SFB says:
It sounds like they are doing interesting math, but leaving physics to the LQG crowd.
4. atrings says:
I agree with SFB on the “interesting math”, but not on the “LQG crowd”.
5. Richard says:
“At first I thought Ed Frenkel’s claim that geometric Langlands was going to give a Grand Unified Theory of mathematics was completely over the top, but seeing how some of these very different and fascinating relations between new kinds of mathematics and quantum field theory seem to be coming together, I’m more and more willing to believe that investigating them will come to dominate mathematical physics in the coming years.”
Perhaps a domination of mathematical physics, but the claim of a grand unification of mathematics is in fact way over the top unless you believe that mathematics is nothing but mathematical physics. It probably all depends on your own personal values, biases, points of view, and even whom you believe owns mathematics. Recall Lubos’ wild claim that someday mathematics will be completely subsumed by string theory?
6. onymous says:
I expect many of the people who have been working on geometric Langlands for years would be kind of shocked to be called mathematical physicists, Richard. Do they all instantly become mathematical physicists just because Witten got interested in what they’re doing?
7. Richard says:
Onymous – I don’t believe I said that.
8. onymous says:
Apologies, I misread Peter’s original statement — didn’t notice that he specifically singled out mathematical physics — and so misinterpreted your “…unless you believe that mathematics is nothing but mathematical physics” as an implication that geometric Langlands is mathematical physics. Never mind.
9. David Ben-Zvi says:
Hi and thanks for the references! (all notes on my page should be taken with many grains of salt..)
I should point out that the preprint by Gukov and Witten doesn’t actually talk at all about link homology, so my talk description was perhaps premature, but a connection between
geometric Langlands and some kind of link homology is to be expected following their ideas (cf Gukov’s Strings talk).
Cautis and Kamnitzer also have very interesting work in progress on such a relation.
After all, geometric Langlands is a very general categorification
program in representation theory, so one would expect it to
relate to the kinds of categorifications that give rise to Khovanov homology. There just aren’t too many fundamental structures associated with a semisimple Lie group, and they all connect..
Of course it’s a joke to speak of geometric Langlands as
a grand unified theory… but the Langlands duality is certainly
among the broadest themes in math, a kind of nonabelian
generalization of the Fourier transform, and it’s extremely exciting
that we can view it in the geometric setting as electric-magnetic duality in four dimensional gauge theories!
10. relativist says:
David Ben-Zvi says
“it’s extremely exciting that we can view it in the geometric setting as electric-magnetic duality in four dimensional gauge theories!”
Can you expand on that? Sounds very interesting.
11. urs says:
it’s extremely exciting that we can view it in the geometric setting as electric-magnetic duality in four dimensional gauge theories!
Can you expand on that? Sounds very interesting.
This is the insight of the Kapustin-Witten paper.
You can find a summary here.
12. urs says:
a kind of nonabelian generalization of the Fourier transform
Is it a nonabelian generalization, or isn’t it rather a categorification of the Fourier transform?
It seemed to me that much of Langlands can be nicely understood as taking place in categorified linear algebra. I have made remarks on how the Hecke operator looks like a 2-linear map for instance here.
13. urs says:
It would be great if you could keep us informed about the rest of the lectures…
If anyone feels like reporting on interesting lectures online, we have a guest account for that over on the n-Café.
For instance we had David Roberts guest-reporting from a lecture by Brian Wang here, similar to the many guest reports we had # at the string coffee table.
14. David Ben-Zvi says:
Urs:
“Is it a nonabelian generalization, or isn’t it rather a categorification of the Fourier transform?”
well it’s both.. the main difficulty is the nonabelian nature
rather than the categorification, and that is where Langlands tells
us what to do (in the geometric or classical, noncategorified setting).
Categorifications of the Fourier transform have been
used for almost 30 years I think (starting with the Fourier-Deligne
transform, see eg Laumon’s first ICM), and the geometric Langlands
program suggests that one can extend this to nonabelian settings
(G-bundles on curves).
By the way maybe this is an excuse to air one of my pet
peeves, the use of the term “Fourier-Mukai”
to refer to any functor between derived categories given
by an integral kernel.. I would be surprised if an analyst
referred to any map on function spaces given by
integration against a kernel (or any matrix) as a Fourier transform, and the same should hold in the categorified setting — in some
precise sense (due to Toen and which I’m badly paraphrasing)
all functors between derived categories are given by integral kernels!
“Honest” Fourier-Mukai transforms should have additional
structure and properties (for example taking convolution
to tensor product). Similarly not any duality is a T-duality!
15. urs says:
well it’s both.. […] Categorifications of the Fourier transform have been used […] and the geometric Langlands program suggests that one can extend this to nonabelian settings […]
Great, thanks! That’s what I was hoping some expert would say. Probably I just talked to the wrong experts so far!
Because each time I’d ask a question along the lines
“isn’t an eigenbrane just a categorified eigenvector in some 2-vector space”
the answer I’d get would be something like
“no, 2-vector space only appear after we categorify Langlands itself, like Kapranov discussed.”
http://golem.ph.utexas.edu/category/2006/10/quantization_and_cohomology_we_1.html#c005444
16. Peter Orland says:
I don’t know much category theory, but I thought that the
non-Abelian generalization of the Fourier transform is the character expansion (or Plancherel transform in the non-compact case) for functions on non-Abelian groups. Aside from a character formula, that is the simplest generalization.
Obviously, I am missing the point and something deeper is meant. Can anyone explain this to a dumb theoretical physicist?
17. Peter Orland says:
I just wanted to add that the sort of examples I mentioned don’t help much with non-Abelian duality in classical or quantum field theory. To perform a duality transformation, a zero-curvature condition is Fourier transformed and the parameter integrated over is the dual field. This only really works in the Abelian case. There are non-Abelian generalizations of duality done this way, but they are rather messy, and not obviously useful.
18. A.J. says:
Well, Witten finished his lectures, but ran out of time to say much of anything about ramification. There’s just too much information to be covered in (somewhat less than) 3 hours. Most of what he said is pretty well covered in David Ben-Zvi’s notes, and in Urs’s posts on the subject, or in the Kapustin-Witten paper for that matter.
We did get scans of his notes, so perhaps those will be available online one of these days.
19. urs says:
Can anyone explain this to a dumb theoretical physicist?
One way to get an intuition for what is going on with these Hecke operators and similar transformations is to consider the drastically oversimplified baby toy example situation where the underlying spaces are in fact just – finite sets.
A vector bundle over a finite set is then just an array of finitely many vector spaces.
Think of that as a vector whose entries are vector spaces. Such a beast is known as a (Kapranov-Voevodsky) 2-vector.
The categorification involved here is that which takes the monoid of complex numbers and replaces it by the monoidal category of complex vector spaces.
So we can imagine doing linear algebra with these vectors whose entries are vector spaces by replacing sums of complex numbers by direct sums of vector spaces and products of complex numbers by tensor products of vector spaces.
In particular, let X and Y be two finite sets and consider a vector bundle L over X x Y.
By the above, this is now like a |X| x |Y| matrix with entries being vector spaces. Using the above dictionary, we can define the categorified matrix product of L with a 2-vector over Y, simply by using the ordinary prescription for matrix multiplication but replacing sums of numbers by direct sums of vector spaces and products of numbers by tensor products of vector spaces.
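(As an added toy illustration of this prescription: writing V = (V_y) for a 2-vector over Y, the categorified product is (L · V)_x = ⊕_{y in Y} L_{x,y} ⊗ V_y, which is literally the usual matrix-times-vector formula with (+, ×) replaced by (⊕, ⊗).)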
One can convince oneself that this categorified action of a 2-matrix on a 2-vector can equivalently be reformulated in a more arrow-theoretic way as follows:
We have projections p1 and p2 from X x Y to X and to Y, respectively. This makes X x Y into a span
http://golem.ph.utexas.edu/category/2006/10/klein_2geometry_vi.html#c005232
Given a 2-vector V -> X over X, we may pull it back along p1 to X x Y, tensor the result componentwise with L and push the result of that back along p2.
This operation produces precisely the naive categorified matrix product that I mentioned above.
But the nice thing is that this pullback-tensor-pushforward along a “correspondence” like X x Y generalizes to vastly more interesting situations.
There is an entire zoo of well-known operations of this kind. The Fourier-Mukai transformation is one example. The Hecke transformation that appears in geometric Langlands is another.
In the above sense, all of these operations can be understood as linear maps on 2-vector spaces.
A description of what I just said, including some helpful diagrams and links to further material can be found here:
20. urs says:
Concerning the abelian vs. nonabelian categorified Fourier transform:
there is something called the “classical limit” of geometric Langlands, as decribed for instance here:
Pantev on Langlands, II
The Hecke operation in geometric Langlands is a generalization of the categorified Fourier transformation: it is a “2-linear map” in the sense of my comment above
http://www.math.columbia.edu/~woit/wordpress/?p=492#comment-19258
such that it coincides with the Fourier-Mukai transformation in this “classical limit”.
In other words, the Hecke operation is a deformation of the Fourier-Mukai transformation.
21. Bert Schroer says:
I never understood what is the relation of elliptic cohomology (not that I don't know what it presents mathematically since I have followed the area with an ever increasing distance since the days of the Atiyah-Singer index theorem) with particle physics except that Witten has generated a certain enthusiasm with some particle physicists. Since I have learned to make a distinction between physics and what (some) physicists are doing and since this blog (as Peter's book) is primarily about the present state of particle physics I think it is a legitimate question to ask about its relation to particle physics. If this is not permitted then this will be my last contribution to this blog.
22. Anon says:
To Peter Orland:
You are correct about the Plancherel theorem. But that tells you that if you know the irreducible representations, and their dimensions/characters, you know how to decompose functions. It doesn't tell you what the characters are.
In the first instance Langlands is a parameterization of irreducible reps, and a determination of their character; roughly they are in bijection with conjugacy classes in another group.
The categorification nonsense is an elaboration of this, to say *all* information you can extract comes from this dual group.
23. Peter Orland says:
Urs and Anon,
Thanks for the responses. I understand that a character formula
of some sort is needed to make Plancherel meaningful. What I worry
about is that even with such a character formula, there isn’t
enough for non-Abelian electromagnetic duality. In fact, I am skeptical a USEFUL duality for pure Yang-Mills theorists exists.
To carry out a duality transformation, the Bianchi identity needs
to be imposed by integrating over a new field (in 3+1 dimensions, this field is a one-form). Then we would like to integrate out the
original gauge field to obtain an action in this new field. Doing this
in practice is tough. There are tricks for doing it with certain character formulas, but the dual theory is a mess, since the dual
fields are discretely valued (o.k. on the lattice, but without a good
continuum interpretation).
Are these new techniques are somehow better? If so, it would be very interesting.
24. urs says:
In the first instance Langlands is a parameterization of irreducible reps, and a determination of their character; roughly they are in bijection with conjugacy classes in another group.
That’s the original “algebraic” Langlands thing.
The categorification nonsense is an elaboration of this, to say *all*
I think the categorification nonsense comes in when you pass from the original to the geometric Langlands correspondence.
In the original Langlands setup, the Hecke operator is an ordinary linear map, acting on a space of modular forms.
In the geometric version of the theory, it becomes the Hecke operator that acts on derived coherent sheaves on some moduli space. And that guy is no longer an ordinary linear map. But it is a categorified linear map, if you like (and also if you don’t like it).
In particular, in a special limit it is nothing but a certain categorification of the Fourier transformation.
25. urs says:
I never understood what is the relation of elliptic cohomology […] with particle physics
Elliptic cohomology is not about particle physics. It is about string physics.
Elliptic cohomology is to strings like particles are to K-cohomology #.
But what is the direct relation of elliptic cohomology to geometric Langlands, that made you bring this up here?
26. Bert Schroer says:
Interesting, so after all elliptic cohomology isn’t about particle physics it is rather about ST. That’s precisely what I expected.
27. Bert Schroer says:
Urs
Interesting, so after all elliptic cohomology isn’t about particle physics it is rather about ST. That’s precisely what I expected. I guess I got into the Langland’s column by accident, but without this accident I probably would not have received such a precise answer.
28. urs says:
Interesting, so after all elliptic cohomology isn’t about particle physics it is rather about ST.
Yes, check out the table at the beginning of the introduction of those notes.
Generalized cohomology theories are labelled by something called their “chromatic filtration”.
The idea is that a cohomology theory of chromatic level p comes from the physics of “p-particles” – otherwise known as (p-1)-branes.
K-cohomology has filtration 1. It corresponds to 1-particles (0-branes). Ordinary points, that is.
Elliptic cohomology has filtration 2. It corresponds to 2-particles, otherwise known as 1-branes or strings.
Ordinary (singular) cohomology has filtration level 0. There is a precise sense in which it corresponds to 0-particles (or (-1) branes).
I expect this table is open ended. But I have never seen anything about cohomology theories of chromatic filtration larger than 2.
29. urs says:
I am skeptical a USEFUL duality for pure Yang-Mills theorists exists.
It is a famous conjecture that 4-dimensional Yang-Mills theory has a duality called S-duality.
Yang-Mills theories (in a given dimension, for a fixed number of supercharges) are parameterized by a complex number
tau ,
the coupling constant, and a Lie group
G,
the gauge group.
For N=4 supersymmetric Yang-Mills, there is conjectured to be an isomorphism between Yang-Mills theory for
(tau,G)
and that for
(-1/tau , G^L) .
-1/tau is, roughly, the inverted coupling constant (therefore: “weak-strong coupling duality”) and G^L is the Lie group that is Langlands dual to G.
See the first few paragraphs of this, for instance.
That this is indeed an isomorphism of field theories is not a theorem, but it is supported by enough evidence that makes everybody assume it is indeed true. This is the S-duality conjecture.
Since the Langlands dual group appears in this conjecture, it has long been speculated that there is indeed a relation between S-duality and the Langlands program. But until recently nobody could really substantiate this.
The achievement of the Kapustin-Witten work is to show that for the special case that the 4-dimensional Yang-Mills theory is suitably compactified down to two dimensions, the S-dualiy conjecture for Yang-Mills theory is essentially equivalent to the geometric Langlands conjecture.
All the ingredients of geometric Langlands, like those moduli spaces of bundles and the derived coherent sheaves on them, can be understood in terms of field configurations and boundary conditions of compactified N=4 super Yang-Mills theory.
Notice that this amounts to further support for the S-duality conjecture, because it increases the number of people that trust the S-duality conjecture by those mathematicians that trust the geometric Langlands conjecture.
But it might also be noteworthy that this suggests that the geometric Langlands duality is only a tiny aspect of a much bigger story – since it is (apparently) just the special case of S-duality applied to a very specific compactification of Yang-Mills theory only.
30. Peter Orland says:
Urs,
Yes, I know about the S-duality conjecture (I would be much more interested in a similar conjecture about pure Yang-Mills than N=2 or N=4 Yang-Mills. Theories with adjoint matter are very different from those we know about in nature).
Though a conjecture is nice, to really prove it operator equivalences are needed. The procedure I discussed before, character expansions of the Bianchi identity, etc., is the first step to find such equivalences. In Abelian theories, this is how Kramers-Wannier duality works. There are some non-Abelian constructions due to Sharachandra and Anishetty, they haven’t proved useful yet.
31. A.J. says:
Urs,
Their’s a mild caveat to be added to your statement that
All the ingredients of geometric Langlands, like those moduli spaces of bundles and the derived coherent sheaves on them, can be understood in terms of field configurations and boundary conditions of compactified N=4 super Yang-Mills theory.
The geometric Langlands correspondence is stated in terms of D-modules on the moduli stack of not-necessarily stable G-bundles. Kapustin & Witten's work doesn't quite give full information about the moduli stack, but only its semi-stable locus. As far as I can tell, the relation between N=4 SYM and the Langlands correspondence for D-modules on the full stack hasn't been completely spelled out.
32. ks says:
Question to Urs. First of all thanks a lot for all your explanations. You work on cool stuff anyway ( though it is a little over my head at this time ). Do I understand your research program correctly when I assume that You try to link the standard model and ST in purely algebraic terms by means of higher category theory? Hence when changing the algebraic setting they do not look much different but are connected through certain higher morphisms?
33. Bert Schroer says:
Peter Orland
Conceptual realism demands to separate Kramers-Wannier duality (and its structural extension the order-disorder issue) from speculative ideas. The o-d duality is a local quantum physical phenomenon which has no known analog in higher dimensions. Whereas o-d is a phenomenon which has a solid operator algebraic intrinsic understanding (if you want I can provide you with recent literature) there is nothing like this for the S conjecture.
By now Wikipedia has more material on wild conjectures than about genuine results. There is the danger that we may be fooled by our own simulacra and metaphors, in particular that conjectures solidify because they come from somebody with a high status in the community or because they have been hanging around for a long time so that several generations have stepped on them.
34. urs says:
wild conjectures
S-duality is certainly a conjecture, but hardly a wild conjecture.
I mean, that’s the point: S-duality is apparently as wild as geometric Langlands.
35. urs says:
Kapustin & Witten’s work doesn’t quite give full information about the moduli stack, but only its semi-stable locus.
Right, thanks. There are probably a couple of such technicalities. I am not working on this stuff, so it’s hard to keep them all in mind.
So what about that “classical limit” in which, apparently, geometric Langlands is only proven so far. Does compactified SYM exactly coincide with the geometric Langlands data in that limit?
36. David says:
A couple of comments: Kapustin-Witten’s theory does (as far as I
understand) cover the full stack of bundles, not just the semistable locus. The sigma-model/mirror symmetry description
fails outside the semistable locus, but they emphasize
in the paper that the gauge theory sees the entire stack
of bundles — I think the problem is us geometers
have only been able in the past really to process
the classical aspects of the theory (solns of the equations of motion
etc) but quantum gauge theory is a lot smarter than we
are (speaking for myself at least). As far as I know they
can’t completely say what S-duality predicts off the semistable locus, but the important point is it does actually apply there.
The classical limit of Langlands is only proven
generically, missing the hardest locus — it’s a beautiful
result and one of the best in the subject, but saying classical
geometric Langlands is understood is on the same
level as saying you understand (noncompact) Lie groups
when you understand their diagonalizable elements —
the hardest part involved unipotents..
Also I’m not sure I would think of Hecke operators as Fourier transforms – the Hecke operators are the symmetries of moduli
of bundles (and sheaves on them), while the Fourier-Mukai
type transforms relate G and G^ the dual group.
One sense (of many) in which geometric Langlands is a nonabelian
categorified generalization of the Fourier transform is that while Plancherel helps you decompose spaces of functions on a group,
geometric Langlands type results help you decompose the
CATEGORY of all representations of a group —
since these categories are not semisimple
there’s a big difference between listing irreducibles and their characters and actually describing the structure of general
representations. (Geometric Langlands ideas
can be used to study for example the category
of Harish-Chandra modules for a real semisimple Lie group).
37. Peter Orland says:
Bert,
I cannot understand your explanation especially well. In my attempt to translate your statement into simple language, I conclude you mean more conjectures than solid statements dominate our field. I don’t need to be reminded of this, since I have
seen it all over the literature for the last decade or so.
I was asking if the experts on Langlands believe a useful concrete electric-magnetic duality transformation can be constructed from non-Abelian Fourier transforms (character expansions). I suspect the answer is no, since no one gave me a simple “yes”.
38. I’m probably mixing algebraic number theory with analytic number theory but is there a relationship between elliptic cohomology and elliptic Mobius transformations?
39. urs says:
Also I’m not sure I would think of Hecke operators as Fourier transforms – the Hecke operators are the symmetries of moduli
of bundles
Oh, sorry, I misspoke if I said that. The Langlands correspondence is analogous to the Fourier transform, exchanging skyscraper sheaves (analogous to delta-functions) with Hecke-eigensheaves (analogous to plane waves). So, in this analogy, the Hecke operator is like a categorified derivative.
40. Bert Schroer says:
I am afraid the sad truth is the answer is “no”. It is better to live in quantum reality than to become complacent with a Disney version of it.
I was not trying to explain anything in technical terms but only pointing to the obvious observation that Kramers-Wannier on a microscopic level (achieved by Leo Kadanoff) was quantum from the beginning whereas the Seiberg-Witten duality is from a physical Disney dreamland which precisely because of this is so useful to a large part of mathematics. The kind of mathematics for which it had no use is the operator-algebraic mathematical setting of QT which dates back to von Neumann and has been enriched by the locality principle in AQFT. By the way, the manner in which Kadanoff extracted (noncommutative) operator commutation relations for (what we nowadays call) the Ising primary fields from the Euclidean lattice setting (via partially guessed properties of the transfer matrix formalism) had my deep admiration; the Leitmotiv of all my work with Swieca in the early 70s was related to adapting Kadanoff's order/disorder ideas to the continuous setting of QFT; in many cases we even succeeded in reading this back into a continuous functional integral setting by using an Aharonov-Bohm analog language. Later, when I was working with Rehren on an algebraic approach to chiral conformal QFT I remembered those Kadanoff ideas and we found a completely explicit operator version of an “exchange algebra” for the conformal Ising field theory from which it was possible to compute its n-point Wightman functions. A historical review can be found in
http://br.arxiv.org/abs/hep-th/0504206
but thinking about this now, I should have written much more about Leo Kadanoff’s contributions; he really deserved a Nobel prize together with Wilson.
In those days we also convinced ourselves that this order-disorder idea has no electric-magnetic counterpart in the full QFT setting.
41. Peter Orland says:
Bert,
I also worked extensively on duality. Like you, I concluded that
there is no simple operator equivalence between a non-Abelian
gauge theory and its dual. But there are intriguing exceptions
of systems with non-Abelian systems which do have duality
transformations and disorder operators. In my Ph.D. thesis I found lattice systems with permutation-group $S_{N}$ symmetry which have nontrivial duals. But I will spare people here from a list of more publications on the subject.
Regards,
Peter (O.)
42. Thomas Larsson says:
Conceptual realism demands to separate Kramers-Wannier duality (and its structural extension the order-disorder issue) from speculative ideas. The o-d duality is a local quantum physical phenomenon which has no known analog in higher dimensions.
???
The 3D Ising model on a cubic lattice is Kramers-Wannier dual to Ising gauge theory on the same lattice. Why is this not o-d duality in higher dimensions?
43. urs says:
I concluded that there is no simple operator equivalence between a non-Abelian gauge theory and its dual.
Is this saying that you consider the S-duality conjecture to be in fact false?
If so, I’d be interested in the details of the assumptions that go into this.
I recall that Bert Schroer was (similarly ?) claiming that the AdS/CFT duality conjecture (in the sense of Maldacena) is false, and that the correct duality statement was along the lines of Rehren’s work.
In that case I got the impression that two rather different concepts were being compared, and that in fact Rehren’s work had little relation to the setup considered by Maldacena et al. Compare for instance Jacques Distler’s account.
The crucial difference in this case is that Rehren’s work was based on a fixed and precise axiom set, while Maldacena’s work uses notions of quantum field theory that have not been axiomatized yet.
For people like Bert Schroer this is reason enough to completely reject all QFT that does not fit into the AQFT axioms. For other people, in contrast, the restrictive applicability of the AQFT axioms is reason enough to reject those.
To some extent it is a matter of taste concerning which role of rigour you find useful in physics research. I can easily tolerate both these standpoints. But I would like to know in each case which one is assumed by which participant.
44. woit says:
Urs,
You keep ignoring the fact that Peter Orland is asking about pure YM theory, not N=4 SYM. There’s a beautiful story about duality in non-supersymmetric abelian gauge theories, and many people (including Peter) have tried hard to generalize this to the non-abelian case. I gather that he’s trying to understand whether geometric Langlands gives any insight into that problem, and as far as I can tell, the answer is just no.
45. Peter Orland says:
Urs,
Sorry that I am giving long-winded answers to your questions. I am
mainly interested in advancing methods in asymptotically-free field theories and in constructions which could eventually facilitate calculations. I try to learn other stuff, because I can't predict what I may need to know in the future. But I am more interested in theoretical, rather than mathematical physics (as people abuse the term nowadays, to study mathematical techniques rather than to prove theorems).
I believe (after some years of trying to show the contrary) there is no USEFUL version of Kramers-Wannier duality which is true for PURE non-Abelian gauge theories. There are non-Abelian dualities for some special $S_N$-invariant systems, which I mentioned above (there is also non-Abelian Bosonization in two dimensions).
The general problem for duality in non-Abelian theories is constructing dual fields with local commutation or anti-commutation relations. Supersymmetric or other theories with adjoint matter have some sort of charge-monopole duality – but such theories are effectively Abelian. These theories are interesting in their own right, but to my way of thinking, they are not as important as Yang-Mills theories coupled only to fundamental (not adjoint) Fermion color charges, or pure Yang-Mills theories.
There are other notions of duality in QCD. The ‘t Hooft loop is the disorder operator. Unfortunately, there is probably no useful local dual-field-theory formulation for which it is the order parameter.
46. urs says:
You keep ignoring the fact that Peter Orland is asking about pure YM theory, not N=4 SYM.
In as far as I am ignoring anything, it is not on purpose. I’d be glad to be enlightened.
Maybe I found Peter Orland’s statement
I concluded that there is no simple operator equivalence between a non-Abelian gauge theory and its dual.
#
seemed to refer to arbitrary gauge theories.
I gather that he’s trying to understand whether geometric Langlands gives any insight into that problem, and as far as I can tell, the answer is just no.
Hm, maybe here is the source of the misunderstanding. Kapustin-Witten show that geometric Langlands does give insight into the type of duality present in N=4 SYM. So in far as this is different to other types of duality, geometric Langlands apparently does not apply to these.
Supersymmetric or other theories with adjoint matter have some sort of charge-monopole duality – but such theories are effectively Abelian.
Could you expand on what you mean by “effectively abelian” here? Thanks!
47. Peter Orland says:
Urs,
By “effectively Abelian”, I mean that that the magnetic-monopole charge is well-defined and quantized. In QCD or pure Yang-Mills, there is no precise definition of magnetic-monopole charge.
In the Georgi-Glashow model (and the related deformation of N=2 supersymmetric gauge theory) a Higgs field breaks the gauge group
down to the Cartan subgroup. Thus there are Abelian monopoles,
with quantized charge, etc. These theories have a confined phase for sufficiently small monopole mass, which goes back to Polyakov’s observations in the 70’s. Duality for such theories is not so different from those of Abelian Wilson lattice gauge theories. They are, however, quite different from QCD.
Now there is an old result made by many people (Fradkin, Shenker, Rabinovici and others) that there is little difference between a Higgs field in a gauge theory and a scalar field in that gauge theory without a Higgs potential. The basic point is that the operator creating a massive vector Boson in the Higgs theory looks just like the operator creating a “meson” built from scalars in the confined phase. From this point of view, any theory with scalar matter
is not so different from a Higgs theory. In particular, it is possible
to define magnetic charge, no matter what the scalar potential happens to be. So in such theories charge-monopole duality is a sensible concept.
The reason why the possibility of duality for Yang-Mills theories is interesting is because it could yield insight into the confinement
phase. Some sort of magnetic condensation occurs, producing confinement and a mass gap, as simulations show, but we want
to know why.
48. anonymous says:
Off-topic mathematical physics fun: Andre LeClair is claiming there’s a physical system, which, on physical grounds, suggests the Riemann hypothesis is true. Are there any experts around to comment on whether it’s plausible?
http://www.arxiv.org/abs/math-ph/0611043
49. relativist says:
For those like me who don’t know much about the Langlands programme but would like to, a useful account is an older one by Frenkel: `Lectures on the Langlands Program and conformal field theory’, at
http://www.arxiv.org/PS_cache/hep-th/pdf/0512/0512172.pdf
Comments are closed.
|
{}
|
# The multiplicative weights update method: a meta-algorithm and applications
### A combinatorial primal-dual approach to semidefinite programs
Multiplicative Weights Update A useful addition to an. The Multiplicative Weights Update Method: A Meta-Algorithm and We feel that since this meta-algorithm and its analysis are so simple, and its applications so, Multiplicative Weights Update with Constant Step-Size in Congestion Games: Convergence, Limit Cycles and Chaos method is a ubiquitous meta-algorithm.
### CIS 800 The Algorithmic Foundations of Data Privacy at
PPT Multiplicative Weights Algorithms PowerPoint. The Multiplicative Weights Update Method: a Meta-Algorithm and Applications. Theory of Computing 8.1 (2012): 121-164. [2] A. Gupta. The Multiplicative Weights Algorithm. Lecture Notes, CMU. Accessed 01/05/2018. Exercises. Exercise 1: Prove Theorem 3, using the idea from the proof of Theorem 2., The multiplicative weights update method: a meta-algorithm and its applications
Satyen Kale Curriculum Vitae Thesis: E cient Algorithms Using the Multiplicative Weights Updates Method. a Meta-Algorithm and some Applications S. Arora, 5.1 The multiplicative weights update method Hazan and Kale [AHK06] gave a meta algorithm that puts many It will be important for the applications below to
... and S. Kale. The multiplicative weights update method: a meta-algorithm generated by modern applications is in extracting algorithm that achieves the ... according to the multiplicative weight updates through multiplicative updates, and evolution under method: A meta-algorithm and applications.
The Multiplicative Weights Update Method. Satyen Kale, The Multiplicative Weights Update Method: A Meta-Algorithm and Applications, Algorithms in Action. The multiplicative weights update method is an algorithmic technique most commonly used for decision making and prediction, and also widely deployed in game theory and algorithm design.
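To make the basic idea concrete, here is a minimal Python sketch of the "prediction with expert advice" form of the update described in the survey, using the (1 - eta * loss) penalty rule. The function name, the learning rate eta = 0.1, and the toy loss data are all illustrative choices, not taken from any of the cited papers.

import numpy as np

def multiplicative_weights(loss_matrix, eta=0.1):
    # loss_matrix[t, i] is the loss of expert i at round t, assumed to lie in [0, 1].
    T, n = loss_matrix.shape
    w = np.ones(n)                          # start with equal weight on every expert
    total = 0.0
    for t in range(T):
        p = w / w.sum()                     # play the normalized weights as a distribution
        total += float(p @ loss_matrix[t])  # expected loss suffered this round
        w *= 1.0 - eta * loss_matrix[t]     # multiplicatively penalize lossy experts
    return total

# Toy usage: 100 rounds, 5 experts with random losses in [0, 1].
rng = np.random.default_rng(0)
losses = rng.random((100, 5))
print(multiplicative_weights(losses))   # total expected loss of the algorithm
print(losses.sum(axis=0).min())         # total loss of the best single expert, for comparison

The guarantee the survey proves for this update is that the first number stays within roughly a (1 + eta) factor of the second, plus an additive (ln n)/eta term.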
A combinatorial, primal-dual approach to of the matrix-multiplicative weight update method and its Update Method: a Meta Algorithm and Applications. ... Applications of MWU in Winnow algorithm . The Multiplicative Weights Update Method: a Meta-Algorithm and Applications. a New Linear-threshold Algorithm.
5.1 The multiplicative weights update method Hazan and Kale [AHK06] gave a meta algorithm that puts many It will be important for the applications below to This lecture introduces gradient descent — a meta-algorithm for Mirror Descent and the Multiplicative Weight Update Method. As an application,
The multiplicative weights update method - meta-algorithm and applications. The Multiplicative Weights Update Method - Meta-Algorithm and Applications The multiplicative weights update method: a meta algorithm and applications (2005)
... 2011. Homework 2 out Due The Multiplicative Weights Update Method: a Meta Algorithm and" minimum cost matching * Experts/multiplicative weights algorithm The Multiplicative Weights Update Method: A Meta-Algorithm and Applications set and use the multiplicative update rule to iteratively change these weights.
The Multiplicative Weights Update Method: A Meta-Algorithm and its Applications Sanjeev Arora Princeton University Princeton NJ 08540 arora@cs.princeton.edu Multiplicative Weights Update Method. Weights Update Method: a Meta Algorithm and Applications Multiplicative Weights Update algorithm for T
Lecture 16 The Multiplicative Weights LECTURE 16. THE MULTIPLICATIVE WEIGHTS ALGORITHM 6 The multiplicative weights update method: a meta algorithm and Projects. Unsupervised O4. When does the multiplicative update method work (fail)? The Multiplicative Weights Update Method: a Meta-Algorithm and Applications.
The Multiplicative Weights Update Method: a Meta over a certain set and use the multiplicative update rule to Meta Algorithm and Applications. Multiplicative Weights Update with Constant Step-Size in Congestion Games: Convergence, Limit Cycles and Chaos method is a ubiquitous meta-algorithm
Multiplicative weights method: A meta algorithm with applications to linear The multiplicative weights update method and it’s applications CS 506 Class Syllabus : The Multiplicative Weights Update Method: a Meta Algorithm and Multiplicative Weights, Applications of Multiplicative Weights to
Syllabus Course Home The multiplicative weights update method, and the LLL algorithm. Applications to include solving low-dimensional integer programs and ... 2011. Homework 2 out Due The Multiplicative Weights Update Method: a Meta Algorithm and" minimum cost matching * Experts/multiplicative weights algorithm
Their combined citations are counted The Multiplicative Weights Update Method: a Meta-Algorithm and Improved low-degree testing and its applications. S The Multiplicative Weights Update (MWU) method is a ubiquitous meta-algorithm that works as follows: A distribution is maintained on a certain set, and at each step
The Multiplicative Weights Update Method: a Meta over a certain set and use the multiplicative update rule to Meta Algorithm and Applications. 2012-10-31 · [1] Sanjeev Arora, Elad Hazan, and Satyen Kale. The multiplicative weights update method: a meta algorithm and applications. Working Paper, 2005. [2] Christopher J.C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2:121–167, 1998. [3] Emmanuel J. Candes. Compressive sampling.
... according to the multiplicative weight updates through multiplicative updates, and evolution under method: A meta-algorithm and applications. Deterministic Discrepancy Minimization via the Multiplicative Weight Update Method The multiplicative weight update method is a meta-algorithm that originated in
CS 506 Class Syllabus : The Multiplicative Weights Update Method: a Meta Algorithm and Multiplicative Weights, Applications of Multiplicative Weights to The Multiplicative Weights Update (MWU) method is a The multiplicative weights update method: a meta-algorithm and An inequality with applications to
submodular function, multiplicative weight updates 1. INTRODUCTION The \multiplicative weight updates (MWU) method" has a wide variety of applications in computer science and can be considered a meta-algorithm. The excellent survey of Arora, Hazan and Kale [2] takes this point of view and describes several applications that follow from the basic method and Satyen Kale Curriculum Vitae Thesis: E cient Algorithms Using the Multiplicative Weights Updates Method. a Meta-Algorithm and some Applications S. Arora,
The Multiplicative Weights Update Method. Satyen Kale,The Multiplicative Weights Update Method:A Meta-Algorithm and Applications, Algorithms in Action The multiplicative weights update method - meta-algorithm and applications. The Multiplicative Weights Update Method - Meta-Algorithm and Applications
(2012) The multiplicative weights update method: A meta-algorithm and applications. Theory Comput 8: 121 – 164. Multiplicative Weights Update with Constant Step-Size in Congestion Games: Convergence, Limit Cycles and Chaos method is a ubiquitous meta-algorithm
The Multiplicative Weights Update Method: A Meta-Algorithm and Applications set and use the multiplicative update rule to iteratively change these weights. The Multiplicative Weights Update method The Multiplicative Weights method is a This meta algorithm is a also present some applications of this
Deterministic Discrepancy Minimization via the The multiplicative weights update method: a meta-algorithm and Minimization via the Multiplicative Weight multiplicative weight update method; The Multiplicative Weights Update Method: a Meta-Algorithm and Applications, by Arora, Hazan, Kale.
Private Multiplicative Weights Beyond Linear Queries. The Algorithmic Foundations of Data Privacy The multiplicative weights update method - meta-algorithm and applications Multiplicative updates in, ... 2011. Homework 2 out Due The Multiplicative Weights Update Method: a Meta Algorithm and" minimum cost matching * Experts/multiplicative weights algorithm.
### CIS 800 The Algorithmic Foundations of Data Privacy at
Syllabus Topics in Theoretical Computer Science An. ... and S. Kale. The multiplicative weights update method: a meta-algorithm generated by modern applications is in extracting algorithm that achieves the, References [AHK05] S. Arora, E. Hazan, and S. Kale, The multiplicative weights update method: a meta algorithm and applications, Tech. report, Princeton University, 2005..
CiteSeerX — Citation Query Tracking the best expert. Syllabus Course Home The multiplicative weights update method, and the LLL algorithm. Applications to include solving low-dimensional integer programs and, The Multiplicative Weights Update Method: A Meta-Algorithm and We feel that since this meta-algorithm and its analysis are so simple, and its applications so.
### Optimization II Winter 2009/10 Lecture 5 November 9 5.1
Multiplicative Weights Update with Constant Step arXiv. THE MULTIPLICATIVE WEIGHTS UPDATE METHOD: A META-ALGORITHM AND APPLICATIONS Related work. An algorithm similar in flavor to the Multiplicative Weights algorithm was https://en.m.wikipedia.org/wiki/Geometric_Set_Cover_Problem Adjust all expert weights: and can thus update our algorithm and Satyen Kale, \The multiplicative weights update method: A meta algorithm and its applications.".
• Distributed multiplicative weights methods for DCOP
• Efficient Algorithms Using The Multiplicative Weights
• The multiplicative weights update method meta-algorithm
• The Algorithmic Foundations of Data Privacy The multiplicative weights update method - meta-algorithm and applications Multiplicative updates in The Multiplicative Weights Update Method. Satyen Kale,The Multiplicative Weights Update Method:A Meta-Algorithm and Applications, Algorithms in Action
The Multiplicative Weights Update Method: A Meta-Algorithm and its Applications Sanjeev Arora Princeton University Princeton NJ 08540 arora@cs.princeton.edu The Algorithmic Foundations of Data Privacy The multiplicative weights update method - meta-algorithm and applications Multiplicative updates in
The multiplicative weights update method - meta-algorithm and applications. The Multiplicative Weights Update Method - Meta-Algorithm and Applications We develop a continuous-time framework based on multiplicative weight updates to The multiplicative weights update method: a meta-algorithm and applications
(2012) The multiplicative weights update method: A meta-algorithm and applications. Theory Comput 8: 121 – 164. The Multiplicative Weights Update Method: a Meta-Algorithm We feel that since this meta-algorithm and its analysis are so simple, and its applications so
CS 506 Class Syllabus : The Multiplicative Weights Update Method: a Meta Algorithm and Multiplicative Weights, Applications of Multiplicative Weights to The Multiplicative Weights Update Method: A Meta-Algorithm and We feel that since this meta-algorithm and its analysis are so simple, and its applications so
Multiplicative Weights Update with Constant Step-Size in Congestion Games: Convergence, Limit Cycles and Chaos method is a ubiquitous meta-algorithm Fast approximations to solve packing/covering LPs and The multiplicative weights algorithm is a well The Multiplicative Weights Update Method: a Meta
submodular function, multiplicative weight updates 1. INTRODUCTION The \multiplicative weight updates (MWU) method" has a wide variety of applications in computer science and can be considered a meta-algorithm. The excellent survey of Arora, Hazan and Kale [2] takes this point of view and describes several applications that follow from the basic method and Lecture 16 The Multiplicative Weights LECTURE 16. THE MULTIPLICATIVE WEIGHTS ALGORITHM 6 The multiplicative weights update method: a meta algorithm and
Due October 16, 2018. Last update version hash: c072630, September 25. Homework 1. Due September 11, 2018. Last updated version hash: b2763b3, August 29. References. Approximation Algorithms. V. V. Vazirani. The Multiplicative Weights Update Method: a Meta … ... Applications of MWU in Winnow algorithm . The Multiplicative Weights Update Method: a Meta-Algorithm and Applications. a New Linear-threshold Algorithm.
The Multiplicative Weights Update Method: a Meta-Algorithm and Applications. Theory of Computing 8.1 (2012): 121-164. [2] A. Gupta. The Multiplicative Weights Algorithm. Lecture Notes, CMU. Accessed 01/05/2018. Exercises. Exercise 1: Prove Theorem 3, using the idea from the proof of Theorem 2. A combinatorial, primal-dual approach to of the matrix-multiplicative weight update method and its Update Method: a Meta Algorithm and Applications.
The multiplicative weights update method: a meta-algorithm and its applications The geometric set cover problem is the special case of the set cover problem in geometric settings. Using a multiplicative weight algorithm, BrГ¶nnimann and
## Fast approximations to solve packing/covering LPs and
Optimization II Winter 2009/10 Lecture 5 November 9 5.1. The Multiplicative Weights Update Method: a Meta-Algorithm We feel that since this meta-algorithm and its analysis are so simple, and its applications so, 1 Recap In the previous This algorithm has many suprising applications: \The multiplicative weights update method: A meta algorithm and its applications.".
### Algorithms for Convex Optimization Algorithms Nature
Rebecca Hoberg A Polynomial-time LP Algorithm based on. We develop a continuous-time framework based on multiplicative weight updates to The multiplicative weights update method: a meta-algorithm and applications, Lecture 16 The Multiplicative Weights LECTURE 16. THE MULTIPLICATIVE WEIGHTS ALGORITHM 6 The multiplicative weights update method: a meta algorithm and.
References [AHK05] S. Arora, E. Hazan, and S. Kale, The multiplicative weights update method: a meta algorithm and applications, Tech. report, Princeton University, 2005. Lecture 16 The Multiplicative Weights LECTURE 16. THE MULTIPLICATIVE WEIGHTS ALGORITHM 6 The multiplicative weights update method: a meta algorithm and
Vol 8, Article 6 (pp 121-164) [RESEARCH SURVEY] The Multiplicative Weights Update Method: a Meta-Algorithm and Applications by Sanjeev Arora, Elad Hazan, and Satyen Kale Curriculum Vitae Thesis: E cient Algorithms Using the Multiplicative Weights Updates Method. a Meta-Algorithm and some Applications S. Arora,
This lecture introduces gradient descent — a meta-algorithm for Mirror Descent and the Multiplicative Weight Update Method. As an application, The Multiplicative Weights Update Method: a Meta over a certain set and use the multiplicative update rule to Meta Algorithm and Applications.
... 2011. Homework 2 out Due The Multiplicative Weights Update Method: a Meta Algorithm and" minimum cost matching * Experts/multiplicative weights algorithm The Multiplicative Weights Update Method. (PH) Applications to sketching and The Multiplicative Weights Update Method: a Meta-Algorithm and Applications
The Multiplicative Weights Update Method: a Meta-Algorithm We feel that since this meta-algorithm and its analysis are so simple, and its applications so Sanjeev Arora, Elad Hazan & Satyen Kale (2005). The multiplicative weights update method: a meta algorithm and applications. Submitted. Google Scholar
Digression to boosting, experts, dense models, as the matrix multiplicative weights algorithm weights update method: a meta-algorithm and applications. Adjust all expert weights: and can thus update our algorithm and Satyen Kale, \The multiplicative weights update method: A meta algorithm and its applications."
submodular function, multiplicative weight updates 1. INTRODUCTION The \multiplicative weight updates (MWU) method" has a wide variety of applications in computer science and can be considered a meta-algorithm. The excellent survey of Arora, Hazan and Kale [2] takes this point of view and describes several applications that follow from the basic method and ... according to the multiplicative weight updates through multiplicative updates, and evolution under method: A meta-algorithm and applications.
A multiplicative weights update algorithm for a meta-heuristic is an algorithm based on some Algorithm 2 The Multiplicative Weights Update algorithm 1: Tracking the best expert The multiplicative weights update method: a meta algorithm The algorithm employs a multiplicative update rule derived using a
The Multiplicative Weights Update Method: A Meta-Algorithm and We feel that since this meta-algorithm and its analysis are so simple, and its applications so (2012) The multiplicative weights update method: A meta-algorithm and applications. Theory Comput 8: 121 – 164.
5.1 The multiplicative weights update method Hazan and Kale [AHK06] gave a meta algorithm that puts many It will be important for the applications below to The Multiplicative Weights Update method The Multiplicative Weights method is a This meta algorithm is a also present some applications of this
Projects. Unsupervised O4. When does the multiplicative update method work (fail)? The Multiplicative Weights Update Method: a Meta-Algorithm and Applications. Expert's algorithms Application to Min-Max for Weights Update Method: a Meta Algorithm * Experts/multiplicative weights algorithm
Multiplicative Weights Update Method We present a single meta-algorithm which uni 2.4 A brief history of various applications of the Multiplicative Weights Multiplicative Weights the multiplicative weights update method: a meta algorithm weights method: a meta algorithm with applications to linear
Lecture 16 The Multiplicative Weights LECTURE 16. THE MULTIPLICATIVE WEIGHTS ALGORITHM 6 The multiplicative weights update method: a meta algorithm and Adjust all expert weights: and can thus update our algorithm and Satyen Kale, \The multiplicative weights update method: A meta algorithm and its applications."
Their combined citations are counted Update Method: a Meta-Algorithm and Applications. S using the multiplicative weights update method. S The Multiplicative Weights Update Method. (PH) Applications to sketching and The Multiplicative Weights Update Method: a Meta-Algorithm and Applications
The Multiplicative Weights Update Method: a Meta-Algorithm and Applications. by Sanjeev Arora, Elad Hazan, and Satyen Kale. Theory of Computing, Volume 8(6), pp. 121-164, 2012. Bibliography with links to … We show that the multiplicative weight update method provides a simple recipe for The multiplicative weights update method: A meta algorithm and applications.
Multiplicative Weights Update with Constant Step-Size in Congestion Games: Convergence, Limit Cycles and Chaos method is a ubiquitous meta-algorithm Syllabus Course Home The multiplicative weights update method, and the LLL algorithm. Applications to include solving low-dimensional integer programs and
Multiplicative Weights Update with Constant Step-Size in Congestion Games: Convergence, Limit Cycles and Chaos method is a ubiquitous meta-algorithm The Multiplicative Weights Update Method: A Meta-Algorithm and its Applications Sanjeev Arora Princeton University Princeton NJ 08540 arora@cs.princeton.edu
The Multiplicative Weights Update Method: A Meta-Algorithm and Applications set and use the multiplicative update rule to iteratively change these weights. Their combined citations are counted Update Method: a Meta-Algorithm and Applications. S using the multiplicative weights update method. S
### Satyen Kale
CIS 800 The Algorithmic Foundations of Data Privacy at. Projects. Unsupervised O4. When does the multiplicative update method work (fail)? The Multiplicative Weights Update Method: a Meta-Algorithm and Applications., Tracking the best expert The multiplicative weights update method: a meta algorithm The algorithm employs a multiplicative update rule derived using a.
Digression to boosting experts dense models and their. Arora, S.; Hazan, E.; and Kale, S. 2012. The multiplicative weights update method: A meta-algorithm and applications. Theory of Computing 8(1):121-164. Fitzpatrick, S, Vol 8, Article 6 (pp 121-164) [RESEARCH SURVEY] The Multiplicative Weights Update Method: a Meta-Algorithm and Applications by Sanjeev Arora, Elad Hazan, and.
### Multiplicative weight update method Wikipedia
The multiplicative weights update method a meta-algorithm. (2012) The multiplicative weights update method: A meta-algorithm and applications. Theory Comput 8: 121 – 164. https://fr.m.wikipedia.org/wiki/M%C3%A9thode_des_poids_multiplicatifs The Multiplicative Weights Update Method: A Meta-Algorithm and Applications set and use the multiplicative update rule to iteratively change these weights..
The Multiplicative Weights Update Method: A Meta-Algorithm and Applications set and use the multiplicative update rule to iteratively change these weights. The Multiplicative Weights Update Method. (PH) Applications to sketching and The Multiplicative Weights Update Method: a Meta-Algorithm and Applications
Deterministic Discrepancy Minimization via the The multiplicative weights update method: a meta-algorithm and Minimization via the Multiplicative Weight ... A Bayesian Ensemble for Unsupervised Anomaly Detection The Multiplicative Weights Update Method: a Meta Algorithm and with application to event
... 2011. Homework 2 out Due The Multiplicative Weights Update Method: a Meta Algorithm and" minimum cost matching * Experts/multiplicative weights algorithm Abstract: The multiplicative weights update method is a meta-algorithm with varied applications. As Arora, Hazan, and Kale show, applying this method with nonnegative
1 The Multiplicative Weights Update Method [1] 1.1 Setting 2 Applications Reduce weight of well-satisfied constraints !similar in The Multiplicative Weights Update Method: A Meta-Algorithm and its Applications Sanjeev Arora Princeton University Princeton NJ 08540 arora@cs.princeton.edu
2012-10-31 · [1] Sanjeev Arora, Elad Hazan, and Satyen Kale. The multiplicative weights update method: a meta algorithm and applications. Working Paper, 2005. [2] Christopher J.C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2:121–167, 1998. [3] Emmanuel J. Candes. Compressive sampling. The Algorithmic Foundations of Data Privacy The multiplicative weights update method - meta-algorithm and applications Multiplicative updates in
We show that the multiplicative weight update method provides a simple recipe for The multiplicative weights update method: A meta algorithm and applications. Meta Convex optimization with the help of Multiplicative Weights Update Method. etc on the internet about the multiplicative weights algorithm
Lecture 4 1 The multiplicative weights update method The multiplicative weights method is The number of mistakes M made by the experts algorithm with The Multiplicative Weights Update Method. (PH) Applications to sketching and The Multiplicative Weights Update Method: a Meta-Algorithm and Applications
... 2011. Homework 2 out Due The Multiplicative Weights Update Method: a Meta Algorithm and" minimum cost matching * Experts/multiplicative weights algorithm 2014-07-22В В· Algorithms, games, and evolution. genes played according to the multiplicative weight updates update method: A meta-algorithm and applications.
Adjust all expert weights: and can thus update our algorithm and Satyen Kale, \The multiplicative weights update method: A meta algorithm and its applications." Fast approximations to solve packing/covering LPs and The multiplicative weights algorithm is a well The Multiplicative Weights Update Method: a Meta
2012-10-31 · [1] Sanjeev Arora, Elad Hazan, and Satyen Kale. The multiplicative weights update method: a meta algorithm and applications. Working Paper, 2005. [2] Christopher J.C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2:121–167, 1998. [3] Emmanuel J. Candes. Compressive sampling. CS 506 Class Syllabus : The Multiplicative Weights Update Method: a Meta Algorithm and Multiplicative Weights, Applications of Multiplicative Weights to
The Multiplicative Weights Update Method: a Meta-Algorithm and Applications by Sanjeev Arora, Elad Hazan, and Satyen Kale References [AHK05] S. Arora, E. Hazan, and S. Kale, The multiplicative weights update method: a meta algorithm and applications, Tech. report, Princeton University, 2005.
|
{}
|
If A commutes with both of these matrices, then A must be a scalar multiple of the identity matrix
I am working on the following problem:
Let $$A$$ be a $$4 \times 4$$ matrix with entries in a field of characteristic zero. Suppose that $$A$$ commutes with both $$\begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 2 & 0 & 0\\ 0 & 0 & 3 & 0\\ 0 & 0 & 0 & 4 \end{pmatrix}$$ and $$\begin{pmatrix} 0 & 0 & 0 & 1\\ 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \end{pmatrix}$$. Prove that $$A$$ is a scalar multiple of the identity matrix.
I know that $$A$$ is a scalar multiple of the identity matrix if and only if $$AB = BA$$ for all other possible $$4 \times 4$$ matrices $$B$$ with entries in a field of characteristic $$0$$. However, I'm struggling with deducing here that $$A$$ commuting with these specific matrices forces $$A$$ to be a scalar multiple of the identity matrix. Does commuting with these specific matrices force $$A$$ to commute with all $$4 \times 4$$ matrices with entries in a field of characteristic $$0$$? If so, how can I deduce this?
Thanks!
$$A$$ commutes with the first matrix implies that $$A$$ preserves its eigenspaces. This implies that $$A(e_i)=c_ie_i,i=1,2,3,4$$.
$$A$$ commutes with the second matrix $$C$$ implies that $$AC(e_1)=A(e_2)=c_2e_2$$ and $$CA(e_1)=C(c_1e_1)=c_1e_2$$, which implies $$c_1=c_2$$,...
since $$AC(e_2)=CA(e_2), AC(e_3)=CA(e_3)$$ deduce that $$c_1=c_2=c_3=c_4$$.
Hint:
Left multiplication of a square matrix by $$D=$$ the first (diagonal) matrix amounts to multiplying its rows by the diagonal elements (for this matrix, the first row is multiplied by $$1$$, the second row by $$2$$, &c.). Right multiplication amounts to multiplying its columns by the diagonal elements. If both results are equal, by identification, you can deduce that $$A$$ is a diagonal matrix.
Commutativity of multiplication by the second matrix will then let you show, by identification, that all elements on the diagonal are equal.
Let $$U,V$$ be the $$2$$ given matrices and $$(e_i)_i$$ be the canonical basis of $$K^4$$.
The invariant proper subspaces of $$U$$ are the $$span(\mathcal{B})$$ where $$\mathcal{B}$$ is any proper subset of $$(e_i)_i$$. For every such $$\mathcal{B}$$, $$span(\mathcal{B})$$ is not $$V$$-stable. Then $$U,V$$ have no common proper invariant subspaces. According to Burnside's theorem (*), the algebra generated by $$U,V$$ is the whole of $$M_4(K)$$; the required result follows. $$\square$$
My answer can be treated as a supplement to Loup Blanc's answer; I would like to express a similar result in a more elementary way.
Denote both mentioned matrices as $$D$$ and $$P$$.
It is easy to check that if $$A$$ commutes with matrices $$D$$ and $$P$$ then it commutes also with any power of matrices $$D$$ and $$P$$, any polynomial of $$D$$ and $$P$$ and generally with any product or linear combination of these matrices.
For powers of $$D$$ we have results
$$D=\text{diag} ( 1 \ \ 2 \ \ 3 \ \ 4) , \\ D^2= \text{diag} ( 1 \ \ 2^2 \ \ 3^2 \ \ 4 ^2) , \\ D^3=\text{diag} ( 1 \ \ 2^3 \ \ 3^3 \ \ 4^3 ) , \\D^4=\text{diag} ( 1 \ \ 2^4 \ \ 3^4 \ \ 4^4)$$
The four vectors formed from the diagonal entries are linearly independent (if they are the columns of a $$4 \times 4$$ matrix, then they form a Vandermonde matrix), so linear combinations of them can generate any diagonal matrix; denote such a matrix generally as $${D_i}$$.
On the other hand $$P= \begin{pmatrix} 0 & 0 & 0 & 1\\ 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \end{pmatrix}$$ is a permutation matrix with its powers
$$P^2= \begin{pmatrix} 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 \end{pmatrix}, P^3= \begin{pmatrix} 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ 1 & 0 & 0 & 0 \end{pmatrix}, P^4=I$$.
It is visible that with the expression $$D_0+P D_1 + P^2D_2+P^3D_3$$ we can generate any $$4 \times 4$$ matrix (assuming first we generate appropriate diagonal matrices $$D_i$$) and hence the matrix $$A$$ has to commute with all possible $$4 \times 4$$ matrices.
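For anyone who wants to sanity-check this numerically, here is a small NumPy sketch (purely a verification aid, not part of the proof). It builds the linear map $$A \mapsto (AD-DA,\, AP-PA)$$ on the standard basis matrices and checks that its kernel, i.e. the space of matrices commuting with both $$D$$ and $$P$$, is one-dimensional:

import numpy as np

D = np.diag([1.0, 2.0, 3.0, 4.0])
P = np.array([[0, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0]], dtype=float)

# Columns of K are the images of the basis matrices E_ij under A -> (AD - DA, AP - PA).
cols = []
for i in range(4):
    for j in range(4):
        E = np.zeros((4, 4))
        E[i, j] = 1.0
        cols.append(np.concatenate([(E @ D - D @ E).ravel(),
                                    (E @ P - P @ E).ravel()]))
K = np.array(cols).T            # 32 x 16; its kernel is the commutant of {D, P}

print(16 - np.linalg.matrix_rank(K))   # 1: only the scalar multiples of the identity remain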
|
{}
|
# 4 Householder QR Factorization
In this section, we discuss the computation of the QR factorization A = Q R, where A is m x n, Q is m x m and R is m x n. Here m >= n, Q is unitary (Q^H Q = I) and R has the form R = ( R_T ; 0 ), where R_T is an n x n upper triangular matrix. Partitioning Q = ( Q_L | Q_R ), where Q_L has width n, we see that the following also holds: A = Q_L R_T. In our subsequent discussions, we will refer to both of these factorizations as a QR factorization and will explicitly indicate which of the two is being considered.
## 4.1 Basic algorithm
To present a basic algorithm for the QR factorization, we must start by introducing Householder transforms (reflections).
Householder transforms are orthonormal transformations that can be written as H = I - 2 v v^T / (v^T v), where v != 0. These transformations, sometimes called reflectors, have a number of interesting properties: H^T = H, H H = I, and H^{-1} = H. Of most interest to us is the fact that given a vector x = ( x_1 ; x_2 ), where x_1 is of length k-1, one can find a vector v such that H x = ( x_1 ; -/+ ||x_2||_2 ; 0 ), i.e. all entries below the k-th are annihilated. Indeed, it can be easily verified that v = ( 0 ; x_2 + sign(chi) ||x_2||_2 e_1 ), where chi is the first entry of x_2 and e_1 the first standard basis vector of the same length, has this property. Notice that k here is used to indicate that the first k-1 elements of x are to be left alone, which results in v having k-1 zero valued leading elements. The sign that is chosen corresponds to the sign of the first entry of x_2, to avoid catastrophic cancellation. Vector v can be scaled arbitrarily, by adjusting the scalar in H correspondingly. In the subsequent section, we will scale it so that the first nonzero (k-th) entry of v is 1.
To compute the QR factorization of a given m x n matrix A, we wish to compute Householder transformations H_1, H_2, ..., H_n such that H_n ... H_2 H_1 A = R, where R is upper triangular.
An algorithm for computing the QR factorization is given by
1. Partition A = ( a_1 | A_2 ), where a_1 is the first column of A.
2. Determine the Householder vector v corresponding to the vector a_1, so that H a_1 = ( -/+ ||a_1||_2 ; 0 ) with H = I - 2 v v^T / (v^T v).
3. Store v in the now zero part of a_1.
4. Update A_2 := H A_2 = A_2 - v w^T, where w = (2 / (v^T v)) A_2^T v.
5. Repartition so that the first row and column of the updated matrix are left behind.
6. Continue recursively with the trailing (bottom-right) submatrix.
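To see the whole loop in one place, here is a minimal unblocked Householder QR sketch in Python/NumPy. It illustrates the algorithm just described rather than reproducing the PLAPACK code; in particular it returns the full Q explicitly instead of storing the Householder vectors in place, and the copysign call implements the sign choice discussed above.

import numpy as np

def householder_qr(A):
    # Returns Q (m x m, orthogonal) and R (m x n, upper triangular) with A = Q R.
    A = np.array(A, dtype=float)
    m, n = A.shape
    Q = np.eye(m)
    for k in range(min(m, n)):
        x = A[k:, k]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])   # sign chosen to avoid cancellation
        nv = np.linalg.norm(v)
        if nv == 0.0:
            continue                                   # column is already zero below the diagonal
        v /= nv
        # Apply H = I - 2 v v^T to the trailing part of A, and accumulate it into Q.
        A[k:, k:] -= 2.0 * np.outer(v, v @ A[k:, k:])
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)
    return Q, A

B = np.random.default_rng(1).random((5, 3))
Q, R = householder_qr(B)
print(np.allclose(Q @ R, B), np.allclose(np.tril(R, -1), 0))   # True True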
### 4.1.1 Blocked algorithm
In [3,9], it is shown how a blocked (matrix-matrix multiply based) algorithm can be derived by creating a WY transform which, when applied, yields the same result as a number of successive applications of Householder transforms: H_1 H_2 ... H_k = I + W Y^T, where W and Y are both m x k. Given the k vectors that define the k Householder transforms, these matrices can be computed one column at a time in a relatively straightforward manner.
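As a sketch of that column-by-column construction, here is one way to accumulate W and Y in Python/NumPy. It uses the convention H_1 H_2 ... H_k = I + W Y^T with unit-norm Householder vectors (so each H_i = I - 2 v_i v_i^T); the scaling convention in the paper (unit k-th entry of v) differs only in how the factor 2 is absorbed.

import numpy as np

def wy_from_householder(V):
    # V holds the k unit-norm Householder vectors as its columns.
    W = -2.0 * V[:, :1]
    Y = V[:, :1].copy()
    for j in range(1, V.shape[1]):
        v = V[:, j:j+1]
        Qv = v + W @ (Y.T @ v)              # (I + W Y^T) v, without forming Q explicitly
        W = np.hstack([W, -2.0 * Qv])       # append the new column of W
        Y = np.hstack([Y, v])               # append the new column of Y
    return W, Y

# Check against the explicit product of reflectors.
rng = np.random.default_rng(2)
V = rng.standard_normal((6, 3))
V /= np.linalg.norm(V, axis=0)
W, Y = wy_from_householder(V)
Q = np.eye(6)
for j in range(3):
    Q = Q @ (np.eye(6) - 2.0 * np.outer(V[:, j], V[:, j]))
print(np.allclose(np.eye(6) + W @ Y.T, Q))   # True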
Given the WY transform discussion above, the blocked version of the algorithm is given in Fig. 4.
## 4.2 PLAPACK Implementation
### A new parallel algorithm
The expert optimization of the QR factorization lies in PLAPACK's ability to view parts of the matrix as a panel that can be redistributed as a multivector on the nodes. A PLAPACK implementation that exploits this is given in Fig. 4.
Figure 4: Optimized implementation of QR factorization using PLAPACK
|
{}
|
# DETECTION AND SPECTROSCOPY OF IONS BY LASER INDUCED FLUORESCENCE
Please use this identifier to cite or link to this item: http://hdl.handle.net/1811/10318
Title: DETECTION AND SPECTROSCOPY OF IONS BY LASER INDUCED FLUORESCENCE
Creators: Miller, Terry A.; Bondybey, V. E.
Issue Date: 1977
Publisher: Ohio State University
Abstract: The spectra of ions have been traditionally very elusive and only a handful of absorption spectra has been reported. We have recently employed the technique of tunable laser induced fluorescence to obtain the spectra of numerous ions. This technique combines several of the advantages of absorption spectroscopy with the sensitivity of emission spectroscopy. Furthermore, it allows the selective excitation of single vibrational and rotational states. Examples of vibrationally and rotationally resolved ion spectra obtained by this technique include $CO^{+}, N_{2}^{+}, CO_{2}^{+}$, etc.
Description: Author Institution: Bell Laboratories
URI: http://hdl.handle.net/1811/10318
Other Identifiers: 1977-WF-7
|
{}
|
## simulation of rate equation
I have tried this rate equation code in MATLAB, but it's not working. Can anyone help me please!!!!!
mm=0;
for I = 0.1e-3:0.1e-3:4.5e-3;
m=mm+1;
q=1.6e-19;
alpha=2;
Tn=3e-9;
Tp=1e-12;
G0=0.6;
N0 = 1e24;
Gamma=.2;
epsilon=1e1;
Va=3.7e-14;
tini=0;
tfin=4e-9;
dt=1e-12;
N(1)=0;
S(1)=0;
p=G0*N(m)*S(m);
l=1+epsilon*S(m);
y=q*Va;
for t=tini:dt:tfin
delN=(I/y)-(N(m)/Tn)-(p/l);
end
end
tt=t;
tt=tt*1e9;
figure(1);
plot(tt,N);
I'm not sure whether Matlab requires that you define your variables before attempting to execute them, but I see that line 4, for example, (I0 = N0*q*Va/Tn;) contains several variables you do not define until later in your routine.
Mentor
Quote by almesba i have try this rate equation code in MATLAB. but its not working. can anyone help me please!!!!! mm=0; for I = 0.1e-3:0.1e-3:4.5e-3; m=mm+1; q=1.6e-19; alpha=2; Tn=3e-9; Tp=1e-12; G0=0.6; N0 = 1e24; Gamma=.2; epsilon=1e1; Va=3.7e-14; tini=0; tfin=4e-9; dt=1e-12; N(1)=0; S(1)=0; p=G0*N(m)*S(m); l=1+epsilon*S(m); y=q*Va; for t=tini:dt:tfin delN=(I/y)-(N(m)/Tn)-(p/l); end end tt=t; tt=tt*1e9; figure(1); plot(tt,N);
Welcome to the PF. What do you mean by "not working"? Does MATLAB throw an error somewhere, or does the program run and not give a reasonable answer?
## simulation of rate equation
Quote by berkeman Welcome to the PF. What do you mean by "not working"? Does MATLAB throw an error somewhere, or does the program run and not give a reasonable answer?
MATLAB isn't throwing any error. NOT WORKING means I am getting a white or blank graph. I have tried with different axis values but still no result.
Recognitions: Science Advisor
Code:
p=G0*N(m)*S(m);
l=1+epsilon*S(m)
Both those lines are bound to throw an error on the second iteration because neither N() nor S() is defined past an index of 1.
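For what it's worth, the structural problem is the one hinted at above: delN is computed but N and S are never updated inside the time loop, so there is nothing to plot. Below is a rough Python/NumPy sketch of what a corrected forward-Euler loop could look like. The constants are copied from the post purely for illustration, and the photon-density equation (dS) is an assumed textbook form, since that line is not in the posted snippet; the point is only the shape of the loop.

import numpy as np

q, Tn, Tp = 1.6e-19, 3e-9, 1e-12
G0, Gamma, eps, Va = 0.6, 0.2, 1e1, 3.7e-14
I, dt, tfin = 4.5e-3, 1e-12, 4e-9

steps = int(round(tfin / dt))
N = np.zeros(steps + 1)                                      # carrier density
S = np.zeros(steps + 1)                                      # photon density
for k in range(steps):
    G = G0 * N[k] / (1 + eps * S[k])                         # gain term, mirroring p/l in the post
    dN = I / (q * Va) - N[k] / Tn - G * S[k]
    dS = Gamma * G * S[k] - S[k] / Tp + Gamma * N[k] / Tn    # assumed photon-density equation
    N[k + 1] = N[k] + dt * dN                                # forward-Euler step: this update
    S[k + 1] = S[k] + dt * dS                                # is what the posted loop never does

t_ns = np.arange(steps + 1) * dt * 1e9
print(N[-1], S[-1])                                          # plotting N and S against t_ns now shows the transient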
|
{}
|
## Coefficient of performance for a low-dissipation Carnot-like refrigerator with nonadiabatic dissipation [PDF]
Yong Hu, Feifei Wu, Yongli Ma, Jizhou He, Jianhui Wang, A. Calvo Hernandez, J. M. M. Roco
We study the coefficient of performance (COP) and its bounds of the Carnot-like refrigerator working between two heat reservoirs at constant temperatures $T_h$ and $T_c$, under two optimization criteria $\chi$ and $\Omega$. In view of the fact that an "adiabatic" process takes finite time and is nonisentropic, the nonadiabatic dissipation and the finite time required for the "adiabatic" processes are taken into account. For given optimization criteria, we find that the lower and upper bounds of the COP are the same as the corresponding ones obtained from the previous idealized models where any adiabatic process undergoes instantaneously with constant entropy. When the dissipations of two "isothermal" and two "adiabatic" processes are symmetric, respectively, our theoretical predictions match the observed COP's of real refrigerators more closely than the ones derived in the previous models, providing a strong argument in favor of our approach.
View original: http://arxiv.org/abs/1307.0175
|
{}
|
# Order of RC circuit
In the given RC circuit, there are 3 capacitors, out of which the voltages of 2 capacitors can be uniquely determined.
This implies that it is a second-order circuit. But when I found the expression, I got only one pole i.e first-order response.
Where am I wrong in determining the order of the circuit?
• Looks right to me.
– jonk
Sep 1 at 18:54
• I've noticed that all questions previously raised have not attracted any formal answer acceptance. I've also noticed that at least one question hasn't had its answer followed by either a comment or an upvote. If other folk also notice this, it might be a reason for what seems to be a decline in help offered on more recent questions. People give help for free and it's not clear-cut how those people will react in the face of no prospect of upvotes and no prospect of answer acceptance. Just saying. Sep 1 at 19:00
• @Andyaka My mistake. Sorry for that. I didn't think much about that and I was new to this platform. So, I don't know that this would have an impact. Thanks for letting me know. Sep 1 at 19:34
• @jonk Which is right? First order or second? Sep 1 at 19:35
• @prashanth I thought you wondered if your simplification was valid, treating all of the R and C values as equal. If they are exactly equal, then I do find your simplification through cancellation. If, however, you want to see the full transfer function in all its detail without cancellation, then yes the result is 2nd order and has both high pass and low pass terms.
– jonk
Sep 2 at 2:06
The schematic with symbols and nothing assumed about equality of any of them is:
simulate this circuit – Schematic created using CircuitLab
Using freely available SymPy find, through KVL, the following steps towards a solution:
from sympy import *   # assuming a star import; var, Eq, solve and simplify are SymPy functions
var('r1 r2 c1 c2 r3 c3 vi vop vom s')
eq1 = Eq( vop/r1 + vop/r2 + vop/(1/s/c1) + vop/(1/s/c2), vi/r1 + vom/r2 + vom/(1/s/c2) )
eq2 = Eq( vom/r3 + vom/r2 + vom/(1/s/c3) + vom/(1/s/c2), vi/(1/s/c3) + vop/r2 + vop/(1/s/c2) )
a2 = solve( [ eq1, eq2 ], [ vom, vop ] )
tf2( simplify( (a2[vop]-a2[vom]) / vi ) )   # tf2 is this answer's own helper that factors the result into standard 2nd-order form
The tf2 function yields the following 2nd order result:
\begin{align*} \omega_{_0} =& \frac{\sqrt{R_1 + R_2 + R_3}}{\sqrt{R_1\,R_2\,R_3}\sqrt{C_1\,C_2 + C_1\,C_3 + C_2\,C_3}} \\\\ \zeta =&\frac12\cdot\frac{R_1\,C_1\left(R_2 + R_3\right) + R_2\,C_2\left(R_1 + R_3\right) + R_3\,C_3\left(R_1 + R_2\right)}{\sqrt{R_1 + R_2 + R_3}\sqrt{R_1\,R_2\,R_3}\sqrt{C_1\,C_2 + C_1\,C_3 + C_2\,C_3}} \\\\ \frac{v_{_\text{OUT}}}{v_{_\text{IN}}}=G_s =& -\left[\frac{1}{1+C_2\left(\frac{1}{C_1}+\frac{1}{C_3}\right)}\right]\frac{\left(\frac{s}{\omega_{_0}}\right)^2}{\left(\frac{s}{\omega_{_0}}\right)^2+2\zeta\left(\frac{s}{\omega_{_0}}\right)+1}\\\\&+\left[\frac{R_2}{R_1 + R_2 + R_3}\right]\frac{1}{\left(\frac{s}{\omega_{_0}}\right)^2+2\zeta\left(\frac{s}{\omega_{_0}}\right)+1} \end{align*}
If you set $$\R=R_1=R_2=R_3\$$ and $$\C=C_1=C_2=C_3\$$ then, technically, you wind up with:
\begin{align*} \omega_{_0} =& \frac{1}{R\,C} \\\\ \zeta =&1 \\\\ \frac{v_{_\text{OUT}}}{v_{_\text{IN}}}=G_s =& -\left[\frac{1}{3}\right]\frac{\left(\frac{s}{\omega_{_0}}\right)^2}{\left(\frac{s}{\omega_{_0}}\right)^2+2\zeta\left(\frac{s}{\omega_{_0}}\right)+1}\\\\&+\left[\frac{1}{3}\right]\frac{1}{\left(\frac{s}{\omega_{_0}}\right)^2+2\zeta\left(\frac{s}{\omega_{_0}}\right)+1} \end{align*}
In this case, if you plot your reduced result against the fuller 2nd order result the plots should be the same. (They should overlap.) The reason is simple to see if you set $$\R=1\$$ and $$\C=1\$$. Then you have $$\\left(1-s\right)\left(1+s\right)\$$ divided by $$\\left(1+s\right)^2\$$ and the factor $$\\left(1+s\right)\$$ found in both numerator and denominator can then cancel, effectively leaving your result.
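As a quick numerical cross-check of that cancellation (a small Python sketch, using the same R = C = 1 normalization as above):

import numpy as np

w = np.logspace(-2, 2, 400)
s = 1j * w
full    = (-1/3) * s**2 / (s**2 + 2*s + 1) + (1/3) / (s**2 + 2*s + 1)   # the 2nd-order form with R = C = 1
reduced = (1 - s) / (3 * (1 + s))                                        # after cancelling the (1 + s) factor
print(np.allclose(full, reduced))                                        # True: the two responses coincide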
|
{}
|
# covariance of random variables
Suppose X, Y, W are independent random variables such that X ∼ GAM(2,3), Y ∼ N(1,4) and W ∼ BIN(10,1/4). Let U = 2X − 3Y and V = Y − W . Find cov(U, V ).
I know that cov(U, V) = E(U, V) - E(U)E(V). I've found E(U) and E(V) but I don't know how to find E(U, V).
• You know, or can look up, the mean and variance of the various distributions. But in the second simpler way in the answer, all we need is the cov of $Y$ and $Y$, which is the variance of $Y$. We were told that $Y$ has variance $4$, so there is no work to do. – André Nicolas Apr 19 '16 at 6:11
You mean that you don't know how to find $E(UV)$. This is $E((2X-3Y)(Y-W))$. Expand the product, and use the linearity of expectation. We find that $$E(UV)=2E(XY)-2E(XW)-3E(Y^2)+3E(YW).$$ Now you can use independence and your knowledge of the various distributions to finish the calculation of $E(UV)$.
However, this is not the best way to find the covariance of $U$ and $V$. Instead, use the bilinearity of covariance. We have $$\text{Cov}(U,V)=2\text{Cov}(X,Y)-2\text{Cov}(X,W)-3\text{Cov}(Y,Y)+3\text{Cov}(Y,W).$$ Almost all these covariances are $0$!
• The Cov of $Y$ and $Y$ is not $0$, it is the variance of $Y$. The other three are $0$. – André Nicolas Apr 19 '16 at 6:13
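If you want to sanity-check the answer numerically, here is a small simulation sketch. (It assumes GAM(2, 3) means shape 2 and scale 3; the parameterization of $X$ does not actually matter, since only $Var(Y)$ survives, giving $\text{Cov}(U,V) = -3\,Var(Y) = -12$.)

import numpy as np

rng = np.random.default_rng(1)
n = 10**6
X = rng.gamma(shape=2, scale=3, size=n)    # GAM(2, 3) -- parameterization assumed, it drops out anyway
Y = rng.normal(loc=1, scale=2, size=n)     # N(1, 4): variance 4, so standard deviation 2
W = rng.binomial(n=10, p=0.25, size=n)     # BIN(10, 1/4)

U = 2*X - 3*Y
V = Y - W
print(np.cov(U, V)[0, 1])                  # close to -3 * Var(Y) = -12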
|
{}
|
# Discounts and markups?
## What are they? How do you do em?
Aug 29, 2017
See below!
#### Explanation:
Discounts:
Discounts are mostly used by stores to get us to buy their products. For example, you might see a sign that says, "15% Off of All Items!". To find out if it is a good deal or not, we calculate how much we'll save. It's just like calculating tax, but instead of confusing you more, let's just get to the examples...
A new pair of sunglasses costs $10, but then you see a sign that says "25% Off All Items".
Step 1: Change the percent to a decimal: 25% = 0.25
Step 2: Multiply the decimal by the original amount to get the discount (the discount is the money that we save): 0.25 × $10 = $2.50
Step 3: Subtract the discount from the original price: $10 - $2.50 = $7.50
The new price of the sunglasses is $7.50!
Markups:
Well, if stores offered discounts all the time, they'd run out of business! So that's why once in a while they increase the price. These increases are called markups. We do the same steps as for the discount, but only Step 3 changes: instead of subtracting, we add. Let's see the example...
You want to buy this AWESOME $40 game for your Xbox. Once you get to the shop, you see that there's a markup of 20%. What is the new price of the game?
Step 1: Change the percent markup to a decimal: 20% = 0.20
Step 2: Multiply the decimal by the original cost (this is the markup): 0.20 × $40 = $8
Step 3: Add the markup to the original cost: $40 + $8 = $48
The new price of the game is $48!
I hope that this answer helps the ones who need it! My source is my mind! :)
Aug 29, 2017
See explanation below.
#### Explanation:
As an example, let's say that the price of some object is $100.
A "discount" is an amount that is taken off, or "discounted", from the price of some good or service.
Now, if there is a 25% discount on the $100 object, it will mean that 25% of $100 will be taken off from the $100 price.
⇒ Discount amount = $100 × 0.25 = $25
Therefore, after a 25% discount, the price of the object becomes $75.
"Mark-ups", which are amounts added to the price, can be worked out in the same way as discounts.
If there were a 50% mark-up added to the price, 50% of $100 would be added to the $100 price.
⇒ Mark-up amount = $100 × 0.50 = $50
So, the final price of the object would become $150.
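If you like, the same two examples can be checked with a tiny bit of Python (the function name is just made up for this sketch):

def new_price(price, percent, markup=False):
    change = price * percent / 100                         # steps 1 and 2: percent -> decimal, times the price
    return price + change if markup else price - change   # step 3: subtract a discount, add a markup

print(new_price(10, 25))                  # 7.5  -- the sunglasses
print(new_price(40, 20, markup=True))     # 48.0 -- the marked-up game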
|
{}
|
# Building a complete home security perimeter system...
Status
Not open for further replies.
#### RaetherEnt
##### New Member
Hi all!
This is my first post to the board, but despite my "newbieness", I'm hoping you all could help me out with this.
I'm moving to the mountains soon, to a fairly remote property. It is approximately 11 acres and is completely surrounded by woods. For our location, a home security system would be rather useless, in that if someone were to actually break into our home, it would take at least 30-45 minutes minimum for authorities to arrive.
Therefore, I would like to build a monitoring system that will alert me immediately if someone were to enter the perimeter of the property. I'm looking to build a system that will do a number of things, to include:
1. Detect any movement in the woods
2. When movement is detected, turn on tree mounted flood lights and video camera (CCTV or wireless), alert me inside the house and let me view what is going on with my TV or other monitor.
Obviously I am going to need NUMEROUS sensors and lights, but I would like to keep only one monitor, so obviously I will also need some sort of control unit to make all this happen.
Anyone have any experience like this? I would be VERY, VERY appreciative of any info any of you might have.
Sounds extreme I know, however, recently I woke up one night to my dog barking, and had a laser light coming through my window...SO, I'm getting out of the city, and I don't want anyone else anywhere near that close to my house again!!!
THANKS!
Mike
Thanks!
#### Squintz
##### New Member
I'm looking into Home Automation stuff like this right now. If you have about $2,000 or $3,000 you can build yourself a nice security system which you can tell how to respond. For instance, if a motion detector were set off it could call up to 8 different numbers and send out e-mail if you connect it to the internet. Also, with the right software you can view security cameras set up around the house from a remote location. And with X10 technology you can control stuff like your lights, TV and all kinds of stuff. With Home Automation you can control just about anything. If you want to start the water in the tub on your way home, then call your house with the cell and tell the system to turn on the water and it will do so. You can control the heat and air and just about everything imaginable.
www.smarthomes.com
www.x10.com
www.HomeSeer.com
http://www.homeautomationforum.com/
Thats just a few i found in the last few days
If you get any information or better forum links then post them here so I can see them. Your system can range from simple lighting control to hundreds of controls and sensors around the house. I'm looking at the OmniTouch LCD control interface. It's a cool flush-mount control panel from which you can arm/disarm your security system and control the temperature, lights and all kinds of stuff.
#### trikebuilder
##### New Member
remote alarm systems for rural locations
A simple but effective system uses a pair of wires running along your fence, one system for each side. The system is simple: the bottom wire (about a foot below the top wire) is hooked to a simple oscillator in free-running mode. The second wire has a frequency about 1,000-1,500 Hz different from it. You balance the second oscillator with a null-bridge so that there is no influence from the lower wire signal. Anyone climbing over the fence inductively couples the bottom wire signal to the upper wire signal and causes a shift in the output frequency of the top (sensor) wire signal. This causes a shift from the null state, and this is detected and triggers the alarm. With four sub-systems, you can tell not only that they crossed, but which side of the property, and this can automatically aim the camera in a sweep on that side. [A point here: the intensity of that shift is proportional to the mass of the intruder (larger shift, bigger mass), and with some calibration, you could almost determine if the intruder is small or large.] Power each sub-system from a small, simple solar cell/battery unit and you have 24/7 service for little money invested.
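Just to illustrate the monitoring side of that idea in software, here is a toy Python sketch of the "compare each side's sensor frequency against its calibrated null" logic. Every name and number in it is made up; the real work is obviously in the analog oscillator/null-bridge hardware described above.

def check_perimeter(freq_readings, baselines, threshold_hz=50.0):
    # freq_readings / baselines: dicts mapping side name -> frequency in Hz.
    alarms = []
    for side, f in freq_readings.items():
        shift = abs(f - baselines[side])
        if shift > threshold_hz:
            alarms.append((side, shift))      # larger shift ~ larger intruder, per the post
    return alarms

baselines = {"north": 101200.0, "south": 101150.0, "east": 101300.0, "west": 101250.0}
readings  = {"north": 101195.0, "south": 101020.0, "east": 101305.0, "west": 101248.0}
print(check_perimeter(readings, baselines))   # [('south', 130.0)] -- aim the camera at the south fence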
#### john1
##### Active Member
Interesting ...
I'm looking for something that can tell if it's a cat ...
I will have to experiment; that sounds like it might do it.
My PIR sensors keep responding to my cat ...
#### tansis
##### New Member
I liked trikebuilder's suggestion; run a search of the web for "Theremin" circuits. Very useful for guarding a perimeter, either as a single-wire threshold or as modular detection cells to give an intruder's approximate position.
PIR in the countryside sadly will always be prone to the local wildlife tripping the alarm; special lenses are available that do not focus a foot or so above the ground, to allow pets safe passage, but mounting position is crucial. Dual-technology sensors, combining PIR and microwave, are also worth experimenting with.
Other recent developments from the high-tech end of the market are auto-tracking cameras; these use software to detect changes in the normal picture from the CCD camera and instruct the servo system to follow. A bit over the top, as it can be done cheaper with a pair of PIRs and a bit of logic.
Sadly these Remote Sentry Guns are not available .... yet! :evil:
#### Attachments
(image attachment, 46.4 KB)
Status
Not open for further replies.
|
{}
|
Proposition 10.89.3. Let $M$ be an $R$-module. The following are equivalent:
1. $M$ is finitely presented.
2. For every family $(Q_{\alpha })_{\alpha \in A}$ of $R$-modules, the canonical map $M \otimes _ R \left( \prod _{\alpha } Q_{\alpha } \right) \to \prod _{\alpha } (M \otimes _ R Q_{\alpha })$ is bijective.
3. For every $R$-module $Q$ and every set $A$, the canonical map $M \otimes _ R Q^{A} \to (M \otimes _ R Q)^{A}$ is bijective.
4. For every set $A$, the canonical map $M \otimes _ R R^{A} \to M^{A}$ is bijective.
Proof. First we prove (1) implies (2). Choose a presentation $R^ m \to R^ n \to M$ and consider the commutative diagram
$\xymatrix{ R^ m \otimes _ R (\prod _{\alpha } Q_{\alpha }) \ar[r] \ar[d]^{\cong } & R^ n \otimes _ R (\prod _{\alpha } Q_{\alpha }) \ar[r] \ar[d]^{\cong } & M \otimes _ R (\prod _{\alpha } Q_{\alpha }) \ar[r] \ar[d] & 0 \\ \prod _{\alpha } (R^ m \otimes _ R Q_{\alpha }) \ar[r] & \prod _{\alpha } (R^ n \otimes _ R Q_{\alpha }) \ar[r] & \prod _{\alpha } (M \otimes _ R Q_{\alpha }) \ar[r] & 0. }$
The first two vertical arrows are isomorphisms and the rows are exact. This implies that the map $M \otimes _ R (\prod _{\alpha } Q_{\alpha }) \to \prod _{\alpha } ( M \otimes _ R Q_{\alpha })$ is surjective and, by a diagram chase, also injective. Hence (2) holds.
Obviously (2) implies (3) implies (4), so it remains to prove (4) implies (1). From Proposition 10.89.2, if (4) holds we already know that $M$ is finitely generated. So we can choose a surjection $F \to M$ where $F$ is free and finite. Let $K$ be the kernel. We must show $K$ is finitely generated. For any set $A$, we have a commutative diagram
$\xymatrix{ & K \otimes _ R R^ A \ar[r] \ar[d]_{f_3} & F \otimes _ R R^ A \ar[r] \ar[d]_{f_2}^{\cong } & M \otimes _ R R^ A \ar[r] \ar[d]_{f_1}^{\cong } & 0 \\ 0 \ar[r] & K^ A \ar[r] & F^ A \ar[r] & M^ A \ar[r] & 0 . }$
The map $f_1$ is an isomorphism by assumption, the map $f_2$ is an isomorphism since $F$ is free and finite, and the rows are exact. A diagram chase shows that $f_3$ is surjective, hence by Proposition 10.89.2 we get that $K$ is finitely generated. $\square$
|
{}
|
Writing a Simple Text Editor
I recently decided to make a version of "Notepad" for my OS, and it's mostly done now.
To take a deeper look into using the OS's API, I'll walk you through how you could recreate the program.
(Note: the API is still under constant revision, so any information presented in this blog could change at any time...)
We can start by creating a manifest file. The build system should automatically find it.
Let's add a definition for the program.
text_editor/text_editor.manifest
[program]
name = "Text Editor";
shortName = "text_editor";
systemMessageCallback = ProcessSystemMessage;
This specifies the name of the program, its internal name, and the subroutine that will receive system messages. We'll take a look at this later.
Next we include information about how to build the program.
[build]
output = "Text Editor.esx";
source = "text_editor/main.cpp";
installationFolder = "/Programs/Text Editor/";
Install;
This gives the source file, the output executable, and the installation folder.
It also tells the build system to automatically install it. This just means a few lines will get added to "bin/OS/Installed Programs.dat".
Program installation is nothing fancy in the OS.
Now let's define the menus for the program.
We will be using the default file and edit menus, so we don't need to define those. Just a search menu, and the menubar.
[menu menuSearch]
name = "Search";
commandFind;
commandReplace;
Separator;
commandFindNext;
commandFindPrevious;

[menu mainMenubar]
osMenuFile;
osMenuEdit;
menuSearch;
This should be fairly self explanatory.
Now let's define the commands in the search menu.
I'll explain exactly what a "command" is in a bit.
[command commandFind]
label = "Find...";
shortcut = "Ctrl+F";

[command commandReplace]
label = "Replace...";
shortcut = "Ctrl+H";

[command commandFindNext]
label = "Find next";
shortcut = "F3";
defaultDisabled = true;

[command commandFindPrevious]
label = "Find previous";
shortcut = "Shift+F3";
defaultDisabled = true;
This is a fairly convenient way to define the labels and keyboard shortcuts for different commands.
All the commands we define in the manifest are also automatically added to the default command group, in which they are assigned an index starting from 0.
We disable the find next/previous commands by default, as we only want to enable these once the user has put some text in the find/replace textboxes in the dialogs we'll be making.
There are just a few more commands we'll need.
[command commandReplaceNext]
label = "Replace next";
defaultDisabled = true;

[command commandReplaceAll]
label = "Replace all";
defaultDisabled = true;
dangerous = true;

[command commandCloseDialog]
label = "Close";

[command commandMatchCase]
label = "Match case";
checkable = true;
These will be needed for the find/replace dialogs.
We mark the "replace all" command as "dangerous". This means buttons bound to that command will glow red when the user hovers over them.
It's purely a visual difference - but it should help the user to understand that the single button will cause a lot of modifications to their document.
The "match case" command is marked as checkable - this is because we'll be using it to create a checkbox. You could also put this command in the search menu, where it would become a checkable menu item.
At the end of our manifest file, we can include the templates for our windows and dialogs.
Currently there is no way to specify the contents of windows in the manifest file, so we'll have to do this in the code.
[window mainWindow]
width = 330;
height = 300;
minimumWidth = 250;
minimumHeight = 250;
title = "Text Editor";
menubar = mainMenubar;

[window dialogFind]
width = 360;
height = 100;
resizable = false;
title = "Find";

[window dialogReplace]
width = 360;
height = 130;
resizable = false;
title = "Replace";
As you can see, we can however define the dimensions and titles of the windows/dialogs.
One convenience of the manifest files is that we can exclude any properties we don't care about, and they will be automatically set to sane defaults.
This is why we don't have to mark the mainWindow as resizable - this is the default.
Okay, on to the C++!
text_editor/main.cpp
#include "../api/os.h"

#define OS_MANIFEST_DEFINITIONS
#include "../bin/Programs/Text Editor/manifest.h"
We start by including the API's header file, and the automatically-generated definitions from our manifest.
Now, since the program will handle multiple documents from a single process, we need to make a structure to store all the information about one "Instance" of the program.
struct Instance {
    OSObject window, textbox,
             findDialog, replaceDialog,
             findTextbox, replaceTextbox;
    OSString findString, replaceString;
    OSCommand *commands;
};
The API provides opaque handles to its objects, such as windows and controls through the type "OSObject".
The instance will store a handle to the window, the main textbox, the find/replace dialogs, and the textboxes within the find/replace dialogs.
We also have two strings (a buffer + byte count), for the current find query, and the replacement text.
Finally, we store an array of commands. When each instance is created we will create this array, and in it the state of each command can be stored. This includes whether the command is disabled, checked, the notification callback, etc.
It's also important to note at this point that the API also has a notion of instances.
It will have its own structure for each instance of the program that is started, where it keeps its own information. This will include, for example, the builtin commands (copy, paste, save file, etc.). Hopefully it's clear whether the API's instance object or the program's instance structure is being referred to when they're used. Now, onto the first of the two subroutines that make up this program.
OSCallbackResponse ProcessSystemMessage(OSObject, OSMessage *message) {
    ...
    return OS_CALLBACK_NOT_HANDLED;
}
This is the subroutine in which the API will send system messages to us.
The name of this subroutine is determined in the program section of the manifest, as shown above.
We only need to handle one system message.
if (message->type == OS_MESSAGE_CREATE_INSTANCE) {
    ...
    return OS_CALLBACK_HANDLED;
}
This is sent when we need to start a new instance of our program.
For example, the program executable might be double-clicked in the file manager, or a text file gets opened (not implemented yet).
Before we can create the window, we should start an allocation block.
OSStartGUIAllocationBlock(16384);
...
OSEndGUIAllocationBlock();
This will allocate a 16KB chunk of memory. This will then be split up as we create the window and its contents.
When the window is closed, this memory chunk will get deallocated.
Okay, let's allocate an Instance.
Instance *instance = (Instance *) OSHeapAllocate(sizeof(Instance), true /*clear to zero*/);
...and all the commands in the default command group (those are the ones in our manifest).
instance->commands = OSCreateCommands(osDefaultCommandGroup);
OSSetCommandGroupNotificationCallback(instance->commands, ProcessNotification);
We also set their notification callback.
A notification callback is the subroutine to which "notifications" are sent. Notifications range from things like invoking a command, to a list view wanting a row repainted.
The "context" parameter of the notification is automatically assigned to the index of the command in the command array.
We'll see how this works in a moment.
Here's where we create the API instance - "instanceObject".
OSObject instanceObject = OSCreateInstance(instance, message, instance->commands);
We pass in a pointer to our Instance structure - "instance". This is so whenever we receive a notification we will get a pointer to our Instance structure in the "instanceContext" field.
We also pass in the create instance message structure we received. This contains flags and other information that the creating process set.
Finally we pass in a pointer to our command array. This is so the API is aware of our commands - including useful things like making keyboard shortcuts work.
Great! Now we can make our window.
instance->window = OSCreateWindow(mainWindow, instanceObject);
We pass in the window template from our manifest, and the API instance object.
In the window we need a textbox, so let's make that.
instance->textbox = OSCreateTextbox((OSTextboxStyle) (OS_TEXTBOX_STYLE_MULTILINE | OS_TEXTBOX_STYLE_NO_BORDER), OS_TEXTBOX_WRAP_MODE_NONE);
We add it to a 1x1 grid so that it fills it ("OS_CELL_FILL").
OSObject grid = OSCreateGrid(1 /*columns*/, 1 /*rows*/, OS_GRID_STYLE_LAYOUT /*no border or padding between cells*/);
OSAddControl(grid, 0 /*column*/, 0 /*row*/, instance->textbox, OS_CELL_FILL);
We then set the grid as the root of the window, and then make the textbox the default focused control.
OSSetRootGrid(window, grid);
OSSetFocusedControl(textbox, true /*default*/);
By setting it as the default focused control, if the user presses the "escape" key in the window, the keyboard focus will go to the textbox.
Now, our program uses a single subroutine - ProcessNotification - to handle all of the notifications.
As noted earlier, we set the callback context for our commands to be their index into the command group.
We want to receive notifications from other sources, such as the textbox, so we need to come up with values to use, so we can identify them as the notification source in the callback.
Let's use negative values so they don't conflict with the commands.
#define WINDOW_NOTIFICATION (-1)
#define TEXTBOX_NOTIFICATION (-2)
#define INSTANCE_NOTIFICATION (-3)
#define FIND_TEXTBOX_NOTIFICATION (-4)
#define REPLACE_TEXTBOX_NOTIFICATION (-5)
Then we can actually set the notification callbacks...
OSSetObjectNotificationCallback(window, OS_MAKE_NOTIFICATION_CALLBACK(ProcessNotification, (intptr_t) WINDOW_NOTIFICATION));
OSSetObjectNotificationCallback(textbox, OS_MAKE_NOTIFICATION_CALLBACK(ProcessNotification, (intptr_t) TEXTBOX_NOTIFICATION));
OSSetObjectNotificationCallback(instanceObject, OS_MAKE_NOTIFICATION_CALLBACK(ProcessNotification, (intptr_t) INSTANCE_NOTIFICATION));
We'll take a look at what notifications we get from these sources in a bit.
Finally we need to set the window title.
OSSetWindowTitle(window, OSLiteral("Untitled"));
This will be prefixed to the title we defined in the manifest.
Okay! We're making some good progress.
We now need to make the notification callback itself.
OSCallbackResponse ProcessNotification(OSNotification *notification) {
    ...
    return OS_CALLBACK_NOT_HANDLED;
}
For our convenience, we first extract some information from the "notification" structure.
bool isCommand = notification->type == OS_NOTIFICATION_COMMAND;
Instance *instance = (Instance *) notification->instanceContext;
"isCommand" will indicate whether this notification was sent because of a command invocation (e.g. pressing a button, menu item or keyboard shortcut).
"instance" will contain a pointer to our instance structure. "notification->instance" will contain the API instance object.
Now we can switch on the notification source:
switch ((intptr_t) notification->context) {
}
The first notification we need to handle is when the contents of the textbox are modified.
We will need to mark the instance as modified, so that the "Save" command will be enabled, and attempting to use the "Open" command will prompt the user to save.
case TEXTBOX_NOTIFICATION: {
    if (notification->type == OS_NOTIFICATION_MODIFIED) {
        OSMarkInstanceModified(notification->instance);
        return OS_CALLBACK_HANDLED;
    }
} break;
We don't actually have to do any save/open confirmation dialog nonsense, since it's all handled by the API through the builtin commands.
It will just send us notifications telling exactly what to do.
Speaking of which...
case INSTANCE_NOTIFICATION: {
    if (notification->type == OS_NOTIFICATION_NEW_FILE) {
        // Clear the text in the textbox.
        OSSetText(instance->textbox, OSLiteral(""), OS_RESIZE_MODE_IGNORE);

        // Move the caret to the start of the textbox.
        OSTextboxSetSelection(instance->textbox, 0, 0);
    } else if (notification->type == OS_NOTIFICATION_OPEN_FILE) {
        // Load the file.
        size_t fileSize;
        char *text = (char *) OSReadEntireFile(notification->fileDialog.path, notification->fileDialog.pathBytes, &fileSize);

        if (text) {
            // Set the textbox's text.
            OSSetText(instance->textbox, text, fileSize, OS_RESIZE_MODE_IGNORE);

            // Move the caret to the start of the textbox.
            OSTextboxSetSelection(instance->textbox, 0, 0);

            // And deallocate the text.
            OSHeapFree(text);
        } else {
            // If we couldn't read the file, cause an error dialog to appear.
            return OS_CALLBACK_REJECTED;
        }
    } else if (notification->type == OS_NOTIFICATION_SAVE_FILE) {
        // Get the text in the textbox.
        OSString text;
        OSGetText(instance->textbox, &text);

        // Open the file we'll be saving to.
        OSNodeInformation node;
        OSError error = OSOpenNode((char *) notification->fileDialog.path, notification->fileDialog.pathBytes,
                OS_OPEN_NODE_WRITE_EXCLUSIVE | OS_OPEN_NODE_RESIZE_EXCLUSIVE | OS_OPEN_NODE_CREATE_DIRECTORIES, &node);

        if (error == OS_SUCCESS) {
            // Write the contents of the textbox and close the file handle.
            error = text.bytes == OSWriteFileSync(node.handle, 0, text.bytes, text.buffer)
                    ? OS_SUCCESS : OS_ERROR_UNKNOWN_OPERATION_FAILURE;
            OSCloseHandle(node.handle);
        }

        if (error != OS_SUCCESS) {
            // If there was an error, cause an error dialog to appear.
            notification->fileDialog.error = error;
            return OS_CALLBACK_REJECTED;
        }
    } else {
        return OS_CALLBACK_NOT_HANDLED;
    }

    return OS_CALLBACK_HANDLED;
} break;
case WINDOW_NOTIFICATION: {
    if (notification->type == OS_NOTIFICATION_WINDOW_CLOSE) {
        OSDestroyCommands(instance->commands);
        OSDestroyInstance(notification->instance);
        OSHeapFree(instance->findString.buffer);
        OSHeapFree(instance->replaceString.buffer);
        OSHeapFree(instance);
        return OS_CALLBACK_HANDLED;
    }
} break;
We destroy the command array, and the API instance object.
And then we free our Instance structure and its contents.
The finer details of window/instance closing aren't really finished yet, but it works for the most part at the moment.
Right. Now let's handle the commandFind and commandReplace commands. This is where we create the find/replace dialogs.
Since commandFind is just a simpler version of commandReplace, we'll just focus on commandReplace for now.
Here's a reference for how we will lay out the dialog.
case commandReplace: {
    // Is this a command notification?
    if (isCommand) {
        // If the replace dialog was already open, set it as the focused window.
        if (instance->replaceDialog) {
            OSSetFocusedWindow(instance->replaceDialog);
            return OS_CALLBACK_HANDLED;
        }

        // If the find dialog was open, close it.
        if (instance->findDialog) {
            OSCloseWindow(instance->findDialog);
            instance->findDialog = nullptr;
        }

        OSStartGUIAllocationBlock(16384);

        // Create a dialog using the dialogReplace template.
        instance->replaceDialog = OSCreateDialog(notification->instance, nullptr /*don't block input to the main window*/, dialogReplace);

        // Create a 1x2 grid at the root of the dialog.
        OSObject root = OSCreateGrid(1 /*columns*/, 2 /*rows*/, OS_GRID_STYLE_LAYOUT);
        OSSetRootGrid(instance->replaceDialog, root);

        // At the top of the grid, put the options. This is another 1x2 grid.
        OSObject options = OSCreateGrid(1, 2, OS_GRID_STYLE_CONTAINER /*include a border and padding between cells*/);
        OSAddGrid(root, 0 /*column*/, 0 /*row*/, options, OS_CELL_FILL); // Fill the dialog as much as possible.

        // Add the match case checkbox to the options grid.
        OSAddControl(options, 0, 1, OSCreateButton(instance->commands + commandMatchCase, OS_BUTTON_STYLE_NORMAL), OS_CELL_H_LEFT);

        // Create the textboxes grid, 2x2, and add it to the options grid.
        OSObject textboxes = OSCreateGrid(2, 2, OS_GRID_STYLE_CONTAINER_WITHOUT_BORDER);
        OSAddGrid(options, 0, 0, textboxes, OS_CELL_H_FILL);

        // Create the find textbox, set its notification callback, and add it with its label to the grid.
        instance->findTextbox = OSCreateTextbox(OS_TEXTBOX_STYLE_NORMAL, OS_TEXTBOX_WRAP_MODE_NONE);
        OSSetObjectNotificationCallback(instance->findTextbox, OS_MAKE_NOTIFICATION_CALLBACK(ProcessNotification, (intptr_t) FIND_TEXTBOX_NOTIFICATION));
        OSAddControl(textboxes, 0, 0, OSCreateLabel(OSLiteral("Find what:"), false, false), OS_CELL_H_RIGHT /*Align the label to the right*/);
        OSAddControl(textboxes, 1, 0, instance->findTextbox, OS_CELL_H_FILL /*Fill the dialog horizontally with the textbox*/);

        // Repeat for the replace textbox.
        instance->replaceTextbox = OSCreateTextbox(OS_TEXTBOX_STYLE_NORMAL, OS_TEXTBOX_WRAP_MODE_NONE);
        OSSetObjectNotificationCallback(instance->replaceTextbox, OS_MAKE_NOTIFICATION_CALLBACK(ProcessNotification, (intptr_t) REPLACE_TEXTBOX_NOTIFICATION));
        OSAddControl(textboxes, 0, 1, OSCreateLabel(OSLiteral("Replace with:"), false, false), OS_CELL_H_RIGHT);
        OSAddControl(textboxes, 1, 1, instance->replaceTextbox, OS_CELL_H_FILL);

        // Create the commands grid, 4x1.
        OSObject commands = OSCreateGrid(4, 1, OS_GRID_STYLE_CONTAINER_ALT /*use the alternate style*/);
        OSAddGrid(root, 0, 1, commands, OS_CELL_H_FILL);

        // Add a spacer at the start so that the buttons are all aligned to the right.
        OSAddControl(commands, 0, 0, OSCreateSpacer(0, 0), OS_CELL_H_FILL);

        // Create the buttons and add them to the grid.
        // By specifying the command for the button, they will automatically be labelled and disabled/enabled with the command.
        OSObject defaultButton = OSCreateButton(instance->commands + commandReplaceNext, OS_BUTTON_STYLE_NORMAL);
        OSAddControl(commands, 1, 0, defaultButton, OS_FLAGS_DEFAULT);
        OSAddControl(commands, 2, 0, OSCreateButton(instance->commands + commandReplaceAll, OS_BUTTON_STYLE_NORMAL), OS_FLAGS_DEFAULT);
        OSAddControl(commands, 3, 0, OSCreateButton(instance->commands + commandCloseDialog, OS_BUTTON_STYLE_NORMAL), OS_FLAGS_DEFAULT);

        // Set the default button and the focused control (the find textbox).
        OSSetFocusedControl(defaultButton, false);
        OSSetFocusedControl(instance->findTextbox, false);

        // If the user presses the "escape" key in the dialog, route it to do the same thing as commandCloseDialog.
        OSSetCommandNotificationCallback(OSGetDialogCommands(instance->replaceDialog) + osDialogStandardCancel,
                OS_MAKE_NOTIFICATION_CALLBACK(ProcessNotification, (intptr_t) commandCloseDialog));

        OSEndGUIAllocationBlock();

        return OS_CALLBACK_HANDLED;
    }
} break;
Let's handle the commandCloseDialog command.
This just closes the open dialog when invoked.
case commandCloseDialog: {
    if (isCommand) {
        if (instance->replaceDialog) {
            OSCloseWindow(instance->replaceDialog);
            instance->replaceDialog = nullptr;
        }

        if (instance->findDialog) {
            OSCloseWindow(instance->findDialog);
            instance->findDialog = nullptr;
        }

        return OS_CALLBACK_HANDLED;
    }
} break;
When the find/replace textboxes are modified, we need to copy their contents into our strings, and also enable/disable the relevant commands.
case FIND_TEXTBOX_NOTIFICATION: {
    if (notification->type == OS_NOTIFICATION_MODIFIED) {
        OSString text;
        OSGetText(instance->findTextbox, &text);

        OSHeapFree(instance->findString.buffer);
        instance->findString.buffer = (char *) OSHeapAllocate(text.bytes, false);
        OSCopyMemory(instance->findString.buffer, text.buffer, text.bytes);
        instance->findString.bytes = text.bytes;

        OSEnableCommand(instance->commands + commandFindNext, instance->findString.bytes);
        OSEnableCommand(instance->commands + commandFindPrevious, instance->findString.bytes);
        OSEnableCommand(instance->commands + commandReplaceNext, instance->findString.bytes && instance->replaceString.bytes);
        OSEnableCommand(instance->commands + commandReplaceAll, instance->findString.bytes && instance->replaceString.bytes);
    }
} break;

case REPLACE_TEXTBOX_NOTIFICATION: {
    if (notification->type == OS_NOTIFICATION_MODIFIED) {
        OSString text;
        OSGetText(instance->replaceTextbox, &text);

        OSHeapFree(instance->replaceString.buffer);
        instance->replaceString.buffer = (char *) OSHeapAllocate(text.bytes, false);
        OSCopyMemory(instance->replaceString.buffer, text.buffer, text.bytes);
        instance->replaceString.bytes = text.bytes;

        OSEnableCommand(instance->commands + commandReplaceNext, instance->findString.bytes && instance->replaceString.bytes);
        OSEnableCommand(instance->commands + commandReplaceAll, instance->findString.bytes && instance->replaceString.bytes);
    }
} break;
Finally, when we get a find/replace command, we need to actually do it......
I won't explain this as it isn't particularly relevant to this blog post. But rest assured it does finding and replacing :)
case commandReplaceAll:
case commandReplaceNext:
case commandFindPrevious:
case commandFindNext: {
    if (isCommand) {
        OSString text;
        OSGetText(instance->textbox, &text);

        bool firstPass = true;
        bool matchCase = OSGetCommandCheck(instance->commands + commandMatchCase);
        bool foundMatch = false;
        bool backwards = (intptr_t) notification->context == commandFindPrevious;
        bool replace = (intptr_t) notification->context == commandReplaceNext || (intptr_t) notification->context == commandReplaceAll;
        bool stopAfterMatch = (intptr_t) notification->context != commandReplaceAll;

        OSString query = instance->findString;
        OSString replacement = instance->replaceString;

        uintptr_t byte, byte2, start;
        OSTextboxGetSelection(instance->textbox, &byte, &byte2);

        if (!backwards && byte2 > byte) byte = byte2;
        else if (backwards && byte > byte2) byte = byte2;
        if (backwards) byte--;
        start = byte;

        while (true) {
            if (!backwards) {
                if (firstPass && byte + query.bytes > text.bytes) {
                    firstPass = false;
                    byte = 0;
                }

                if (!firstPass && (byte > start || byte + query.bytes > text.bytes)) {
                    break;
                }
            } else {
                if (firstPass && byte > text.bytes) {
                    firstPass = false;
                    byte = text.bytes - query.bytes;
                }

                if (!firstPass && (byte <= start || byte > text.bytes)) {
                    break;
                }
            }

            bool fail = false;

            for (uintptr_t j = 0; j < query.bytes; j++) {
                uint8_t a = query.buffer[j];
                uint8_t b = text.buffer[j + byte];

                if (!matchCase) {
                    if (a >= 'a' && a <= 'z') a += 'A' - 'a';
                    if (b >= 'a' && b <= 'z') b += 'A' - 'a';
                }

                if (a != b) {
                    fail = true;
                    break;
                }
            }

            if (!fail) {
                foundMatch = true;
                OSTextboxSetSelection(instance->textbox, byte, byte + query.bytes);

                if (replace) {
                    OSTextboxRemove(instance->textbox);
                    OSTextboxInsert(instance->textbox, replacement.buffer, replacement.bytes);
                }

                if (stopAfterMatch) {
                    break;
                }
            }

            if (!backwards) byte++;
            else byte--;
        }

        if (!foundMatch && stopAfterMatch) {
            OSShowDialogAlert(OSLiteral("Find"),
                    OSLiteral("The find query could not be found in the current document."),
                    OSLiteral("Make sure that the query is correctly spelt."),
                    notification->instance, OS_ICON_WARNING, instance->window);
        }

        return OS_CALLBACK_HANDLED;
    }
} break;
And that's it!
You've written a simple text editor for my OS!
The API is still in development, so it's a bit rough around the edges, but I'm very happy with how it seems to be going.
It may seem a bit complicated, but once you understand it writing GUI programs becomes fairly easy, I've found.
Simon Anciaux,
The more things you add to the manifest files, the more I think it's a bad idea (But you already address that in a previous comment).
Could you post the complete code and manifest so we can have a better view of the whole program?
If I understood correctly, there are two callbacks, one for system message and one for the commands/ui.
Have you thought about having no callback, but instead a user defined message loop. Similar to using GetMessage/PeekMessage on Windows, except every (as in there is no special case that wouldn't follow the rule) message would go through that. So we could choose when and how to handle every message.
int main( void ) {
    while ( running ) {
        // Messages
        while ( PeekMessage( message ) ) {
            switch ( message ) {
                ....
            }
        }

        // Update

        // Render
    }
}
I'm not sure I understand correctly why an "API instance" is needed. I think API is the wrong term here. It seems that it's just "application state". Couldn't we just pass the parameter you give to OSCreateInstance to OSCreateWindow to achieve the same thing?
Is it safe to do nothing in the close window notification, return from main and assume the os will release the handles and memory ?
@mrmixer
The more things you add to the manifest files, the more I think it's a bad idea
What don't you like about them? They're mostly just used for automatic structure definitions, and a few build-related things.
Could you post the complete code and manifest so we can have a better view of the whole program?
Bitbucket link, see the "text_editor" folder.
If I understood correctly, there are two callbacks, one for system message and one for the commands/ui.
Have you thought about having no callback, but instead a user defined message loop. Similar to using GetMessage/PeekMessage on Windows, except every (as in there is no special case that wouldn't follow the rule) message would go through that. So we could choose when and how to handle every message.
I have a slightly different way of doing game loops. After calling "OSSendIdleMessages(true)", whenever the message queue is empty, an idle message will be sent. The program can then do whatever it wants, and can then return to let the API check and process the message queue again. I decided to do it like this so that the message loop is completely standardised between programs.
I'm not sure I understand correctly why an "API instance" is needed. I think API is the wrong term here. It seems that it's just "application state". Couldn't we just pass the parameter you give to OSCreateInstance to OSCreateWindow to achieve the same thing?
Hopefully this system will make more sense when I introduce tab based programs.
Is it safe to do nothing in the close window notification, return from main and assume the os will release the handles and memory ?
When a process exits the OS will release all its resources. However, because this one process will handle multiple windows and instances, we have to make sure that we clean up manually when a window is closed.
*
I'm fully aware that a lot of the features/ideas in the API are quite different to the hands-off approach of other operating systems. However, having written several programs for the OS so far, I'm finding it much easier to do everything than in other GUI frameworks I've used. Some features may have to be scrapped or changed, but I really like how it's working out. YMMV, I guess.
Simon Anciaux,
The [program] and [build] part of the manifest I don't mind. It looks like things for the build system/compiler so I'm ok with that.
The rest looks to me like an unnecessary level of indirection between me and the code. Most of it could be done in code with approximately the same number of lines, and you could get/modify the values at runtime.
Type thing = GetDefault( );
thing.width = screenWidth / 2;
thing.param = value;
nakst
I have a slightly different way of doing game loops. After calling "OSSendIdleMessages(true)", whenever the message queue is empty, an idle message will be sent. The program can then do whatever it wants, and then can return to let the API to check and process the message queue again. I decided to do it like this so that the message loop is completely standardised between programs.
This was not a game loop; I use that same loop for applications, but use GetMessage to wait for the first message and then PeekMessage to process the queue, so that I process every pending message before going on to update and render.
The full code for the text editor doesn't contain a main function or any rendering call. How is that done ? Do you render after every processNotification ? Can you process every message before rendering ?
Also my question/suggestion was that I would like to have a unified way of processing messages (system and user messages in the same place) without using a callback so that I control when and how it's happening.
Could you show (in pseudo code) a game loop for your os ?
The rest looks to me like an unnecessary level of indirection between me and the code. Most of it could be done in code with approximately the same amount of lines, and you could get/modify the values at runtime.
It is possible to make the definitions in code like you show.
However, I don't know how well it will play with configuration files, once I implement them. I ideally want the user to have full control over the contents of everything, without needing to recompile your program. This is why I decided to introduce the layer of indirection.
Furthermore, I find it quicker to update the manifest file. For example, forward references work just fine in the manifests, unlike in C! It's just metaprogramming, which I think is better than some crazy x-macros setup in C.
The full code for the text editor doesn't contain a main function or any rendering call. How is that done ? Do you render after every processNotification ? Can you process every message before rendering ?
There is no main function, because the text editor does not use the C runtime library. The API contains the program's entry point and will call into the ProcessSystemMessage subroutine in your program as necessary. If any global initialisation must be done, then it can be done when the first system message is received.
To understand when rendering happens, I need to explain how the messaging system works in greater detail.
Let's say the user clicks a button. The window manager will send an OS_MESSAGE_MOUSE_LEFT_RELEASED message to your program, which the API will receive in its internal message loop. It will then translate this into an OS_NOTIFICATION_COMMAND notification, which is sent to the button's notification callback. All the controls that need to be repainted set the repaint flag in their header, and the repaint-descendent flag in all their parent GUI objects. Before the API processes the next message from the window manager, it will then do the actual repainting: it will find all the controls with the repaint flag set, and draw them to the window's frame buffer. Once everything is painted to the frame buffer, it will tell the window manager to redraw the modified areas of the frame buffer to the screen.
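Very roughly, then, the internal loop behaves like this. This is pseudo-code only, to illustrate the order of operations just described; none of these function names are real API calls:

while (true) {
    OSMessage message = ReceiveMessageFromWindowManager();  // e.g. OS_MESSAGE_MOUSE_LEFT_RELEASED

    TranslateAndDispatchToNotificationCallbacks(&message);  // e.g. send OS_NOTIFICATION_COMMAND to the button's callback;
                                                            // modified controls set their repaint flag,
                                                            // and the repaint-descendent flag on their ancestors

    RepaintFlaggedControls();                               // draw flagged controls into the window's frame buffer
    AskWindowManagerToRedrawModifiedAreas();                // blit only the changed areas to the screen
}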
I do not wait until the message queue is empty to repaint, because I think it will make the program more responsive, but I'm not sure this is the best strategy?
Also my question/suggestion was that I would like to have a unified way of processing messages (system and user messages in the same place) without using a callback so that I control when and how it's happening.
I'm not sure I want it to work this way. There is a lot of special handling that the API does in its message loop that I wouldn't want to make the users do. However, I will consider changing it, as you have suggested.
But in my opinion, the message loop is part of the platform, not the program.
Could you show (in pseudo code) a game loop for your os ?
When you receive the OS_MESSAGE_CREATE_INSTANCE message, call OSSendIdleMessages(true). Then when you receive OS_MESSAGE_IDLE messages you can update your game, using the microsecondsSinceLastIdleMessage field in the OSMessage structure. You can also limit the framerate by calling OSSleep(uint64_t milliseconds).
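In rough pseudo-code, that looks something like this (UpdateGame and RenderGame are placeholders for your own code, not API calls):

OSCallbackResponse ProcessSystemMessage(OSObject, OSMessage *message) {
    if (message->type == OS_MESSAGE_CREATE_INSTANCE) {
        // Create the window, etc., then ask for idle messages.
        OSSendIdleMessages(true);
        return OS_CALLBACK_HANDLED;
    } else if (message->type == OS_MESSAGE_IDLE) {
        // Advance the simulation by the elapsed time, then draw a frame.
        UpdateGame(message->microsecondsSinceLastIdleMessage);
        RenderGame();
        OSSleep(16); // optionally limit the framerate
        return OS_CALLBACK_HANDLED;
    }

    return OS_CALLBACK_NOT_HANDLED;
}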
Simon Anciaux,
nakst
I do not wait until the message queue is empty to repaint, because I think it will make the program more responsive, but I'm not sure this is the best strategy?
I think it's making it less responsive.
If you have 3 messages in the queue, the duration to get the final frame is 3 * message processing + 3 * rendering, with rendering being generally heavier than message processing and the intermediate state probably won't be visible unless you flip the frame buffer each time (and in that case, with vsync you would have a lot of lag). If you process the 3 messages at the same time the duration is 3 * message processing + 1 * rendering.
nakst
There is a lot of special handling that the API does in its message loop that I wouldn't want to make the users do. However, I will consider changing it, as you have suggested.
As a user I don't want to have to do a lot to get messages. If the OS can create a message queue that I can consume as I want, the special handling could still be done by the OS ?
nakst
But in my opinion, the message loop is part of the platform, not the program.
I disagree with that. I think the while(PeekMessage( )) loop is simple enough and let me be the one controlling the program as opposed to being a service to the OS API. I can sort of see how your way of doing things can work for simple applications. Maybe two level of API would work, the "low" level with the PeekMessage loop, and the "high" level with the processNotification callback ?
nakst
Then when you receive OS_MESSAGE_IDLE messages you can update your game, using the microsecondsSinceLastIdleMessage field in the OSMessage structure. You can also limit the framerate by calling OSSleep(uint64_t milliseconds).
I suppose there will be no rendering between messages (I guess rendering is only for the OS gui layer). But this way of handling messages for a game seems weird to me. And a sleep function with a granularity of milliseconds is useless for games in my opinion (not that I use Sleep in games).
## December 26, 2019
### Schrödinger’s Unified Field Theory
#### Posted by John Baez
Erwin Schrödinger fled Austria during World War II. In 1940 he found a position in the newly founded Dublin Institute for Advanced Studies. This allowed him to think again. He started publishing papers on unified field theories in 1943, based on earlier work of Eddington and Einstein. He was trying to unify gravity, electromagnetism and a scalar ‘meson field’… all in the context of classical field theory, nothing quantum.
Then he had a new idea. He got very excited about it, and in January 1947 he wrote:
At my age I had completely abandoned all hope of ever again making a really big important contribution to science. It is a totally unhoped-for gift from God. One could become a believer or superstitious [gläubig oder abergläubig], e.g., could think that the Old Gentleman had ordered me specifically to go to Ireland to live in 1939, the only place in the world where a person like me would be able to live comfortably and without any direct obligations, free to follow all his fancies.
He even thought he might get a second Nobel prize.
He called a press conference… and the story of how it all unraveled is a bit funny and a bit sad. But what was his theory, actually?
Someone must have written a nice paper about Schrödinger’s theory in modern differential-geometric language, even if the conclusion is that it’s a complete mess. If you know of such a paper, please let me know! I’m getting my information from some sources that use old-fashioned index notation, which makes it harder for me to tell what’s really going on. Namely, this very nice paper:
together with Schrödinger's book:
• Erwin Schrödinger, Space-Time Structure, Cambridge U. Press, Cambridge, 1950. Chapter XII: Generalizations of Einstein’s theory.
and his first paper on this theory:
The idea seems to be this. He starts with a 4-manifold $M$. The only field in his theory is a linear connection $D$ on the tangent bundle of $M$.
(For some reason everyone calls this an ‘affine’ connection — I’ve never understood why. I would imagine that an affine connection is one where we take the structure group to be the affine group, but here it’s the group of linear transformations of the tangent space.)
He defines the Riemann curvature tensor of $D$ in the usual way and contracts two indices to get the Ricci tensor $R_{\mu\nu}$. You don’t need a metric to do this. When $D$ is the Levi-Civita connection of a Riemannian metric, $R_{\mu \nu}$ is symmetric, but in the situation at hand it needn’t be.
His field theory then has the Lagrangian
$L = \frac{2}{\lambda} \sqrt{ - det R }$
where $\lambda$ is some constant.
This makes me nervous: what’s really going on here? Of course we often see the expression
$\sqrt{ - det g }$
in general relativity, but that has a nice geometrical explanation: the 4-form
$\sqrt{ - det g } \; d^4 x$
is the volume form associated to the metric $g_{\mu \nu}$. If the Ricci tensor $R_{\mu \nu}$ were a symmetric tensor I could happily pretend it’s a metric — perhaps not positive definite, perhaps degenerate — and treat the action in Schrödinger’s theory
$S \; = \; \int L \; d^4 x \; = \; \frac{2}{\lambda} \int_M \sqrt{ - det R } \; d^4 x$
as the volume of $M$ computed using the volume form associated to this metric. But what’s the deal when the Ricci tensor is not symmetric?
Schrödinger got his ideas from previous work of Eddington, Einstein and Straus, all of whom had been studying variants of general relativity where the Riemannian metric is replaced by a not-necessarily-symmetric tensor $g_{\mu \nu}$. So, he could have been using some body of wisdom on ‘Riemannian geometry with non-symmetric metrics’ that I’m missing out on. Or, it could be that this whole line of thought died out precisely because ‘Riemannian geometry with non-symmetric metrics’ turned out to be a thoroughly unworkable idea. And if so, I’d like to know why.
Anyway, starting from his Ricci tensor, Schrödinger then proceeds to define a not-necessarily-symmetric tensor $g_{\mu \nu}$ by
$g_{\mu \nu} = \frac{1}{\lambda} R_{\mu \nu}$
Well, he doesn’t proceed exactly this way, but that’s the upshot. He then works out some field equations from his Lagrangian. To write them in a way he likes, he introduces a new connection whose Christoffel symbols ${}^\bullet \Gamma^\lambda_{\mu \nu}$ are related to the Christoffel symbols $\Gamma^\lambda_{\mu \nu}$ of his original connection $D$ as follows:
${}^\bullet \Gamma^\lambda_{\mu \nu} = \Gamma^\lambda_{\mu \nu} + \frac{2}{3} \delta^\lambda_\mu \Gamma_\nu$
where
$\Gamma_\nu = \frac{1}{2} \left( \Gamma^\lambda_{\nu \lambda} - \Gamma^\lambda_{\lambda \nu} \right)$
He says that the field equations are simpler to understand using this new connection and its Ricci tensor, which he calls ${}^\bullet R$.
What’s going on here?
On January 27, 1947, Schrödinger gave a lecture on his new theory. He even called a press conference to announce it! From what he said to the reporters, you can tell that he was in the grip of grandiosity:
The nearer one approaches truth, the simpler things become. I have the honour of laying before you today the keystone of the Affine Field Theory and thereby the solution of a 30 year problem: the competent generalization of Einstein’s great theory of 1915. The solution is
$\delta \int \mathcal{L} \, d\tau = 0 \quad \text{with} \quad \mathcal{L} = \sqrt{- \mathrm{det}\, R_{i k} }$
$R_{i k} = - \frac{\partial \Gamma^\sigma_{i k}}{\partial x_\sigma} + \frac{\partial \Gamma^\sigma_{i\sigma}}{\partial x_k} + \Gamma^\sigma_{i\tau}\Gamma^\tau_{\sigma k} - \Gamma^\sigma_{\rho\sigma}\Gamma^\rho_{i k}$
where $\Gamma$ is a general affinity with 64 components. That is all. From these three lines my friends would reconstruct the theory, supposing the paper I am handing in got hopelessly lost, and I died on my way home.
The story of the great discovery was quickly telegraphed around the world, and the science editor of the New York Times interviewed Einstein to see what he thought.
Einstein was not impressed. In a carefully prepared statement he said:
Schrödinger’s latest effort […] can be judged only on the basis of mathematical-formal qualities, but not from the point of view of ‘truth’ (i.e., agreement with the facts of experience). Even from this point of view I can see no special advantages over the theoretical possibilities known before, rather the opposite. As an incidental remark I want to stress the following. It seems undesirable to me to present such preliminary attempts to the public in any form. It is even worse when the impression is create that one is dealing with definite discoveries concerning physical reality. Such communiqués given in sensational terms give the lay public misleading ideas about the character of research. The reader gets the impression that every five minutes there is a revolution in science, somewhat like the coup d’état in some of the smaller unstable republics.
Ouch. Wise words even now!
But Einstein didn’t claim Schrödinger’s theory was nonsense. It seems part of Einstein’s irritation was due to how similar Schrödinger’s work was to his own work with Straus! Indeed, Schödinger starts the first paper on his theory with the following curious remark:
The reason it has taken me so long to find out the correct Lagrangian is, that it is the most obvious one and had been tried more than once by others.
So, I feel there could be at least something interesting about Schrödinger’s field equations. But they’re complicated enough, and I understand them so poorly, that I don’t even want to copy them down here. If you want to see them, check out The final affine field laws I.
Posted at December 26, 2019 7:07 PM UTC
### Re: Schrödinger’s Unified Field Theory
It is little-known outside Germany (I read quite a number of books on the history of physics and never found out until I did a postdoc in Germany and my PI told me the story; you can see it in the German Wikipedia article on him but not on the English one) that Heisenberg also had a failed attempt at a unified field theory (called a ‘Weltformel’) which received a lot of media attention at the time it was presented, in 1958 (live on TV if I remember well). You can see more here:
https://www.spektrum.de/news/heisenbergs-weltformel/1542469
Posted by: Andrei on December 26, 2019 10:51 PM | Permalink | Reply to this
### Re: Schrödinger’s Unified Field Theory
Thanks!
I know just a little about Heisenberg’s theory. For example, I know Dirac told him this in a letter sent on March 6, 1967:
My main objection to your work is that I do not think your basic (non-linear field) equation has sufficient mathematical beauty to be a fundamental equation of physics. The correct equation, when it is discovered, will probably involve some new kind of mathematics and will excite great interest among the pure mathematicians, just like Einstein’s theory of the gravitational field did (and still does). The existing mathematical formalism just seems to me inadequate.
Perhaps every really ambitious physicist needs to take a try at a ‘theory of everything’ at some point in their life.
Posted by: John Baez on December 27, 2019 1:08 AM | Permalink | Reply to this
### Re: Schrödinger’s Unified Field Theory
That really is a sad story. Is there any indication of how he managed to kid himself into thinking he’d made a huge discovery?
Presumably he had other physicists to talk to at the Dublin Institute for Advanced Studies. But maybe no one else understood what he was doing, or maybe they understood but didn’t dare tell him they weren’t convinced, or maybe they told him but he disagreed.
Or just maybe Schrödinger was right, and there's more to his theory than anyone has ever appreciated… I guess John is on a mission to find out!
Posted by: Tom Leinster on December 26, 2019 11:06 PM | Permalink | Reply to this
### Re: Schrödinger’s Unified Field Theory
Schrödinger was in regular correspondence with Einstein and other physicists, so if he fooled himself it wasn’t for lack of feedback.
After his press conference, which resulted in some newspaper articles, and Einstein’s tart public response, he sent a rather lame apology to Einstein, saying he’d done it just to get a pay raise:
I had to indulge in a little hot air in the present somewhat precarious position…. In the first place, our basic salaries have not been increased since 1940.
But there’s evidence (like the letter I quoted at the start of my post) that Schrödinger really did think he was on to something big.
Einstein wrote a curt response saying Schrödinger’s theory was essentially equivalent to one Einstein had already proposed with Straus. Schrödinger replied… but Einstein didn’t write to him again for 3 years.
I doubt there’s anything really great about this theory, but it’s sufficiently different from the sort of field theories people talk about today that I’m having trouble understanding it… yet it’s simple enough that I feel I should.
I suppose I should think about what Einstein and Straus did, but that too seems a bit mysterious.
Posted by: John Baez on December 27, 2019 1:02 AM | Permalink | Reply to this
There’s an extended discussion of the conflict between Einstein and Schrödinger over this issue in Paul Halpern’s book.
The subsequent history is described in sections 9 and 10 of the Goenner review mentioned previously. In brief, to the extent that these theories make any predictions at all, they don’t agree with experiment.
Posted by: Phil Harmsworth on December 29, 2019 3:24 AM | Permalink | Reply to this
### Re: Schrödinger's Unified Field Theory
I’d like to read Halpern’s book!
I read Goenner’s paper but I have trouble following the logic of the arguments concerning Schrödinger’s theory. I’m willing to accept that if it agreed with experiment, everyone would know about this theory. I’m more interested in the geometrical meaning of the theory and whether it’s mathematically well-behaved at all. I’d never thought of such a simple Lagrangian built simply from a connection on the tangent bundle! The fact that nobody talks about it anymore suggests that there’s something horrible about it. I was confused about what the Lagrangian $\sqrt{| det R |}$ even meant (in a coordinate-free way) until Rogier explained it.
Posted by: John Baez on December 29, 2019 7:44 AM | Permalink | Reply to this
There’s an alternative treatment (in classical notation) in the second appendix (“Relativistic Theory of the Non-Symmetric Field”) of Einstein’s book, The Meaning of Relativity.
Much more recently, James Shifflett has proposed a modification to the Einstein-Schrödinger theory that apparently overcomes several of the unphysical features of the original - https://einstein-schrodinger.com/einstein-schrodinger.html.
Posted by: Phil Harmsworth on December 31, 2019 1:16 PM | Permalink | Reply to this
### Re: Schrödinger’s Unified Field Theory
For some reason everyone calls this an ‘affine’ connection — I’ve never understood why. I would imagine that an affine connection is one where we take the structure group to be the affine group, but here it’s the group of linear transformations of the tangent space.
Take this with a grain of salt: first, I don’t know anything about the naming history – I imagine it goes back a long ways; second, I disclaim any expertise in the subject area. But two things come to mind:
• Back in the very old days, when manifolds were thought of living inside an ambient Euclidean space $E$ rather than having an autonomous existence via a structure of maximal atlas, the tangent spaces of the manifold would be identified with affine subspaces of $E$, and the parallel transport induced by a connection would thus be identified with a collection of affine transformations between these affine subspaces. Perhaps the “affinity” would be most keenly felt in the case where the manifold is $E$ itself.
• In general, the space of affine connections on a manifold is actually an affine space (i.e., thinking in terms of covariant derivatives, a convex combination of connections is a connection, but general linear combinations are not since they generally fail the Leibniz rule).
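To spell out the second point: if $\nabla^1$ and $\nabla^2$ are connections and we set $\nabla = t \nabla^1 + (1-t) \nabla^2$, then

$\nabla_X (f s) = t \left( (X f) s + f \nabla^1_X s \right) + (1-t) \left( (X f) s + f \nabla^2_X s \right) = (X f) s + f \nabla_X s$

precisely because the coefficients sum to $1$. A general linear combination $a \nabla^1 + b \nabla^2$ would instead produce $(a+b)(X f) s$ in the first term, violating the Leibniz rule unless $a + b = 1$.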
Posted by: Todd Trimble on December 27, 2019 7:16 PM | Permalink | Reply to this
### Re: Schrödinger’s Unified Field Theory
I like the first theory for why affine connections are called that. In his book Space-Time Structure, which by the way is written in a very pleasant conversational style, Schrödinger calls them “affine connexions” or “affinities”.
I remember being very relieved to hear that connections on a principal bundle form an affine space. In learning general relativity one focuses on tensor fields, and it’s upsetting to learn that the Christoffel symbols, while important, are not a tensor field. The mathematical approach to connections helps settle what’s going on, but understanding the structure of the set of all connections helped nail everything down. I think most mathematical physicists who care about gauge fields or gravity become keenly aware that connections form an affine space.
Somehow the affine space of connections feels like it came after the term ‘affine connection’. But maybe you’re suggesting that it was lurking in there from the start. After all, the lack of a distinguished origin precisely says there’s no god-given ‘best’ way to parallel transport a tangent vector along a curve.
Posted by: John Baez on December 27, 2019 8:15 PM | Permalink | Reply to this
### Re: Schrödinger’s Unified Field Theory
One way it could perhaps have been lurking in there from the start is that the transformation law for the Christoffel symbols is an affine equation, in contrast to the transformation law for tensors which is a linear equation.
Posted by: Mike Shulman on December 27, 2019 9:48 PM | Permalink | Reply to this
### Re: Schrödinger’s Unified Field Theory
I always imagined that it's because the structure of an affine connection makes the manifold into an affine space … if the connection is flat and torsion-free, that is. I never had any particular evidence for this, but the discussion at https://en.wikipedia.org/wiki/Affine_connection#Affine_connections_as_Cartan_connections seems relevant to both this idea and to Todd's ideas.
Posted by: Toby Bartels on January 5, 2020 1:59 AM | Permalink | Reply to this
### Re: Schrödinger’s Unified Field Theory
So, one question is this: if we have a not-necessarily-symmetric tensor $A_{\mu \nu}$ on an oriented 4-manifold $M$, does
$\sqrt{ det A } \; d^4 x$
define a 4-form on this manifold in a coordinate-independent way?
If $A$ is symmetric and nondegenerate, we use it to define a nonzero 4-form on each tangent space $T_x M$, and that’s what
$\sqrt{ det A } \; d^4 x$
is. I think this also works if $A$ is antisymmetric. But what if $A$ is neither symmetric nor antisymmetric? Is there some way that $A$ defines a 4-form on each tangent space $T_x M$, which in coordinates looks like
$\sqrt{ det A} \, d^4 x ?$
The fact that nobody talks about this idea suggests there's something wrong with it. I've seen nice theories of Hodge duals for vector spaces with nondegenerate symmetric or antisymmetric bilinear forms.
Posted by: John Baez on December 27, 2019 8:01 PM | Permalink | Reply to this
### Re: Schrödinger’s Unified Field Theory
So, he could have been using some body of wisdom on ‘Riemannian geometry with non-symmetric metrics’ that I’m missing out on
These kinds of things have received some attention in the past. As far as I remember, a lot of differential geometry carries over without modification to the non-symmetric setting (I think e.g. Lang's book "Differential and Riemannian Geometry" covers some of this). Especially the case where the metric is antisymmetric (+ some additional properties) is interesting, as this is the setting of symplectic manifolds (and by extension, classical mechanics).
Posted by: Matty Wacksen on December 27, 2019 9:29 PM | Permalink | Reply to this
### Re: Schrödinger’s Unified Field Theory
Thanks!
What I really need is to see how, or whether, a nondegenerate bilinear form $A$ on an $n$-dimensional real vector space $V$ gives a ‘volume form’: an element of the exterior power $\Lambda^n V^*$ where $n = dim(V)$. I know how to do it when $A$ is either symmetric or skew-symmetric (= antisymmetric).
Posted by: John Baez on December 28, 2019 2:02 AM | Permalink | Reply to this
### Re: Schrödinger’s Unified Field Theory
This is an interesting old idea! I apologize in advance because I am going to cite my own work in this comment, but this topic is actually related.
The first thing I thought of when I saw the equations was Lovelock Lagrangians. They have some structure in common, but Lovelock makes more sense at first glance. My colleagues Leandro Salomone and Santiago Capriotti have geometrized the Lovelock Lagrangians in a clever way, using the natural structures of the bundle of connections (https://arxiv.org/abs/1911.07278). I think these structures and ideas are a good place to start in order to geometrize Schrödinger's field equations. But I will need to look at it in more detail.
I find interesting that he doesn’t use a metric. In my Ph.D I showed that it is possible to formalize General Relativity (in the Palatini setting), without a metric (https://arxiv.org/abs/1804.06181). In the Hamiltonian setting, using a clever use of constraints, the momenta of the connection becomes the metric. At this point, I think of it as just a hand trick, reorganizing the variables so that the metric is hidden. But it opens the idea that you only need a connection to formalize GR. And Schrödinger proposal enters this idea of only using a connection. As before, I will need to think more about it, and probably it’s just a coincidence.
As a last note, the new connection he proposes also appears naturally in the constraints of the Palatini Lagrangian. It appears essentially from the part of the curvature that depends on the connection (and not its derivatives). It is also related to the gauge freedom of the theory (as Einstein noticed in his paper of 1921 about Palatini gravity), which is related to the linear part (on the derivatives of the connection) of the curvature. So it’s not a surprise that such an expression is considered.
Thank you for digging up and sharing such compelling ideas!
Posted by: Jordi Gaset on December 28, 2019 1:16 PM | Permalink | Reply to this
### Re: Schrödinger’s Unified Field Theory
Thanks! I’ll look at those references.
I think Schrödinger was building upon previous work of Einstein and Straus that used a Palatini-like formalism but for non-symmetric ‘metrics’. It’s explained fairly well in Chapter XII of Schrödinger’s book Space-Time Structure, so I recommend a look at that. Unlike most scientists, Schrödinger can write quite well, so this chapter is fun to read if one already knows general relativity. The worst thing about it is that each section builds on the previous one, and he just describes what’s new rather than retelling the whole story, so you’re forced to read the whole thing. But if you do that, it’s not bad.
He starts by describing the Palatini formalism for gravity, where the metric and connection are treated as independent fields, and shows how the vanishing of torsion arises from the equations of motion.
Then he describes the “Einstein–Straus theory”, saying
The singular merit of Palatini’s derivation is that it can be extended straightaway without ambiguity to a non-symmetric $g_{i k}$.
I guess I need to study this section more carefully.
Then he introduces his own theory, “the purely affine theory”, saying
Can we not avoid introducing, with Palatini, two basic connexions of the space-time manifold, a quasi-metrical one by the $g_{i k}$ and an affinity $\Gamma^i_{j k}$? Can one not go a step beyond Palatini and base a theory on affine connexion alone […]?
(I think here he’s using ‘connexion’ in some old way that includes both the metric and what we would call the connection.)
He says Eddington tried this idea in 1921, using the square root of the determinant of the Ricci tensor of an affine connection as the Lagrangian — but assuming the affine connection was torsion-free. He wants to drop this assumption.
(Beware: he calls the Ricci tensor the ‘Einstein tensor’.)
He says Einstein also had studied such a theory, again assuming the connection was torsion-free.
It’s sort of funny that back then, these various formulations and generalizations of gravity were being studied as possible ‘theories of everything’, worthy of press conferences and announcements in newspapers. (Even Einstein was guilty of the latter.) I’m thinking that perhaps the calculations were so difficult back then that only the lure of a ‘unified field theory’ provided people with the mental energy to carry them out.
Posted by: John Baez on December 29, 2019 1:30 AM | Permalink | Reply to this
### Re: Schrödinger’s Unified Field Theory
Any bilinear form $B: V\otimes V \to \mathbb{R}$ gives a map (in fact two maps) $V \to V^*$. Hence taking top wedge powers gives a map
(1)$\wedge^n V \to \wedge^n V^*$
or equivalently, a well defined element
(2)$\det(B): 1 \to (\wedge^n V^*)^{ \otimes 2} := L$
The representation of $GL(V)$ that the 1 dimensional vector space $L$ carries is
(3)$g \mapsto \det(g)^{-2}$
Hence while it does not have a natural trivialisation, it does have a natural orientation: the direction of tensor squares is positive. In particular if $B$ is non degenerate it makes sense to say that $\det(B)$ is positive or negative.
The representation $L$ has a positive square root: the representation $|L|^{1/2}$ on which $g$ acts by $|\det(g)|^{-1}$, and $\sqrt{|\det(B)|} \in |L|^{1/2}$ naturally lives there.
Remark 1: The bilinear form $B$ gives two maps from $V$ to its dual but one map is the dual of the other. By functoriality this means that the two maps $\wedge^n V \to \wedge^n V^*$ are dual to each other. I didn’t check this, but once we have 1 dimensional vector spaces that should make $\det(B)$ the same for the two choices for the same reason that for a matrix $\det(A) = \det(A^t)$.
Remark 2: If $V = T_x M$ this all works out nicely bundle-wise for a smooth bilinear form $B$, and we should indeed get a well defined section $\sqrt{|\det(B)|}$ that lives canonically in the bundle of densities $|\wedge^n| M$. It is smooth if $B$ is non-degenerate, so we avoid passing through 0 and having to deal with the singularity created by taking the absolute value.
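Concretely, in coordinates Remark 2 is just the familiar check: under a change of coordinates $x \to x'$ the components transform as

$B'_{\rho \sigma} = \frac{\partial x^\mu}{\partial x'^\rho} \frac{\partial x^\nu}{\partial x'^\sigma} B_{\mu \nu}, \qquad \det(B') = \left( \det \frac{\partial x}{\partial x'} \right)^2 \det(B)$

so that $\sqrt{|\det(B')|} \, d^4 x' = \sqrt{|\det(B)|} \, d^4 x$, with no symmetry assumption on $B$ needed.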
Remark 3: Suppose that $B$ is symmetric, and we have a symmetric non degenerate form $g$ on V. We can choose a $g$ orthonormal basis that diagonalises $B(-, -) = g(\mathrm{diag}(b_1, .., b_n) - , - )$ (at least if $g$ is definite). Now $\det(B) = \prod_i b_i \mathrm{vol}_g^2$. That does not seem to be directly related to $\mathrm{tr}_g(B) = \sum b_i$, however, that would give the scalar curvature if $B = Ric$ for a Levi-Civita connection.
Posted by: Rogier Brussee on December 28, 2019 4:42 PM | Permalink | Reply to this
### Re: Schrödinger’s Unified Field Theory
Brilliant! This is just the sort of modern approach that I was wanting! Thanks ever so much.
Minor remark: it’s nice how your analysis reveals that $\sqrt{|det B|}$ is a density, so we can integrate it without choosing an orientation; often physicists try to make their Lagrangians be $n$-forms on an $n$-manifold, but then you need an orientation to integrate them.
Your Remark 3 makes me want to study Schrödinger’s theory perturbatively: that is, fixing a connection $D_0$ on the tangent bundle of $M$, writing some other connection $D$ as
$D = D_0 + \epsilon \Gamma$
for some $End(T M)$-valued 1-form $\Gamma$, and Taylor-expanding
$L = \textstyle{ \sqrt{ |det \, Ric_{D}| }}$
in powers of $\epsilon$.
Your remark, and the way the trace shows up when you differentiate the determinant, make me hope something like the Ricci scalar will appear, at least in situations where we can find coordinates where $Ric_{D}$ is the matrix $\delta_{\mu \nu}$. In general something like Jacobi's formula could be required.
But I guess I’m getting ahead of myself: the first question is what are the critical points of the action
$S = \int_M L ?$
That is, what are the equations of motion? Schrödinger worked them out, but in a way that seems cryptic to me. As I mentioned in my blog article, he prefers to express them in terms of a connection other than the original connection $D$. But I don’t understand the meaning of this other connection.
Posted by: John Baez on December 29, 2019 1:05 AM | Permalink | Reply to this
### Re: Schrödinger’s Unified Field Theory
Now that I have an understanding of the geometrical meaning of $\sqrt{| det R |}$, I’m more willing to just compute with it in coordinates.
First, for any square matrix $A$,
$det(1 + s A) = 1 + s tr(A) + O(s^2)$
Thus
$\begin{array}{ccl} \displaystyle{ \frac{d}{d s} det(A + s B) \Big|_{s = 0}} &=& \displaystyle{ \frac{d}{d s} det(A) det(1 + s A^{-1} B) \Big|_{s = 0}} \\ &=& det(A) tr(A^{-1} B) \end{array}$
so in terms of variational derivatives
$\delta det(R) = det(R) tr(R^{-1} \delta R)$
This is a first step. Of course it only works if $R$ is invertible.
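Continuing the sketch one step further (still assuming $R$ is invertible, and writing $R$ for $Ric_D$): since $|det R|$ has the same variation as $det R$ up to a locally constant sign, we get $\delta |det R| = |det R| \, tr(R^{-1} \delta R)$, and hence for the Lagrangian itself

$\delta \sqrt{|det R|} = \tfrac{1}{2} \sqrt{|det R|} \; tr(R^{-1} \delta R)$

The remaining work is to express $\delta R$ in terms of the variation $\delta \Gamma$ of the connection and integrate by parts.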
Posted by: John Baez on January 1, 2020 7:35 AM
### Re: Schrödinger’s Unified Field Theory
A non-symmetric metric seems like an intrinsically weird thing. You can always decompose it into symmetric and anti-symmetric pieces. From an effective field theory point of view, there’s no reason (that I know of) for those two pieces to only and ever appear in some particular combination. You should just write down all the Lagrangian terms consistent with the symmetries. And (again, as far as I know) there’s no symmetry that keeps the antisymmetric part massless, unlike with the symmetric part. So we should expect it to be a very massive field with no phenomenological consequences.
I am not super familiar with non-symmetric metrics, but we worked out the analogous story for torsion here.
Posted by: Sean Carroll on January 2, 2020 1:17 AM
### Re: Schrödinger’s Unified Field Theory
As far as I know, Schrödinger and Einstein never considered quantizing their unified field theories. (Einstein once had his assistant Valentine Bargmann try to teach him quantum field theory, but lost interest in a month.) But in March 1946, Einstein admitted a problem that’s slightly related to the effective field theory issue you mention:
the non-symmetric tensor is not the most simple structure that is covariant with respect to the group, but decomposes into the independently transforming parts $g_{(i k)}$ and $g_{[i k]}$; the consequence of this is that one can obtain a nondescript number of systems of second-order equations.
In November 1946, Pauli wrote to Einstein:
I also believe that each tensor, e.g., the contracted curvature tensor, immediately must be split into a symmetric and a skew part (in general: tensors into their irreducible symmetry classes) and to avoid every adding sign between them. What God did separate, humans must not join.
I am completely uninterested in Schrödinger’s theory as a viable theory of physics, and I don’t want to quantize it. I just want to know what sort of classical field equations you get when you take $\sqrt{|det R|}$ as a Lagrangian, where $R$ is the Ricci tensor either of an arbitrary connection on the tangent bundle (as in Schrödinger’s theory) or of a torsion-free connection (as apparently in some work of Einstein). I’m having trouble understanding what Schrödinger said about this, because he introduces another connection to formulate the field equations, and I don’t understand what this means. It’s either a bad idea or there’s some interesting geometry behind it.
Posted by: John Baez on January 2, 2020 5:37 AM
### Re: Schrödinger’s Unified Field Theory
Einstein and Pauli were smart cookies.
Posted by: Sean Carroll on January 2, 2020 10:46 PM
### Re: Schrödinger’s Unified Field Theory
This idea of adding a symmetric and an antisymmetric tensor also appears in the Born-Infeld theory. There, if the symmetric part is the standard Minkowski metric and the antisymmetric part is interpreted as the electromagnetic field tensor, the determinant can be expanded as in this comment, but up to all orders of the antisymmetric part (the trace would of course vanish).
Posted by: Jan on January 6, 2020 5:54 PM
### Bruce
It doesn’t exactly fit, but the following is related. The mathematics of “optical cloaking via metamaterials” (i.e. building an invisibility cloak) is called transformation optics. It’s fascinating because you can “do general relativity in the lab”. It turns out that the macroscopic Maxwell’s equations for how electromagnetism works inside a material can be interpreted as doing general relativity with two metrics - the “electric metric” and the “magnetic metric”. (You can also think of it as having two different Hodge star operators on the manifold). In a metamaterial, you can change the electric and magnetic properties of the material at will, so you can tune these metrics to anything you like - you can even create a black hole in the lab!
Usually people assume the material is “impedance matched” (this means that the electric metric equals the magnetic metric). This makes them happier, because it means it corresponds to the usual general relativity setup.
But I often wondered what it means, in terms of modern differential geometry, when the electric metric is not equal to the magnetic metric. It means we have a spacetime manifold with two metrics. I haven’t seen such a thing crop up in math, but there it is, lurking in Maxwell’s equations.
Having “two metrics” is not the same as having a “non-symmetric metric”, but I couldn’t help mentioning this.
Posted by: Bruce Bartlett on February 4, 2020 8:38 PM
### Re: Bruce
Hi, Bruce! That’s cool!
Theories of gravity with two metrics instead of one — called ‘bimetric gravity theories’ — have actually been studied. They go back to a 1940 paper by Nathan Rosen. You may know about the famous ‘Einstein–Podolsky-Rosen’ and ‘Einstein–Rosen’ papers: the first was about spooky action at a distance in quantum mechanics, while the second was about wormholes, which are kind of like spooky action at a distance in general relativity. These are all the same Nathan Rosen.
More recently, people have used bimetric gravity to try to understand dark matter. Since 2010 there’s been a renaissance of interest in bimetric gravity as a way to get massive gravitons.
The Wikipedia article I just linked to is a fairly painless way to get a taste of these theories.
Posted by: John Baez on February 5, 2020 7:57 AM
### Bimetric gravity
Ok, many thanks for informing me about these theories John.
Posted by: Bruce Bartlett on February 5, 2020 8:22 AM
### Re: Schrödinger’s Unified Field Theory
Hello all,
Hello John,
Might be of some interest:
Source: The American Mathematical Monthly, Vol. 37, No. 1 (Jan., 1930), pp. 32-34. http://www.jstor.org/stable/2299987
[…] The new matrix theory was very complicated, its physical meaning very obscure. But in 1926 a quite new development was put forward by Schrödinger, at that time professor at Zurich. Inspired by the ideas of Louis de Broglie’s Paris thesis of 1924, in which a wave was associated with every material particle, Schrödinger assumed that the dynamics of an electron can not be those of a point as in classical theory, but must be those of a wave.
This wave obeys a linear partial differential equation of the second order. (Newton mechanics leads to a partial differential equation of the first order and second degree.) Such a differential equation admits continuous uniform bounded solutions only for certain discrete values of a parameter (the “energy”), the so-called characteristic values. The critical test of the hydrogen lines was also satisfied in Schrödinger’s case.
The reason that Schrödinger’s calculus was not victorious was the difficulty in the interpretation. […]
I have not accessed the content of that book, so I cannot tell more than that. If you already know about this, it’s just redundant information. No spam intended.
Regarding George Birtwistle, here is one interesting read:
https://www.mprl-series.mpg.de/studies/2/10/index.html
Posted by: tolga on March 15, 2022 4:17 PM
### Re: Schrödinger’s Unified Field Theory
Hi, thanks! For some reason I’m just seeing this now. These papers look interesting!
Posted by: John Baez on March 22, 2022 4:34 AM
Research Methods & Reporting
# The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration
BMJ 2009; 339 (Published 21 July 2009) Cite this as: BMJ 2009;339:b2700
1. Alessandro Liberati [1, 2],
2. Douglas G Altman [3],
3. Jennifer Tetzlaff [4],
4. Cynthia Mulrow [5],
5. Peter C Gøtzsche [6],
6. John P A Ioannidis [7],
7. Mike Clarke [8, 9],
8. P J Devereaux [10],
9. Jos Kleijnen [11, 12],
10. David Moher [4, 13]
1. Università di Modena e Reggio Emilia, Modena, Italy
2. Centro Cochrane Italiano, Istituto Ricerche Farmacologiche Mario Negri, Milan, Italy
3. Centre for Statistics in Medicine, University of Oxford, Oxford
4. Ottawa Methods Centre, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
5. Annals of Internal Medicine, Philadelphia, Pennsylvania, USA
6. Nordic Cochrane Centre, Copenhagen, Denmark
7. Department of Hygiene and Epidemiology, University of Ioannina School of Medicine, Ioannina, Greece
8. UK Cochrane Centre, Oxford
9. School of Nursing and Midwifery, Trinity College, Dublin, Republic of Ireland
10. Departments of Medicine, Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada
11. Kleijnen Systematic Reviews, York
12. School for Public Health and Primary Care (CAPHRI), University of Maastricht, Maastricht, Netherlands
13. Department of Epidemiology and Community Medicine, Faculty of Medicine, Ottawa, Ontario, Canada

Correspondence to: alesslib{at}mailbase.it
• Accepted 5 June 2009
## Abstract
Systematic reviews and meta-analyses are essential to summarise evidence relating to efficacy and safety of healthcare interventions accurately and reliably. The clarity and transparency of these reports, however, are not optimal. Poor reporting of systematic reviews diminishes their value to clinicians, policy makers, and other users.
Since the development of the QUOROM (quality of reporting of meta-analysis) statement—a reporting guideline published in 1999—there have been several conceptual, methodological, and practical advances regarding the conduct and reporting of systematic reviews and meta-analyses. Also, reviews of published systematic reviews have found that key information about these studies is often poorly reported. Realising these issues, an international group that included experienced authors and methodologists developed PRISMA (preferred reporting items for systematic reviews and meta-analyses) as an evolution of the original QUOROM guideline for systematic reviews and meta-analyses of evaluations of health care interventions.
The PRISMA statement consists of a 27-item checklist and a four-phase flow diagram. The checklist includes items deemed essential for transparent reporting of a systematic review. In this explanation and elaboration document, we explain the meaning and rationale for each checklist item. For each item, we include an example of good reporting and, where possible, references to relevant empirical studies and methodological literature. The PRISMA statement, this document, and the associated website (www.prisma-statement.org/) should be helpful resources to improve reporting of systematic reviews and meta-analyses.
## Introduction
Systematic reviews and meta-analyses are essential tools for summarising evidence accurately and reliably. They help clinicians keep up to date; provide evidence for policy makers to judge risks, benefits, and harms of healthcare behaviours and interventions; gather together and summarise related research for patients and their carers; provide a starting point for clinical practice guideline developers; provide summaries of previous research for funders wishing to support new research;1 and help editors judge the merits of publishing reports of new studies.2 Recent data suggest that at least 2500 new systematic reviews reported in English are indexed in Medline annually.3
Unfortunately, there is considerable evidence that key information is often poorly reported in systematic reviews, thus diminishing their potential usefulness.3 4 5 6 As is true for all research, systematic reviews should be reported fully and transparently to allow readers to assess the strengths and weaknesses of the investigation.7 That rationale led to the development of the QUOROM (quality of reporting of meta-analysis) statement; those detailed reporting recommendations were published in 1999.8 In this paper we describe the updating of that guidance. Our aim is to ensure clear presentation of what was planned, done, and found in a systematic review.
Terminology used to describe systematic reviews and meta-analyses has evolved over time and varies across different groups of researchers and authors (see box 1 at end of document). In this document we adopt the definitions used by the Cochrane Collaboration.9 A systematic review attempts to collate all empirical evidence that fits pre-specified eligibility criteria to answer a specific research question. It uses explicit, systematic methods that are selected to minimise bias, thus providing reliable findings from which conclusions can be drawn and decisions made. Meta-analysis is the use of statistical methods to summarise and combine the results of independent studies. Many systematic reviews contain meta-analyses, but not all.
## The QUOROM statement and its evolution into PRISMA
The QUOROM statement, developed in 1996 and published in 1999,8 was conceived as a reporting guidance for authors reporting a meta-analysis of randomised trials. Since then, much has happened. First, knowledge about the conduct and reporting of systematic reviews has expanded considerably. For example, the Cochrane Library’s Methodology Register (which includes reports of studies relevant to the methods for systematic reviews) now contains more than 11 000 entries (March 2009). Second, there have been many conceptual advances, such as “outcome-level” assessments of the risk of bias,10 11 that apply to systematic reviews. Third, authors have increasingly used systematic reviews to summarise evidence other than that provided by randomised trials.
However, despite advances, the quality of the conduct and reporting of systematic reviews remains well short of ideal.3 4 5 6 All of these issues prompted the need for an update and expansion of the QUOROM statement. Of note, recognising that the updated statement now addresses the above conceptual and methodological issues and may also have broader applicability than the original QUOROM statement, we changed the name of the reporting guidance to PRISMA (preferred reporting items for systematic reviews and meta-analyses).
## Development of PRISMA
The PRISMA statement was developed by a group of 29 review authors, methodologists, clinicians, medical editors, and consumers.12 They attended a three day meeting in 2005 and participated in extensive post-meeting electronic correspondence. A consensus process that was informed by evidence, whenever possible, was used to develop a 27-item checklist (table 1) and a four-phase flow diagram (fig 1) (also available as extra items on bmj.com for researchers to download and re-use). Items deemed essential for transparent reporting of a systematic review were included in the checklist. The flow diagram originally proposed by QUOROM was also modified to show numbers of identified records, excluded articles, and included studies. After 11 revisions the group approved the checklist, flow diagram, and this explanatory paper.
Fig 1 Flow of information through the different phases of a systematic review.
Table 1
Checklist of items to include when reporting a systematic review or meta-analysis
The PRISMA statement itself provides further details regarding its background and development.12 This accompanying explanation and elaboration document explains the meaning and rationale for each checklist item. A few PRISMA Group participants volunteered to help draft specific items for this document, and four of these (DGA, AL, DM, and JT) met on several occasions to further refine the document, which was circulated and ultimately approved by the larger PRISMA Group.
## Scope of PRISMA
PRISMA focuses on ways in which authors can ensure the transparent and complete reporting of systematic reviews and meta-analyses. It does not address directly or in a detailed manner the conduct of systematic reviews, for which other guides are available.13 14 15 16
We developed the PRISMA statement and this explanatory document to help authors report a wide array of systematic reviews to assess the benefits and harms of a healthcare intervention. We consider most of the checklist items relevant when reporting systematic reviews of non-randomised studies assessing the benefits and harms of interventions. However, we recognise that authors who address questions relating to aetiology, diagnosis, or prognosis, for example, and who review epidemiological or diagnostic accuracy studies may need to modify or incorporate additional items for their systematic reviews.
## How to use this paper
We modeled this explanation and elaboration document after those prepared for other reporting guidelines.17 18 19 To maximise the benefit of this document, we encourage people to read it in conjunction with the PRISMA statement.11
We present each checklist item and follow it with a published exemplar of good reporting for that item. (We edited some examples by removing citations or web addresses, or by spelling out abbreviations.) We then explain the pertinent issue, the rationale for including the item, and relevant evidence from the literature, whenever possible. No systematic search was carried out to identify exemplars and evidence. We also include seven boxes at the end of the document that provide a more comprehensive explanation of certain thematic aspects of the methodology and conduct of systematic reviews.
Although we focus on a minimal list of items to consider when reporting a systematic review, we indicate places where additional information is desirable to improve transparency of the review process. We present the items numerically from 1 to 27; however, authors need not address items in this particular order in their reports. Rather, what is important is that the information for each item is given somewhere within the report.
## The PRISMA checklist
### Title and abstract
#### Item 1: Title
Identify the report as a systematic review, meta-analysis, or both.
Examples “Recurrence rates of video-assisted thoracoscopic versus open surgery in the prevention of recurrent pneumothoraces: a systematic review of randomised and non-randomised trials”20
“Mortality in randomised trials of antioxidant supplements for primary and secondary prevention: systematic review and meta-analysis”21
Explanation Authors should identify their report as a systematic review or meta-analysis. Terms such as “review” or “overview” do not describe for readers whether the review was systematic or whether a meta-analysis was performed. A recent survey found that 50% of 300 authors did not mention the terms “systematic review” or “meta-analysis” in the title or abstract of their systematic review.3 Although sensitive search strategies have been developed to identify systematic reviews,22 inclusion of the terms systematic review or meta-analysis in the title may improve indexing and identification.
We advise authors to use informative titles that make key information easily accessible to readers. Ideally, a title reflecting the PICOS approach (participants, interventions, comparators, outcomes, and study design) (see item 11 and box 2) may help readers as it provides key information about the scope of the review. Specifying the design(s) of the studies included, as shown in the examples, may also help some readers and those searching databases.
Some journals recommend “indicative titles” that indicate the topic matter of the review, while others require declarative titles that give the review’s main conclusion. Busy practitioners may prefer to see the conclusion of the review in the title, but declarative titles can oversimplify or exaggerate findings. Thus, many journals and methodologists prefer indicative titles as used in the examples above.
#### Item 2: Structured summary
Provide a structured summary including, as applicable, background; objectives; data sources; study eligibility criteria, participants, and interventions; study appraisal and synthesis methods; results; limitations; conclusions and implications of key findings; funding for the systematic review; and systematic review registration number.
Example “Context: The role and dose of oral vitamin D supplementation in nonvertebral fracture prevention have not been well established.
Objective: To estimate the effectiveness of vitamin D supplementation in preventing hip and nonvertebral fractures in older persons.
Data Sources: A systematic review of English and non-English articles using MEDLINE and the Cochrane Controlled Trials Register (1960-2005), and EMBASE (1991-2005). Additional studies were identified by contacting clinical experts and searching bibliographies and abstracts presented at the American Society for Bone and Mineral Research (1995-2004). Search terms included randomised controlled trial (RCT), controlled clinical trial, random allocation, double-blind method, cholecalciferol, ergocalciferol, 25-hydroxyvitamin D, fractures, humans, elderly, falls, and bone density.
Study Selection: Only double-blind RCTs of oral vitamin D supplementation (cholecalciferol, ergocalciferol) with or without calcium supplementation vs calcium supplementation or placebo in older persons (>60 years) that examined hip or nonvertebral fractures were included.
Data Extraction: Independent extraction of articles by 2 authors using predefined data fields, including study quality indicators.
Data Synthesis: All pooled analyses were based on random-effects models. Five RCTs for hip fracture (n=9294) and 7 RCTs for nonvertebral fracture risk (n=9820) met our inclusion criteria. All trials used cholecalciferol. Heterogeneity among studies for both hip and nonvertebral fracture prevention was observed, which disappeared after pooling RCTs with low-dose (400 IU/d) and higher-dose vitamin D (700-800 IU/d), separately. A vitamin D dose of 700 to 800 IU/d reduced the relative risk (RR) of hip fracture by 26% (3 RCTs with 5572 persons; pooled RR, 0.74; 95% confidence interval [CI], 0.61-0.88) and any nonvertebral fracture by 23% (5 RCTs with 6098 persons; pooled RR, 0.77; 95% CI, 0.68-0.87) vs calcium or placebo. No significant benefit was observed for RCTs with 400 IU/d vitamin D (2 RCTs with 3722 persons; pooled RR for hip fracture, 1.15; 95% CI, 0.88-1.50; and pooled RR for any nonvertebral fracture, 1.03; 95% CI, 0.86-1.24).
Conclusions: Oral vitamin D supplementation between 700 to 800 IU/d appears to reduce the risk of hip and any nonvertebral fractures in ambulatory or institutionalised elderly persons. An oral vitamin D dose of 400 IU/d is not sufficient for fracture prevention.”23
Explanation Abstracts provide key information that enables readers to understand the scope, processes, and findings of a review and to decide whether to read the full report. The abstract may be all that is readily available to a reader, for example, in a bibliographic database. The abstract should present a balanced and realistic assessment of the review’s findings that mirrors, albeit briefly, the main text of the report.
We agree with others that the quality of reporting in abstracts presented at conferences and in journal publications needs improvement.24 25 While we do not uniformly favour a specific format over another, we generally recommend structured abstracts. Structured abstracts provide readers with a series of headings pertaining to the purpose, conduct, findings, and conclusions of the systematic review being reported.26 27 They give readers more complete information and facilitate finding information more easily than unstructured abstracts.28 29 30 31 32
A highly structured abstract of a systematic review could include the following headings: Context (or Background); Objective (or Purpose); Data sources; Study selection (or Eligibility criteria); Study appraisal and Synthesis methods (or Data extraction and Data synthesis); Results; Limitations; and Conclusions (or Implications). Alternatively, a simpler structure could cover but collapse some of the above headings (such as label Study selection and Study appraisal as Review methods) or omit some headings such as Background and Limitations.
In the highly structured abstract mentioned above, authors use the Background heading to set the context for readers and explain the importance of the review question. Under the Objectives heading, they ideally use elements of PICOS (see box 2) to state the primary objective of the review. Under a Data sources heading, they summarise sources that were searched, any language or publication type restrictions, and the start and end dates of searches. Study selection statements then ideally describe who selected studies using what inclusion criteria. Data extraction methods statements describe appraisal methods during data abstraction and the methods used to integrate or summarise the data. The Data synthesis section is where the main results of the review are reported. If the review includes meta-analyses, authors should provide numerical results with confidence intervals for the most important outcomes. Ideally, they should specify the amount of evidence in these analyses (numbers of studies and numbers of participants). Under a Limitations heading, authors might describe the most important weaknesses of included studies as well as limitations of the review process. Then authors should provide clear and balanced Conclusions that are closely linked to the objective and findings of the review. Additionally, it would be helpful if authors included some information about funding for the review. Finally, although protocol registration for systematic reviews is still not common practice, if authors have registered their review or received a registration number, we recommend providing the registration information at the end of the abstract.
Taking all the above considerations into account, we recognise as a major challenge the intrinsic tension between the goal of completeness of the abstract and the need to keep it within the space limit often set by journal editors.
### Introduction
#### Item 3: Rationale
Describe the rationale for the review in the context of what is already known.
Example “Reversing the trend of increasing weight for height in children has proven difficult. It is widely accepted that increasing energy expenditure and reducing energy intake form the theoretical basis for management. Therefore, interventions aiming to increase physical activity and improve diet are the foundation of efforts to prevent and treat childhood obesity. Such lifestyle interventions have been supported by recent systematic reviews, as well as by the Canadian Paediatric Society, the Royal College of Paediatrics and Child Health, and the American Academy of Pediatrics. However, these interventions are fraught with poor adherence. Thus, school-based interventions are theoretically appealing because adherence with interventions can be improved. Consequently, many local governments have enacted or are considering policies that mandate increased physical activity in schools, although the effect of such interventions on body composition has not been assessed.”33
Explanation Readers need to understand the rationale behind the study and what the systematic review may add to what is already known. Authors should tell readers whether their report is a new systematic review or an update of an existing one. If the review is an update, authors should state reasons for the update, including what has been added to the evidence base since the previous version of the review.
An ideal background or introduction that sets context for readers might include the following. First, authors might define the importance of the review question from different perspectives (such as public health, individual patient, or health policy). Second, authors might briefly mention the current state of knowledge and its limitations. As in the above example, information about the effects of several different interventions may be available that helps readers understand why potential relative benefits or harms of particular interventions need review. Third, authors might whet readers’ appetites by clearly stating what the review aims to add. They also could discuss the extent to which the limitations of the existing evidence base may be overcome by the review.
#### Item 4: Objectives
Provide an explicit statement of questions being addressed with reference to participants, interventions, comparisons, outcomes, and study design (PICOS).
Example “To examine whether topical or intraluminal antibiotics reduce catheter-related bloodstream infection, we reviewed randomised, controlled trials that assessed the efficacy of these antibiotics for primary prophylaxis against catheter-related bloodstream infection and mortality compared with no antibiotic therapy in adults undergoing hemodialysis.”34
Explanation The questions being addressed, and the rationale for them, are one of the most critical parts of a systematic review. They should be stated precisely and explicitly so that readers can understand quickly the review’s scope and the potential applicability of the review to their interests.35 Framing questions so that they include the following five “PICOS” components may improve the explicitness of review questions: (1) the patient population or disease being addressed (P), (2) the interventions or exposure of interest (I), (3) the comparators (C), (4) the main outcome or endpoint of interest (O), and (5) the study designs chosen (S). For more detail regarding PICOS, see box 2.
Good review questions may be narrowly focused or broad, depending on the overall objectives of the review. Sometimes broad questions might increase the applicability of the results and facilitate detection of bias, exploratory analyses, and sensitivity analyses.35 36 Whether narrowly focused or broad, precisely stated review objectives are critical as they help define other components of the review process such as the eligibility criteria (item 6) and the search for relevant literature (items 7 and 8).
### Methods
#### Item 5: Protocol and registration
Indicate if a review protocol exists, if and where it can be accessed (such as a web address), and, if available, provide registration information including the registration number.
Example “Methods of the analysis and inclusion criteria were specified in advance and documented in a protocol.”37
Explanation A protocol is important because it pre-specifies the objectives and methods of the systematic review. For instance, a protocol specifies outcomes of primary interest, how reviewers will extract information about those outcomes, and methods that reviewers might use to quantitatively summarise the outcome data (see item 13). Having a protocol can help restrict the likelihood of biased post hoc decisions in review methods, such as selective outcome reporting. Several sources provide guidance about elements to include in the protocol for a systematic review.16 38 39 For meta-analyses of individual patient-level data, we advise authors to describe whether a protocol was explicitly designed and whether, when, and how participating collaborators endorsed it.40 41
Authors may modify protocols during the research, and readers should not automatically consider such modifications inappropriate. For example, legitimate modifications may extend the period of searches to include older or newer studies, broaden eligibility criteria that proved too narrow, or add analyses if the primary analyses suggest that additional ones are warranted. Authors should, however, describe the modifications and explain their rationale.
Although worthwhile protocol amendments are common, one must consider the effects that protocol modifications may have on the results of a systematic review, especially if the primary outcome is changed. Bias from selective outcome reporting in randomised trials has been well documented.42 43 An examination of 47 Cochrane reviews revealed indirect evidence for possible selective reporting bias for systematic reviews. Almost all (n=43) contained a major change, such as the addition or deletion of outcomes, between the protocol and the full publication.44 Whether (or to what extent) the changes reflected bias, however, was not clear. For example, it has been rather common not to describe outcomes that were not presented in any of the included studies.
Registration of a systematic review, typically with a protocol and registration number, is not yet common, but some opportunities exist.45 46 Registration may possibly reduce the risk of multiple reviews addressing the same question,45 46 47 48 reduce publication bias, and provide greater transparency when updating systematic reviews. Of note, a survey of systematic reviews indexed in Medline in November 2004 found that reports of protocol use had increased to about 46%3 from 8% noted in previous surveys.49 The improvement was due mostly to Cochrane reviews, which, by requirement, have a published protocol.3
#### Item 6: Eligibility criteria
Specify study characteristics (such as PICOS, length of follow-up) and report characteristics (such as years considered, language, publication status) used as criteria for eligibility, giving rationale.
Examples Types of studies: “Randomised clinical trials studying the administration of hepatitis B vaccine to CRF [chronic renal failure] patients, with or without dialysis. No language, publication date, or publication status restrictions were imposed…”
Types of participants: “Participants of any age with CRF or receiving dialysis (haemodialysis or peritoneal dialysis) were considered. CRF was defined as serum creatinine greater than 200 µmol/L for a period of more than six months or individuals receiving dialysis (haemodialysis or peritoneal dialysis)…Renal transplant patients were excluded from this review as these individuals are immunosuppressed and are receiving immunosuppressant agents to prevent rejection of their transplanted organs, and they have essentially normal renal function...”
Types of intervention: “Trials comparing the beneficial and harmful effects of hepatitis B vaccines with adjuvant or cytokine co-interventions [and] trials comparing the beneficial and harmful effects of immunoglobulin prophylaxis. This review was limited to studies looking at active immunisation. Hepatitis B vaccines (plasma or recombinant (yeast) derived) of all types, dose, and regimens versus placebo, control vaccine, or no vaccine…”
Types of outcome measures: “Primary outcome measures: Seroconversion, ie, proportion of patients with adequate anti-HBs response (>10 IU/L or Sample Ratio Units). Hepatitis B infections (as measured by hepatitis B core antigen (HBcAg) positivity or persistent HBsAg positivity), both acute and chronic. Acute (primary) HBV [hepatitis B virus] infections were defined as seroconversion to HBsAg positivity or development of IgM anti-HBc. Chronic HBV infections were defined as the persistence of HBsAg for more than six months or HBsAg positivity and liver biopsy compatible with a diagnosis or chronic hepatitis B. Secondary outcome measures: Adverse events of hepatitis B vaccinations…[and]…mortality.”50
Explanation Knowledge of the eligibility criteria is essential in appraising the validity, applicability, and comprehensiveness of a review. Thus, authors should unambiguously specify eligibility criteria used in the review. Carefully defined eligibility criteria inform various steps of the review methodology. They influence the development of the search strategy and serve to ensure that studies are selected in a systematic and unbiased manner.
A study may be described in multiple reports, and one report may describe multiple studies. Therefore, we separate eligibility criteria into the following two components: study characteristics and report characteristics. Both need to be reported. Study eligibility criteria are likely to include the populations, interventions, comparators, outcomes, and study designs of interest (PICOS, see box 2), as well as other study-specific elements, such as specifying a minimum length of follow-up. Authors should state whether studies will be excluded because they do not include (or report) specific outcomes to help readers ascertain whether the systematic review may be biased as a consequence of selective reporting.42 43
Report eligibility criteria are likely to include language of publication, publication status (such as inclusion of unpublished material and abstracts), and year of publication. Inclusion or not of non-English language literature,51 52 53 54 55 unpublished data, or older data can influence the effect estimates in meta-analyses.56 57 58 59 Caution may need to be exercised in including all identified studies due to potential differences in the risk of bias such as, for example, selective reporting in abstracts.60 61 62
#### Item 7: Information sources
Describe all information sources in the search (such as databases with dates of coverage, contact with study authors to identify additional studies) and date last searched.
Example “Studies were identified by searching electronic databases, scanning reference lists of articles and consultation with experts in the field and drug companies…No limits were applied for language and foreign papers were translated. This search was applied to Medline (1966 - Present), CancerLit (1975 - Present), and adapted for Embase (1980 - Present), Science Citation Index Expanded (1981 - Present) and Pre-Medline electronic databases. Cochrane and DARE (Database of Abstracts of Reviews of Effectiveness) databases were reviewed…The last search was run on 19 June 2001. In addition, we handsearched contents pages of Journal of Clinical Oncology 2001, European Journal of Cancer 2001 and Bone 2001, together with abstracts printed in these journals 1999 - 2001. A limited update literature search was performed from 19 June 2001 to 31 December 2003.”63
Explanation The National Library of Medicine’s Medline database is one of the most comprehensive sources of healthcare information in the world. Like any database, however, its coverage is not complete and varies according to the field. Retrieval from any single database, even by an experienced searcher, may be imperfect, which is why detailed reporting is important within the systematic review.
At a minimum, for each database searched, authors should report the database, platform, or provider (such as Ovid, Dialog, PubMed) and the start and end dates for the search of each database. This information lets readers assess the currency of the review, which is important because the publication time-lag outdates the results of some reviews.64 This information should also make updating more efficient.65 Authors should also report who developed and conducted the search.66
In addition to searching databases, authors should report the use of supplementary approaches to identify studies, such as hand searching of journals, checking reference lists, searching trials registries or regulatory agency websites,67 contacting manufacturers, or contacting authors. Authors should also report if they attempted to acquire any missing information (such as on study methods or results) from investigators or sponsors; it is useful to describe briefly who was contacted and what unpublished information was obtained.
#### Item 8: Search
Present the full electronic search strategy for at least one major database, including any limits used, such that it could be repeated.
Examples In text: “We used the following search terms to search all trials registers and databases: immunoglobulin*; IVIG; sepsis; septic shock; septicaemia; and septicemia…”68
In appendix: “Search strategy: MEDLINE (OVID)
• 01. immunoglobulins/
• 02. immunoglobulin$.tw.
• 03. ivig.tw.
• 04. 1 or 2 or 3
• 05. sepsis/
• 06. sepsis.tw.
• 07. septic shock/
• 08. septic shock.tw.
• 09. septicemia/
• 10. septicaemia.tw.
• 11. septicemia.tw.
• 12. 5 or 6 or 7 or 8 or 9 or 10 or 11
• 13. 4 and 12
• 14. randomised controlled trials/
• 15. randomised-controlled-trial.pt.
• 16. controlled-clinical-trial.pt.
• 17. random allocation/
• 18. double-blind method/
• 19. single-blind method/
• 20. 14 or 15 or 16 or 17 or 18 or 19
• 21. exp clinical trials/
• 22. clinical-trial.pt.
• 23. (clin$ adj trial$).ti,ab.
• 24. ((singl$ or doubl$ or trebl$ or tripl$) adj (blind$)).ti,ab.
• 25. placebos/
• 26. placebo$.ti,ab.
• 27. random$.ti,ab.
• 28. 21 or 22 or 23 or 24 or 25 or 26 or 27
• 29. research design/
• 30. comparative study/
• 31. exp evaluation studies/
• 32. follow-up studies/
• 33. prospective studies/
• 34. (control$ or prospective$ or volunteer$).ti,ab.
• 35. 30 or 31 or 32 or 33 or 34
• 36. 20 or 28 or 29 or 35
• 37. 13 and 36”68
Explanation The search strategy is an essential part of the report of any systematic review. Searches may be complicated and iterative, particularly when reviewers search unfamiliar databases or their review is addressing a broad or new topic. Perusing the search strategy allows interested readers to assess the comprehensiveness and completeness of the search, and to replicate it. Thus, we advise authors to report their full electronic search strategy for at least one major database. As an alternative to presenting search strategies for all databases, authors could indicate how the search took into account other databases searched, as index terms vary across databases. If different searches are used for different parts of a wider question (such as questions relating to benefits and questions relating to harms), we recommend authors provide at least one example of a strategy for each part of the objective.69 We also encourage authors to state whether search strategies were peer reviewed as part of the systematic review process.70
We realise that journal restrictions vary and that having the search strategy in the text of the report is not always feasible. We strongly encourage all journals, however, to find ways—such as a “web extra,” appendix, or electronic link to an archive—to make search strategies accessible to readers. We also advise all authors to archive their searches so that (1) others may access and review them (such as replicate them or understand why their review of a similar topic did not identify the same reports), and (2) future updates of their review are facilitated.
Several sources provide guidance on developing search strategies.71 72 73 Most searches have constraints, such as relating to limited time or financial resources, inaccessible or inadequately indexed reports and databases, unavailability of experts with particular language or database searching skills, or review questions for which pertinent evidence is not easy to find. Authors should be straightforward in describing their search constraints. Apart from the keywords used to identify or exclude records, they should report any additional limitations relevant to the search, such as language and date restrictions (see also eligibility criteria, item 6).51
#### Item 9: Study selection
State the process for selecting studies (that is, for screening, for determining eligibility, for inclusion in the systematic review, and, if applicable, for inclusion in the meta-analysis).
Example “Eligibility assessment…[was] performed independently in an unblinded standardized manner by 2 reviewers…Disagreements between reviewers were resolved by consensus.”74
Explanation There is no standard process for selecting studies to include in a systematic review. Authors usually start with a large number of identified records from their search and sequentially exclude records according to eligibility criteria. We advise authors to report how they screened the retrieved records (typically a title and abstract), how often it was necessary to review the full text publication, and if any types of record (such as letters to the editor) were excluded. We also advise using the PRISMA flow diagram to summarise study selection processes (see item 17 and box 3).
Efforts to enhance objectivity and avoid mistakes in study selection are important. Thus authors should report whether each stage was carried out by one or several people, who these people were, and, whenever multiple independent investigators performed the selection, what the process was for resolving disagreements. The use of at least two investigators may reduce the possibility of rejecting relevant reports.75 The benefit may be greatest for topics where selection or rejection of an article requires difficult judgments.76 For these topics, authors should ideally tell readers the level of inter-rater agreement, how commonly arbitration about selection was required, and what efforts were made to resolve disagreements (such as by contact with the authors of the original studies).
#### Item 10: Data collection process
Describe the method of data extraction from reports (such as piloted forms, independently by two reviewers) and any processes for obtaining and confirming data from investigators.
Example “We developed a data extraction sheet (based on the Cochrane Consumers and Communication Review Group’s data extraction template), pilot-tested it on ten randomly-selected included studies, and refined it accordingly. One review author extracted the following data from included studies and the second author checked the extracted data…Disagreements were resolved by discussion between the two review authors; if no agreement could be reached, it was planned a third author would decide. We contacted five authors for further information. All responded and one provided numerical data that had only been presented graphically in the published paper.”77
Explanation Reviewers extract information from each included study so that they can critique, present, and summarise evidence in a systematic review. They might also contact authors of included studies for information that has not been, or is unclearly, reported. In meta-analysis of individual patient data, this phase involves collection and scrutiny of detailed raw databases. The authors should describe these methods, including any steps taken to reduce bias and mistakes during data collection and data extraction.78 (See box 3)
Some systematic reviewers use a data extraction form that could be reported as an appendix or “Web extra” to their report. These forms could show the reader what information reviewers sought (see item 11) and how they extracted it. Authors could tell readers if the form was piloted. Regardless, we advise authors to tell readers who extracted what data, whether any extractions were completed in duplicate, and, if so, whether duplicate abstraction was done independently and how disagreements were resolved.
Published reports of the included studies may not provide all the information required for the review. Reviewers should describe any actions they took to seek additional information from the original researchers (see item 7). The description might include how they attempted to contact researchers, what they asked for, and their success in obtaining the necessary information. Authors should also tell readers when individual patient data were sought from the original researchers41 (see item 11) and indicate the studies for which such data were used in the analyses. The reviewers ideally should also state whether they confirmed the accuracy of the information included in their review with the original researchers, for example, by sending them a copy of the draft review.79
Some studies are published more than once. Duplicate publications may be difficult to ascertain, and their inclusion may introduce bias.80 81 We advise authors to describe any steps they used to avoid double counting and piece together data from multiple reports of the same study (such as juxtaposing author names, treatment comparisons, sample sizes, or outcomes). We also advise authors to indicate whether all reports on a study were considered, as inconsistencies may reveal important limitations. For example, a review of multiple publications of drug trials showed that reported study characteristics may differ from report to report, including the description of the design, number of patients analysed, chosen significance level, and outcomes.82 Authors ideally should present any algorithm that they used to select data from overlapping reports and any efforts they used to solve logical inconsistencies across reports.
#### Item 11: Data items
List and define all variables for which data were sought (such as PICOS, funding sources) and any assumptions and simplifications made.
Examples “Information was extracted from each included trial on: (1) characteristics of trial participants (including age, stage and severity of disease, and method of diagnosis), and the trial’s inclusion and exclusion criteria; (2) type of intervention (including type, dose, duration and frequency of the NSAID [non-steroidal anti-inflammatory drug]; versus placebo or versus the type, dose, duration and frequency of another NSAID; or versus another pain management drug; or versus no treatment); (3) type of outcome measure (including the level of pain reduction, improvement in quality of life score (using a validated scale), effect on daily activities, absence from work or school, length of follow up, unintended effects of treatment, number of women requiring more invasive treatment).”83
Explanation It is important for readers to know what information review authors sought, even if some of this information was not available.84 If the review is limited to reporting only those variables that were obtained, rather than those that were deemed important but could not be obtained, bias might be introduced and the reader might be misled. It is therefore helpful if authors can refer readers to the protocol (see item 5) and archive their extraction forms (see item 10), including definitions of variables. The published systematic review should include a description of the processes used with, if relevant, specification of how readers can get access to additional materials.
We encourage authors to report whether some variables were added after the review started. Such variables might include those found in the studies that the reviewers identified (such as important outcome measures that the reviewers initially overlooked). Authors should describe the reasons for adding any variables to those already pre-specified in the protocol so that readers can understand the review process.
We advise authors to report any assumptions they made about missing or unclear information and to explain those processes. For example, in studies of women aged 50 or older it is reasonable to assume that none were pregnant, even if this is not reported. Likewise, review authors might make assumptions about the route of administration of drugs assessed. However, special care should be taken in making assumptions about qualitative information. For example, the upper age limit for “children” can vary from 15 years to 21 years, “intense” physiotherapy might mean very different things to different researchers at different times and for different patients, and the volume of blood associated with “heavy” blood loss might vary widely depending on the setting.
#### Item 12: Risk of bias in individual studies
Describe methods used for assessing risk of bias in individual studies (including specification of whether this was done at the study or outcome level, or both), and how this information is to be used in any data synthesis.
Example “To ascertain the validity of eligible randomized trials, pairs of reviewers working independently and with adequate reliability determined the adequacy of randomization and concealment of allocation, blinding of patients, health care providers, data collectors, and outcome assessors; and extent of loss to follow-up (i.e. proportion of patients in whom the investigators were not able to ascertain outcomes).”85
“To explore variability in study results (heterogeneity) we specified the following hypotheses before conducting the analysis. We hypothesised that effect size may differ according to the methodological quality of the studies.”86
Explanation The likelihood that the treatment effect reported in a systematic review approximates the truth depends on the validity of the included studies, as certain methodological characteristics may be associated with effect sizes.87 88 For example, trials without reported adequate allocation concealment exaggerate treatment effects on average compared with those with adequate concealment.88 Therefore, it is important for authors to describe any methods that they used to gauge the risk of bias in the included studies and how that information was used.89 Additionally, authors should provide a rationale if no assessment of risk of bias was undertaken. The most popular term to describe the issues relevant to this item is “quality,” but for the reasons that are elaborated in box 4 we prefer to name this item as “assessment of risk of bias.”
Many methods exist to assess the overall risk of bias in included studies, including scales, checklists, and individual components.90 91 As discussed in box 4, scales that numerically summarise multiple components into a single number are misleading and unhelpful.92 93 Rather, authors should specify the methodological components that they assessed. Common markers of validity for randomised trials include the following: appropriate generation of random allocation sequence;94 concealment of the allocation sequence;93 blinding of participants, health care providers, data collectors, and outcome adjudicators;95 96 97 98 proportion of patients lost to follow-up;99 100 stopping of trials early for benefit;101 and whether the analysis followed the intention-to-treat principle.100 102 The ultimate decision regarding which methodological features to evaluate requires consideration of the strength of the empiric data, theoretical rationale, and the unique circumstances of the included studies.
Authors should report how they assessed risk of bias; whether it was in a blind manner; and if assessments were completed by more than one person, and if so, whether they were completed independently.103 104 Similarly, we encourage authors to report any calibration exercises among review team members that were done. Finally, authors need to report how their assessments of risk of bias are used subsequently in the data synthesis (see item 16). Despite the often difficult task of assessing the risk of bias in included studies, authors are sometimes silent on what they did with the resultant assessments.89 If authors exclude studies from the review or any subsequent analyses on the basis of the risk of bias, they should tell readers which studies they excluded and explain the reasons for those exclusions (see item 6). Authors should also describe any planned sensitivity or subgroup analyses related to bias assessments (see item 16).
#### Item 13: Summary measures
State the principal summary measures (such as risk ratio, difference in means).
Examples “Relative risk of mortality reduction was the primary measure of treatment effect.”105
“The meta-analyses were performed by computing relative risks (RRs) using random-effects model. Quantitative analyses were performed on an intention-to-treat basis and were confined to data derived from the period of follow-up. RR and 95% confidence intervals for each side effect (and all side effects) were calculated.”106
“The primary outcome measure was the mean difference in log10 HIV-1 viral load comparing zinc supplementation to placebo...”107
Explanation When planning a systematic review, it is generally desirable that authors pre-specify the outcomes of primary interest (see item 5) as well as the intended summary effect measure for each outcome. The chosen summary effect measure may differ from that used in some of the included studies. If possible the choice of effect measures should be explained, though it is not always easy to judge in advance which measure is the most appropriate.
For binary outcomes, the most common summary measures are the risk ratio, odds ratio, and risk difference.108 Relative effects are more consistent across studies than absolute effects,109 110 although absolute differences are important when interpreting findings (see item 24).
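As a concrete illustration (a sketch, not part of the PRISMA guidance itself): if a single trial is summarised in a 2×2 table with a events and b non-events in the intervention group and c events and d non-events in the control group, then risk ratio = [a/(a+b)] / [c/(c+d)], odds ratio = (a×d)/(b×c), and risk difference = a/(a+b) − c/(c+d). The risk ratio and risk difference compare event risks directly, whereas the odds ratio compares odds; the odds ratio and risk ratio can differ appreciably when events are common.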
For continuous outcomes, the natural effect measure is the difference in means.108 Its use is appropriate when outcome measurements in all studies are made on the same scale. The standardised difference in means is used when the studies do not yield directly comparable data. Usually this occurs when all studies assess the same outcome but measure it in a variety of ways (such as different scales to measure depression).
For time-to-event outcomes, the hazard ratio is the most common summary measure. Reviewers need the log hazard ratio and its standard error for a study to be included in a meta-analysis.111 This information may not be given for all studies, but methods are available for estimating the desired quantities from other reported information.111 Risk ratio and odds ratio (in relation to events occurring by a fixed time) are not equivalent to the hazard ratio, and median survival times are not a reliable basis for meta-analysis.112 If authors have used these measures they should describe their methods in the report.
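When a study reports only the hazard ratio and its 95% confidence interval, the quantities needed for meta-analysis can often be back-calculated, assuming the interval was constructed on the log scale. The following sketch illustrates this standard conversion with hypothetical numbers.

```python
import math

def log_hr_and_se_from_ci(hr, ci_lower, ci_upper, z=1.96):
    """Illustrative back-calculation of the log hazard ratio and its standard
    error from a reported hazard ratio and 95% CI, assuming the interval was
    computed on the log scale (a common, but not universal, approach)."""
    log_hr = math.log(hr)
    se_log_hr = (math.log(ci_upper) - math.log(ci_lower)) / (2 * z)
    return log_hr, se_log_hr

# Example: a study reporting HR 0.75 (95% CI 0.60 to 0.94)
print(log_hr_and_se_from_ci(0.75, 0.60, 0.94))
```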
#### Item 14: Planned methods of analysis
Describe the methods of handling data and combining results of studies, if done, including measures of consistency (such as I2) for each meta-analysis.
Examples “We tested for heterogeneity with the Breslow-Day test, and used the method proposed by Higgins et al. to measure inconsistency (the percentage of total variation across studies due to heterogeneity) of effects across lipid-lowering interventions. The advantages of this measure of inconsistency (termed I2) are that it does not inherently depend on the number of studies and is accompanied by an uncertainty interval.”113
“In very few instances, estimates of baseline mean or mean QOL [Quality of life] responses were obtained without corresponding estimates of variance (standard deviation [SD] or standard error). In these instances, an SD was imputed from the mean of the known SDs. In a number of cases, the response data available were the mean and variance in a pre study condition and after therapy. The within-patient variance in these cases could not be calculated directly and was approximated by assuming independence.”114
Explanation The data extracted from the studies in the review may need some transformation (processing) before they are suitable for analysis or for presentation in an evidence table. Although such data handling may facilitate meta-analyses, it is sometimes needed even when meta-analyses are not done. For example, in trials with more than two intervention groups it may be necessary to combine results for two or more groups (such as receiving similar but non-identical interventions), or it may be desirable to include only a subset of the data to match the review’s inclusion criteria. When several different scales (such as for depression) are used across studies, the sign of some scores may need to be reversed to ensure that all scales are aligned (such as so low values represent good health on all scales). Standard deviations may have to be reconstructed from other statistics such as P values and t statistics,115 116 or occasionally they may be imputed from the standard deviations observed in other studies.117 Time-to-event data also usually need careful conversions to a consistent format.111 Authors should report details of any such data processing.
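As an illustration of such data processing, the sketch below shows one common way of reconstructing a pooled standard deviation from a two-sample t statistic or a two-sided P value. It assumes an equal-variance t test was used, which reviewers should verify against the study report; the function and variable names are illustrative only.

```python
import math
from scipy import stats

def sd_from_se(se, n):
    """Standard deviation of one group recovered from its standard error of the mean."""
    return se * math.sqrt(n)

def pooled_sd_from_t(mean_diff, t_stat, n1, n2):
    """Pooled SD recovered from a two-sample t statistic and the mean difference,
    assuming an equal-variance t test was used."""
    se_diff = abs(mean_diff / t_stat)
    return se_diff / math.sqrt(1/n1 + 1/n2)

def pooled_sd_from_p(mean_diff, p_two_sided, n1, n2):
    """Same idea when only a two-sided P value is reported: convert P back to a
    t statistic on n1 + n2 - 2 degrees of freedom first."""
    t_stat = stats.t.ppf(1 - p_two_sided/2, df=n1 + n2 - 2)
    return pooled_sd_from_t(mean_diff, t_stat, n1, n2)

# Example: mean difference 2.5 with P=0.03 in groups of 40 and 38
print(pooled_sd_from_p(2.5, 0.03, 40, 38))
```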
Statistical combination of data from two or more separate studies in a meta-analysis may be neither necessary nor desirable (see box 5 and item 21). Regardless of the decision to combine individual study results, authors should report how they planned to evaluate between-study variability (heterogeneity or inconsistency) (box 6). The consistency of results across trials may influence the decision of whether to combine trial results in a meta-analysis.
When meta-analysis is done, authors should specify the effect measure (such as relative risk or mean difference) (see item 13), the statistical method (such as inverse variance), and whether a fixed-effects or random-effects approach, or some other method (such as Bayesian) was used (see box 6). If possible, authors should explain the reasons for those choices.
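To make the usual inverse variance approach concrete, the following sketch pools hypothetical study effects (for example, log risk ratios) using both a fixed-effect and a DerSimonian-Laird random-effects model, and reports Cochran's Q, tau2, and I2. It is a minimal illustration, not a substitute for established meta-analysis software.

```python
import math

def inverse_variance_meta(effects, ses):
    """Illustrative inverse-variance pooling of study effects (on a linear scale,
    e.g. log risk ratios), with Cochran's Q, DerSimonian-Laird tau2, I2, and
    both fixed-effect and random-effects summaries."""
    w = [1/se**2 for se in ses]                       # fixed-effect weights
    fixed = sum(wi*yi for wi, yi in zip(w, effects)) / sum(w)
    se_fixed = math.sqrt(1/sum(w))

    # Heterogeneity statistics
    q = sum(wi*(yi - fixed)**2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w)/sum(w)
    tau2 = max(0.0, (q - df)/c)                        # DerSimonian-Laird estimate
    i2 = max(0.0, (q - df)/q)*100 if q > 0 else 0.0    # % of variation due to heterogeneity

    # Random-effects weights add tau2 to each study's variance
    w_re = [1/(se**2 + tau2) for se in ses]
    random = sum(wi*yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se_random = math.sqrt(1/sum(w_re))

    return {"fixed": (fixed, se_fixed), "random": (random, se_random),
            "Q": q, "tau2": tau2, "I2": i2}

# Example with three hypothetical log risk ratios and their standard errors
print(inverse_variance_meta([-0.36, -0.11, -0.45], [0.18, 0.20, 0.25]))
```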
#### Item 15: Risk of bias across studies
Specify any assessment of risk of bias that may affect the cumulative evidence (such as publication bias, selective reporting within studies).
Examples “For each trial we plotted the effect by the inverse of its standard error. The symmetry of such ‘funnel plots’ was assessed both visually, and formally with Egger’s test, to see if the effect decreased with increasing sample size.”118
“We assessed the possibility of publication bias by evaluating a funnel plot of the trial mean differences for asymmetry, which can result from the non publication of small trials with negative results…Because graphical evaluation can be subjective, we also conducted an adjusted rank correlation test and a regression asymmetry test as formal statistical tests for publication bias...We acknowledge that other factors, such as differences in trial quality or true study heterogeneity, could produce asymmetry in funnel plots.”119
Explanation Reviewers should explore the possibility that the available data are biased. They may examine results from the available studies for clues that suggest there may be missing studies (publication bias) or missing data from the included studies (selective reporting bias) (see box 7). Authors should report in detail any methods used to investigate possible bias across studies.
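One widely used formal method is the Egger regression asymmetry test. The sketch below illustrates the basic idea with hypothetical effects and standard errors: the standardised effect is regressed on precision and the intercept is tested against zero. A non-zero intercept suggests funnel plot asymmetry, which may, but need not, reflect publication bias.

```python
import numpy as np
from scipy import stats

def egger_test(effects, ses):
    """Illustrative Egger regression asymmetry test: regress the standardised
    effect (effect/SE) on precision (1/SE) and test whether the intercept
    differs from zero."""
    y = np.asarray(effects) / np.asarray(ses)      # standardised effects
    x = 1.0 / np.asarray(ses)                      # precision
    X = np.column_stack([np.ones_like(x), x])      # design matrix with intercept

    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    n, k = len(y), 2
    resid = y - X @ beta
    sigma2 = float(resid @ resid) / (n - k)
    cov = sigma2 * np.linalg.inv(X.T @ X)

    intercept, se_intercept = beta[0], np.sqrt(cov[0, 0])
    t_stat = intercept / se_intercept
    p_value = 2 * stats.t.sf(abs(t_stat), df=n - k)
    return intercept, p_value

# Example with hypothetical study effects (log odds ratios) and standard errors
effects = [-0.5, -0.3, -0.45, -0.2, -0.6, -0.15]
ses = [0.10, 0.15, 0.20, 0.25, 0.30, 0.35]
print(egger_test(effects, ses))
```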
It is difficult to assess whether within-study selective reporting is present in a systematic review. If a protocol of an individual study is available, the outcomes in the protocol and the published report can be compared. Even in the absence of a protocol, outcomes listed in the methods section of the published report can be compared with those for which results are presented.120 In only half of 196 trial reports describing comparisons of two drugs in arthritis were all the effect variables in the methods and results sections the same.82 In other cases, knowledge of the clinical area may suggest that it is likely that the outcome was measured even if it was not reported. For example, in a particular disease, if one of two linked outcomes is reported but the other is not, then one should question whether the latter has been selectively omitted.121 122
Only 36% (76 of 212) of therapeutic systematic reviews published in November 2004 reported that study publication bias was considered, and only a quarter of those intended to carry out a formal assessment for that bias.3 Of 60 meta-analyses in 24 articles published in 2005 in which formal assessments were reported, most were based on fewer than 10 studies; most displayed statistically significant heterogeneity; and many reviewers misinterpreted the results of the tests employed.123 A review of trials of antidepressants found that meta-analysis of only the published trials gave effect estimates 32% larger on average than when all trials sent to the drug agency were analysed.67
#### Item 16: Additional analyses
Describe methods of additional analyses (such as sensitivity or subgroup analyses, meta-regression), if done, indicating which were pre-specified.
Example “Sensitivity analyses were pre-specified. The treatment effects were examined according to quality components (concealed treatment allocation, blinding of patients and caregivers, blinded outcome assessment), time to initiation of statins, and the type of statin. One post-hoc sensitivity analysis was conducted including unpublished data from a trial using cerivastatin.”124
Explanation Authors may perform additional analyses to help understand whether the results of their review are robust, all of which should be reported. Such analyses include sensitivity analysis, subgroup analysis, and meta-regression.125
Sensitivity analyses are used to explore the degree to which the main findings of a systematic review are affected by changes in its methods or in the data used from individual studies (such as study inclusion criteria, results of risk of bias assessment). Subgroup analyses address whether the summary effects vary in relation to specific (usually clinical) characteristics of the included studies or their participants. Meta-regression extends the idea of subgroup analysis to the examination of the quantitative influence of study characteristics on the effect size.126 Meta-regression also allows authors to examine the contribution of different variables to the heterogeneity in study findings. Readers of systematic reviews should be aware that meta-regression has many limitations, including a danger of over-interpretation of findings.127 128
Even with limited data, many additional analyses can be undertaken. The choice of which analysis to undertake will depend on the aims of the review. None of these analyses, however, is exempt from producing potentially misleading results. It is important to inform readers whether these analyses were performed, their rationale, and which were pre-specified.
### Results
#### Item 17: Study selection
Give numbers of studies screened, assessed for eligibility, and included in the review, with reasons for exclusions at each stage, ideally with a flow diagram.
Examples In text: “A total of 10 studies involving 13 trials were identified for inclusion in the review. The search of Medline, PsycInfo and Cinahl databases provided a total of 584 citations. After adjusting for duplicates 509 remained. Of these, 479 studies were discarded because after reviewing the abstracts it appeared that these papers clearly did not meet the criteria. Three additional studies…were discarded because full text of the study was not available or the paper could not be feasibly translated into English. The full text of the remaining 27 citations was examined in more detail. It appeared that 22 studies did not meet the inclusion criteria as described. Five studies…met the inclusion criteria and were included in the systematic review. An additional five studies...that met the criteria for inclusion were identified by checking the references of located, relevant papers and searching for studies that have cited these papers. No unpublished relevant studies were obtained.”129
See flow diagram in fig 2.
Fig 2 Example flow diagram of study selection. DDW = Digestive Disease Week; UEGW = United European Gastroenterology Week. Adapted from Fuccio et al130
Explanation Authors should report, ideally with a flow diagram, the total number of records identified from electronic bibliographic sources (including specialised database or registry searches), hand searches of various sources, reference lists, citation indices, and experts. It is useful if authors delineate for readers the number of selected articles that were identified from the different sources so that they can see, for example, whether most articles were identified through electronic bibliographic sources or from references or experts. Literature identified primarily from references or experts may be prone to citation or publication bias.131 132
The flow diagram and text should describe clearly the process of report selection throughout the review. Authors should report unique records identified in searches, records excluded after preliminary screening (such as screening of titles and abstracts), reports retrieved for detailed evaluation, potentially eligible reports that were not retrievable, retrieved reports that did not meet inclusion criteria and the primary reasons for exclusion, and the studies included in the review. Indeed, the most appropriate layout may vary for different reviews.
Authors should also note the presence of duplicate or supplementary reports so that readers understand the number of individual studies compared with the number of reports that were included in the review. Authors should be consistent in their use of terms, such as whether they are reporting on counts of citations, records, publications, or studies. We believe that reporting the number of studies is the most important.
A flow diagram can be very useful; it should depict all the studies included based on fulfilling the eligibility criteria, and whether data have been combined for statistical analysis. A recent review of 87 systematic reviews found that about half included a QUOROM flow diagram.133 The authors of this research recommended some important ways that reviewers can improve the use of a flow diagram when describing the flow of information throughout the review process, including a separate flow diagram for each important outcome reported.133
#### Item 18: Study characteristics
For each study, present characteristics for which data were extracted (such as study size, PICOS, follow-up period) and provide the citation.
Examples In text: “Characteristics of included studies
Methods
All four studies finally selected for the review were randomised controlled trials published in English. The duration of the intervention was 24 months for the RIO-North America and 12 months for the RIO-Diabetes, RIO-Lipids and RIO-Europe study. Although the last two described a period of 24 months during which they were conducted, only the first 12-months results are provided. All trials had a run-in, as a single blind period before the randomisation.
Participants
The included studies involved 6625 participants. The main inclusion criteria entailed adults (18 years or older), with a body mass index greater than 27 kg/m2 and less than 5 kg variation in body weight within the three months before study entry.
Intervention
All trials were multicentric. The RIO-North America was conducted in the USA and Canada, RIO-Europe in Europe and the USA, RIO-Diabetes in the USA and 10 other different countries not specified, and RIO-Lipids in eight unspecified different countries.
The intervention received was placebo, 5 mg of rimonabant or 20 mg of rimonabant once daily in addition to a mild hypocaloric diet (600 kcal/day deficit).
Outcomes
Primary
In all studies the primary outcome assessed was weight change from baseline after one year of treatment and the RIO-North America study also evaluated the prevention of weight regain between the first and second year. All studies evaluated adverse effects, including those of any kind and serious events. Quality of life was measured in only one study, but the results were not described (RIO-Europe).
Secondary
These included prevalence of metabolic syndrome after one year and change in cardiometabolic risk factors such as blood pressure, lipid profile, etc.

No study included mortality and costs as outcome.
The timing of outcome measures was variable and could include monthly investigations, evaluations every three months or a single final evaluation after one year.”134
In table: See table 2.
Table 2
Example of summary of study characteristics: Summary of included studies evaluating the efficacy of antiemetic agents in acute gastroenteritis. Adapted from DeCamp et al135
Explanation For readers to gauge the validity and applicability of a systematic review’s results, they need to know something about the included studies. Such information includes PICOS (box 2) and specific information relevant to the review question. For example, if the review is examining the long term effects of antidepressants for moderate depressive disorder, authors should report the follow-up periods of the included studies. For each included study, authors should provide a citation for the source of their information regardless of whether or not the study is published. This information makes it easier for interested readers to retrieve the relevant publications or documents.
Reporting study-level data also allows the comparison of the main characteristics of the studies included in the review. Authors should present enough detail to allow readers to make their own judgments about the relevance of included studies. Such information also makes it possible for readers to conduct their own subgroup analyses and interpret subgroups, based on study characteristics.
Authors should avoid, whenever possible, assuming information when it is missing from a study report (such as sample size, method of randomisation). Reviewers may contact the original investigators to try to obtain missing information or confirm the data extracted for the systematic review. If this information is not obtained, this should be noted in the report. If information is imputed, the reader should be told how this was done and for which items. Presenting study-level data makes it possible to clearly identify unpublished information obtained from the original researchers and make it available for the public record.
Typically, study-level characteristics are presented as a table as in the example (table 2). Such presentation ensures that all pertinent items are addressed and that missing or unclear information is clearly indicated. Although paper based journals do not generally allow for the quantity of information available in electronic journals or Cochrane reviews, this should not be accepted as an excuse for omission of important aspects of the methods or results of included studies, since these can, if necessary, be shown on a website.
Following the presentation and description of each included study, as discussed above, reviewers usually provide a narrative summary of the studies. Such a summary provides readers with an overview of the included studies. It may, for example, address the languages of the published papers, years of publication, and geographic origins of the included studies.
The PICOS framework is often helpful in reporting the narrative summary indicating, for example, the clinical characteristics and disease severity of the participants and the main features of the intervention and of the comparison group. For non-pharmacological interventions, it may be helpful to specify for each study the key elements of the intervention received by each group. Full details of the interventions in included studies were reported in only three of 25 systematic reviews relevant to general practice.84
#### Item 19: Risk of bias within studies
Present data on risk of bias of each study and, if available, any outcome-level assessment (see item 12).
Example See table 3.
Table 3
Example of assessment of the risk of bias: Quality measures of the randomised controlled trials that failed to fulfil any one of six markers of validity. Adapted from Devereaux et al96
Explanation We recommend that reviewers assess the risk of bias in the included studies using a standard approach with defined criteria (see item 12). They should report the results of any such assessments.89
Reporting only summary data (such as “two of eight trials adequately concealed allocation”) is inadequate because it fails to inform readers which studies had the particular methodological shortcoming. A more informative approach is to explicitly report the methodological features evaluated for each study. The Cochrane Collaboration’s new tool for assessing the risk of bias also requests that authors substantiate these assessments with any relevant text from the original studies.11 It is often easiest to provide these data in a tabular format, as in the example. However, a narrative summary describing the tabular data can also be helpful for readers.
#### Item 20: Results of individual studies
For all outcomes considered (benefits and harms), present, for each study, simple summary data for each intervention group and effect estimates and confidence intervals, ideally with a forest plot.
Examples See table 4 and fig 3.
Fig 3 Example of summary results: Overall failure (defined as failure of assigned regimen or relapse) with tetracycline-rifampicin versus tetracycline-streptomycin. Adapted from Skalsky et al137
Table 4
Example of summary results: Heterotopic ossification in trials comparing radiotherapy to non-steroidal anti-inflammatory drugs after major hip procedures and fractures. Adapted from Pakos et al136
Explanation Publication of summary data from individual studies allows the analyses to be reproduced and other analyses and graphical displays to be investigated. Others may wish to assess the impact of excluding particular studies or consider subgroup analyses not reported by the review authors. Displaying the results of each treatment group in included studies also enables inspection of individual study features. For example, if only odds ratios are provided, readers cannot assess the variation in event rates across the studies, making the odds ratio impossible to interpret.138 Additionally, because data extraction errors in meta-analyses are common and can be large,139 the presentation of the results from individual studies makes it easier to identify errors. For continuous outcomes, readers may wish to examine the consistency of standard deviations across studies, for example, to be reassured that standard deviation and standard error have not been confused.138
For each study, the summary data for each intervention group are generally given for binary outcomes as frequencies with and without the event (or as proportions such as 12/45). It is not sufficient to report event rates per intervention group as percentages. The required summary data for continuous outcomes are the mean, standard deviation, and sample size for each group. In reviews that examine time-to-event data, the authors should report the log hazard ratio and its standard error (or confidence interval) for each included study. Sometimes, essential data are missing from the reports of the included studies and cannot be calculated from other data but may need to be imputed by the reviewers. For example, the standard deviation may be imputed using the typical standard deviations in the other trials116 117 (see item 14). Whenever relevant, authors should indicate which results were not reported directly and had to be estimated from other information (see item 13). In addition, the inclusion of unpublished data should be noted.
For all included studies it is important to present the estimated effect with a confidence interval. This information may be incorporated in a table showing study characteristics or may be shown in a forest plot.140 The key elements of the forest plot are the effect estimates and confidence intervals for each study shown graphically, but it is preferable also to include, for each study, the numerical group-specific summary data, the effect size and confidence interval, and the percentage weight (see second example, fig 3). For discussion of the results of meta-analysis, see item 21.
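The sketch below illustrates these key elements of a forest plot with hypothetical risk ratios: one row per study showing the point estimate and confidence interval, a reference line at no effect, and an optional pooled estimate. It is intended only to show the structure of such a display, not the output of any study cited here.

```python
import matplotlib.pyplot as plt

def forest_plot(labels, effects, lowers, uppers, pooled=None, null_value=1.0):
    """Minimal forest plot sketch (hypothetical data): one row per study with
    its effect estimate and confidence interval, a vertical line at the null
    value, and an optional pooled estimate drawn at the bottom."""
    rows = list(range(len(labels), 0, -1))                    # top-to-bottom ordering
    fig, ax = plt.subplots()
    for row, eff, lo, hi in zip(rows, effects, lowers, uppers):
        ax.plot([lo, hi], [row, row], color="black")          # confidence interval
        ax.plot(eff, row, marker="s", color="black")          # point estimate
    if pooled is not None:
        ax.plot([pooled[1], pooled[2]], [0, 0], color="black")
        ax.plot(pooled[0], 0, marker="D", color="black")      # pooled estimate
        labels = labels + ["Pooled"]
        rows = rows + [0]
    ax.axvline(null_value, linestyle="--", color="grey")      # line of no effect
    ax.set_yticks(rows)
    ax.set_yticklabels(labels)
    ax.set_xscale("log")
    ax.set_xlabel("Risk ratio (log scale)")
    plt.tight_layout()
    plt.show()

# Hypothetical studies with risk ratios and 95% CIs
forest_plot(["Study A", "Study B", "Study C"],
            [0.70, 0.85, 0.60], [0.50, 0.65, 0.35], [0.98, 1.10, 1.02],
            pooled=(0.74, 0.61, 0.90))
```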
In principle, all the above information should be provided for every outcome considered in the review, including both benefits and harms. When there are too many outcomes for full information to be included, results for the most important outcomes should be included in the main report with other information provided as a web appendix. The choice of the information to present should be justified in light of what was originally stated in the protocol. Authors should explicitly mention if the planned main outcomes cannot be presented due to lack of information. There is some evidence that information on harms is only rarely reported in systematic reviews, even when it is available in the original studies.141 Selective omission of harms results biases a systematic review and decreases its ability to contribute to informed decision making.
#### Item 21: Syntheses of results
Present the main results of the review. If meta-analyses are done, include for each, confidence intervals and measures of consistency.
Examples “Mortality data were available for all six trials, randomizing 311 patients and reporting data for 305 patients. There were no deaths reported in the three respiratory syncytial virus/severe bronchiolitis trials; thus our estimate is based on three trials randomizing 232 patients, 64 of whom died. In the pooled analysis, surfactant was associated with significantly lower mortality (relative risk =0.7, 95% confidence interval =0.4–0.97, P=0.04). There was no evidence of heterogeneity (I2=0%).”142
“Because the study designs, participants, interventions, and reported outcome measures varied markedly, we focused on describing the studies, their results, their applicability, and their limitations and on qualitative synthesis rather than meta-analysis.”143
“We detected significant heterogeneity within this comparison (I2=46.6%, χ2=13.11, df=7, P=0.07). Retrospective exploration of the heterogeneity identified one trial that seemed to differ from the others. It included only small ulcers (wound area less than 5 cm2). Exclusion of this trial removed the statistical heterogeneity and did not affect the finding of no evidence of a difference in healing rate between hydrocolloids and simple low adherent dressings (relative risk=0.98, [95% confidence interval] 0.85 to 1.12, I2=0%).”144
Explanation Results of systematic reviews should be presented in an orderly manner. Initial narrative descriptions of the evidence covered in the review (see item 18) may tell readers important things about the study populations and the design and conduct of studies. These descriptions can facilitate the examination of patterns across studies. They may also provide important information about applicability of evidence, suggest the likely effects of any major biases, and allow consideration, in a systematic manner, of multiple explanations for possible differences of findings across studies.
If authors have conducted one or more meta-analyses, they should present the results as an estimated effect across studies with a confidence interval. It is often simplest to show each meta-analysis summary with the actual results of included studies in a forest plot (see item 20).140 It should always be clear which of the included studies contributed to each meta-analysis. Authors should also provide, for each meta-analysis, a measure of the consistency of the results from the included studies such as I2 (heterogeneity, see box 6); a confidence interval may also be given for this measure.145 If no meta-analysis was performed, the qualitative inferences should be presented as systematically as possible with an explanation of why meta-analysis was not done, as in the second example above.143 Readers may find a forest plot, without a summary estimate, helpful in such cases.
Authors should in general report syntheses for all the outcome measures they set out to investigate (that is, those described in the protocol, see item 4) to allow readers to draw their own conclusions about the implications of the results. Readers should be made aware of any deviations from the planned analysis. Authors should tell readers if the planned meta-analysis was not thought appropriate or possible for some of the outcomes and the reasons for that decision.
It may not always be sensible to give meta-analysis results and forest plots for each outcome. If the review addresses a broad question, there may be a very large number of outcomes. Also, some outcomes may have been reported in only one or two studies, in which case forest plots are of little value and may be seriously biased.
Of 300 systematic reviews indexed in Medline in 2004, a little more than half (54%) included meta-analyses, of which the majority (91%) reported assessing for inconsistency in results.
#### Item 22: Risk of bias across studies
Present results of any assessment of risk of bias across studies (see item 15).
Example “Strong evidence of heterogeneity (I2=79%, P<0.001) was observed. To explore this heterogeneity, a funnel plot was drawn. The funnel plot [fig 4 ] shows evidence of considerable asymmetry.”146
Fig 4 Example of a funnel plot showing evidence of considerable asymmetry. SE = standard error. Adapted from Appleton et al146
“Specifically, four sertraline trials involving 486 participants and one citalopram trial involving 274 participants were reported as having failed to achieve a statistically significant drug effect, without reporting mean HRSD [Hamilton Rating Scale for Depression] scores. We were unable to find data from these trials on pharmaceutical company Web sites or through our search of the published literature. These omissions represent 38% of patients in sertraline trials and 23% of patients in citalopram trials. Analyses with and without inclusion of these trials found no differences in the patterns of results; similarly, the revealed patterns do not interact with drug type. The purpose of using the data obtained from the FDA was to avoid publication bias, by including unpublished as well as published trials. Inclusion of only those sertraline and citalopram trials for which means were reported to the FDA would constitute a form of reporting bias similar to publication bias and would lead to overestimation of drug–placebo differences for these drug types. Therefore, we present analyses only on data for medications for which complete clinical trials’ change was reported.”147
Explanation Authors should present the results of any assessments of risk of bias across studies. If a funnel plot is reported, authors should specify the effect estimate and measure of precision used, presented typically on the x axis and y axis, respectively. Authors should describe if and how they have tested the statistical significance of any possible asymmetry (see item 15). Results of any investigations of selective reporting of outcomes within studies (as discussed in item 15) should also be reported. Also, we advise authors to tell readers if any pre-specified analyses for assessing risk of bias across studies were not completed and the reasons (such as too few included studies).
#### Item 23: Additional analyses
Give results of additional analyses, if done (such as sensitivity or subgroup analyses, meta-regression [see item 16]).
Example “...benefits of chondroitin were smaller in trials with adequate concealment of allocation compared with trials with unclear concealment (P for interaction =0.050), in trials with an intention-to-treat analysis compared with those that had excluded patients from the analysis (P for interaction =0.017), and in large compared with small trials (P for interaction =0.022).”148
“Subgroup analyses according to antibody status, antiviral medications, organ transplanted, treatment duration, use of antilymphocyte therapy, time to outcome assessment, study quality and other aspects of study design did not demonstrate any differences in treatment effects. Multivariate meta-regression showed no significant difference in CMV [cytomegalovirus] disease after allowing for potential confounding or effect-modification by prophylactic drug used, organ transplanted or recipient serostatus in CMV positive recipients and CMV negative recipients of CMV positive donors.”149
Explanation Authors should report any subgroup or sensitivity analyses and whether they were pre-specified (see items 5 and 16). For analyses comparing subgroups of studies (such as separating studies of low and high dose aspirin), the authors should report any tests for interactions, as well as estimates and confidence intervals from meta-analyses within each subgroup. Similarly, meta-regression results (see item 16) should not be limited to P values but should include effect sizes and confidence intervals,150 as the first example reported above does in a table. The amount of data included in each additional analysis should be specified if different from that considered in the main analyses. This information is especially relevant for sensitivity analyses that exclude some studies; for example, those with high risk of bias.
Importantly, all additional analyses conducted should be reported, not just those that were statistically significant. This information will help avoid selective outcome reporting bias within the review as has been demonstrated in reports of randomised controlled trials.42 44 121 151 152 Results from exploratory subgroup or sensitivity analyses should be interpreted cautiously, bearing in mind the potential for multiple analyses to mislead.
### Discussion
#### Item 24: Summary of evidence
Summarise the main findings, including the strength of evidence for each main outcome; consider their relevance to key groups (such as healthcare providers, users, and policy makers).
Example “Overall, the evidence is not sufficiently robust to determine the comparative effectiveness of angioplasty (with or without stenting) and medical treatment alone. Only 2 randomized trials with long-term outcomes and a third randomized trial that allowed substantial crossover of treatment after 3 months directly compared angioplasty and medical treatment…the randomized trials did not evaluate enough patients or did not follow patients for a sufficient duration to allow definitive conclusions to be made about clinical outcomes, such as mortality and cardiovascular or kidney failure events.
Some acceptable evidence from comparison of medical treatment and angioplasty suggested no difference in long-term kidney function but possibly better blood pressure control after angioplasty, an effect that may be limited to patients with bilateral atherosclerotic renal artery stenosis. The evidence regarding other outcomes is weak. Because the reviewed studies did not explicitly address patients with rapid clinical deterioration who may need acute intervention, our conclusions do not apply to this important subset of patients.”143
Explanation Authors should give a brief and balanced summary of the nature and findings of the review. Sometimes, outcomes for which little or no data were found should be noted due to potential relevance for policy decisions and future research. Applicability of the review’s findings—to different patients, settings, or target audiences, for example—should be mentioned. Although there is no standard way to assess applicability simultaneously to different audiences, some systems do exist.153 Sometimes, authors formally rate or assess the overall body of evidence addressed in the review and can present the strength of their summary recommendations tied to their assessments of the quality of evidence (such as the GRADE system).10
Authors need to keep in mind that statistical significance of the effects does not always suggest clinical or policy relevance. Likewise, a non-significant result does not demonstrate that a treatment is ineffective. Authors should ideally clarify trade-offs and how the values attached to the main outcomes would lead different people to make different decisions. In addition, adroit authors consider factors that are important in translating the evidence to different settings and that may modify the estimates of effects reported in the review.153 Patients and healthcare providers may be primarily interested in which intervention is most likely to provide a benefit with acceptable harms, while policy makers and administrators may value data on organisational impact and resource utilisation.
#### Item 25: Limitations
Discuss limitations at study and outcome level (such as risk of bias), and at review level (such as incomplete retrieval of identified research, reporting bias).
Examples Outcome level: “The meta-analysis reported here combines data across studies in order to estimate treatment effects with more precision than is possible in a single study. The main limitation of this meta-analysis, as with any overview, is that the patient population, the antibiotic regimen and the outcome definitions are not the same across studies.”154
Study and review level: “Our study has several limitations. The quality of the studies varied. Randomization was adequate in all trials; however, 7 of the articles did not explicitly state that analysis of data adhered to the intention-to-treat principle, which could lead to overestimation of treatment effect in these trials, and we could not assess the quality of 4 of the 5 trials reported as abstracts. Analyses did not identify an association between components of quality and re-bleeding risk, and the effect size in favour of combination therapy remained statistically significant when we excluded trials that were reported as abstracts.
Publication bias might account for some of the effect we observed. Smaller trials are, in general, analyzed with less methodological rigor than larger studies, and an asymmetrical funnel plot suggests that selective reporting may have led to an overestimation of effect sizes in small trials.”155
Explanation A discussion of limitations should address the validity (that is, risk of bias) and reporting (informativeness) of the included studies, limitations of the review process, and generalisability (applicability) of the review. Readers may find it helpful if authors discuss whether studies were threatened by serious risks of bias, whether the estimates of the effect of the intervention are too imprecise, or if there were missing data for many participants or important outcomes.
Limitations of the review process might include limitations of the search (such as restricting to English-language publications), and any difficulties in the study selection, appraisal, and meta-analysis processes. For example, poor or incomplete reporting of study designs, patient populations, and interventions may hamper interpretation and synthesis of the included studies.84 Applicability of the review may be affected if there are limited data for certain populations or subgroups where the intervention might perform differently or few studies assessing the most important outcomes of interest; or if there is a substantial amount of data relating to an outdated intervention or comparator or heavy reliance on imputation of missing values for summary estimates (item 14).
#### Item 26: Conclusions
Provide a general interpretation of the results in the context of other evidence, and implications for future research.
Example Implications for practice: “Between 1995 and 1997 five different meta-analyses of the effect of antibiotic prophylaxis on infection and mortality were published. All confirmed a significant reduction in infections, though the magnitude of the effect varied from one review to another. The estimated impact on overall mortality was less evident and has generated considerable controversy on the cost effectiveness of the treatment. Only one among the five available reviews, however, suggested that a weak association between respiratory tract infections and mortality exists and lack of sufficient statistical power may have accounted for the limited effect on mortality.”
Implications for research: “A logical next step for future trials would thus be the comparison of this protocol against a regimen of a systemic antibiotic agent only to see whether the topical component can be dropped. We have already identified six such trials but the total number of patients so far enrolled (n=1056) is too small for us to be confident that the two treatments are really equally effective. If the hypothesis is therefore considered worth testing more and larger randomised controlled trials are warranted. Trials of this kind, however, would not resolve the relevant issue of treatment induced resistance. To produce a satisfactory answer to this, studies with a different design would be necessary. Though a detailed discussion goes beyond the scope of this paper, studies in which the intensive care unit rather than the individual patient is the unit of randomisation and in which the occurrence of antibiotic resistance is monitored over a long period of time should be undertaken.”156
Explanation Systematic reviewers sometimes draw conclusions that are too optimistic157 or do not consider the harms equally as carefully as the benefits, although some evidence suggests these problems are decreasing.158 If conclusions cannot be drawn because there are too few reliable studies, or too much uncertainty, this should be stated. Such a finding can be as important as finding consistent effects from several large studies.
Authors should try to relate the results of the review to other evidence, as this helps readers to better interpret the results. For example, there may be other systematic reviews about the same general topic that have used different methods or have addressed related but slightly different questions.159 160 Similarly, there may be additional information relevant to decision makers, such as the cost-effectiveness of the intervention (such as health technology assessment). Authors may discuss the results of their review in the context of existing evidence regarding other interventions.
We advise authors also to make explicit recommendations for future research. In a sample of 2535 Cochrane reviews, 82% included recommendations for research with specific interventions, 30% suggested the appropriate type of participants, and 52% suggested outcome measures for future research.161 There is no corresponding assessment about systematic reviews published in medical journals, but we believe that such recommendations are much less common in those reviews.
Clinical research should not be planned without a thorough knowledge of similar, existing research.162 There is evidence that this still does not occur as it should and that authors of primary studies do not consider a systematic review when they design their studies.163 We believe systematic reviews have great potential for guiding future clinical research.
### Funding
#### Item 27: Funding
Describe sources of funding or other support (such as supply of data) for the systematic review, and the role of funders for the systematic review.
Examples “The evidence synthesis upon which this article was based was funded by the Centers for Disease Control and Prevention for the Agency for Healthcare Research and Quality and the U.S. Prevention Services Task Force.”164
“Role of funding source: The funders played no role in study design, collection, analysis, interpretation of data, writing of the report, or in the decision to submit the paper for publication. They accept no responsibility for the contents.”165
Explanation Authors of systematic reviews, like those of any other research study, should disclose any funding they received to carry out the review, or state if the review was not funded. Lexchin and colleagues166 observed that outcomes of reports of randomised trials and meta-analyses of clinical trials funded by the pharmaceutical industry are more likely to favor the sponsor’s product compared with studies with other sources of funding. Similar results have been reported elsewhere.167 168 Analogous data suggest that similar biases may affect the conclusions of systematic reviews.169
Given the potential role of systematic reviews in decision making, we believe authors should be transparent about the funding and the role of funders, if any. Sometimes the funders will provide services, such as those of a librarian to complete the searches for relevant literature or access to commercial databases not available to the reviewers. Any level of funding or services provided to the systematic review team should be reported. Authors should also report whether the funder had any role in the conduct or report of the review. Beyond funding issues, authors should report any real or perceived conflicts of interest related to their role or the role of the funder in the reporting of the systematic review.170
In a survey of 300 systematic reviews published in November 2004, funding sources were not reported in 41% of the reviews.3 Only a minority of reviews (2%) reported being funded by for-profit sources, but the true proportion may be higher.171
## Additional considerations for systematic reviews of non-randomised intervention studies or for other types of systematic reviews
The PRISMA statement and this document have focused on systematic reviews of reports of randomised trials. Other study designs, including non-randomised studies, quasi-experimental studies, and interrupted time series, are included in some systematic reviews that evaluate the effects of healthcare interventions.172 173 The methods of these reviews may differ to varying degrees from the typical intervention review, for example regarding the literature search, data abstraction, assessment of risk of bias, and analysis methods. As such, their reporting demands might also differ from what we have described here. A useful principle is for systematic review authors to ensure that their methods are reported with adequate clarity and transparency to enable readers to critically judge the available evidence and replicate or update the research.
In some systematic reviews, the authors will seek the raw data from the original researchers to calculate the summary statistics. These systematic reviews are called individual patient (or participant) data reviews.40 41 Individual patient data meta-analyses may also be conducted with prospective accumulation of data rather than retrospective accumulation of existing data. Here too, extra information about the methods will need to be reported.
Other types of systematic reviews exist. Realist reviews aim to determine how complex programmes work in specific contexts and settings.174 Meta-narrative reviews aim to explain complex bodies of evidence through mapping and comparing different overarching storylines.175 Network meta-analyses, also known as multiple treatments meta-analyses, can be used to analyse data from comparisons of many different treatments.176 177 They use both direct and indirect comparisons and can be used to compare interventions that have not been directly compared.
We believe that the issues we have highlighted in this paper are relevant to ensure transparency and understanding of the processes adopted and the limitations of the information presented in systematic reviews of different types. We hope that PRISMA can be the basis for more detailed guidance on systematic reviews of other types of research, including diagnostic accuracy and epidemiological studies.
## Discussion
We developed the PRISMA statement using an approach for developing reporting guidelines that has evolved over several years.178 The overall aim of PRISMA is to help ensure the clarity and transparency of reporting of systematic reviews, and recent data indicate that this reporting guidance is much needed.3 PRISMA is not intended to be a quality assessment tool and it should not be used as such.
This PRISMA explanation and elaboration document was developed to facilitate the understanding, uptake, and dissemination of the PRISMA statement and hopefully provide a pedagogical framework for those interested in conducting and reporting systematic reviews. It follows a format similar to that used in other explanatory documents.17 18 19 Following the recommendations in the PRISMA checklist may increase the word count of a systematic review report. We believe, however, that the benefit of readers being able to critically appraise a clear, complete, and transparent systematic review report outweighs the possible slight increase in the length of the report.
While the aims of PRISMA are to reduce the risk of flawed reporting of systematic reviews and improve the clarity and transparency in how reviews are conducted, we have little data to state more definitively whether this “intervention” will achieve its intended goal. A previous effort to evaluate QUOROM was not successfully completed.178 Publication of the QUOROM statement was delayed for two years while a research team attempted to evaluate its effectiveness by conducting a randomised controlled trial with the participation of eight major medical journals. Unfortunately that trial was not completed due to accrual problems (David Moher, personal communication). Other evaluation methods might be easier to conduct. At least one survey of 139 published systematic reviews in the critical care literature179 suggests that their quality improved after the publication of QUOROM.
If the PRISMA statement is endorsed by and adhered to in journals, as other reporting guidelines have been,17 18 19 180 there should be evidence of improved reporting of systematic reviews. For example, there have been several evaluations of whether the use of CONSORT improves reports of randomised controlled trials. A systematic review of these studies181 indicates that use of CONSORT is associated with improved reporting of certain items, such as allocation concealment. We aim to evaluate the benefits (that is, improved reporting) and possible adverse effects (such as increased word length) of PRISMA and we encourage others to consider doing likewise.
Even though we did not carry out a systematic literature search to produce our checklist, and this is indeed a limitation of our effort, PRISMA was developed using an evidence based approach whenever possible. Checklist items were included if there was evidence that not reporting the item was associated with increased risk of bias, or where it was clear that information was necessary to appraise the reliability of a review. To keep PRISMA up to date and as evidence based as possible requires regular vigilance of the literature, which is growing rapidly. Currently the Cochrane Methodology Register has more than 11 000 records pertaining to the conduct and reporting of systematic reviews and other evaluations of health and social care. For some checklist items, such as reporting the abstract (item 2), we have used evidence from elsewhere in the belief that the issue applies equally well to reporting of systematic reviews. Yet for other items, evidence does not exist; for example, whether a training exercise improves the accuracy and reliability of data extraction. We hope PRISMA will act as a catalyst to help generate further evidence that can be considered when further revising the checklist in the future.
More than 10 years have passed between the development of the QUOROM statement and its update, the PRISMA statement. We aim to update PRISMA more frequently. We hope that the implementation of PRISMA will be better than it has been for QUOROM. There are at least two reasons to be optimistic. First, systematic reviews are increasingly used by healthcare providers to inform “best practice” patient care. Policy analysts and managers are using systematic reviews to inform healthcare decision making and to better target future research. Second, we anticipate benefits from the development of the EQUATOR Network, described below.
Developing any reporting guideline requires considerable effort, experience, and expertise. While reporting guidelines have been successful for some individual efforts,17 18 19 there are likely others who want to develop reporting guidelines who possess little time, experience, or knowledge as to how to do so appropriately. The EQUATOR (enhancing the quality and transparency of health research) Network aims to help such individuals and groups by serving as a global resource for anybody interested in developing reporting guidelines, regardless of the focus.7 180 182 The overall goal of EQUATOR is to improve the quality of reporting of all health science research through the development and translation of reporting guidelines. Beyond this aim, the network plans to develop a large web presence by developing and maintaining a resource centre of reporting tools, and other information for reporting research (www.equator-network.org/).
We encourage healthcare journals and editorial groups, such as the World Association of Medical Editors and the International Committee of Medical Journal Editors, to endorse PRISMA in much the same way as they have endorsed other reporting guidelines, such as CONSORT. We also encourage editors of healthcare journals to support PRISMA by updating their “instructions to authors” and including the PRISMA web address, and by raising awareness through specific editorial actions.
#### Box 1: Terminology
The terminology used to describe systematic reviews and meta-analyses has evolved over time and varies between fields. Different terms have been used by different groups, such as educators and psychologists. The conduct of a systematic review comprises several explicit and reproducible steps, such as identifying all likely relevant records, selecting eligible studies, assessing the risk of bias, extracting data, qualitative synthesis of the included studies, and possibly meta-analyses.
Initially this entire process was termed a meta-analysis and was so defined in the QUOROM statement.8 More recently, especially in healthcare research, there has been a trend towards preferring the term systematic review. If quantitative synthesis is performed, this last stage alone is referred to as a meta-analysis. The Cochrane Collaboration uses this terminology,9 under which a meta-analysis, if performed, is a component of a systematic review. Regardless of the question addressed and the complexities involved, it is always possible to complete a systematic review of existing data, but not always possible or desirable, to quantitatively synthesise results because of clinical, methodological, or statistical differences across the included studies. Conversely, with prospective accumulation of studies and datasets where the plan is eventually to combine them, the term “(prospective) meta-analysis” may make more sense than “systematic review.”
For retrospective efforts, one possibility is to use the term systematic review for the whole process up to the point when one decides whether to perform a quantitative synthesis. If a quantitative synthesis is performed, some researchers refer to this as a meta-analysis. This definition is similar to that found in the current edition of the Dictionary of Epidemiology.183
While we recognise that the use of these terms is inconsistent and there is residual disagreement among the members of the panel working on PRISMA, we have adopted the definitions used by the Cochrane Collaboration.9
Systematic review A systematic review attempts to collate all empirical evidence that fits pre-specified eligibility criteria to answer a specific research question. It uses explicit, systematic methods that are selected with a view to minimising bias, thus providing reliable findings from which conclusions can be drawn and decisions made.184 185 The key characteristics of a systematic review are (a) a clearly stated set of objectives with an explicit, reproducible methodology; (b) a systematic search that attempts to identify all studies that would meet the eligibility criteria; (c) an assessment of the validity of the findings of the included studies, such as through the assessment of risk of bias; and (d) systematic presentation and synthesis of the characteristics and findings of the included studies.
Meta-analysis Meta-analysis is the use of statistical techniques to integrate and summarise the results of included studies. Many systematic reviews contain meta-analyses, but not all. By combining information from all relevant studies, meta-analyses can provide more precise estimates of the effects of health care than those derived from the individual studies included within a review.
#### Box 2: Helping to develop the research question(s): the PICOS approach
Formulating relevant and precise questions that can be answered in a systematic review can be complex and time consuming. A structured approach for framing questions that uses five components may help facilitate the process. This approach is commonly known by the acronym “PICOS” where each letter refers to a component: the patient population or the disease being addressed (P), the interventions or exposure (I), the comparator group (C), the outcome or endpoint (O), and the study design chosen (S).186 Issues relating to PICOS affect several PRISMA items (items 6, 8, 9, 10, 11, and 18).
• P—Providing information about the population requires a precise definition of a group of participants (often patients), such as men over the age of 65 years, their defining characteristics of interest (often disease), and possibly the setting of care considered, such as an acute care hospital.
• I—The interventions (exposures) under consideration in the systematic review need to be transparently reported. For example, if the reviewers answer a question regarding the association between a woman’s prenatal exposure to folic acid and subsequent offspring’s neural tube defects, reporting the dose, frequency, and duration of folic acid used in different studies is likely to be important for readers to interpret the review’s results and conclusions. Other interventions (exposures) might include diagnostic, preventive, or therapeutic treatments; arrangements of specific processes of care; lifestyle changes; psychosocial or educational interventions; or risk factors.
• C—Clearly reporting the comparator (control) group intervention(s)—such as usual care, drug, or placebo—is essential for readers to fully understand the selection criteria of primary studies included in the systematic review, and might be a source of heterogeneity investigators have to deal with. Comparators are often poorly described. Clearly reporting what the intervention is compared with is important and may sometimes have implications for the inclusion of studies in a review—many reviews compare with “standard care,” which is otherwise undefined; this should be properly addressed by authors.
• O—The outcomes of the intervention being assessed—such as mortality, morbidity, symptoms, or quality of life improvements—should be clearly specified as they are required to interpret the validity and generalisability of the systematic review’s results.
• S—Finally, the type of study design(s) included in the review should be reported. Some reviews include only reports of randomised trials, whereas others have broader design criteria and include randomised trials and certain types of observational studies. Still other reviews, such as those specifically answering questions related to harms, may include a wide variety of designs ranging from cohort studies to case reports. Whatever study designs are included in the review, these should be reported.
Independently from how difficult it is to identify the components of the research question, the important point is that a structured approach is preferable, and this extends beyond systematic reviews of effectiveness. Ideally the PICOS criteria should be formulated a priori, in the systematic review’s protocol, although some revisions might be required because of the iterative nature of the review process. Authors are encouraged to report their PICOS criteria and whether any modifications were made during the review process. A useful example in this realm is the appendix of the “systematic reviews of water fluoridation” undertaken by the Centre for Reviews and Dissemination.187
#### Box 3: Identification of study reports and data extraction
Comprehensive searches usually result in a large number of identified records, a much smaller number of studies included in the systematic review, and even fewer of these studies included in any meta-analyses. Reports of systematic reviews often provide little detail as to the methods used by the review team in this process. Readers are often left with what can be described as the “X-files” phenomenon, as it is unclear what occurs between the initial set of identified records and those finally included in the review.
Sometimes, review authors simply report the number of included studies; more often they report the initial number of identified records and the number of included studies. Rarely, although this is optimal for readers, do review authors report the number of identified records, the smaller number of potentially relevant studies, and the even smaller number of included studies, by outcome. Review authors also need to differentiate between the number of reports and studies. Often there will not be a 1:1 ratio of reports to studies and this information needs to be described in the systematic review report.
Ideally, the identification of study reports should be reported as text in combination with use of the PRISMA flow diagram. While we recommend use of the flow diagram, a small number of reviews might be particularly simple and can be sufficiently described with a few brief sentences of text. More generally, review authors will need to report the process used for each step: screening the identified records; examining the full text of potentially relevant studies (and reporting the number that could not be obtained); and applying eligibility criteria to select the included studies.
Such descriptions should also detail how potentially eligible records were promoted to the next stage of the review (such as full text screening) and to the final stage of this process, the included studies. Often review teams have three response options for excluding records or promoting them to the next stage of the winnowing process: “yes,” “no,” and “maybe.”
Similarly, some detail should be reported on who participated and how such processes were completed. For example, a single person may screen the identified records while a second person independently examines a small sample of them. The entire winnowing process is one of “good bookkeeping” whereby interested readers should be able to work backwards from the included studies to come up with the same numbers of identified records.
There is often a paucity of information describing the data extraction processes in reports of systematic reviews. Authors may simply report that “relevant” data were extracted from each included study with little information about the processes used for data extraction. It may be useful for readers to know whether a systematic review’s authors developed, a priori or not, a data extraction form, whether multiple forms were used, the number of questions, whether the form was pilot tested, and who completed the extraction. For example, it is important for readers to know whether one or more people extracted data, and if so, whether this was completed independently, whether “consensus” data were used in the analyses, and if the review team completed an informal training exercise or a more formal reliability exercise.
#### Box 4: Study quality and risk of bias
In this paper, and elsewhere,11 we sought to use a new term for many readers, namely, risk of bias, for evaluating each included study in a systematic review. Previous papers89 188 tended to use the term “quality.” When carrying out a systematic review we believe it is important to distinguish between quality and risk of bias and to focus on evaluating and reporting the latter. Quality is often the best the authors have been able to do. For example, authors may report the results of surgical trials in which blinding of the outcome assessors was not part of the trial’s conduct. Even though this may have been the best methodology the researchers were able to do, there are still theoretical grounds for believing that the study was susceptible to (risk of) bias.
Assessing the risk of bias should be part of the conduct and reporting of any systematic review. In all situations, we encourage systematic reviewers to think ahead carefully about what risks of bias (methodological and clinical) may have a bearing on the results of their systematic reviews.
For systematic reviewers, understanding the risk of bias on the results of studies is often difficult, because the report is only a surrogate of the actual conduct of the study. There is some suggestion189 190 that the report may not be a reasonable facsimile of the study, although this view is not shared by all.88 191 There are three main ways to assess risk of bias—individual components, checklists, and scales. There are a great many scales available,192 although we caution against their use based on theoretical grounds193 and emerging empirical evidence.194 Checklists are less frequently used and potentially have the same problems as scales. We advocate using a component approach and one that is based on domains for which there is good empirical evidence and perhaps strong clinical grounds. The new Cochrane risk of bias tool11 is one such component approach.
The Cochrane risk of bias tool consists of five items for which there is empirical evidence for their biasing influence on the estimates of an intervention’s effectiveness in randomised trials (sequence generation, allocation concealment, blinding, incomplete outcome data, and selective outcome reporting) and a catch-all item called “other sources of bias”.11 There is also some consensus that these items can be applied for evaluation of studies across diverse clinical areas.93 Other risk of bias items may be topic or even study specific—that is, they may stem from some peculiarity of the research topic or some special feature of the design of a specific study. These peculiarities need to be investigated on a case-by-case basis, based on clinical and methodological acumen, and there can be no general recipe. In all situations, systematic reviewers need to think ahead carefully about what aspects of study quality may have a bearing on the results.
#### Box 5: Whether to combine data
Deciding whether to combine data involves statistical, clinical, and methodological considerations. The statistical decisions are perhaps the most technical and evidence-based. These are more thoroughly discussed in box 6. The clinical and methodological decisions are generally based on discussions within the review team and may be more subjective.
Clinical considerations will be influenced by the question the review is attempting to address. Broad questions might provide more “license” to combine more disparate studies, such as whether “Ritalin is effective in increasing focused attention in people diagnosed with attention deficit hyperactivity disorder (ADHD).” Here authors might elect to combine reports of studies involving children and adults. If the clinical question is more focused, such as whether “Ritalin is effective in increasing classroom attention in previously undiagnosed ADHD children who have no comorbid conditions,” it is likely that different decisions regarding synthesis of studies are taken by authors. In any case authors should describe their clinical decisions in the systematic review report.
Deciding whether to combine data also has a methodological component. Reviewers may decide not to combine studies of low risk of bias with those of high risk of bias (see items 12 and 19). For example, for subjective outcomes, systematic review authors may not wish to combine assessments that were completed under blind conditions with those that were not.
For any particular question there may not be a “right” or “wrong” choice concerning synthesis, as such decisions are likely complex. However, as the choice may be subjective, authors should be transparent as to their key decisions and describe them for readers.
#### Box 6: Meta-analysis and assessment of consistency (heterogeneity)
##### Meta-analysis: statistical combination of the results of multiple studies
If it is felt that studies should have their results combined statistically, other issues must be considered because there are many ways to conduct a meta-analysis. Different effect measures can be used for both binary and continuous outcomes (see item 13). Also, there are two commonly used statistical models for combining data in a meta-analysis.195 The fixed-effect model assumes that there is a common treatment effect for all included studies;196 it is assumed that the observed differences in results across studies reflect random variation.196 The random-effects model assumes that there is no common treatment effect for all included studies but rather that the variation of the effects across studies follows a particular distribution.197 In a random-effects model it is believed that the included studies represent a random sample from a larger population of studies addressing the question of interest.198
There is no consensus about whether to use fixed- or random-effects models, and both are in wide use. The following differences have influenced some researchers regarding their choice between them. The random-effects model gives more weight to the results of smaller trials than does the fixed-effect analysis, which may be undesirable as small trials may be inferior and most prone to publication bias. The fixed-effect model considers only within-study variability, whereas the random-effects model considers both within- and between-study variability. This is why a fixed-effect analysis tends to give narrower confidence intervals (that is, provides greater precision) than a random-effects analysis.110 196 199 In the absence of any between-study heterogeneity, the fixed- and random-effects estimates will coincide.
In addition, there are different methods for performing both types of meta-analysis.200 Common fixed-effect approaches are Mantel-Haenszel and inverse variance, whereas random-effects analyses usually use the DerSimonian and Laird approach, although other methods exist, including Bayesian meta-analysis.201
In the presence of demonstrable between-study heterogeneity (see below), some consider that the use of a fixed-effect analysis is counterintuitive because their main assumption is violated. Others argue that it is inappropriate to conduct any meta-analysis when there is unexplained variability across trial results. If the reviewers decide not to combine the data quantitatively, a danger is that eventually they may end up using quasi-quantitative rules of poor validity (such as vote counting of how many studies have nominally significant results) for interpreting the evidence. Statistical methods to combine data exist for almost any complex situation that may arise in a systematic review, but one has to be aware of their assumptions and limitations to avoid misapplying or misinterpreting these methods.
##### Assessment of consistency (heterogeneity)
We expect some variation (inconsistency) in the results of different studies due to chance alone. Variability in excess of that due to chance reflects true differences in the results of the trials, and is called “heterogeneity.” The conventional statistical approach to evaluating heterogeneity is a χ2 test (Cochran’s Q), but it has low power when there are few studies and excessive power when there are many studies.202 By contrast, the I2 statistic quantifies the amount of variation in results across studies beyond that expected by chance and so is preferable to Q.202 203 I2 represents the percentage of the total variation in estimated effects across studies that is due to heterogeneity rather than to chance; some authors consider an I2 value less than 25% as low.202 However, I2 also suffers from large uncertainty in the common situation where only a few studies are available,204 and reporting the uncertainty in I2 (such as 95% confidence interval) may be helpful.145 When there are few studies, inferences about heterogeneity should be cautious.
When considerable heterogeneity is observed, it is advisable to consider possible reasons.205 In particular, the heterogeneity may be due to differences between subgroups of studies (see item 16). Also, data extraction errors are a common cause of substantial heterogeneity in results with continuous outcomes.139
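To make the quantities in this box concrete, the following is a small illustrative sketch (not part of the PRISMA guidance) of an inverse-variance fixed-effect pooled estimate together with Cochran's Q and the I2 statistic; the study effect estimates and standard errors are invented for the example.

```python
import numpy as np

# Hypothetical study-level effect estimates (e.g. log odds ratios) and standard errors
effects = np.array([0.10, 0.35, 0.22, 0.05, 0.41])
se = np.array([0.12, 0.20, 0.15, 0.25, 0.18])

weights = 1.0 / se**2                                     # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)      # fixed-effect pooled estimate
pooled_se = np.sqrt(1.0 / np.sum(weights))

# Cochran's Q and I^2 (percentage of variation across studies beyond chance)
Q = np.sum(weights * (effects - pooled) ** 2)
df = len(effects) - 1
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

print(f"pooled estimate = {pooled:.3f} (SE {pooled_se:.3f}), Q = {Q:.2f}, I^2 = {I2:.1f}%")
```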
#### Box 7: Bias caused by selective publication of studies or results within studies
Systematic reviews aim to incorporate information from all relevant studies. The absence of information from some studies may pose a serious threat to the validity of a review. Data may be incomplete because some studies were not published, or because of incomplete or inadequate reporting within a published article. These problems are often summarised as “publication bias,” although the bias arises from non-publication of full studies and selective publication of results in relation to their findings. Non-publication of research findings dependent on the actual results is an important risk of bias to a systematic review and meta-analysis.
##### Missing studies
Several empirical investigations have shown that the findings from clinical trials are more likely to be published if the results are statistically significant (P<0.05) than if they are not.125 206 207 For example, of 500 oncology trials with more than 200 participants for which preliminary results were presented at a conference of the American Society of Clinical Oncology, 81% with P<0.05 were published in full within five years compared with only 68% of those with P>0.05.208
Also, among published studies, those with statistically significant results are published sooner than those with non-significant findings.209 When some studies are missing for these reasons, the available results will be biased towards exaggerating the effect of an intervention.
##### Missing outcomes
In many systematic reviews only some of the eligible studies (often a minority) can be included in a meta-analysis for a specific outcome. For some studies, the outcome may not be measured or may be measured but not reported. The former will not lead to bias, but the latter could.
Evidence is accumulating that selective reporting bias is widespread and of considerable importance.42 43 In addition, data for a given outcome may be analysed in multiple ways and the choice of presentation influenced by the results obtained. In a study of 102 randomised trials, comparison of published reports with trial protocols showed that a median of 38% efficacy and 50% safety outcomes per trial, respectively, were not available for meta-analysis. Statistically significant outcomes had higher odds of being fully reported in publications when compared with non-significant outcomes for both efficacy (pooled odds ratio 2.4 (95% confidence interval 1.4 to 4.0)) and safety (4.7 (1.8 to 12)) data. Several other studies have had similar findings.210 211
##### Detection of missing information
Missing studies may increasingly be identified from trials registries. Evidence of missing outcomes may come from comparison with the study protocol, if available, or by careful examination of published articles.11 Study publication bias and selective outcome reporting are difficult to exclude or verify from the available results, especially when few studies are available.
If the available data are affected by either (or both) of the above biases, smaller studies would tend to show larger estimates of the effects of the intervention. Thus one possibility is to investigate the relation between effect size and sample size (or more specifically, precision of the effect estimate). Graphical methods, especially the funnel plot,212 and analytic methods (such as Egger’s test) are often used,213 214 215 although their interpretation can be problematic.216 217 Strictly speaking, such analyses investigate “small study bias”; there may be many reasons why smaller studies have systematically different effect sizes than larger studies, of which reporting bias is just one.218 Several alternative tests for bias have also been proposed, beyond the ones testing small study bias,215 219 220 but none can be considered a gold standard. Although evidence that smaller studies had larger estimated effects than large ones may suggest the possibility that the available evidence is biased, misinterpretation of such data is common.123
## Notes
Cite this as: BMJ 2009;339:b2700
## Footnotes
• Lorenzo Moja helped with the preparation and the several updates of the manuscript and assisted with the preparation of the reference list. AL is the guarantor of the manuscript.
• Competing interests: None declared.
• Provenance and peer review: Not commissioned; externally peer reviewed.
• In order to encourage dissemination of the PRISMA statement, this article is freely accessible on bmj.com and will also be published in PLoS Medicine, Annals of Internal Medicine, Journal of Clinical Epidemiology, and Open Medicine. The authors jointly hold the copyright of this article. For details on further use, see the PRISMA website (www.prisma-statement.org/).
This is an open-access article distributed under the terms of the Creative Commons Attribution Non-commercial License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Prove the following
Question:
If for $x \geq 0, y=y(x)$ is the solution of the differential equation $(x+1) d y=\left((x+1)^{2}+y-3\right) d x, y(2)=0$ then $y(3)$ is equal to______.
Solution:
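One way to obtain the answer (a sketch; the equation is a first-order linear ODE in $y$):

Rewriting, $\frac{dy}{dx}-\frac{y}{x+1}=(x+1)-\frac{3}{x+1}$, with integrating factor $\frac{1}{x+1}$,

$\frac{d}{dx}\left(\frac{y}{x+1}\right)=1-\frac{3}{(x+1)^{2}} \Rightarrow \frac{y}{x+1}=x+\frac{3}{x+1}+C$

From $y(2)=0$: $0=2+1+C$, so $C=-3$ and $y=x(x+1)+3-3(x+1)=x^{2}-2x$.

Therefore $y(3)=3^{2}-2\cdot 3=3$.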
## Thursday, April 06, 2017
### Does the NCLT Have Enough Judges?
by Devendra Damle and Prasanth Regy.
The recently passed Finance Bill made headlines for combining tribunals, purportedly to rationalise their functioning. This is not the first such attempt. The Parliament set up the National Company Law Tribunal (NCLT) with a similar objective of streamlining all judicial proceedings under the Companies Act, 1956 and Companies Act, 2013. It has been operating since June 2016. The NCLT will hear all cases under these Acts which would earlier have gone to one of four existing courts and tribunals: the Company Law Board (CLB), Board for Industrial and Financial Reconstruction (BIFR), high courts (HCs) and Debt Recovery Tribunals (DRT).
However, commentators have pointed out that the NCLT is ill-equipped to cope with the pending cases it will inherit from the high courts and three tribunals. Others have pointed out that many of the legal and procedural issues which made the other tribunals ineffective will likely plague the NCLT too. One common concern is whether there are enough judges.
The popular discourse till date has largely focussed on pending cases which the NCLT will inherit. While the volume of these cases is substantial, we estimate that the volume of new cases which will be instituted will be even larger. In this article we estimate how many judges the NCLT will need to handle this caseload. We find that the present strength of the NCLT is far lower than what is required. If this problem is not solved, NCLT is likely to end up a slow and inefficient tribunal.
### Our Approach
We use the tribunals which would have originally heard the cases as a proxy for the different types of cases NCLT will hear. So we have HC-type cases, BIFR-type cases, DRT-type cases, and CLB-type cases. To calculate the number of judges NCLT needs, we need to know the:
1. Annual rate of institution of cases of each type (I), and
2. Annual rate of disposal of cases per judge for cases of each type (D).
To get I, we take the average number of cases instituted every year in HCs and each of the three tribunals. We only count the kind of cases which will be transferred to NCLT. For example, original jurisdiction company petitions were earlier being heard by HCs, but will now be heard by NCLT. So, from the total cases instituted in HCs every year, we only count original jurisdiction company petitions.
For D, we first calculate the average number of cases (of the relevant kind) disposed of by HCs and each of the three tribunals every year. Then we divide this average disposal rate for each of them by the number of judges. For example, the CLB has 5 benches. So D for CLB is equal to total cases disposed of by CLB in one year divided by 5. Thus, we get the average disposal rate per judge for each of the four types.
I/D gives us the number of judges required for disposing of each type of case instituted in one year. Adding all values of I/D gives the total number of judges NCLT will need to dispose of all cases of all types instituted every year.
### What is the NCLT's caseload?
In an article by the consultancy Alvarez and Marsal, the authors estimate that a total of 24,900 existing cases — 4,000 from CLB, 700 from BIFR, 5,200 from HCs, and 15,000 from DRT — will be transferred to NCLT. Cases which were being heard by CLB will be transferred to the NCLT automatically. Cases from BIFR will only be taken up by the NCLT if the parties file fresh applications. Cases from HCs which are eligible for transfer to NCLT will be transferred in stages through notifications by the Ministry of Corporate Affairs. Assuming all cases do get transferred to NCLT, the tribunal will start with 24,900 cases.
What about the admission of new cases? From 2011-12 to 2014-15, on an average the CLB admitted about 10,170 cases per year (See CLB Annual Statistics).
In the same period, BIFR admitted an average 140 new cases per year (See BIFR Annual Statistics).
All HCs put together admitted approximately 14,000 original jurisdiction company matters in 2015-16 (See SC Annual Report 2015-16). In the Bombay, Delhi and Orissa HCs, approximately 90% of all original jurisdiction company matters are company petitions and applications made thereunder, cases likely to now go to NCLT. Assuming this proportion holds good for all HCs, about 12,700 cases which would earlier have gone to HCs will now be heard by NCLT.
DRTs admitted an average 21,470 Original Applications (OA cases) per year from 2012-13 to 2014-15 under the Recovery of Debts Due to Banks and Financial Institutions Act, 1993 (See here and here). In a sample of about 15,000 cases we collected from Delhi DRT-3, company and LLP related matters (matters which will now be heard by NCLT) constitute approximately 45% of the total cases. That brings the number to 9,660 cases for all DRTs put together. These are the cases which will now go to NCLT instead of DRTs.
Table.1: Institution rate of fresh cases in NCLT
| Type of Cases | Institution Rate (I) (cases per year) |
|---|---|
| HCs | 12,700 |
| BIFR | 140 |
| DRT | 9,660 |
| CLB | 10,170 |
| Total | 32,670 |
This brings the total volume of fresh cases to 32,670. This will be the annual rate of institution of new cases in NCLT, assuming it stays constant over the years. The next question we have to tackle is: how many of these cases can be disposed of in a year?
### What will be the disposal rate of cases?
The DRT had an average disposal rate of 360 cases per judge per year (from 2012-13 to 2014-15). The CLB had an average disposal rate of approximately 1,705 per judge per year (from 2012-13 to 2014-15). We assume the disposal rate for these types of cases will be the same even in NCLT.
For cases from DRT and CLB we can straightaway use the respective tribunals' disposal rates.
For HCs and BIFR, the disposal rate cannot be calculated in the same manner. HCs deal with a large variety of matters. Of these, company petitions — the kind of cases which will now be heard by NCLT — form a small fraction. The average per judge annual disposal rate for all HCs put together is 19. Bombay HC, which has the highest disposal rate for company petitions among all HCs, only disposes of 60 company petition cases per judge per year. It stands to reason that a specialised tribunal like NCLT would have a higher disposal rate than that. Therefore, we assume that in NCLT these cases will have the same disposal rate as DRTs' average disposal rate i.e. 360 per judge per year.
BIFR is notorious for cases pending for a very long time. In the three years from 2010 to 2013, BIFR disposed of just 169 cases (62 pending). Since the Companies Act, 2013 and the Insolvency and Bankruptcy Code, 2015 have more streamlined processes for winding-up of companies, it is expected that the disposal rate would be higher when NCLT takes over cases being heard by BIFR. Therefore, we assume that when NCLT hears these cases it will dispose of them at a rate equal to DRT's disposal rate i.e. 360 per judge per year.
For cases from HCs and BIFR, why do we take the disposal rate of DRTs and not CLB? Unlike DRT, 60–80% of the CLB's caseload consisted of compliance related matters or small matters. For example, in 2013-14 and 2014-15, 60% and 40% (respectively) of the cases instituted in CLBs were matters regarding taking deposits without advertising (Sec.58A(9) Companies Act, 1956). These are routine matters and have high disposal rates; the ratio of annual disposals to institutions is consistently close to 1. Substantive matters on the other hand constitute 20–40% of the instituted cases, and their disposal rate is much lower. For example, in cases regarding mismanagement of companies (Sec. 397, 398 Companies Act, 1956), the ratio of annual disposals to institutions is around 0.7. Between 2013-14 to 2014-15 cases under Sec. 397/398 constituted 4–6% of total cases instituted in CLB, but represented 40% of the total pending matters at the end of the year. Since most of the cases heard by HCs and BIFR are expected to be of a substantive nature, we cannot use CLB's average disposal rate. Therefore, we assume that NCLT's disposal rate for these cases will be similar to the DRTs' average disposal rate, rather than the CLB's.
### How many judges does the NCLT need?
Armed with the institution and disposal rates of each type of case we can now calculate the number of judges required. One last consideration is the structural difference between the NCLT and the other tribunals. The NCLT has two types of benches, viz. division benches consisting of 2 judges and single benches consisting of 1 judge. Therefore, in the case of NCLT we calculate the number of benches rather than judges.
| Type of Cases | Institution Rate (I) (cases per year) | Disposal Rate (D) (cases per bench per year) | Benches Required (I/D) |
|---|---|---|---|
| HCs + BIFR | 12,840 | 360 | 36 |
| DRT | 9,660 | 360 | 27 |
| CLB | 10,170 | 1,705 | 6 |
| Total | 32,670 | 473 | 69 |
Thus, we estimate that the NCLT will require 69 benches just to keep up with its caseload. We can also use these disposal rates to calculate the number of judges required for clearing the inherited backlog of 24,900 cases. If these cases are to be disposed of steadily over the next five years, NCLT would need about 80 benches.
The NCLT currently has 14 benches (in eleven locations) (See here, here and here). With a disposal rate of 473 per bench per year, and 14 benches, NCLT can dispose of 6,620 cases per year, i.e. only a fifth of the incoming fresh cases. At this rate, the NCLT will accumulate a backlog of 26,050 cases per year; a total backlog of 1,30,250 cases in five years. The Central Government is planning to establish one NCLT bench in every HC jurisdiction, i.e. a total of 24. Even with 24 benches, the NCLT would accumulate a backlog of 21,320 every year. That's a total backlog of 1,06,600 cases over the next five years.
It should be noted that in these calculations we haven't factored in any increase in the rate of filing fresh cases, nor have we considered the entirely new categories of cases (e.g.: class action suits) which the NCLT will hear. These will place an even greater burden on the tribunal.
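The arithmetic above can be reproduced in a few lines. The sketch below simply re-derives the bench requirement and the backlog projections from the institution and disposal rates quoted in this article; it is illustrative only.

```python
import math

# Institution rates (fresh cases per year) and assumed per-bench disposal rates,
# taken from the figures quoted in this article.
caseload = {
    "HCs + BIFR": {"instituted": 12_840, "disposal_per_bench": 360},
    "DRT":        {"instituted": 9_660,  "disposal_per_bench": 360},
    "CLB":        {"instituted": 10_170, "disposal_per_bench": 1_705},
}

benches_needed = sum(v["instituted"] / v["disposal_per_bench"] for v in caseload.values())
print(f"Benches needed to keep up with fresh cases: {math.ceil(benches_needed)}")   # ~69

# Backlog projection for a given number of benches
total_instituted = sum(v["instituted"] for v in caseload.values())   # 32,670 fresh cases per year
avg_disposal_per_bench = 473                                          # blended rate from the table above

for benches in (14, 24):
    disposed = benches * avg_disposal_per_bench
    yearly_backlog = total_instituted - disposed
    print(f"{benches} benches: ~{disposed:,} disposals/yr, "
          f"backlog grows by ~{yearly_backlog:,}/yr (~{5 * yearly_backlog:,} over five years)")
```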
### Some corroborating evidence for our predictions
Some data on the disposal rates of NCLT are already available. In the six months from June 2016 to November 2016, NCLT disposed of 1,930 cases. From June to August only 240 cases were disposed of. This is probably because the NCLT just started functioning in June. The bulk of the cases — 1690 — were disposed of in the last three months, i.e. from September to November. If we take the disposal rate for just these three months and project it for the whole year, it translates to a disposal rate of approximately 6,750 cases per year. This is close to our estimated disposal rate of 6,620 cases per year.
### Conclusions
There is evidence to suggest that badly designed procedures which allow unnecessary adjournments, lost working days, and administrative inefficiency substantially contribute to judicial delays and pendency. In the case of NCLT, even if these issues are fixed and we manage to double its disposal rate, the current bench strength would still be far short of what is required to handle its caseload.
This analysis draws attention to the fact that we need to do a better job of estimating the judicial resources required to handle case loads. What we have presented in this article is an example of a Judicial Impact Assessment (JIA): estimating the resources required in the judiciary to handle the case load, using data on court productivity. A pre-requisite for JIA is the availability of high-quality empirical data on case loads and productivity. It is important that JIA should be institutionalised in the legislative process, so that courts are able to deliver timely justice.
The US has a specialised body called the National Center for State Courts dedicated to conducting research on the functioning of state courts. It maintains databases of case-level data and has developed models for various kinds of judicial needs assessments for state courts. It has even developed software for court and case management, which are currently used in the US state courts. There is a similar system in place for federal courts. The estimates for judicial resource requirements in both cases are based on weighted caseloads which are derived from granular, case-level data.
One significant difference between India and the US is that in comparison to the US, it is easier for the Government of India to change the number of benches as needed. In the US, new judgeships can only be created by an act of the legislature. By contrast, in India, new benches for the NCLT can be created by the Central Government simply by notification. What is however lacking is a sound process for determining the number of judges/benches needed.
With its lack of adequate number of benches, the NCLT is likely to be plagued by delays just like its predecessors. A more efficient judicial procedure, and greater bench strength, are both required to effect a lasting solution.
### References
Pratik Datta and Prasanth Regy. Judicial procedures will make or break the Insolvency and Bankruptcy Code. Ajay Shah's Blog, January 2017.
Prasanth Regy, Shubho Roy and Renuka Sane. Understanding Judicial Delays in India: Evidence from Debt Recovery Tribunals. Ajay Shah's Blog, May 18, 2016.
Pratik Datta and Ajay Shah. How to make courts work? Ajay Shah's Blog, February 22, 2015.
Reserve Bank of India. Report on Trend and Progress of Banking in India 2015-16
Nikhil Shah, Khushboo Vaish, Kavya Ramanathan. NCLT Readiness Report. Alvares and Marsal India, 2017
Company Law Board, annual statistics.
Lok Sabha Questions on cases pending in DRT dated 4th March 2016 and 4th December 2015.
The authors are researchers at the National Institute of Public Finance and Policy, New Delhi.
#### 1 comment:
1. A very nicely written article indeed. Ultimately, what one gathers is: -
(a) Nothing is going to change in a hurry (IF AT ALL). Delays will continue to be the order of the day.
(b) Defaulting promoters need not change their strategy for exploiting the new system.
(c) Banks can continue to wait endlessly. It is the common man's money that has been given away by the banks (with full knowledge that there is no hope for recovery). So there is no problem.
2. If the NCLT's ruling is inimical to a debtor, isn't there a way of causing further delays? NCLAT followed by High Court and Supreme Court? Could you kindly share your views on this aspect too??
## Explicit reconstruction in quantum cohomology and $$K$$-theory. (English. French summary) Zbl 1360.14130
The author proves that if $$\sum_{d}I_{d}Q^{d}$$, where the $$I_{d}(z,z^{-1})$$ are cohomology-valued Laurent $$z$$-series, represents a point on the graph of d$$\mathcal{F}$$ (the differential of the genus-0 descendent potential) in the symplectic loop space $$\mathcal{H}$$, and if the $$\Phi_{\alpha}$$ are polynomials in $$p_{1},\ldots , p_{r}$$, then the family $I(\tau)=\sum_{d}I_{d}Q^{d}\exp\Big\{ \frac{1}{z}\sum_{\alpha}\tau_{\alpha} \Phi_{\alpha}(p_{1}-zd_{1},\ldots , p_{r}-zd_{r}) \Big \}$ lies on the graph of d$$\mathcal{F}$$. Here $$Q^{d}$$ stands for the element corresponding to $$d$$ in the semigroup ring of the Mori cone $$\mathcal{M}$$ of the compact Kähler manifold $$X$$. Moreover, for arbitrary scalar power series $$c_{\alpha}(z)=\sum_{k\geq 0}\tau_{\alpha,k}z^{k}$$, the linear combination $$\sum_{\alpha}c_{\alpha}(z)z\partial_{\tau_{\alpha}}I$$ of the derivatives also lies on the graph. Furthermore, in the case when $$p_{1},\ldots ,p_{r}$$ generate $$H^{*}(X,\mathbb{Q})$$ and the $$\Phi_{\alpha}$$ represent a linear basis, such linear combinations comprise the whole graph.
### MSC:
14N35 Gromov-Witten invariants, quantum cohomology, Gopakumar-Vafa invariants, Donaldson-Thomas invariants (algebro-geometric aspects) 53D45 Gromov-Witten invariants, quantum cohomology, Frobenius manifolds 19L10 Riemann-Roch theorems, Chern characters 32Q15 Kähler manifolds
# Extraction of uranium from seawater: a few facts
* Corresponding author
Abstract : Although the uranium concentration in seawater is only about 3 micrograms per liter, the quantity of uranium dissolved in the world's oceans is estimated at 4.5 billion tonnes of uranium metal (tU). In contrast, the current conventional terrestrial resource is estimated at about 17 million tU. However, for a number of reasons the extraction of significant amounts of uranium from seawater remains today more a dream than a reality. Firstly, pumping seawater to extract this uranium would consume more energy than could be produced with the recovered uranium. If existing industrial flow rates are used instead, for example those of a nuclear power plant, the quantity recoverable per year remains very low, because huge quantities of water must be treated. To produce the annual world uranium consumption (around 65,000 tU), all the uranium in at least 2 x 10$^{13}$ tonnes of seawater, the volume equivalent of the entire North Sea, would have to be extracted. Only the great ocean currents move such quantities of water without pumping, and the idea is to extract, even very partially, the uranium they carry. For example Japan, which used about 8,000 tU per year before the Fukushima accident, sees about 5.2 million tU passing every year in the Kuro Shio ocean current along which it lies. Many research works have been published on adsorbents immersed in these currents; after submersion, the adsorbents are chemically treated to recover the uranium. The final quantities remain very low compared with the complex and costly operations to be carried out at sea. One kilogram of adsorbent, after one month of submersion, yields about 2 g of uranium, and the adsorbent can only be used six times due to decreasing efficiency. An industrial extrapolation exercise for the extraction of 1,200 tU/year with these values gives a very costly installation spread over more than 1000 km$^2$ of sea, with many boats for transportation and maintenance. The ecological management of such a huge installation would present significant challenges. Research will continue to try to increase the efficiency of these adsorbents, but it is clear that it would be very risky today to base a long-term industrial strategy on significant production of uranium from seawater at an affordable cost.
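As a rough, illustrative cross-check of the orders of magnitude quoted in the abstract (assuming roughly 1,000 litres of seawater per tonne; not part of the original record):

```python
# Order-of-magnitude check of the figures quoted in the abstract (illustrative only).
u_conc_ug_per_litre = 3.0                  # uranium concentration in seawater
litres_per_tonne = 1_000.0                 # ~1 tonne of seawater per 1000 litres (density ~1)
annual_demand_tU = 65_000                  # world uranium consumption quoted above

u_grams_per_tonne_seawater = u_conc_ug_per_litre * 1e-6 * litres_per_tonne   # ~0.003 g of U per tonne
seawater_tonnes = annual_demand_tU * 1e6 / u_grams_per_tonne_seawater

print(f"Seawater containing one year of demand: ~{seawater_tonnes:.1e} tonnes")  # ~2e13, as stated
```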
Document type :
Journal articles
Domain :
Cited literature [12 references]
https://hal-cea.archives-ouvertes.fr/cea-02305455
Contributor : Bruno Savelli <>
Submitted on : Friday, October 4, 2019 - 11:09:22 AM
Last modification on : Tuesday, April 28, 2020 - 11:28:13 AM
### File
Gui.pdf
Publisher files allowed on an open archive
### Citation
Joël Guidez, Sophie Gabriel. Extraction of uranium from seawater: a few facts. EPJ N - Nuclear Sciences & Technologies, EDP Sciences, 2016, 2, pp.10. ⟨10.1051/epjn/e2016-50059-2⟩. ⟨cea-02305455⟩
# [luatex] Question about creating virtual fonts
Henri Menke henrimenke at gmail.com
Thu Jun 27 06:47:31 CEST 2019
Dear list,
I want to create a virtual font where the digits are mirrored upside
down. To this end I defined a virtual font where in the commands arrays
I apply a PDF transformation to the digits. However, when I write
1234567 in the source, the characters show up upside down but also in
reverse. Why is that? Also, all the other characters seem to
have zero width.
Another issue is that if I add a size field in the fonts array, the font
completely disappears from the PDF with no error message. I'm surely
doing something wrong but the LuaTeX manual has very little
documentation about the creation of virtual fonts.
MWE is below.
Many thanks, Henri
------------------
{\catcode`\%=12\gdef\letterpercent{%}}
\directlua{
f.name = "cmtt10-digits"
f.type = "virtual"
f.fonts = {
{
name = "cmtt10",
% size = 10 % adding this field makes the font disappear
}
}
for i,v in pairs(f.characters) do
local height = f.characters[i].height
local width = f.characters[i].width
if (string.char(i)):find("[1234567890]") then
v.commands = {
{ "special", string.format("pdf: q 1 0 0 -1 0 \letterpercent d cm", height) },
{ "font", 0 },
{ "char", i },
{ "special", "pdf: Q" },
}
else
v.commands = {
{ "font", 0 },
{ "char", i },
}
end
end
myfont = font.define(f)
}
\def\myfont{\setfontid\directlua{tex.print(myfont)}}
\font\cmtt=cmtt10
\cmtt abc 1234567 def
\myfont abc 1234567 def
\bye
# Finding a hyperplane that splits a convex polytope evenly
Say we have a convex polytope in standard form:
\begin{equation*} \begin{array}{rl} \mathbf{A}\mathbf{x} = \mathbf{b} \\ \mathbf{x} \ge 0 \end{array} \end{equation*}
Are there any known methods for finding a hyperplane $\mathbf{d} \mathbf{x} +d_0= 0$ that splits the polyhedron in a way that the number of vertices on each side of the hyperplane is approximately the same? (i.e. a hyperplane that minimizes the absolute difference of vertex cardinalities on the two sides of the split).
Also, are there any known results regarding the computational complexity of this problem?
### Addendum: Restricting the types of cuts:
Here is a variation of the original problem with the hope that it is easier to solve than the original one:
Is there a way to efficiently compute or estimate for which coordinate $i$ a hyperplane of the form $d_ix_i + d_0 = 0$ would yield the lowest absolute difference of vertex cardinalities on both sides of the split? By efficient I mean anything more efficient than the exhaustive enumeration of vertex cardinalities for all such possible splits.
### Note:
I first asked this question in CSTheory.stackexchange.com last week. Since the question has not seen any significant progress since then, I thought I could try here.
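For reference, the exhaustive-enumeration baseline mentioned in the addendum could be sketched roughly as follows (illustrative Python; vertices of the standard-form polytope are enumerated as basic feasible solutions, which is exponential in the number of variables and only workable for small instances):

```python
import itertools
import numpy as np

def vertices_standard_form(A, b, tol=1e-9):
    """Enumerate vertices of {x : Ax = b, x >= 0} as basic feasible solutions.
    Exponential in the number of columns -- only usable for small instances."""
    m, n = A.shape
    verts = []
    for cols in itertools.combinations(range(n), m):
        B = A[:, cols]
        if np.linalg.matrix_rank(B) < m:
            continue
        xB = np.linalg.solve(B, b)
        if np.all(xB >= -tol):
            x = np.zeros(n)
            x[list(cols)] = xB
            if not any(np.allclose(x, v, atol=1e-7) for v in verts):
                verts.append(x)
    return np.array(verts)

def split_imbalance(verts, d, d0, tol=1e-9):
    """Absolute difference of vertex counts on the two open sides of d.x + d0 = 0."""
    s = verts @ d + d0
    return abs(int(np.sum(s > tol)) - int(np.sum(s < -tol)))

# Tiny example: the simplex x1 + x2 + x3 = 1, x >= 0, cut by the hyperplane x1 = 0.5
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
V = vertices_standard_form(A, b)
print(len(V), "vertices; imbalance:", split_imbalance(V, np.array([1.0, 0.0, 0.0]), -0.5))
```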
# Multiple values not for R4RS
Well I guess now the focus is to generate a proposal in preparation
for the next Scheme meeting. Hopefully, the proposal presented at the
meeting will have been agreed to by those reachable by E-mail. It
sounds like those interested in records are following the same plan.
R4RS just is not going to have multiple values. Let's make sure R5RS
does.
>> From: Morris J. Katz <katz@Polya.Stanford.EDU>
...
>> The escape procedure is a Scheme procedure which when applied to a
>> value(s) will ignore whatever continuation is in effect at the time of
>> application and will instead give its argument(s) to the continuation
>> that was in effect when the escape procedure was created. The arity
>> of an escape procedure created in the 'generator' position of a
>> 'with-values' form must match that of the 'receiver'. All other
>> escape procedures must accept at least one argument. Some
>> implementions may choose to ignore extra arguments, others may signal
>> an error when more then one argument is given.
...
Morry, I think your wording is better than my attempt, but how about
this small modification?
The escape procedure is a Scheme procedure which when applied to some
values will ignore whatever continuation is in effect at the time of
application and will instead give its arguments to the continuation
that was in effect when the escape procedure was created. The arity
of an escape procedure created in the 'generator' position of a
'with-values' form must match that of the 'receiver'. All other
escape procedures must accept at least one argument. Some
implementations may choose to ignore extra arguments, others may signal
an error when more than one argument is given.
I am not happy with Morry's rewording of the description of
with-values. Any more suggestions?
John
PS. The intended formal description of with-values is:
%%% Raw TeX.
$$\hbox{\it with-values} = \hbox{\it twoarg }(\lambda \epsilon_1 \epsilon_2\kappa . \hbox{ \it applicate } \epsilon_1 \langle \rangle \lambda \epsilon^\star . \hbox{ \it applicate } \epsilon_2 \epsilon^\star \kappa)$$
\end
# Control Systems/Glossary
The following is a listing of some of the most important terms from the book, along with a short definition or description.
## A, B, C
Acceleration Error
The amount of steady state error of the system when stimulated by a unit parabolic input.
Acceleration Error Constant
A system metric that determines that amount of acceleration error in the system.
Adaptive Control
A branch of control theory where controller systems are able to change their response characteristics over time, as the input characteristics to the system change.
Adaptive Gain
when control gain is varied depending on system state or condition, such as a disturbance
Additive
A system is additive if a sum of inputs results in a sum of outputs.
Analog System
A system that is continuous in time and magnitude.
ARMA
Autoregressive Moving Average, see [1]
ATO
Analog Timed Output. Control loop output is correlated to a timed contact closure.
A/M
Auto-Manual. Control modes, where auto typically means output is computer-driven, calculated while manual can be field-driven or merely using a static setpoint.
Bilinear Transform
a variant of the Z-transform, see [2]
Block Diagram
A visual way to represent a system that displays individual system components as boxes, and connections between systems as arrows.
Bode Plots
A set of two graphs, a "magnitude" and a "phase" graph, that are both plotted on log scale paper. The magnitude graph is plotted in decibels versus frequency, and the phase graph is plotted in degrees versus frequency. Used to analyze the frequency characteristics of the system.
Bounded Input, Bounded Output
BIBO. If the input to the system is finite, then the output must also be finite. A condition for stability.
Cascade
When the output of a control loop is fed to/from another loop.
Causal
A system whose output does not depend on future inputs. All physical systems must be causal.
Classical Approach
See Classical Controls.
Classical Controls
A control methodology that uses the transform domain to analyze and manipulate the Input-Output characteristics of a system.
Closed Loop
a controlled system using feedback or feedforward
Compensator
A Control System that augments the shortcomings of another system.
Condition Number
The ratio of the largest to the smallest singular value of a matrix; a measure of how sensitive the associated system of equations is to perturbations in the data.
Conditional Stability
A system with variable gain is conditionally stable if it is BIBO stable for certain values of gain, but not BIBO stable for other values of gain.
Continuous-Time
A system or signal that is defined at all points t.
Control Rate
the rate at which control is computed and any appropriate output sent. Lower bound is sample rate.
Control System
A system or device that manages the behavior of another system or device.
Controller
See Control System.
Convolution
A complex operation on functions defined by the integral of the two functions multiplied together, and time-shifted.
Convolution Integral
The integral form of the convolution operation.
CQI
Control Quality Index, ${\displaystyle =1-abs(PV-SP)/max[PVmax-SP,SP-PVmin]}$ , 1 being ideal.
CV
Controlled variable
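The convolution entries above lend themselves to a quick numerical experiment; the following is a small illustrative sketch (a discrete approximation, not part of the formal glossary):

```python
import numpy as np

# Discrete approximation of convolving an input with an impulse response
dt = 0.01
t = np.arange(0, 5, dt)
h = np.exp(-t)                          # impulse response of a first-order system
u = np.ones_like(t)                     # unit step input
y = np.convolve(u, h)[:len(t)] * dt     # y(t) = (u * h)(t), scaled by the time step

print(round(float(y[-1]), 3))           # approaches 1.0, the system's DC gain
```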
## D, E, F
Damping Ratio
A constant that determines the damping properties of a system.
Dead Time
time shift between the output change and the related effect (typ. at least one control sample). One sees "Lag" used for this action sometimes.
Digital
A system that is both discrete-time, and quantized.
Direct action
target output increase is required to bring the process variable (PV) to setpoint (SP) when PV is below SP. Thus, PV increases with output increase directly.
Discrete magnitude
See quantized.
Discrete time
A system or signal that is only defined at specific points in time.
Distributed
A system is distributed if it has both an infinite number of states, and an infinite number of state variables. See Lumped.
Dynamic
A system is called dynamic if it has memory, that is, if its current output depends on previous inputs as well as the current input. See Instantaneous, Memory.
Eigenvalues
Solutions to the characteristic equation of a matrix. If the matrix is itself a function of time, the eigenvalues might be functions of time. In this case, they are frequently called eigenfunctions.
Eigenvectors
The nullspace vectors of the characteristic equation for particular eigenvalues. Used to determine state-transitions, among other things. See [3]
Euler's Formula
An equation that relates complex exponentials to complex sinusoids.
Exponential Weighted Average (EWA)
Apportions fractional weight to new and existing data to form a working average. Example EWA=0.70*EWA+0.30*latest, see Filtering.
External Description
A description of a system that relates the input of the system to the output, without explicitly accounting for the internal states of the system.
Feedback
The output of the system is passed through some sort of processing unit H, and that result is fed into the plant as an input.
Feedforward
when a priori knowledge is used to forecast at least part of the control response.
Filtering (noise)
Use of signal smoothing techniques to reject undesirable components like noise. Can be as simple as using exponential weighted averaging on the input.
Final Value Theorem
A theorem that allows the steady-state value of a system to be determined from the transfer function.
FOH
First order hold
Frequency Response
The response of a system to sinusoids of different frequencies. The Fourier Transform of the impulse response.
Fourier Transform
An integral transform, similar to the Laplace Transform, that analyzes the frequency characteristics of a system.
See [4]
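Several entries in this section (eigenvalues, eigenvectors, the exponential weighted average) can be explored numerically; a brief illustrative sketch follows (an aid to the definitions rather than part of the glossary proper):

```python
import numpy as np

# Eigenvalues and eigenvectors of a simple state matrix
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)
print("eigenvalues:", eigenvalues)          # two negative real eigenvalues (-1 and -2) => stable
print("eigenvectors (columns):\n", eigenvectors)

# Exponential weighted average (EWA) as a simple noise filter
np.random.seed(0)
signal = 1.0 + 0.2 * np.random.randn(50)    # noisy measurement around 1.0
ewa = signal[0]
for sample in signal[1:]:
    ewa = 0.70 * ewa + 0.30 * sample        # the weighting used in the EWA entry above
print("filtered value after 50 samples:", round(float(ewa), 3))
```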
## G, H, I
Game Theory
A branch of study that is related to control engineering, and especially optimal control. Multiple competing entities, or "players" attempt to minimize their own cost, and maximize the cost of the opponents.
Gain
A constant multiplier in a system that is typically implemented as an amplifier or attenuator. Gain can be changed, but is typically not a function of time. Adaptive control can use time-adaptive gains that change with time.
General Description
An external description of a system that relates the system output to the system input, the system response, and a time constant through integration.
Hendrik Wade Bode
Electrical Engineer, did work in control theory and communications. Is primarily remembered in control engineering for his introduction of the Bode plot.
Harry Nyquist
Electrical Engineer, did extensive work in controls and information theory. Is remembered in this book primarily for his introduction of the Nyquist Stability Criterion.
Homogeneity
Property of a system whose scaled input results in an equally scaled output.
Hybrid Systems
Systems which have both analog and digital components.
Impulse
A function denoted δ(t), that is the derivative of the unit step.
Impulse Response
The system output when the system is stimulated by an impulse input. The Inverse Laplace Transform of the transfer function of the system.
Initial Conditions
The conditions of the system at time ${\displaystyle t=t_{0}}$ , where t0 is the first time the system is stimulated.
Initial Value Theorem
A theorem that allows the initial conditions of the system to be determined from the Transfer function.
Input-Output Description
See external description.
Instantaneous
A system is instantaneous if the system doesn't have memory, and if the current output of the system is only dependent on the current input. See Dynamic, Memory.
Integrated Absolute Error (IAE)
absolute error (ideal vs actual performance) is integrated over the analysis period.
Integrated Squared Error (ISE)
squared error (ideal vs actual performance) is integrated over the analysis period.
Integrators
A system pole at the origin of the S-plane. Has the effect of integrating the system input.
Inverse Fourier Transform
An integral transform that converts a function from the frequency domain into the time-domain.
Inverse Laplace Transform
An integral transform that converts a function from the S-domain into the time-domain.
Inverse Z-Transform
An integral transform that converts a function from the Z-domain into the discrete time domain.
## J, K, L
Lag
The observed process impact from an output is slower than the control rate.
Laplace Transform
An integral transform that converts a function from the time domain into a complex frequency domain.
Laplace Transform Domain
A complex domain where the Laplace Transform of a function is graphed. The imaginary part of s is plotted along the vertical axis, and the real part of s is plotted along the horizontal axis.
Left Eigenvectors
Left-hand nullspace solutions to the characteristic equation of a matrix for given eigenvalues. The rows of the inverse transition matrix.
Linear
A system that satisfies the superposition principle. See Additive and Homogeneous.
Linear Time-Invariant
LTI. See Linear, and Time-Invariant.
Low Clamp
User-applied lower bound on control output signal.
L/R
Local/Remote operation.
LQR
Linear Quadratic Regulator. See Optimal Control.
Lumped
A system with a finite number of states, or a finite number of state variables.
## M, N, O
Magnitude
the gain component of frequency response. This is often all that is considered in saying a discrete filter's response is well matched to the analog's. It is the DC gain at 0 frequency.
Marginal Stability
A system has an oscillatory response, as determined by having imaginary poles or imaginary eigenvalues.
Mason's Rule
see [5]
MATLAB
Commercial software having a Control Systems toolbox. Also see Octave.
Memory
A system has memory if its current output is dependent on previous and current inputs.
MFAC
Model-Free Adaptive Control.
MIMO
A system with multiple inputs and multiple outputs.
Modern Approach
see modern controls
Modern Controls
A control methodology that uses the state-space representation to analyze and manipulate the Internal Description of a system.
Modified Z-Transform
A version of the Z-Transform, expanded to allow for an arbitrary processing delay.
MPC
Model Predictive Control.
MRAC
Model Reference Adaptive Control.
MV
can denote Manipulated variable or Measured variable (not the same)
Natural Frequency
The fundamental frequency of the system, the frequency for which the system's frequency response is largest.
Negative Feedback
A feedback system where the output signal is subtracted from the input signal, and the difference is input to the plant.
The Nyquist Criteria
A necessary and sufficient condition of stability that can be derived from Bode plots.
Nonlinear Control
A branch of control engineering that deals exclusively with non-linear systems. We do not cover nonlinear systems in this book.
OCTAVE
Open-source software having a Control Systems toolbox. Also see MATLAB.
Offset
The discrepancy between desired and actual value after settling. P-only control can give offset.
Oliver Heaviside
Electrical Engineer, Introduced the Laplace Transform as a tool for control engineering.
Open Loop
when the system is not closed, its behavior has a free-running component rather than controlled
Optimal Control
A branch of control engineering that deals with the minimization of system cost, or maximization of system performance.
Order
The order of a polynomial is the highest exponent of the independent variable in that polynomial. The order of a system is the order of the Transfer Function's denominator polynomial.
Output equation
An equation that relates the current system input, and the current system state to the current system output.
Overshoot
measures the extent of system response against desired (setpoint tracking).
## P, Q, R
Parabolic
A parabolic input is defined by the equation ${\displaystyle {\frac {1}{2}}t^{2}u(t)}$.
Partial Fraction Expansion
A method by which a complex fraction is decomposed into a sum of simple fractions.
Percent Overshoot
PO, the amount by which the step response overshoots the reference value, in percentage of the reference value.
Phase
the directional component of frequency response, not typically well-matched between a discrete filter equivalent to the analog version, especially as frequency approaches the Nyquist limit. The final value in the limit drives system stability, and stems from the poles and zeros of the characteristic equation.
PID
Proportional-Integral-Derivative
Plant
A central system which has been provided, and must be analyzed or controlled.
PLC
Programmable Logic Controller
Pole
A value for s that causes the denominator of the transfer function to become zero, and therefore causes the transfer function itself to approach infinity.
Pole-Zero Form
The transfer function is factored so that the locations of all the poles and zeros are clearly evident.
Position Error
The amount of steady-state error of a system stimulated by a unit step input.
Position Error Constant
A constant that determines the position error of a system.
Positive Feedback
A feedback system where the system output is added to the system input, and the sum is input into the plant.
PSD
The power spectral density which shows the distribution of power in the spectrum of a particular signal.
Pulse Response
The response of a digital system to a unit step input, in terms of the transfer matrix.
PV
Process variable
Quantized
A system is quantized if it can only output certain discrete values.
Quarter-decay
the time or number of control rates required for process overshoot to be limited to within 1/4 of the maximum peak overshoot (PO) after a SP change. If the PO is 25% at sample time N, this would be time N+k when subsequent PV remains < SP*1.0625, presuming the process is settling.
Raise-Lower
Output type that works from present position rather than as a completely new computed spanned output. For R/L, the % change should be applied to the working clamps, i.e. 5% of (hi clamp - lo clamp).
Ramp
A ramp is defined by the function ${\displaystyle tu(t)}$ .
Reconstructors
A system that converts a digital signal into an analog signal.
Reference Value
The target input value of a feedback system.
Relaxed
A system is relaxed if the initial conditions are zero.
Reverse action
target output decrease is required to bring the process variable (PV) to setpoint (SP) when PV is below SP. Thus, PV decreases with output increase.
Rise Time
The amount of time it takes for the step response of the system to reach within a certain range of the reference value. Typically this is measured from 10% to 90% of the final value, a range of 80%.
Robust Control
A branch of control engineering that deals with systems subject to external and internal noise and disruptions.
## S, T, U, V
Samplers
A system that converts an analog signal into a digital signal.
Sampled-Data Systems
See Hybrid Systems.
Sampling Time
In a discrete system, the sampling time is the amount of time between samples. Reflects the lower bound for Control rate.
SCADA
Supervisory Control and Data Acquisition.
S-Domain
The domain of the Laplace Transform of a signal or system.
Second-order System
Settling Time
The amount of time it takes for the system's oscillatory response to be damped to within a certain band of the steady-state value. That band is typically 10%.
Signal Flow Diagram
A method of visually representing a system, using arrows to represent the direction of signals in the system.
SISO
Single input, single output.
Span
the designed operation region of the item,=high range-low range. Working span can be smaller if output clamps are used.
Stability
Typically "BIBO Stability", a system with a well-behaved input will result in a well-behaved output. "Well-behaved" in this sense is arbitrary.
Star Transform
A version of the Laplace Transform that acts on discrete signals. This transform is implemented as an infinite sum.
State Equation
An equation that relates the future states of a system with the current state and the current system input.
State Transition Matrix
A coefficient matrix, or a matrix function that relates how the system state changes in response to the system input. In time-invariant systems, the state-transition matrix is the matrix exponential of the system matrix.
State-Space Equations
A set of equations, typically written in matrix form, that relates the input, the system state, and the output. Consists of the state equation and the output equation. See [6]
State-Variable
A vector that describes the internal state of the system.
Stability
The system output cannot approach infinity as time approaches infinity. See BIBO, Lyapunov Stability.
Step Response
The response of a system when stimulated by a unit-step input. A unit step is a setpoint change for setpoint tracking.
Steady State
The output value of the system as time approaches infinity.
Steady-State Error
At steady state, the amount by which the system output differs from the reference value.
Superposition
A system satisfies the condition of superposition if it is both additive and homogeneous.
System Identification
method of trying to identify the system characterization, typically through least-squares analysis of input, output and noise data vectors. May use an ARMA-type framework.
System Type
The number of ideal integrators in the system.
Time-Invariant
A system is time-invariant if an input time-shifted by an arbitrary delay produces an output shifted by that same delay.
Transfer Function
The ratio of the system output to its input, in the S-domain. The Laplace Transform of the function's impulse response.
Transfer Function Matrix
The Laplace transform of the state-space equations of a system, that provides an external description of a MIMO system.
Uniform Stability
Also "Uniform BIBO Stability", a system where an input signal in the range [0, 1] results in a finite output from the initial time until infinite time. See [7].
Unit Step
An input defined by ${\displaystyle u(t)}$ . Practically, a setpoint change.
Unity Feedback
A feedback system where the feedback loop element H has a transfer function of 1.
Velocity Error
The amount of steady-state error when the system is stimulated by a ramp input.
Velocity Error Constant
A constant that determines that amount of velocity error in a system.
## W, X, Y, Z
W-plane
Reference plane used in the bilinear transform.
Wind-up
when the numerics of computed control adjustment can "wind-up", yielding control correction with an inappropriate component unless prevented. An example is the "I" contribution of PID if output has been disconnected during PID calculation
Zero
A value for s that causes the numerator of the transfer function to become zero, and therefore causes the transfer function itself to become zero.
Zero Input Response
The response of a system with zero external input. Relies only on the value of the system state to produce output.
Zero State Response
The response of the system with zero system state. The output of the system depends only on the system input.
ZOH
Zero order hold.
Z-Transform
An integral transform that is related to the Laplace transform through a change of variables. The Z-Transform is used primarily with digital systems. See [8]
|
{}
|
# Stars/Sun/Solar binary
This composite image shows an exoplanet (2M1207b, the red spot on the lower left), orbiting the brown dwarf 2M1207 (centre). 2M1207b is a Jupiter-like planet. It orbits the brown dwarf at a distance nearly twice as far as Neptune is from the Sun. Credit: ESO.
A solar binary of the Sun and Jupiter may serve to establish an upper limit for interstellar cometary capture. The basic problem even with a passage through a molecular cloud of some 10 million years is the low relative velocity (~0.5 km s-1) required between the solar system and the cometary medium. Some of the captured bodies may localize in the Oort cloud, while others localize near the Sun or Jupiter.
As stars often occur as binaries or multiple star systems, it is likely that the Sun may have been a member of a binary system or even a multiple star system at some time in the past.
## Solar twins
Solar twins have the following qualities:[1]
• Temperature within 50 K Solar (roughly 5720 to 5830 K)
• Metallicity of 89—112% (± 0.05 dex) Solar, meaning the star's proplyd would have had almost exactly the same amount of dust for planetary formation
• No stellar companion, because the Sun itself is solitary
• An age within 1 billion years Solar (roughly 3.5 to 5.6 Ga)
| Identifier | Right ascension[2] | Declination[2] | Distance (ly)[2] | Stellar class[2] | Temperature (K) | Metallicity (dex) | Age (Gyr) | Notes |
|---|---|---|---|---|---|---|---|---|
| Sun | | | 0.00 | G2V | 5,778 | +0.00 | 4.6 | [3] |
| 18 Scorpii | 16h 15m 37.3s | –08° 22′ 06″ | 45.1 | G2Va | 5,835 | +0.04 | 4.2 | [4] |
| HD 44594 | 06h 20m 06.1s | -48° 44′ 29″ | 84 | G3V | 5,840 | +0.15 | 4.1 | [5] |
| HD 195034 | 20h 28m 11.8s | +22° 07′ 44″ | 92 | G5 | 5,760 | -0.04 | 5.1 | [6] |
| HD 138573 | 15h 32m 43.7s | +10° 58′ 06″ | 101 | G5IV-V | 5,710 | –0.03 | 7.8 | [7] |
| HD 142093 | 15h 52m 00.6s | +15° 14′ 09″ | 103 | G2V | 5,841 | –0.15 | 5.0 | [7] |
| HD 98618 | 11h 21m 29.1s | +58° 29′ 04″ | 126 | G5V | 5,851 | +0.03 | 4.7 | [4] |
| HD 143436 | 16h 00m 18.8s | +00° 08′ 13″ | 141 | G0 | 5,768 | +0.00 | 3.8 | [7] |
| HD 129357 | 14h 41m 22.4s | +29° 03′ 32″ | 154 | G2V | 5,749 | –0.02 | 8.2 | [7] |
| HD 133600 | 15h 05m 13.2s | +06° 17′ 24″ | 171 | G0 | 5,808 | +0.02 | 6.3 | [4] |
| HD 101364 | 11h 40m 28.5s | +69° 00′ 31″ | 208 | G5V | 5,795 | +0.02 | 3.5 | [4][8] |
A solar twin is more similar to the Sun than a solar analog.
## Solar analogs
Solar analogs "are photometrically similar to the Sun, having the following qualities:"[1]
• Temperature within 500 K Solar (roughly 5200 to 6300 K)
• Metallicity of 50—200% (± 0.3 dex) Solar, meaning the star's protoplanetary disk would have had similar amounts of dust from which planets could form
• No close companion (orbital period of ten days or less), as such a companion stimulates stellar activity
Solar analog stars would have an effective surface temperature of ~5800 K.
Solar analogs not meeting the stricter solar twin criteria include, within 50 light years and in order of increasing distance:
| Identifier | Right ascension[2] | Declination[2] | Distance (ly)[2] | Stellar class[2] | Temperature (K) | Metallicity (dex) | Notes |
|---|---|---|---|---|---|---|---|
| Alpha Centauri A | 14h 39m 36.5s | -60° 50′ 02″ | 4.37 | G2V | 5,847 | +0.24 | [9] |
| Alpha Centauri B | 14h 39m 35.0s | -60° 50′ 14″ | 4.37 | K1V | 5,316 | +0.25 | [9] |
| 70 Ophiuchi A | 18h 05m 27.3s | +02° 30′ 00″ | 16.6 | K0V | 5,314 | –0.02 | [10] |
| Sigma Draconis | 19h 32m 21.6s | +69° 39′ 40″ | 18.8 | K0V | 5,297 | –0.20 | [11] |
| Eta Cassiopeiae A | 00h 49m 06.3s | +57° 48′ 55″ | 19.4 | G0V | 5,941 | –0.17 | [12] |
| 107 Piscium | 01h 42m 29.8s | +20° 16′ 07″ | 24.4 | K1V | 5,242 | –0.04 | [13][14] |
| Beta Canum Venaticorum | 12h 33m 44.5s | +41° 21′ 27″ | 27.4 | G0V | 5,930 | -0.30 | [13] |
| 61 Virginis | 13h 18m 24.3s | -18° 18′ 40″ | 27.8 | G5V | 5,558 | –0.02 | [15] |
| Zeta Tucanae | 00h 20m 04.3s | –64° 52′ 29″ | 28.0 | F9.5V | 5,956 | –0.14 | [16] |
| Chi¹ Orionis A | 05h 54m 23.0s | +20° 16′ 34″ | 28.3 | G0V | 5,902 | –0.16 | [13] |
| Beta Comae Berenices | 13h 11m 52.4s | +27° 52′ 41″ | 29.8 | G0V | 5,970 | –0.06 | [13] |
| HR 4523 A | 11h 46m 31.1s | –40° 30′ 01″ | 30.1 | G5V | 5,629 | –0.29 | [15] |
| 61 Ursae Majoris | 11h 41m 03.0s | +34° 12′ 06″ | 31.1 | G8V | 5,483 | –0.12 | [13] |
| HR 4458 A | 11h 34m 29.5s | –32° 49′ 53″ | 31.1 | K0V | 5,629 | –0.29 | [15] |
| HR 511 | 01h 47m 44.8s | +63° 51′ 09″ | 32.8 | K0V | 5,333 | +0.05 | [13] |
| Alpha Mensae | 06h 10m 14.5s | –74° 45′ 11″ | 33.1 | G5V | 5,594 | +0.10 | [16] |
| Zeta¹ Reticuli | 03h 17m 46.2s | -62° 34′ 31″ | 39.5 | G3-5V | 5,733 | -0.22 | [16] |
| Zeta² Reticuli | 03h 18m 12.8s | -62° 30′ 23″ | 39.5 | G2V | 5,843 | -0.23 | [16] |
| 55 Cancri | 08h 52m 35.81s | +28° 19′ 51″ | 40.3 | G8V | 5,235 | +0.25 | [12] |
| HD 69830 | 08h 18m 23.9s | -12° 37′ 56″ | 40.6 | K0V | 5,410 | -0.03 | [16] |
| HD 10307 | 01h 41m 47.1s | +42° 36′ 48″ | 41.2 | G1.5V | 5,848 | -0.05 | [13] |
| HD 147513 | 16h 24m 01.3s | -39° 11′ 35″ | 42.0 | G1V | 5,858 | +0.03 | [15] |
| 58 Eridani | 04h 47m 36.3s | -16° 56′ 04″ | 43.3 | G3V | 5,868 | +0.02 | [16] |
| Upsilon Andromedae A | 01h 36m 47.8s | +41° 24′ 20″ | 44.0 | F8V | 6,212 | +0.13 | [16] |
| HD 211415 A | 22h 18m 15.6s | –53° 37′ 37″ | 44.4 | G1-3V | 5,890 | -0.17 | [16] |
| 47 Ursae Majoris | 10h 59m 28.0s | +40° 25′ 49″ | 45.9 | G1V | 5,954 | +0.06 | [16] |
| Alpha Fornacis A | 03h 12m 04.3s | -28° 59′ 21″ | 46.0 | F8IV | 6,275 | -0.19 | [16] |
| Psi Serpentis A | 15h 44m 01.8s | +02° 30′ 55″ | 47.9 | G5V | 5,636 | -0.03 | [13] |
| HD 84117 | 09h 42m 14.4s | –23° 54′ 56″ | 48.5 | F8V | 6,167 | –0.03 | [16] |
| HD 4391 | 00h 45m 45.6s | –47° 33′ 07″ | 48.6 | G3V | 5,878 | –0.03 | [16] |
| 20 Leonis Minoris | 10h 01m 00.7s | +31° 55′ 25″ | 49.1 | G3 V | 5,741 | +0.20 | [13] |
| Nu Phoenicis | 01h 15m 11.1s | –45° 31′ 54″ | 49.3 | F8V | 6,140 | +0.18 | [16] |
| 51 Pegasi | 22h 57m 28.0s | +20° 46′ 08″ | 50.9 | G2.5IVa | 5,804 | +0.20 | [16] |
## Solar types
A solar-type star is less similar to the Sun than a solar analog star.
Solar-type stars "are main-sequence stars with a B-V color between 0.48 and 0.80, the Sun having a B-V color of 0.65. Alternatively, a definition based on spectral type can be used, such as F8 V through K2 V, which would correspond to B-V color of 0.50 to 1.00.[1] This definition fits approximately 10% of stars".
"Solar-type stars show highly correlated behavior between their rotation rates and their chromospheric activity (e.g. Calcium H & K line emission) and coronal activity (e.g. X-ray emission). As solar-type stars spin-down during their main-sequence lifetimes due to magnetic braking, these correlations allow rough ages to be derived.[17]
The following table shows a sample of solar-type stars within 50 light years that nearly satisfy the criteria for solar analogs, based on current measurements.
Sample of solar-type stars
| Identifier | Right ascension[2] | Declination[2] | Distance (ly)[2] | Stellar class[2] | Temperature (K) | Metallicity (dex) | Notes |
|---|---|---|---|---|---|---|---|
| Tau Ceti | 01h 44m 04.1s | -15° 56′ 15″ | 11.9 | G8V | 5,344 | –0.52 | [16] |
| 40 Eridani A | 04h 15m 16.3s | -07° 39′ 10″ | 16.5 | K1V | 5,126 | –0.31 | [16] |
| 82 Eridani | 03h 19m 55.7s | -43° 04′ 11.2″ | 19.8 | G8V | 5,338 | –0.54 | [13] |
| Delta Pavonis | 20h 08m 43.6s | -66° 10′ 55″ | 19.9 | G8IV | 5,604 | +0.33 | [15] |
| HR 7722 | 20h 15m 17.4s | -27° 01′ 59″ | 28.8 | K0V | 5,166 | –0.04 | [15] |
| Gliese 86 A | 02h 10m 25.9s | -50° 49′ 25″ | 35.2 | K1V | 5,163 | -0.24 | [16] |
| 54 Piscium | 00h 39m 21.8s | +21° 15′ 02″ | 36.1 | K0V | 5,129 | +0.19 | [13] |
| V538 Aurigae | 05h 41m 20.3s | +53° 28′ 51.8″ | 39.9 | K1V | 3,500-5,000 | -0.20 | [13] |
| HD 14412 | 02h 18m 58.5s | -25° 56′ 45″ | 41.3 | G5V | 5,432 | -0.46 | [13] |
| HR 4587 | 12h 00m 44.3s | -10° 26′ 45.7″ | 42.1 | G8IV | 5,538 | 0.18 | [13] |
| HD 172051 | 18h 38m 53.4s | -21° 03′ 07″ | 42.7 | G5V | 5,610 | -0.32 | [13] |
| 72 Herculis | 17h 20m 39.6s | +32° 28′ 04″ | 46.9 | G0V | 5,662 | -0.37 | [13] |
| HD 196761 | 20h 40m 11.8s | -23° 46′ 26″ | 46.9 | G8V | 5,415 | -0.31 | [15] |
| Nu² Lupi | 15h 21m 48.1s | -48° 19′ 03″ | 47.5 | G4V | 5,664 | -0.34 | [15] |
## Solar binary theory
Uranus is named after the ancient Greek deity of the sky Uranus, the father of Cronus (Saturn) and grandfather of Zeus (Jupiter). Though it is visible to the naked eye like the five classical planets, it was never recognized as a planet by ancient observers because of its dimness and slow orbit.[18]
## Binary stars
Def. "[t]wo stars that appear to be one when seen with the naked eye, either because they orbit one another (binary stars) or happen to be in the same line of sight even though they are separated by a great distance> is called a double star.
Def. "[a] star that appears as a double due to an optical illusion; in reality, the stars may be far apart from each other> is called an optical double.
Def. two stars which form a stellar system, such that they orbit the point of equilibrium of their gravitational fields is called a double star.
Def. a stellar system that has two stars orbiting around each other is called a binary star.
A binary star is a star system consisting of two stars orbiting around their common center of mass. The brighter star is called the primary and the other is its companion star, ... or secondary. Many visual binaries have long orbital periods of several centuries or millennia and therefore have orbits which are uncertain or poorly known.
Def. a binary star whose components can be visually resolved is called a visual binary.
## Astrometric binaries
Astrometric binaries are relatively nearby stars which can be seen to wobble around a point in space, with no visible companion. The same mathematics used for ordinary binaries can be applied to infer the mass of the missing companion. The companion could be very dim, so that it is currently undetectable or masked by the glare of its primary, or it could be an object that emits little or no electromagnetic radiation.
The visible star's position is carefully measured and detected to vary, due to the gravitational influence from its counterpart. The position of the star is repeatedly measured relative to more distant stars, and then checked for periodic shifts in position. Typically this type of measurement can only be performed on nearby stars, such as those within 10 parsecs. Nearby stars often have a relatively high proper motion, so astrometric binaries will appear to follow a sinusoidal path across the sky.
## Spectroscopic binaries
Sometimes, the only evidence of a binary star comes from the Doppler effect on its emitted light. In these cases, the binary consists of a pair of stars where the spectral lines in the light emitted from each star shifts first toward the blue, then toward the red, as each moves first toward us, and then away from us, during its motion about their common center of mass, with the period of their common orbit.
In these systems, the separation between the stars is usually very small, and the orbital velocity very high. Unless the plane of the orbit happens to be perpendicular to the line of sight, the orbital velocities will have components in the line of sight and the observed radial velocity of the system will vary periodically. Since radial velocity can be measured with a spectrometer by observing the Doppler shift of the stars' spectral lines, the binaries detected in this manner are known as spectroscopic binaries. Most of these cannot be resolved as a visual binary, even with telescopes of the highest existing resolving power.
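As a rough sketch of the arithmetic behind this technique (the non-relativistic Doppler relation; the spectral line and the wavelength shift below are hypothetical illustration values, not measurements of any particular binary):

```python
# Non-relativistic Doppler relation: v_r ≈ c * (Δλ / λ_rest).
C_KM_S = 299792.458        # speed of light, km/s

lambda_rest = 656.281      # H-alpha rest wavelength, nm
lambda_obs = 656.301       # assumed observed wavelength, nm (illustrative)

v_radial = C_KM_S * (lambda_obs - lambda_rest) / lambda_rest
print(f"radial velocity ≈ {v_radial:.1f} km/s (positive = receding)")
```

In a spectroscopic binary this radial velocity varies periodically with the orbital period as each component alternately approaches and recedes.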
## Eclipsing binaries
An eclipsing binary is shown with an indication of the variation in intensity.[19][20] Credit: .
An eclipsing binary star is a binary star in which the orbit plane of the two stars lies so nearly in the line of sight of the observer that the components undergo mutual eclipses. In the case where the binary is also a spectroscopic binary and the parallax of the system is known, the binary is quite valuable for stellar analysis.[21] Algol is the best-known example of an eclipsing binary.[21]
### Eclipsing binary with matter transfer
This animation is for an eclipsing binary with matter (gas + plasma) transfer like beta Lyrae. Credit: Stanlekub.
Matter may transfer from one star to another through a process known as Roche Lobe overflow (RLOF) through an accretion disc. The mathematical point through which this transfer happens is called the first Lagrangian point.[22] It is not uncommon that the accretion disc is the brightest (and thus sometimes the only visible) element of a binary star.
## Detached binaries
For detached binaries each component is within its Roche lobe, i.e. the area where the gravitational pull of the star itself is larger than that of the other component. The stars have no major effect on each other, and essentially evolve separately. Most binaries belong to this class.
## Semidetached binaries
A semidetached binary has “one of the components [filling] the binary star's Roche lobe and the other does not. Gas from the surface of the Roche-lobe-filling component (donor) is transferred to the other, accreting star. The mass transfer dominates the evolution of the system. In many cases, the inflowing gas forms an accretion disc around the accretor.
## Contact binaries
In a contact binary both components fill their Roche lobes; i.e., they “are so close that they touch each other or have merged to share their gaseous envelopes.”
When stars share an envelope the pair may be called an overcontact binary.[23][24][25] “As the friction of the envelope brakes the orbital motion, the stars may eventually merge.”[26] A contact binary is a stable configuration [where the two stars touch,] with a typical lifetime of millions to billions of years. Almost all known contact binary systems are eclipsing binaries.[27]
## Common envelope binaries
A common envelope (CE) binary refers to a short-lived (months to years) phase in the evolution of a binary star in which the largest of the two stars (the donor star) has initiated unstable mass transfer to its companion star. This [may become a run-away process in which] the [donor star’s] envelope [expands to] engulf the companion star. ... The [loss] of orbital energy [may] heat up and expand the envelope. The whole common-envelope phase ends when either the envelope is expelled into space, or the two objects inside the envelope merge and no more energy is available to expand or even expel the envelope. This phase of the shrinking of the orbit inside the common envelope is known as a spiral-in.
## Chaos assisted capture
In a mechanism of chaos assisted capture (CAC), particles such as comets or those of sizes in the range of the irregular moons of Jupiter become entangled in chaotic layers which temporarily “extend the lifetimes of [these] particles within the Hill sphere, thereby providing the breathing space necessary for relatively weak dissipative forces (eg gas-drag) to effect permanent capture.”[28] These objects of the Sun-Jupiter binary system may localize near Jupiter and become satellites, specifically the irregular moons.[28]
## Solar antapexes
In 1917, the solar antapex was placed at an equatorial location of right ascension (RA) 6h, declination (Dec) -34°.[29]
## Star fission
"Zero-age contact must be a consequence of star fission under critical angular momentum."[30] When the angular momentum is too large, the star brakes into a detached binary.[30] When the angular momentum is too small, the star remains as a single star.[30] BH Centauri and V1010 Ophiuchi have zero-age radii and are zero-age contact systems.[30] BH Centauri is an overcontact system.[30]
Fragmentation of the molecular cloud during the formation of protostars is an acceptable explanation for the formation of a binary or multiple star system.[31][32]
With respect to low-mass star formation, "fragmentation to form a binary star [may be] most simply achieved if collapse is initiated by an external impulse."[33] "On its own, [the] process under which a dense molecular cloud core can collapse to form a binary, or multiple, star system would produce wide binaries".[33] "[C]lose binaries [may be] formed because of mutual interactions between the protostellar discs surrounding the various fragments."[33] "[T]he most likely collision which has an effect on the core is the one for which [the velocity change imparted by the impulse,] Δv ~ cs [(the internal sound speed),] induced by a clump of mass ~0.1Mʘ."[33] "[T]he impulsive collapse of the cloud cores [requires] that they are not primarily magnetically supported in their central regions."[33]
## Electromagnetics
Consider an object of mass m = 0.1 Mʘ passing through the heliosphere of the Sun at a velocity of 20 km s-1 from a galactic location originally outside the heliosphere. There are no other large objects in orbit around the Sun at the time of this object's entry. Jupiter, Saturn, Uranus, and Neptune are not present.
The object has a surface negative charge of 0.2Q and an intrinsic magnetic field of 100 Gauss (G) directed vertically. A current of 5 e- per second is flowing along magnetic field lines toward the Sun from the object. A comparable number of protons are flowing along magnetic field lines from the Sun to the object.
At t0 the object is at 1,000 AU. Its RA is 14h 30m Dec +46° 30'.
Let the Sun have a magnetic field of 1 G directed vertically. The charge on the Sun (Q) is -3 × 10²⁷ e.s.u.
## Electrostatics
Coulomb's law states that the electrostatic force ${\displaystyle F_{q}}$ experienced by a charge, ${\displaystyle 0.2Q_{\odot }}$ at position ${\displaystyle r_{q}}$, in the vicinity of another charge, ${\displaystyle Q_{\odot }}$ at position ${\displaystyle r_{Q}}$, in vacuum is equal to:
${\displaystyle F_{q}={0.2Q_{\odot }^{2} \over 4\pi \varepsilon _{0}}{1 \over {r^{2}}},}$
where ${\displaystyle \varepsilon _{0}}$ is the electric constant or the permittivity of free space and ${\displaystyle r}$ is the distance between the two charges and the constant ${\displaystyle \varepsilon _{0}}$ is in SI units of C2 m−2 N−1, where C is Coulombs.
ε0 ≈ 8.854 × 10⁻¹² C² m⁻² N⁻¹, and since 1 m⁻² = 10⁶ km⁻²,
ε0 ≈ 8.854 × 10⁻¹² × 10⁶ C² km⁻² N⁻¹, then
ε0 ≈ 8.854 × 10⁻⁶ C² km⁻² N⁻¹.
1 e.s.u. ≈ 3.34 x 10-10 C.
${\displaystyle F_{q}={\frac {0.2}{4\pi }}{\frac {[(-3\times 10^{27}e.s.u.)(3.34\times 10^{-10}C/e.s.u.)]^{2}}{(8.854\times 10^{-6}C^{2}km^{-2}N^{-1})[(1.5\times 10^{8})10^{3}km]^{2}}},}$
${\displaystyle F_{q}={\frac {0.2(-3)^{2}(3.34)^{2}}{4\pi (8.854)(1.5)^{2}}}{\frac {(10^{27}\times 10^{-10})^{2}}{10^{-6}(10^{8}\times 10^{3})^{2}}}{\frac {[(e.s.u.)(C/e.s.u.)]^{2}}{C^{2}km^{-2}N^{-1}km^{2}}},}$
${\displaystyle F_{q}=8.02\times 10^{16}N.}$
Each of the constants used is approximate, so Fq ≈ 8.02 × 10¹⁶ N.
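The same estimate can be reproduced with a short Python sketch working in SI units (the charge magnitude, the 0.2 charge fraction, and the 1,000 AU separation are the hypothetical values assumed above):

```python
import math

EPS0 = 8.854e-12        # permittivity of free space, C^2 N^-1 m^-2
ESU_TO_C = 3.34e-10     # 1 e.s.u. in coulombs (approximate)
AU_M = 1.5e11           # 1 AU in metres (approximate)

Q_sun = 3e27 * ESU_TO_C         # magnitude of the Sun's assumed charge, C
q_obj = 0.2 * Q_sun             # assumed charge on the passing object, C
r = 1000 * AU_M                 # separation at t0: 1,000 AU, in metres

F_q = q_obj * Q_sun / (4 * math.pi * EPS0 * r ** 2)
print(f"F_q ≈ {F_q:.2e} N")     # ≈ 8.0e16 N with these rounded constants
```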
### Statuses
“The ... Larmor radius ... is the radius of the circular motion of a charged particle in the presence of a uniform magnetic field. “[F]or a particle of energy E in EeV and charge Z in a magnetic field B in µG [the Larmor radius (RL)] is roughly”[34]
${\displaystyle R_{L}=1kpc{\frac {E}{ZB}}}$
where
• ${\displaystyle R_{L}\ }$ is the Larmor radius,
• ${\displaystyle E\ }$ is the energy of the particle in EeV
• ${\displaystyle Z\ }$ is the charge of the particle, and
• ${\displaystyle B\ }$ is the constant magnetic field.
Or,
${\displaystyle R_{Lm}={\frac {mv_{\perp }}{|q_{m}|B_{\odot }}}}$
${\displaystyle R_{L\odot }={\frac {M_{\odot }(-v_{\perp })}{|q_{\odot }|B_{m}}}}$
where
• ${\displaystyle R_{L}\ }$ is the Lamor or gyroradius,
• ${\displaystyle m\ }$ is the mass of the charged particle,
• ${\displaystyle v_{\perp }}$ is the velocity component perpendicular to the direction of the magnetic field,
• ${\displaystyle q_{m}\ }$ is the charge of the particle, and
• ${\displaystyle B_{\odot }\ }$ is the constant magnetic field for the Sun.
For the object, as deflected by the Sun,
${\displaystyle 1G(Gauss)={\frac {1T(Tesla)}{10,000}}={\frac {1Ns}{10^{4}C10^{-3}km}}={\frac {1kg10^{-3}kms^{-2}s}{10^{4}C10^{-3}km}},}$
${\displaystyle 1G={\frac {1kg}{10^{4}Cs}}.}$
${\displaystyle R_{Lm}={\frac {(0.1)(2\times 10^{30}kg)(20km/s)}{(0.2)(3\times 10^{27}e.s.u.)(3.34\times 10^{-10}C/e.s.u.)(1kg/(10^{4}Cs))}},}$
${\displaystyle R_{Lm}={\frac {(0.1)(2)(20)}{(0.2)(3)(3.34)(1)}}{\frac {10^{30}\times 10^{4}}{10^{27}\times 10^{-10}}}{\frac {kgkmCs}{(e.s.u.)(C/e.s.u.)kgs}},}$
${\displaystyle R_{Lm}=2\times 10^{17}km.}$
Or,
${\displaystyle R_{Lm}=1.33\times 10^{9}AU.}$
For the Sun, as deflected by the object,
${\displaystyle R_{L\odot }={\frac {(2\times 10^{30}kg)(20km/s)}{(3\times 10^{27}e.s.u.)(3.34\times 10^{-10}C/e.s.u.)(100kg/(10^{4}Cs))}},}$
${\displaystyle R_{L\odot }={\frac {(2)(20)}{(3)(3.34)(100)}}{\frac {10^{30}\times 10^{4}}{10^{27}\times 10^{-10}}}{\frac {kgkmCs}{(e.s.u.)(C/e.s.u.)kgs}},}$
${\displaystyle R_{L\odot }=3.99\times 10^{15}km.}$
Or,
${\displaystyle R_{L\odot }=2.66\times 10^{7}AU.}$
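Both gyroradius estimates above can be checked with a short SI-unit sketch (the masses, relative velocity, charges and field strengths are the assumed values of this thought experiment):

```python
ESU_TO_C = 3.34e-10      # 1 e.s.u. in coulombs (approximate)
AU_M = 1.5e11            # 1 AU in metres (approximate)
GAUSS_TO_T = 1e-4        # 1 gauss in tesla

def gyroradius(mass_kg, v_perp, charge_c, b_tesla):
    """Larmor radius R_L = m * v_perp / (|q| * B), all in SI units."""
    return mass_kg * v_perp / (abs(charge_c) * b_tesla)

M_sun = 2e30                     # kg
m_obj = 0.1 * M_sun              # kg, the hypothetical passing object
v = 20e3                         # m/s (20 km/s relative velocity)
Q_sun = 3e27 * ESU_TO_C          # C, magnitude of the assumed solar charge
q_obj = 0.2 * Q_sun              # C

r_obj = gyroradius(m_obj, v, q_obj, 1 * GAUSS_TO_T)    # object in the Sun's 1 G field
r_sun = gyroradius(M_sun, v, Q_sun, 100 * GAUSS_TO_T)  # Sun in the object's 100 G field
print(f"object: {r_obj / 1e3:.2e} km = {r_obj / AU_M:.2e} AU")
print(f"Sun:    {r_sun / 1e3:.2e} km = {r_sun / AU_M:.2e} AU")
```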
### Electric orbits
In order for the object to have an orbit around the Sun, its approach velocity of 20 km/s must be reduced to below the Sun's escape velocity of 1.33 km/s.
The electrostatic force is the major available force of repulsion to slow down the object. As it continues to travel closer to the Sun, the forces increase. But, an estimate can be calculated of how much time is required to reduce the object's velocity to below the Sun's escape velocity.
${\displaystyle v_{1}=20km/s-(F_{q}/m)t<1.33km/s.}$
${\displaystyle 20km/s<1.33km/s+(F_{q}/m)t,}$
${\displaystyle (F_{q}/m)t>20km/s-1.33km/s,}$
${\displaystyle t>{\frac {20km/s-1.33km/s}{F_{q}/m}},}$
${\displaystyle t>{\frac {20km/s-1.33km/s}{(8.02\times 10^{16}kg10^{-3}km/s^{2})/((0.1)(2\times 10^{30}kg))}},}$
${\displaystyle t>{\frac {20-1.33}{8.02/((0.1)(2))}}{\frac {10^{30}}{10^{16}\times 10^{-3}}}{\frac {(km/s)kg}{kg(km/s^{2})}},}$
${\displaystyle t>4.66\times 10^{16}s.}$
Or,
${\displaystyle t>1.5\times 10^{9}yr.}$
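Under the same assumptions, the bound can be reproduced with a short estimate (the force is held constant at its 1,000 AU value, so this is only a lower bound on the slowing time):

```python
F_q = 8.02e16                 # electrostatic repulsion at 1,000 AU, N (from the estimate above)
m_obj = 0.1 * 2e30            # object mass, kg
v_in, v_esc = 20e3, 1.33e3    # approach velocity and solar escape velocity at 1,000 AU, m/s

t_min = (v_in - v_esc) / (F_q / m_obj)   # time to shed the excess velocity at constant force
print(f"t > {t_min:.2e} s ≈ {t_min / 3.156e7:.1e} yr")
```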
## Weak forces
"Some of the oddly skewed orbits of many alien worlds may be due to the twin stars they are often found circling [...] many of the exoplanets astronomers have discovered in the past two decades or so have mysteriously skewed orbits. They may be eccentric — that is, oval-shaped. They could also be inclined — tilted at an angle from the equators of their stars. One potential explanation for these skewed orbits might be the gravitational influence of a companion star near the host stars of those exoplanets. [...] most stars form in binary pairs, with both stars orbiting each other. In fact, there are many three-star systems as well, and even some that harbor up to seven stars."[35]
For a solitary star, "the orbits of most planets are nearly circular, orbiting [their star']s equator."[35]
## X-rays
According to SIMBAD, Vega (alpha Lyrae) (delta Sct type variable) is an X-ray source in the first Einstein catalog (1E) and an ultraviolet source from the CEL, EUVE, and TD1 catalogs. It has not been detected as a gamma-ray source.
At one time (1932), Vega was listed as a double star.[36] Also in 1963, Vega is listed as a visual double star.[37] Again in 1983, it was listed as a double star.[38] This was repeated in 1994.[39] It is still listed as a double star in 1996.[40] The update in 2001 also lists it.[41] The star with Vega is 56.41 arcsec away and is designated as BD+38 3238D of unknown spectral class. That these two stars have remained undifferentiated between double star and binary star for some 70 years, at only 25 light-years away, is remarkable.
## Vega
The X-ray "counts observed from [...] Vega in the HRI are very likely to be due entirely to UV contamination from the photospheric emission, and hence the X-ray luminosities for [...] Vega [...] should be replaced by upper limits at least one order of magnitude lower, i.e., log LX, Vega < 26.6 [...] Thus the HRI observation of Vega is completely consistent with the upper limit obtained in the IPC, and the only remaining inconsistency concerning the X-ray emission from Vega is the rocket experiment by Topka et al. (1979), who report a detection of Vega in a 5 s pointing yielding 7 counts; however, in our opinion these authors do not convincingly rule out the possibility of UV contamination. Note in this context that the IPC's used for Topka et al.s' (1979) rocket flight and for the Einstein Observatory were not identical."[42]
"Many types of main sequence stars emit in the X-ray portion of the spectra. In massive stars, strong stellar winds ripping through the extended atmosphere of the star create X-ray photons. On lower mass stars, magnetic fields twisting through the photosphere heat it sufficiently to produce X-rays. But between these two mechanisms, in the late B to mid A classes of stars, neither of these mechanisms should be sufficient to produce X-rays. Yet when X-ray telescopes examined these stars, many were found to produce X-rays just the same."[43]
"The first exploration into the X-ray emission of this class of stars was the Einstein Observatory, launched in 1978 and deorbited in 1982. While the telescoped confirmed that these B and A stars had significantly less X-ray emission overall, seven of the 35 A type stars still had some emission. Four of these were confirmed as being in binary systems in which the secondary stars could be the source of the emission, leaving three of seven with unaccounted for X-rays."[43]
"The German ROSAT satellite found similar results, detecting 232 X-ray stars in this range. Studies explored connections with irregularities in the spectra of these stars and rotational velocities, but found no correlation with either. The suspicion was that these stars simply hid undetected, lower mass companions."[43]
Either "the main star truly is the source, or there are even more elusive, sub-arcsecond binaries skewing the data."[43]
On July 27, 1977, at 05:41:48.1 UTC, an Aerobee 350 or boosted Black Brant launched from White Sands Missile Range, using Vega as a reference for its star tracker to update its position while maneuvering between X-ray targets, automatically observed Vega with its X-ray telescope for 4.8 s.[44]
The quantity of detected photons (7) in the band 0.2-0.80 keV corresponds to an X-ray luminosity LX ≈ 3 × 10²⁸ erg s⁻¹.[44]
"The ANS 3 σ upper limit for Vega (2.5 × 10²⁸ ergs s⁻¹) is only slightly lower than our flux measurement."[44]
"Because the X-ray [luminosity] of Vega [is] much closer to that of the Sun than to the typical galactic X-ray sources which have been detected to date, it is natural to consider processes analogous to solar coronal activity as the explanation for the X-ray activity."[44]
"Vega is thought to be a solitary star, and therefore noncoronal X-ray-producing mechanisms seem to be excluded".[44]
"Vega is the first solitary main-sequence star beyond the Sun known to be an X-ray emitter".[44]
Vega's "computed X-ray surface luminosity [...] is comparable to that of the quiet Sun [...] Note, however, that because of our very short exposure, the average level of coronal emission may vary significantly from our single measurement."[44]
"Using estimates of the stellar [radius] derived from stelar structure calculations, we obtain [a] surface X-ray [luminosity] of ~6.4 x 104 ergs cm-2 s-1 for Vega [that falls] within the range of solar coronal X-ray emission, which can vary between ~8 x 103 ergs cm-2 s-1 in coronal holes and ~3 x 106 ergs cm-2 s-1 in active regions".[44]
Magnetic "field activity, leading to coronal heating, may account for Vega's X-ray emission because of inhomogeneous distribution of surface magnetic flux and associated coronal activity."[44]
That Vega is regarded as an X-ray source rests on one 4.8 s star-tracking observation by a single sounding-rocket flight, carrying an X-ray detector of a type flown on many flights that yields trustworthy results.
"Vega is a pole-on, highly oblate, rapid rotator [...] the star exhibits extreme limb darkening and a large decrement in effective temperature from pole to equator. [...] the best fittingmodel (Teff pole=10150 K, Teff eq = 7900 K, θ = 3.329 mas) has the pole inclined 5° to line of sight and rotates at 91% of the angular speed of break-up, resulting in a temperature drop of 2250 K from center to limb. [...] the total luminosity [...] is emitted in a highly non-homogeneous manner with five times more UV flux being emitted from the pole as is emitted in the equatorial plane, while the visible through near-IR flux is some 70% greater at the pole than that of the equatorial plane and 54% greater than that expected from a slow or non-rotating A0 V star."[45]
A 'polar coronal hole' is a coronal hole that occurs above one or both rotational poles of a star that has a coronal cloud around it.
"The radiant emission from coronal holes is greatly diminished relative to other coronal regions".[46]
The "emission is proportional to the integral of the square of the electron density along the line of sight [...] Data of this type are therefore heavily influenced by regions of high density along the line of sight--the low corona for disk observations, and denser structures surrounding coronal holes for limb observations."[46]
An "analysis of the northern polar region during the period 1973 June 29 to July 13 [...] can be summarized as follows. The boundary of the hole is essentially axisymmetrc about the polar axis and is nearly radial from 3 to 6 R. The boundary at these heights is located at 25° ± 5° latitude, although it is of much smaller extent (boundary ~65° latitude) as observed near the solar surface with the American Science and Engineering (AS&E) X-ray experiment on Skylab [...] the increase of the polar hole's cross sectional area from the surface to 3 R is approximately 7 times greater than for a purely radial boundary."[46]
For α Lyr, log FX/FV = -6.79 (variable X-ray source), log LX 27.6 erg s-1 (variable X-ray source).[47] Upper limits were log FX/FV = -7.4 and log LX 27.0 erg s-1.[47]
A coronal cloud is not a diffuse, homogeneous hot atmosphere, but one or more strongly structured topologically closed features dominated by magnetic confinement.
### Magnetic field of Vega
An extensive convection zone is not required, and any star with magnetic field strengths and geometry similar to the Sun's will possess a corona.[44]
Magnetic fields on the order of ~30 gauss have been reported for Vega (~1 gauss for the Sun), so perhaps these substantially higher average field strengths compensate for the expected reduced convective activity, resulting in surface X-ray luminosities comparable to the quiet Sun.[44]
Using spectropolarimetry, a magnetic field has been detected on the surface of Vega by a team of astronomers at the Observatoire du Pic du Midi.[48] They "report the detection of a magnetic field on Vega and argue that Vega is probably the first member of a new class of yet undetected magnetic A-type stars."[48]
The "polarization [is] a Zeeman signature [that] leads to a value of [Bl =] -0.6 ± 0.3 G for the disk-averaged line-of-sight component of the surface magnetic field."[48]
"The strength of Vega magnetic field is about 50 micro-tesla, which is close to that of the mean field on Earth and on the Sun."[49]
### Rotational velocity of Vega
"Vega rotates in less than a day, while the Sun's rotation period is 27 days."[49]
### Infrared analysis
This is an infrared image of the debris disc around Vega taken with the Herschel Space Observatory. Credit: Herschel Space Observatory, Steward Observatory, University of Arizona.
This is a graph of infrared excesses including Vega. Credit: Herschel Space Observatory, Steward Observatory, University of Arizona.
"The infrared excesses are well modeled by two components, a warm belt close to the star, and a cooler belt farther out. The clear separation of the belts could be explained by the presence of planets clearing the gap."[50]
The graph at left shows the clear separation of infrared belts for Vega. This separation may "be explained by the presence of planets clearing the gap."[50]
## Sun as an X-ray source
The photosphere of the Sun does not emit X-rays. The chromosphere and the transition region either emit ultraviolet, extreme ultraviolet, or X-rays (most likely soft X-rays). The coronal clouds around the Sun emit X-rays and sometimes gamma-rays, neutrinos, neutrals, protons, positrons, and electrons, among other solar cosmic-rays.
The X-ray characteristics of the Sun have been studied since the 1940s. The Sun may be considered an X-ray variable star, as its intensity corresponds to the sunspot cycle. This cycle may have its origins in the mechanisms that heat the photosphere and the coronal clouds around the Sun. An analysis of the X-ray characteristics of Vega suggests that it is a Sun-like X-ray star.
It has taken a number of decades to observe and describe the X-ray properties of Vega. Associated with these properties are a magnetic field comparable to the Sun and the possibility of a coronal hole over the pole facing Earth.
By scanning the available literature to ascertain the X-ray properties of Vega, several apparent conclusions may be drawn. The X-ray output of Vega falls between the minimum and maximum X-ray output of the Sun. The X-ray output of the Sun minimizes during the solar cycle quiet period and maximizes at the sunspot maximum.
Vega is considered an X-ray variable star with a corona and an X-ray output comparable to the Sun.
### Vega characteristics
Vega is considered to be Sun-like in its X-ray output and variability. The peak of X-ray output may not be directly observable as the star has a pole facing Earth.
The photosphere of Vega has a diameter five times that of the Sun. Its apparent stellar system has similarities to the solar system but is scaled as approximately four times larger. Even with a pole temperature above 10,000 K Vega produces no detectable X-ray output from its photosphere, just like the Sun at 5,778 K. Vega is a visual spectral type A0V star which means it is transparent. The Sun is not considered transparent at visual wavelengths.
The physical size of the photosphere of Vega and its transparency may reflect more the cloud it formed from than anything else.
The above surface fusion that is occurring on the Sun may not be happening above the photosphere of Vega as no flares or gamma rays have been detected.
Vega is a Sun-like X-ray source.
The characteristics that allow Vega to be seen as a Sun-like X-ray source suggests that if the Sun's X-ray characteristics have not changed over time it may have been in a wide-orbit binary with any star whose X-ray characteristics allow the Sun to remain much as it is.
## Sun
For comparison with other stars, the Sun has the following properties:
• The effective temperature of the surface of the Sun's photosphere is 5,778 K.[51]
• Metallicity, Z = 0.0122[52], "lowest seismic estimate of solar metallicity is Z = 0.0187 [to the] highest is Z = 0.0239, with uncertainties in the range of 12%-19%."[53]
• Stellar companion: Jupiter at present, perhaps Uranus in some larger form previously
• Age: 4.57 billion years[54]
• B-V = 0.656 ± 0.005[55]
• Rotation rate: 7.189 × 10³ km h⁻¹ (at the equator), equatorial circumference of 4,379,000 kilometres divided by sidereal rotation period of 609.12 hours[56]
### Metallicity
The hydrogen mass fraction is generally expressed as ${\displaystyle X\equiv {\frac {m_{\mathrm {H} }}{M}}}$ where ${\displaystyle M}$ is the total mass of the system and ${\displaystyle m_{\mathrm {H} }}$ the mass of the hydrogen it contains.
"[T]he helium mass fraction is denoted as ${\displaystyle Y\equiv {\frac {m_{\mathrm {He} }}{M}}}$.
The metallicity—the mass fraction of elements heavier than helium—can be calculated as
${\displaystyle Z=\sum _{i>\mathrm {He} }{\frac {m_{i}}{M}}=1-X-Y.}$
${\displaystyle X=0.7393.}$[52]
${\displaystyle Y=0.2485.}$[52]
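For completeness, a minimal sketch showing how the quoted solar Z follows from these mass fractions:

```python
X = 0.7393               # hydrogen mass fraction of the Sun quoted above
Y = 0.2485               # helium mass fraction of the Sun quoted above

Z = 1.0 - X - Y          # metallicity: mass fraction of everything heavier than helium
print(f"Z = {Z:.4f}")    # 0.0122, matching the solar metallicity quoted earlier
```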
## Sun-Jupiter binary
The Sun-Jupiter binary may serve to establish an upper limit for interstellar cometary capture when three bodies are extremely unequal in mass, such as the Sun, Jupiter, and a third body (potential comet) at a large distance from the binary.[57] The basic problem with a capture scenario even from passage through “a cloud of some 10 million years, or from a medium enveloping the solar system, is the low relative velocity [~0.5 km s-1] required between the solar system and the cometary medium.”[58] The capture of interstellar comets by Saturn, Uranus, and Neptune together cause about as many captures as Jupiter alone.[58]
## Saturn
"There is one God, greatest among gods and men, neither in shape nor in thought like unto mortals ... He abides ever in the same place motionless, and it befits him not to wander hither and thither."[59]
"Saturn, the old man who lives at the north pole, and brings with him to the children of men a sprig of evergreen (the Christmas tree), is familiar to the little folks under the name Santa Claus, for he brings each winter the gift of a new year."[60]
"The religions of all ancient nations ... associate the abode of the supreme God with the North Pole, the centre of heaven; or with the celestial space immediately surrounding it."[61]
"Lenormant, speaking of Rome and Olympia, remarks, "It is impossible not to note that the Capitoline was first of all the Mount of Saturn, and that the Roman archaeologists established a complete affinity between the Capitoline and Mount Cronios in Olympia, from the standpoint of their traditions and religious origin (Dionysius Halicarn., i., 34). This Mount Cronios is, as it were, the Omphalos of the sacred city of Elis, the primitive centre of its worship. It sometimes receives the name Olympos."1 Here is not only symbolism in general, but also a symbolism pointing to the Arctic Eden, already shown to be the primeval mount of Kronos, the Omphalos of the whole earth.2"[61]
"As an offshoot of these Hellenistic speculations we should place Tacitus, Histories V,2: "Iudaeos Creta insula profugos novissima Libyae insedisse memorant, qua tempestate Saturnus vi Jovis pulsus cesserit regnis" (quoted from Loeb Classical Library)."[62] i.e., "Jews were fugitives from the island of Crete and settled in Libya recorded the time when Saturn was driven from his throne by force of Jupiter".
"The motif of Saturn handing over power to Jupiter derives, of course, from Hesiod's account of the succession of the gods in his Theogony, and his story of the five successive ages of men -- the first, or golden, age being under the reign of Kronos (Saturn) and the following ages being under the reign of Zeus (Jupiter) -- in his Works and Days (110ff.). These stories were often retold. Ovid, for example, combines in his Metamorphoses the stories in the Theogony and Works and Days, telling us how, "when Saturn was consigned to the darkness of Tartarus, and the world passed under the rule of Jove, the age of silver replaced that of gold."8"[63]
Much of the legend surrounding this early Saturn suggests that it may have been in a binary star system with the Sun.
### Pole stars
This diagram shows the path of the north celestial pole among the stars due to the precession (assuming constant precessional speed and obliquity of epoch JED 2000). Credit: Tauʻolunga.
This diagram shows the path of the south celestial pole among the stars due to the precession (assuming constant precessional speed and obliquity of JED 2000). Credit: Tauʻolunga.
At the present time, the northern pole star, or North Star, is a moderately bright star with an apparent magnitude of 1.97 (variable), the brightest star in the Ursa Minor constellation (at the end of the "tail" or "handle" of the "Little Dipper" asterism).[64] Its current (October 2012) declination is +89°19'8" (as per epoch J2000 it was +89°15'51.2"). Therefore it always appears due north in the sky to a precision better than one degree, and the angle it makes with respect to the horizon is equal to the latitude of the observer. It is consequently known as Polaris (from Latin stella polaris "pole star"). It also retains its older name, Cynosura, from a time before it was the pole star, from its Greek name meaning "dog's tail" (as the constellation of Ursa Minor was interpreted as a dog, not a bear, in antiquity).
Due to the precession of the equinoxes (as well as the stars' proper motions), the role of North Star passes from one star to another. The name stella polaris has been given to α Ursae Minoris since at least the 16th century, even though at that time it was still several degrees away from the celestial pole. Gemma Frisius determined this distance as 3°7' in the year 1547.[65]
In the Roman era, the celestial pole was about equally distant from α Ursae Minoris (Cynosura) and β Ursae Minoris (Kochab). Before this, during the 1st millennium BC, β Ursae Minoris was the bright star closest to the celestial pole, but it was never close enough to be taken as marking the pole, and the Greek navigator Pytheas in ca. 320 BC described the celestial pole as devoid of stars. Polaris was described as αει φανης "always visible" by Stobaeus in the 5th century, when it was still removed from the celestial pole by about 8°. It was known as scip-steorra ("ship-star") in 10th-century Anglo-Saxon England, reflecting its use in navigation.
The precession of the equinoxes takes about 25,770 years to complete a cycle. Polaris' mean position (taking account of precession and proper motion) will reach a maximum declination of +89°32'23", so 1657" or 0.4603° from the celestial north pole, in February 2102. Its maximum apparent declination (taking account of nutation and aberration) will be +89°32'50.62", so 1629" or 0.4526° from the celestial north pole, on 24 March 2100.[66]
In 3000 BC the faint star Thuban in the constellation Draco was the North Star. At magnitude 3.67 (fourth magnitude) it is only one-fifth as bright as Polaris, and today it is invisible in light-polluted urban skies.
The Celestial south pole is moving toward the Southern Cross, which has pointed to the south pole for the last 2,000 years or so. As a consequence, the constellation is no longer visible from subtropical northern latitudes, as it was in the time of the ancient Greeks.
There have been many pole stars throughout the millennia. Around 2000 BC, the star Eta Hydri was the nearest bright star to the Celestial south pole. Around 2800 BC, Achernar was only 8 degrees from the south pole.
### Orbital poles
This is a snapshot of the planetary orbital poles. Credit: Urhixidur.
An orbital pole is either end of an imaginary line running through the center of an orbit perpendicular to the orbital plane, projected onto the celestial sphere. It is similar in concept to a celestial pole but based on the planet's orbit instead of the planet's rotation.
The north orbital pole of a celestial body is defined by the right-hand rule: If you curve the fingers of your right hand along the direction of orbital motion, with your thumb extended parallel to the orbital axis, the direction your thumb points is defined to be north.
At right is a snapshot of the planetary orbital poles.[67] The field of view is about 30°. The yellow dot in the centre is the Sun's North pole. Off to the side, the orange dot is Jupiter's orbital pole. Clustered around it are the other planets: Mercury in pale blue (closer to the Sun than to Jupiter), Venus in green, the Earth in blue, Mars in red, Saturn in violet, Uranus in grey partly underneath Earth and Neptune in lavender. Dwarf planet Pluto is the dotless cross off in Cepheus.
## Uranus
In the Chinese, Japanese, Korean, and Vietnamese languages, the planet's name is literally translated as the sky king star[68][69].
Uranus is named after the ancient Greek deity of the sky Uranus, the father of Cronus (Saturn) and grandfather of Zeus (Jupiter). Though it is visible to the naked eye like the five classical planets, it was never recognized as a planet by ancient observers because of its dimness and slow orbit.[18]
### Ouranos
“Uranus was the Sky in Greek mythology, which was thought to be dominated by the combined powers of the Sun and Mars.[70]
Uranus (..., Ouranos, meaning "sky" or "heaven") was the primal Greek god personifying the sky. His equivalent in Roman mythology was Caelus. In Ancient Greek literature, Uranus or Father Sky was the son and husband of Gaia, Mother Earth. According to Hesiod's Theogony, Uranus was conceived by Gaia alone, but other sources cite Aether as his father.[71]
### Caelus
Caelus appears at the top of the cuirass of the Augustus of Prima Porta, counterposed to Earth at the bottom. Credit: Sailko.
Caelus or Coelus was a primal god of the sky in Roman myth and theology, iconography, and literature (compare caelum, the Latin word for "sky" or "the heavens", hence English "celestial").
“The name of Caelus indicates that he was the Roman counterpart of the Greek god Uranus, who was of major importance in the theogonies of the Greeks. Varro couples him with Terra (Earth) as pater and mater (father and mother), and says that they are "great deities" (dei magni) in the theology of the mysteries at Samothrace.[72]
According to Cicero and Hyginus, Caelus was the son of Aether and Dies ("Day" or "Daylight").[73] Caelus and Dies were in this tradition the parents of Mercury.[74] Caelus was the father with Hecate of the distinctively Roman god Janus, as well as of Saturn and Ops.[75] Caelus was also the father of one of the three forms of Jupiter, the other two fathers being Aether and Saturn.[76]
## Solar-like binaries
This is a Keck adaptive optics image of TYC 4110-01037-1 in K′. Credit: Keith Matthews at the Keck Observatory.
For a solar-like binary system, the primary has Teff ≲ 6000 K.[77]
"TYC 4410-01037-1 [has] a mass of 1.07 ± 0.08 M and radius of 0.99 ± 0.18 R. ... Teff = 5879 ± 29 K [and] [Fe/H] = -0.01 ± 0.05".[77]
At right is a Keck adaptive optics image of TYC 4110-01037-1 in K′. A faint candidate tertiary companion (indicated by the arrow) with red colors is separated by 986 ± 4 mas from the primary star. If it is physically associated with the primary, it is most likely a dM3-dM4 star. The companion is designated MARVELS-3B.
Its "low-mass stellar companion [has a] small mass ratio [ q ≥ 0.087 ± 0.003] and short orbital period [78.994 ± 0.012 days, which] are atypical amongst solar-like ... binary systems. [The orbit has] an eccentricity of 0.1095 ± 0.0023, and a semi-amplitude of 4199 ± 11 m s-1. ... the minimum companion mass (if sin i = 1) [is] 97.7 ± 5.8 MJup."[77]
"One possible way to create such a system would be if a triple-component stellar multiple broke up into a short period, low q binary during the cluster dispersal phase of its lifetime. A candidate tertiary body has been identified in the system via single-epoch, high contrast imagery. If this object is confirmed to be co-moving, ... it [may] be a dM4 star."[77]
## Binary twins
16 Cyg A and B are binary solar twins.[78]
"16 Cygni or 16 Cyg is a triple star system approximately 69 light-years away from Earth in the constellation of Cygnus. It consists of two Sun-like yellow dwarf stars, 16 Cygni A and 16 Cygni B, together with a red dwarf, 16 Cygni C. In 1996 an extrasolar planet was discovered in an eccentric orbit around 16 Cygni B.
16 Cygni is a hierarchal triple system. Stars A and C form a close binary with a projected separation of 73 AU.[79] The orbital elements of the A–C binary are currently unknown. At a distance of 860 AU from A is a third component designated 16 Cygni B.
"B orbits between 100 and 160 degrees inclination, that is against the A–C pole such that 90 degrees would be ecliptical.[80]
"Both 16 Cygni A and 16 Cygni B are yellow dwarf stars like our Sun. According to data from the Geneva–Copenhagen survey, both stars have masses similar to the Sun.[81][82] Age estimates for the two stars vary slightly, but 16 Cygni is likely to be much older than the Solar System, at around 10,000 million years old. 16 Cygni C is much fainter than either of these stars, and may be a red dwarf.[79]
"Despite differing in Teff by only 35-40 K, the Li abundances of 16 Cyg A and B differ by a factor of ≥ 4.5. The solar photospheric abundance is intermediate to the two values. This intermediacy indicates that the Sun, whose highly depleted photospheric Li abundance is in gross conflict with standard stellar models, is not an isolated anomaly in its Li abundance evolution."[78]
"In 1996 an extrasolar planet in an eccentric orbit was announced around the star 16 Cygni B.[83] The planet's orbit takes 798.5 days to complete, with a semimajor axis of 1.68 AU.[84]
""For the 16 Cyg B system, only particles inside of about 0.3 AU remained stable [within a million years of formation], leaving open the possibility of short-period planets". For them, observation rules out any such planet of over a Neptune mass.[85]
## Binary analogs
α Cen A and B are binary solar analogs.[78]
"The two components α Cen A and B are separated by roughly 25 AU, with an orbital period of 80 years. The age of the system is thought to be slightly larger than that of the Sun, correspondingly both stars are also slow rotators (periods are 29 (A) and 42 (B) days) with a rather inactive corona."[86]
"[F]ive XMM-Newton observations of the binary system α Centauri ... observed in snapshot like exposures of roughly two hours each during the last two years [has found that] the X-ray emission of the system is dominated by α Cen B, a K1 star. ... the optically brighter component α Cen A, a [G2V] star very similar to our Sun, [has] fainted in X-rays by at least an order of magnitude during the observation program, a behaviour never observed before on α Cen A, but rather similar to the X-ray behaviour observed with XMM-Newton on HD 81809."[86]
## Interstellar cometary captures
"NASA's Hubble Space Telescope has detected several comets diving toward a young star about 95 light-years from Earth."[87]
"HD 172555 [...] represents the third extrasolar system where astronomers have detected such comets, [...] known as "exocomets" because they're outside Earth's solar system."[87]
"The presence of comets falling toward HD 172555 was determined based on observations of nearby gases, which [...] are the vaporized remnants of disintegrated comets after they have ricocheted off unseen Jupiter-size planets. The massive planet's gravity catapults the comets into the star in a process known as "gravitational stirring." Similar processes can be seen in our own solar system when sungrazing comets plunge into the sun."[87]
"Seeing these sun-grazing comets in our solar system and in three extrasolar systems means that this activity may be common in young star systems."[88]
"This activity at its peak represents a star's active teenage years. Watching these events gives us insight into what probably went on in the early days of our solar system, when comets were pelting the inner solar system bodies, including Earth. In fact, these star-grazing comets may make life possible, because they carry water and other life-forming elements, such as carbon, to terrestrial planets."[88]
"HD 172555 is part of a collection of stars known as the Beta Pictoris Moving Group. Another one of the stars, Beta Pictoris, is known to have a young gas-giant planet forming in its protoplanetary disk of dust and gas. This collection of stars is the closest star system to Earth and could be a breeding ground for terrestrial planets."[88]
"Silicon and carbon-gas signatures were detected in the vicinity of HD 172555 using Hubble's Space Telescope Imaging Spectrograph (STIS) and the Cosmic Origins Spectrograph (COS)."[87]
"The gas was moving at about 360,000 miles per hour across the face of the star. The most likely explanation for the speedy gas is that Hubble is seeing material from comet-like objects that broke apart after streaking across the star's disk."[88]
"Hubble shows that these star-grazers look and move like comets, but until we determine their composition, we cannot confirm they are comets. We need additional data to establish whether our star-grazers are icy like comets or more rocky like asteroids."[88]
"Nightly changes in the absorption strength of the Ca II K-line near the stellar radial velocity were observed in four of the stars (HD 21620, HD 110411, HD 145964 and HD 183324). This type of absorption variability indicates the presence of a circumstellar gas disk around these stars."[89]
Weak "absorption features that sporadically appear with velocities in the range ± 100 km s-1 of the main circumstellar K-line in the spectra of HD 21620, HD 42111, HD 110411 and HD 145964 [plus] the known presence of both gas and dust disks surrounding these four stars, these transient absorption features are most probably associated with the presence of Falling Evaporated Bodies (FEBs, or exocomets) that are thought to liberate gas on their grazing trajectory toward and around the central star."[89]
"This now [2013] brings the total number of A-type stars in which the evaporation of Ca II gas from protoplanetary bodies (i.e., exocomets) has been observed to vary on a nightly basis to 10 systems [including HD 256 (HR 10), HD 9672 (49 Ceti), HD 39060 (Beta Pictoris), HD 85905, HD 182919 (5 Vulpeculae), HD 217782 (2 Andromedae)]. A statistical analysis of the 10 A-stars showing FEB-activity near the Ca II K-line compared to 21 A-type stars that exhibit no measurable variability reveals that FEB-activity occurs in significantly younger stellar systems that also exhibit chemical peculiarities. The presence of FEB-activity does not appear to be associated with a strong mid-IR excess. This is probably linked to the disk inclination angle, since unless the viewing angle is favorable the detection of time-variable absorption may be unlikely. Additionally, if the systems are more evolved then the evaporation of gas due to FEB activity could have ceased, whereas the circumstellar dust disk may still remain."[89]
## Interstellar planet captures
The "hypothetical Planet Nine could have been lost by another solar system and then poached from the intergalactic wastes by our sun's gravity."[90]
The "mysterious planet, if it does exist in the outermost reaches of the solar system, may have been a "rogue" that was captured by the sun's gravity."[90]
Certain "gravitational anomalies in the outer solar system could be explained by a massive planet lurking beyond the observed reaches of the solar system, around 20 times farther than Neptune's average distance from the sun."[90]
"A rogue planet is an object that formed like a planet from a disk around a star, like the planets in our own solar system. However, if the planet passed nearby a much more massive planet early in its formation, before the orbits in its home system settled down, it could get slingshot out of its solar system, and would now be wandering through interstellar space in the Milky Way among the stars."[91]
The "rogue planet got tossed out of the system by gravitational forces in 60 percent of the simulations – but in the other 40 percent, it got captured by the sun."[90]
"Imagine that the Sun was the size of an orange or an apple. Imagine the planets as maybe fruit flies buzzing around the apple-sized Sun. On this scale, the next closest star to the Sun, Proxima Centauri, would be another apple roughly 1,400 miles away! That's roughly Chicago to Tucson. Now imagine the chance of a fruit fly in Chicago making its way 1,400 miles and finding the apple in Tucson. It could happen, but it's not the way to bet."[92]
"The 'classical' planets, Mercury, Venus, Mars, Jupiter, and Saturn are all easily visible to the naked eye and have been known for thousands and thousands of years. Because these objects changed their positions in the sky night after night compared to the background stars (which never seemed to change), god-like attributes were given to the planets. In fact, the name 'planet' comes from the Greek word for 'wanderer.'"[92]
"Those [discoveries of later planets] served to show people how expansive the universe was and how much there was to learn. But it generally did not change the worldview of the civilizations that discovered them because those discoveries did not fundamentally change the picture of the universe they had, the way Copernicus and Galileo changed our understanding of the place of the Earth in the solar system, or the way Einstein changed our perceptions of space and time."[91]
"If [Planet Nine] really exists, and is confirmed and observed, it will likely tell us that the process of planet formation is more violent, and chaotic, than previously thought. And it would tell us that there is still a lot of space to explore where new discoveries may be lurking."[91]
## Hypotheses
1. At some point, perhaps around 40,000 b2k, the Sun-Earth system was in a binary with a smaller star that was a pole star for the north geographic/rotational pole of the Earth.
For a solar binary, proof of concept is that the Sun and Jupiter together can act in some way like a stellar binary.
## References
1. D. R. Soderblom; J. R. King (1998). "Solar-Type Stars: Basic Information on Their Classification and Characterization". Solar Analogs : Characteristics and Optimum Candidates. Retrieved 2008-02-26.
2. SIMBAD Astronomical Database. Centre de Données astronomiques de Strasbourg. Retrieved 2009-01-14.
3. Williams, D.R. (2004). Sun Fact Sheet. NASA. Retrieved 2009-06-23.
4. Meléndez, Jorge; Ramírez, Iván (November 2007). "HIP 56948: A Solar Twin with a Low Lithium Abundance". The Astrophysical Journal 669 (2): L89–L92. doi:10.1086/523942.
5. Sousa, S. G.; Fernandes, J.; Israelian, G.; Santos, N. C. (March 2010). "Higher depletion of lithium in planet host stars: no age and mass effect". Astronomy and Astrophysics 512: L5. doi:10.1051/0004-6361/201014125.
6. Takeda, Y.; Tajitsu, A.; Tajitsu (2009). "High-Dispersion Spectroscopic Study of Solar Twins: HIP 56948, HIP 79672, and HIP 100963". Publications of the Astronomical Society of Japan 61: 471.
7. King, Jeremy R.; Boesgaard, Ann M.; Schuler, Simon C. (November 2005). "Keck HIRES Spectroscopy of Four Candidate Solar Twins". The Astronomical Journal 130 (5): 2318–25. doi:10.1086/452640.
8. Vázquez, M.; Pallé, E.; Rodríguez, P. Montañés (2010). Is Our Environment Special?, In: The Earth as a Distant Planet: A Rosetta Stone for the Search of Earth-Like Worlds. Springer New York. pp. 391–418. doi:10.1007/978-1-4419-1684-6. ISBN 978-1-4419-1683-9.
9. Porto de Mello, G. F.; Lyra, W.; Keller, G. R. (September 2008). "The Alpha Centauri binary system. Atmospheric parameters and element abundances". Astronomy and Astrophysics 488 (2): 653–66. doi:10.1051/0004-6361:200810031.
10. Casagrande, Luca; Flynn, Chris; Portinari, Laura; Girardi, Leo; Jimenez, Raul (December 2007). "The helium abundance and ΔY/ΔZ in lower main-sequence stars". Monthly Notices of the Royal Astronomical Society 382 (4): 1516–40. doi:10.1111/j.1365-2966.2007.12512.x.
11. Tabetha S. Boyajian, Harold A. McAlister, Ellyn K. Baines, Douglas R. Gies, Todd Henry, Wei-Chun Jao, David O’Brien, Deepak Raghavan, Yamina Touhami (August 2008). "Angular Diameters of the G Subdwarf µ Cassiopeiae A and the K Dwarfs σ Draconis and HR 511 from Interferometric Measurements with the CHARA Array". The Astrophysical Journal 683 (1): 424–32. doi:10.1086/589554.
12. Valenti, Jeff A.; Fischer, Debra A. (July 2005). "Spectroscopic Properties of Cool Stars (SPOCS). I. 1040 F, G, and K Dwarfs from Keck, Lick, and AAT Planet Search Programs". The Astrophysical Journal Supplement Series 159 (1): 141–66. doi:10.1086/430500. See VizieR catalogue J/ApJS/159/141.
13. Holmberg J., Nordstrom B., Andersen J. (July 2009). "The Geneva-Copenhagen survey of the solar neighbourhood. III. Improved distances, ages, and kinematics". Astronomy and Astrophysics 501 (3): 941–7. doi:10.1051/0004-6361/200811191. See Vizier catalogue V/130.
14. Kovtyukh, V. V.; Soubiran, C.; Belik, S. I.; Gorlova, N. I. (2003). "High precision effective temperatures for 181 F-K dwarfs from line-depth ratios". Astronomy and Astrophysics 411 (3): 559–64. doi:10.1051/0004-6361:20031378.
15. Sousa, S. G., N. C. Santos, M. Mayor, S. Udry, L. Casagrande, G. Israelian, F. Pepe, D. Queloz, M. J. P. F. G. Monteiro (August 2008). "Spectroscopic parameters for 451 stars in the HARPS GTO planet search program. Stellar [Fe/H] and the frequency of exo-Neptunes". Astronomy and Astrophysics 487 (1): 373–81. doi:10.1051/0004-6361:200809698. See VizieR catalogue J/A+A/487/373.
16. Santos, N. C.; Israelian, G.; Randich, S.; García López, R. J.; Rebolo, R. (October 2004). "Beryllium anomalies in solar-type field stars". Astronomy and Astrophysics 425 (3): 1013–1027. doi:10.1051/0004-6361:20040510.
17. E. E. Mamajek; L. A. Hillenbrand (2008). "Improved Age Estimation for Solar-Type Dwarfs Using Activity-Rotation Diagnostics". Astrophysical Journal 687 (2): 1264. doi:10.1086/591785.
18. MIRA's Field Trips to the Stars Internet Education Program, In: Monterey Institute for Research in Astronomy. Retrieved August 27, 2007.
19. D. Gossman (October 1989). "Light Curves and Their Secrets". Sky & Telescope: 410.
20. Eclipsing Binary Simulation. Cornell Astronomy.
21. D. Bruton. Eclipsing Binary Stars. Stephen F. Austin State University.
22. Jeff Bryant. Contact Binary Star Envelopes. Wolfram Demonstrations Project.
23. contact binary, David Darling, The Internet Encyclopedia of Science. Accessed on line November 4, 2007.
24. David Darling. Overcontact binary In: The Internet Encyclopedia of Science. Retrieved November 4, 2007.
25. pp. 51–53, An Introduction to Astrophysical Fluid Dynamics, Michael J. Thompson, London: Imperial College Press, 2006. ISBN 1-86094-615-1.
26. R. Voss, T.M. Tauris (2003). "Galactic distribution of merging neutron stars and black holes". Monthly Notices of the Royal Astronomical Society 342 (4): 1169–84. doi:10.1046/j.1365-8711.2003.06616.x.
27. p. 231, Stellar Rotation, Jean Louis Tassoul, Andrew King, Douglas Lin, Stephen P. Maran, Jim Pringle, and Martin Ward, Cambridge, UK, New York: Cambridge University Press, 2000. ISBN 0-521-77218-4.
28. Sergey A. Astakhov and David Farrelly (November 2004). "Capture and escape in the elliptic restricted three-body problem". Monthly Notices of the Royal Astronomical Society 354 (4): 971-9. doi:10.1111/j.1365-2966.2004.08280.x. Retrieved 2012-03-12.
29. Oliver Justin Lee (1917). Yerkes Observatory, ed. Zone +45° of Kapteyn’s Selected Areas: Parallaxes and Proper Motions of 1041 Stars, In: Publications of the Yerkes observatory of the University of Chicago, Volume 4 Part IV. Chicago, Illinois: University of Chicago Press. pp. 123–89. Retrieved 2012-07-10.
30. Kam-Ching Leung and Donald P. Schneider (February 1977). "Eclipsing systems in star clusters. III. Early-type contact system BH Centauri.". The Astrophysical Journal 211 (2): 844-52. doi:10.1086/154993.
31. A.P. Boss (1992). J. Sahade, G.E. McCluskey, Yoji Kondo, ed. Formation of Binary Stars, In: The Realm of Interacting Binary Stars. Dordrecht: Kluwer Academic. p. 355. ISBN 0-7923-1675-4.
32. J.E. Tohline, J.E. Cazes, H.S. Cohl. The Formation of Common-Envelope, Pre-Main-Sequence Binary Stars. Louisiana State University.
33. J. E. Pringle (July 1989). "On the formation of binary stars". Royal Astronomical Society, Monthly Notices 239 (7): 361-70.
34. P Sommers and S Westerhoff (May 12, 2009). "Cosmic ray astronomy". New Journal of Physics 11 (5): 055004. doi:10.1088/1367-2630/11/5/055004. Retrieved 2012-03-28.
35. Charles Q. Choi (July 30, 2014). Weird Orbits of Alien Planets May Be Due to Twin Stars. Space.com. Retrieved 2014-09-06.
36. Robert Grant Aitken and Eric Doolittle (1932). New general catalogue of double stars within 120° of the North pole. Publication 417. Washington, D.C.: Carnegie institution of Washington. Bibcode:1932ADS...C......0A. Retrieved 2014-04-02.
37. H. M. Jeffers, W. H. Van Den Bos, and F. M. Greeby (1963). "Index catalogue of visual double stars, 1961.0". Publications of the Lick Observatory 21 (1). Retrieved 2014-04-04.
38. J. Dommanget (March 1983). "Un catalogue des composantes d'etoiles doubles et multiples (C.C.D.M.)". Bulletin d'Information du Centre de Donnees Stellaires (24): 83-90. Retrieved 2014-04-04.
39. J. Dommanget and O. Nys (1994). "Catalogue des composantes d'etoiles doubles et multiples (CCDM) premiere edition - Catalogue of the components of double and multiple stars (CCDM) first edition". Com. de l'Observ. Royal de Belgique (115): 1. Retrieved 2014-04-04.
40. C. E. Worley and G. G. Douglass (November 1997). "The Washington Double Star Catalog (WDS, 1996.0)". Astronomy & Astrophysics, Supplement Series 125 (1): 523. doi:10.1051/aas:1997239. Retrieved 2014-04-04.
41. Brian D. Mason, Gary L. Wycoff, William I. Hartkopf, Geoffrey G. Douglass, and Charles E. Worley (December 2001). "The 2001 US Naval Observatory double star CD-ROM. I. The Washington double star catalog". The Astronomical Journal 122 (6): 3466-71. Retrieved 2014-04-04.
42. J. H. M. M. Schmitt, L. Golub, F. R. Harnden, Jr., C. W. Maxson, R. Rosner, and G. S. Vaiana (March 1, 1985). "An Einstein Observatory X-ray Survey of Main-Sequence Stars with Shallow Convection Zones". The Astrophysical Journal 290 (03): 307-20. doi:10.1086/162986. Retrieved 2014-04-04.
43. Jon Voisey (2011). Companion Stars Could Cause Unexpected X-Rays. Universe Today. Retrieved 2014-04-04.
44. K. Topka, D. Fabricant, F. R. Harnden, Jr., P. Gorenstein, and R. Rosner (April 15, 1979). "Detection of Soft X-rays from α Lyrae and η Bootis with an Imaging X-ray Telescope". The Astrophysical Journal 229 (04): 661-8. doi:10.1086/157000. Retrieved 2014-04-04.
45. Charles W. Engelke, Stephan D. Price, and Kathleen E. Kraemer (December 2010). "Spectral Irradiance Calibration in the Infrared. XVII. Zero-Magnitude Broadband Flux Reference for Visible-to-Infrared Photometry". The Astronomical Journal 140 (6): 1919-28. doi:10.1088/0004-6256/140/6/1919. Retrieved 2014-04-05.
46. Richard H. Munro and Bernard V. Jackson (May 1, 1977). "Physical Properties of a Polar Coronal Hole from 2 to 5 R☉". The Astrophysical Journal 213 (05): 874-5, 877-86. doi:10.1086/155220. Retrieved 2014-04-05.
47. G. S. Vaiana, J. P. Cassinelli, G. Fabbiano, R. Giacconi , L. Golub, P. Gorenstein, B. M. Haisch, F.R. Harnden Jr., H. M. Johnson, J. L. Linsky, C. W. Maxson, R. Mewe, R. Rosner, F. Seward, K. Topka, and C. Zwaan (April 1, 1981). "Results from an extensive Einstein stellar survey". The Astrophysical Journal 244 (04): 163-82. doi:10.1086/158797. Retrieved 2014-04-06.
48. F. Lignières, P. Petit, T. Böhm, M. Aurière (June 2009). "First evidence of a magnetic field on Vega. Towards a new class of magnetic A-type stars". Astronomy and Astrophysics 500 (3): L41-4. doi:10.1051/0004-6361/200911996. Retrieved 2014-04-06.
49. Pascal Petit (2009). Magnetic Field On Bright Star Vega. Science Daily. Retrieved 2014-04-06.
50. Jessica Donaldson (January 20, 2013). Asteroid belt found in the Vega System. Astrobites. Retrieved 2014-04-04.
51. David R. Williams (September 2004). Sun Fact Sheet. Greenbelt, MD: NASA Goddard Space Flight Center. Retrieved 2011-12-20.
52. M. Asplund, N. Grevesse and A. J. Sauval (January 2006). "The new solar abundances - Part I: the observations". Communications in Asteroseismology 147 (01): 76-9. doi:10.1553/cia147s76. Retrieved 2013-08-08.
53. William J. Chaplin, Aldo M. Serenelli, Sarbani Basu, Yvonne Elsworth, Roger New, and Graham A. Verner (November 20, 2007). "Solar Heavy-Element Abundance: Constraints from Frequency Separation Ratios of Low-Degree p-Modes". The Astrophysical Journal 670 (1): 872-84. doi:10.1086/522578. Retrieved 2013-08-08.
54. A. Bonanno, H. Schlattl, L. Paternò (2008). "The age of the Sun and the relativistic corrections in the EOS". Astronomy and Astrophysics 390 (3): 1115–1118. doi:10.1051/0004-6361:20020749.
55. David F. Gray (November 1992). "The Inferred Color Index of the Sun". Publications of the Astronomical Society of the Pacific 104 (681): 1035-8.
56. Samantha Harvey (April 26, 2007). Solar System Exploration. Washington, DC USA: National Aeronautics and Space Administration. Retrieved 2013-08-08.
57. MJ Valtonen (February 1983). "On the capture of comets into the Solar System". The Observatory 103 (2): 1-4.
58. M. J. Valtonen; K. A. Innanen (April 1982). "The capture of interstellar comets". The Astrophysical Journal 255 (4): 307-15. doi:10.1086/159830.
59. Joseph Campbell (June 26, 2008). The Masks of God: Occidental Mythology. Paw Prints. p. 564. ISBN 1439508925. Retrieved 2013-01-06.
60. Manly Palmer Hall (1928). Secret Teachings of All Ages. San Francisco: Hall Publishing Company. p. 648. Retrieved 2013-01-06.
61. William Fairfield Warren (1885). Paradise Found The Cradle of the Human Races at the North Pole. Boston: Houghton, Mifflin and Company. Retrieved 2013-01-06.
62. John Strange (1980). Caphtor/Keftiu: A New Investigation. Brill Archive. p. 227. ISBN 9004062564. Retrieved 2013-01-11.
63. David Ulansey (1989). The Origins of the Mithraic Mysteries: Cosmology and Salvation in the Ancient World. Oxford, England: Oxford University Press. ISBN 0-19-505402-4. Retrieved 2013-01-13.
64. van Leeuwen, F. (2007). HIP 11767, In: Hipparcos, the New Reduction. Retrieved 2011-03-01.
65. Jean Meeus, Mathematical Astronomy Morsels Ch.50; Willmann-Bell 1997
66. J. Herschel (June 1918). "The poles of planetary orbits". The Observatory 41: 255-7. Retrieved 2013-07-10.
67. Sailormoon Terms and Information. The Sailor Senshi Page. Retrieved March 5, 2006.
68. "Asian Astronomy 101". Hamilton Amateur Astronomers 4 (11). 1997. Retrieved August 5, 2007.
69. Planet symbols, In: NASA Solar System exploration. Retrieved August 4, 2007.
70. AETHER: Greek protogenos god of upper air & light ; mythology : AETHER. Theoi.com. Retrieved 2013-01-14.
71. Varro, De lingua Latina 5.58.
72. Cicero, De natura deorum 3.44, as cited by E.J. Kenney, Apuleius: Cupid and Psyche (Cambridge University Press, 1990, 2001), note to 6.6.4, p. 198; Hyginus, preface. This is not the theogony that Hesiod presents.
73. Cicero, De natura Deorum 3.56; also Arnobius, Adversus Nationes 4.14.
74. Ennius, Annales 27 (edition of Vahlen); Varro, as cited by Nonius Marcellus, p. 197M; Cicero, Timaeus XI; Arnobius, Adversus Nationes 2.71, 3.29.
76. John P. Wisniewski, Jian Ge, Justin R. Crepp, Nathan De Lee, Jason Eastman, Massimiliano Esposito, Scott W. Fleming, B. Scott Gaudi, Luan Ghezzi, Jonay I. Gonzalez Hernandez, Brian L. Lee, Keivan G. Stassun, Eric Agol, Carlos Allende Prieto, Rory Barnes, Dmitry Bizyaev, Phillip Cargile, Liang Chang, Luiz N. DaCosta, G.F. Porto De Mello, Bruno Femenía, Leticia D. Ferreira, Bruce Gary, Leslie Hebb, Jon Holtzman, Jian Liu, Bo Ma, Claude E. Mack III, Suvrath Mahadevan, Marcio A.G. Maia, Duy Cuong Nguyen, Ricardo L.C. Ogando, Daniel J. Oravetz, Martin Paegert, Kaike Pan, Joshua Pepper, Rafael Rebolo, Basilio Santiago, Donald P. Schneider, Alaina C Shelden, Audrey Simmons, Benjamin M. Tofflemire, Xiaoke Wan, Ji Wang, Bo Zhao (May 2012). "Very Low-Mass Stellar and Substellar Companions to Solar-like Stars from MARVELS I: a Low Ratio Stellar Companion to TYC 4110-01037-1 in a 79-day Orbit". Astronomical Journal 143 (5): 107-18. doi:10.1088/0004-6256/143/5/107. Retrieved 2013-08-05.
77. Jeremy R. King, Constantine P. Deliyannis, Daniel D. Hiltgen, Alex Stephens, Katia Cunha, and Ann Merchant Boesgaard (May 1997). "Lithium Abundances in the Solar Twins 16 Cyg A and B and the Solar Analog α Cen A, Calibration of the 6707 Å Li Region Linelist, and Implications". The Astronomical Journal 113 (5): 1871-83. doi:10.1086/118399. Retrieved 2013-08-06.
78. Deepak Raghavan, Todd J. Henry, Brian D. Mason, John P. Subasavage, Wei‐Chun Jao, Thom D. Beaulieu, Nigel C. Hambly (2006). "Two Suns in The Sky: Stellar Multiplicity in Exoplanet Systems". The Astrophysical Journal 646 (1): 523–542. doi:10.1086/504823.
79. H. Hauser and G. Marcy (1999). "The Orbit of 16 Cygni AB". Publications of the Astronomical Society of the Pacific 111 (757): 321–34. doi:10.1086/316328.
80. Holmberg; et al. (2007). Record 13627, In: Geneva-Copenhagen Survey of Solar neighbourhood. Retrieved 19 November 2008.
81. Holmberg; et al. (2007). Record 13631, In: Geneva-Copenhagen Survey of Solar neighbourhood. Retrieved 19 November 2008.
82. Cochran, Artie P. Hatzes, R. Paul Butler, Geoffrey W. Marcy (1997). "The Discovery of a Planetary Companion to 16 Cygni B". The Astrophysical Journal 483 (1): 457–63. doi:10.1086/304245.
83. Butler et al.; Wright, J. T.; Marcy, G. W.; Fischer, D. A.; Vogt, S. S.; Tinney, C. G.; Jones, H. R. A.; Carter, B. D. et al. (2006). "Catalog of Nearby Exoplanets". The Astrophysical Journal 646 (1): 505–22. doi:10.1086/504701.
84. Wittenmyer et al.; Endl, Michael; Cochran, William D.; Levison, Harold F. (2007). "Dynamical and Observational Constraints on Additional Planets in Highly Eccentric Planetary Systems". The Astronomical Journal 134 (3): 1276–84. doi:10.1086/520880.
85. J. Robrade, J.H.M.M. Schmitt, and F. Favata (October 4, 2005). "X-rays from α Centauri - The darkening of the solar twin". Astronomy & Astrophysics 442 (1): 315-21. doi:10.1051/0004-6361:20053314. Retrieved 2013-08-07.
86. Samantha Mathewson (10 January 2017). Hubble Spies Exocomets Diving into Young Star. Space.com. Retrieved 2017-01-13.
87. Carol Grady (10 January 2017). Hubble Spies Exocomets Diving into Young Star. Space.com. Retrieved 2017-01-13.
88. Barry Y. Welsh and Sharon Montgomery (July 2013). "Circumstellar Gas-Disk Variability Around A-Type Stars: The Detection of Exocomets?". Publications of the Astronomical Society of Pacific 125 (929): 759-774. doi:10.1086/671757. Retrieved 2017-01-13.
89. Weston Williams (12 January 2017). Could 'Planet Nine' be a rogue planet?. CSMonitor. Retrieved 2017-01-13.
90. Joshua Pepper (12 January 2017). Could 'Planet Nine' be a rogue planet?. The Christian Science Monitor. Retrieved 2017-01-13.
91. Michael Smutko (12 January 2017). Could 'Planet Nine' be a rogue planet?. The Christian Science Monitor. Retrieved 2017-01-13.
|
{}
|
# Naked Singularity, Black hole mass limit
• B
• Arman777
Gold Member
I came across a question on PSE. I am not sure its a violation to ask the same question here, but there's no answer to the question in there so I wanted to ask it here.
Quoting his question,
"Since the universe has a positive cosmological constant, there is an upper limit on the mass of the black holes as evident from the so-called Schwarzschild-de Sitter metric:
$$ds^2 = -f(r)dt^2 + \dfrac{1}{f(r)} dr^2 + r^2 d\Omega_2^2$$
where, ##f(r) = 1 - \dfrac{2M}{r} - \dfrac{\Lambda}{3} r^2##.
It suggests that a singularity would be a black hole only if the mass is not greater than ##\dfrac{1}{3\sqrt{\Lambda}}## and if the mass exceeds this limit then the singularity would become naked. But if we consider the Cosmic Censorship seriously then we must expect that (since the naked singularities can't exist) no singularity can have a mass greater than ##\dfrac{1}{3\sqrt{\Lambda}}##. This suggests that there is an upper limit on mass itself (namely, ##\dfrac{1}{3\sqrt{\Lambda}}##) that can be put inside a radius of ##\dfrac{2}{3\sqrt{\Lambda}}##. Thus, within a radius ##R##, mass can't ever exceed ##\dfrac{1}{3\sqrt{\Lambda}}## if ##R\leq\dfrac{2}{3\sqrt{\Lambda}}##. (Perhaps, by perpetually shifting the origin of the coordinate set-up to cover the desired region, I can argue that within a radius ##R##, mass can't exceed ##\dfrac{n}{3\sqrt{\Lambda}}## if ##n## is the smallest possible integer solution for ##l## where ##R\leq\dfrac{2l}{3\sqrt{\Lambda}}##.) If we consider censorship seriously, then it doesn't suggest that if the mass exceeds this limit then it will form a naked singularity but it rather suggests that mass just can't exceed this limit.
This result seems quite interesting to me and I can't figure out what reason or mechanism would keep the mass from crossing this limit. What is the resolution to this question (provided it demands a resolution)?"
Last edited:
Gold Member
This is definitely interesting. I don't have the time to go into it in detail right now, but I bet that if you considered a LCDM universe where the matter density was high enough that black holes in this mass range were possible, then you'd also end up with structures approaching Planck densities, indicating that quantum gravity has something to say which we don't understand how to interpret just yet.
Gold Member
f(r) = 1 − 2M/r − (Λ/3)r².
Hi Arman:
I confess I am puzzled by the math related to units. In particular, using SI units, it appears from the quoted equation that Λ has units kg/m³, based on the assumption that all terms in the RHS of the equation have the same units. I am guessing that there may be other aspects of the units that are hidden by assuming certain constants are equal to 1, e.g., G and c, but I am not familiar enough with the conventions to be sure of this.
The final mystery is that "the mass is not greater than 1/(3√Λ)." This should mean that Λ has the units 1/kg².
Regards,
Buzz
Gold Member
This is definitely interesting. I don't have the time to go into it in detail right now, but I bet that if you considered a LCDM universe where the matter density was high enough that black holes in this mass range were possible, then you'd also end up with structures approaching Planck densities, indicating that quantum gravity has something to say which we don't understand how to interpret just yet.
Hmm, so you are suggesting the limit is possible but we need quantum gravity to understand why. I cannot make much comment because the question is hard as you said.
Gold Member
Hi Arman:
I confess I am puzzled by the math related to units. In particular, using SI units, it appears from the quoted equation that Λ has units kg/m3, based on the assumption that all terms in the RHS of the equation have the same units. I am guessing that there may be other aspects of the units that are hidden by assuming certain constants are equal to 1, e.g., G and c, but I am not familiar enough with the conventions to be sure of this.
The final mystery is that "the mass is not greater than 1/(3√Λ)." This should mean that the units of Λ has the units 1/kg2.
Regards,
Buzz
I also did not understand how the unit system works.
Have made any calculation as to mass limit (in solar masses)?
Mentor
What is the resolution to this question (provided it demands a resolution)?"
I think the resolution is that the poster's assumption that the singularity becomes naked if the black hole mass ##M## exceeds the given threshold value is incorrect. Instead, I think what happens, heuristically, is that the black hole horizon and the cosmological horizon switch roles. See the comments here:
https://en.wikipedia.org/wiki/De_Sitter–Schwarzschild_metric#Horizon_properties
However, it's also possible that there's nothing to resolve, because the poster is using coordinates that don't cover the entire spacetime but only a part of it. The coordinates in which the metric is written in the OP are called "static" coordinates in the de Sitter case, and it is well known that they do not cover all of de Sitter spacetime. So it's possible that translating the question into coordinates that do cover all of de Sitter spacetime (or more precisely the analogue of such coordinates for the Schwarzschild-de Sitter case) will show that there is no issue in the first place.
Grinkle
Mentor
using SI units
The quoted metric is not written in SI units. It's written in "natural" units for GR, where ##G = c = 1##. That means the units of ##M## are length, and the units of ##\Lambda## are inverse length squared (which are curvature units).
Arman777 and Buzz Bloom
Staff Emeritus
Gold Member
From the Phys. Rev. D. paper "Global Structure of Robinson-Trautman radiative space-times with cosmological constant" by Bicak and Podolsky
https://arxiv.org/abs/gr-qc/9901018
"5 The Robinson-Trautman space-times with ##9 \Lambda m^2 >1##
In this case the corresponding Schwarzschild-de Sitter space-time (16) admits no horizon in the region r > 0 (cf. [21, 36]) so that there is only a naked singularity situated at ##r = 0##."
Mentor
In this case the corresponding Schwarzschild-de Sitter space-time (16) admits no horizon in the region r > 0
Hm, yes, I see that. But as far as I can tell, that is because the function ##f(r)## is negative in the entire range ##0 < r < \infty##, which means the entire spacetime is like the interior of a Schwarzschild black hole: no timelike KVF and no stationary observers, everything forced to fall into the singularity in finite time. The singularity, as far as I can tell, is still spacelike, so it's not a "naked singularity" in the same sense as, for example, the singularity in super-extremal Kerr spacetime, which is timelike.
In other words, there are three possible behaviors for the function ##f(r)##:
(1) For the case with two horizons, ##9 \Lambda M^2 < 1##, ##f(r)## starts out negative at small ##r##, switches to positive at the black hole horizon, is positive for the "normal" region between the two horizons, then switches to negative again at the cosmological horizon, and is negative the rest of the way out to large ##r##.
(2) For the case with one horizon, ##9 \Lambda M^2 = 1##, ##f(r)## starts out negative at small ##r##, just reaches zero at the single horizon, then is negative again for large ##r##.
(3) For the case with no horizon, ##9 \Lambda M^2 > 1##, ##f(r)## is negative everywhere: the balancing between the two terms never allows it to become positive at all.
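A quick numerical illustration of these three cases (a sketch in geometric units ##G = c = 1##, not part of the original post): the horizons are the positive real roots of ##f(r) = 0##, i.e. of the cubic ##-(\Lambda/3) r^3 + r - 2M = 0##.

```python
# Sketch: count the positive real roots of f(r) = 1 - 2M/r - (Lam/3) r^2
# for masses below, near, and above the critical value 1/(3 sqrt(Lam)).
import numpy as np

def horizons(M, Lam):
    # roots of -(Lam/3) r^3 + 0*r^2 + r - 2M = 0, keeping real positive ones
    roots = np.roots([-Lam / 3.0, 0.0, 1.0, -2.0 * M])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)

Lam = 1.0e-2
M_crit = 1.0 / (3.0 * np.sqrt(Lam))

for M in (0.5 * M_crit, 0.99 * M_crit, 1.5 * M_crit):
    print(round(9 * Lam * M**2, 3), horizons(M, Lam))
# Below the critical mass there are two horizons (black hole + cosmological),
# which approach each other as 9*Lam*M^2 -> 1; above it there is no positive
# real root, i.e. f(r) < 0 everywhere, matching case (3) above.
```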
So my proposed resolution in post #7 is wrong, but the way that it's wrong suggests a different resolution. Consider: suppose we are in the "two horizon" regime, where ##9 \Lambda M^2 < 1##. Matter in this region could fall into the black hole and increase its mass, bringing the two horizons closer together. But if this process continues to the point where ##9 \Lambda M^2## approaches ##1##, the horizons get closer and closer together. (Note that, in the Nariai solution described in the Wikipedia article I linked to, they never actually meet--but I think this solution is no longer valid if we actually let ##9 \Lambda M^2## equal ##1##, it's only valid if ##9 \Lambda M^2## is still less than ##1## but very, very close to it.)
Now, suppose we are in a spacetime in which the horizons are approaching each other in this way. What happens to observers in the region between them when they actually touch? Well, there is only one place they can go: through the horizon. So in a universe like this, eventually everyone would end up in a region like the interior of the black hole (which, as noted above, basically becomes the entire spacetime if ##9 \Lambda M^2 > 1##). And inside the black hole, the singularity is no longer hidden by a horizon, by definition.
Whether this counts as a violation of the cosmic censorship conjecture could be debated, but certainly it doesn't violate it in the way that is intuitively suggested by the OP: we don't have a "normal universe" region that now has a naked singularity in it. There is no "normal universe" at all any more.
(It's worth noting, also, that the above analysis does not take into account quantum effects. As noted in the Wikipedia article I linked to, if we include quantum effects, both horizons radiate, and it's no longer completely clear how to distinguish them.)
Arman777
Mentor
what reason or mechanism would keep the mass from crossing this limit.
In addition to my previous comments, it's also worth noting that the whole idea of "mass falling into a black hole" implicitly assumes that the hole's spacetime is asymptotically flat. But Schwarzschild-de Sitter spacetime, unlike ordinary Schwarzschild spacetime, is not asymptotically flat. So it's not clear what "mass falling into a black hole" actually means in Schwarzschild-de Sitter spacetime.
Another way to put this is to observe that the conformal structure of Schwarzschild-de Sitter spacetime is different, not only from that of Schwarzschild spacetime, but different depending on whether ##9 \Lambda M^2## is less than, equal to, or greater than ##1##. And it's not clear how any physical process could change the conformal structure of a spacetime.
Arman777
Gold Member
Thanks for your explanations. I was looking at the PSE, and I also see another perspective which says by like this. It seems that this is an answer to another question. And I would like to also share since it seems interesting.
"The rules of classical general relativity say that when you add mass to a black hole, you get a larger black hole. If you add angular momentum to a black hole at a greater rate than that at which you add mass, it would theoretically be possible to get a Kerr black hole with ##a \gt M##, which would convert the black hole to a naked singularity, but the rules of black hole thermodyanamics say that a black hole with ##a = M## has zero temeperature, so creating a naked singularity in this way is believed to be impossible."
And continues that if it has zero tempature, no material can fall into it.
From, https://physics.stackexchange.com/questions/459067/do-black-holes-have-a-limit-of-mass/459070#459070
Last edited:
Mentor
the rules of black hole thermodynamics say that a black hole with ##a = M## has zero temperature, so creating a naked singularity in this way is believed to be impossible
This is a bit of a misstatement. The issue is not temperature (it's hard to see why it should be impossible to add energy to a system that has zero temperature, since adding energy would just increase the temperature above zero). The issue is that, when you analyze the details of how you would try to add angular momentum to a rapidly spinning black hole, i.e., one that is close to ##a = M##, it turns out that there's no way to do it without adding enough mass to keep the hole from reaching ##a = M##. Heuristically, this is because whatever you drop into the hole has to have some minimum energy, i.e., mass, in order to be able to drop it in in a way that will increase the angular momentum of a rapidly spinning hole.
Arman777
|
{}
|
# choosing possible routes home
• May 7th 2010, 08:09 PM
ihavvaquestion
choosing possible routes home
I have attached a question from my homework, that I am not sure how to get started.
• May 7th 2010, 08:22 PM
gmatt
subtle hint: what does every path that you can take have in common with all other such paths?
• May 7th 2010, 08:33 PM
ihavvaquestion
it goes north a certain amount and east a certain amount? it gets you home?
• May 8th 2010, 12:26 AM
gmatt
further hint: yes, take that reasoning further. Try to come up with a way to construct all walks.
• May 8th 2010, 05:03 AM
ihavvaquestion
every walk goes north 4 east 6
E6 N4
N4 E6
N3 E5 N1 E1
N3 E4 N1 E2
N3 E3 N1 E3
N3 E2 N1 E4
N3 E1 N1 E5
N2 E5 N2 E1
N2 E5 N1 E1 N1
N2 E4 N2 E2
N2 E4 N1 E2 N1
N2 E4 N1 E1 N1 E1
N2 E3 N2 E3
N2 E3 N1 E3 N1
N2 E3 N1 E2 N1 E1
N2 E3 N1 E1 N1 E2
N1 E6 N3
N1 E5 N3 E1
N1 E5 N2 E1 N1
N1 E5 N1 E1 N2
N1 E4 N3 E2
N1 E4 N2 E2 N1
N1 E4 N1 E2 N2
N1 E3 N3 E3
N1 E3 N2 E3 N1
N1 E3 N2 E2 N1 E1
N1 E3 N2 E1 N1 E2
N1 E3 N1 E3 N2
N1 E3 N1 E2 N2 E1
N1 E3 N1 E1 N1 E1 N1
N1 E2 N3 E4
N1 E2 N2 E4 N1
N1 E2 N1 E4 N3
N1
I see that I could trace every possible route, but how can I apply the fundamental counting principle to this problem?
it seems like this might be an nCr problem, but I cannot figure that out. can anyone help???
• May 8th 2010, 02:47 PM
gmatt
very strong hint:
Each walk goes 4 north 6 east as you observed. In total there are 10 steps (counting north or east.) Of these steps clearly 4 have to be north. What happens to the rest of the steps? How many ways are there to construct a path?
• May 8th 2010, 02:54 PM
ihavvaquestion
I appreciate your helping me, but I still don't get it. four steps must be north and six steps must be east???
is this an nCr problem? or is my thinking not right?
• May 8th 2010, 03:45 PM
Plato
Let me be frank with you. I usually do not open attachments.
I just figure that someone who wants help should be willing to learn to post correctly.
But in the case, given the responses you have gotten, I will step in.
How many ways to rearrange the string $EEEEEENNNN?$
Any rearrangement of that string describes a way to drive from home to school.
Notice, that is the number of ways to place six E’s into ten places.
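A quick computational check of this count (a sketch, not part of the original thread): enumerate which 6 of the 10 steps are the E's and compare with the binomial coefficient.

```python
# Each route is a choice of which 6 of the 10 steps go East; the rest go North.
from itertools import combinations
from math import comb

routes = list(combinations(range(10), 6))     # positions of the E's
print(len(routes), comb(10, 6))               # both print 210

def route_string(east_positions, n_steps=10):
    return "".join("E" if i in east_positions else "N" for i in range(n_steps))

print(route_string(routes[0]))                # 'EEEEEENNNN'
```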
• May 8th 2010, 03:56 PM
ihavvaquestion
thanks frank, im ray! i am definitely willing to learn how to post correctly, but how do i go about doing that? how do i get a problem like this one typed into the message box?
so there would be 10C6 or 10C4 possible routes home then, correct?
• May 8th 2010, 04:07 PM
Plato
Quote:
Originally Posted by ihavvaquestion
so there would be 10C6 or 10C4 possible routes home then, correct?
That is correct.
Why not learn to post in symbols? You can use LaTeX tags.
• May 8th 2010, 04:11 PM
gmatt
Quote:
Originally Posted by Plato
Let me be frank with you. I usually do not open attachments.
I just figure that someone who wants help should be willing to learn to post correctly.
But in the case, given the responses you have gotten, I will step in.
How many ways to rearrange the string $EEEEEENNNN?$
Any rearrangement of that string describes a way to drive from home to school.
Notice, that is the number of ways to place six E’s into ten places.
Sorry if I was being too round about, I'm not sure what level of guidance I should offer in such a case. Next time I can just give a straightforward reply like yours.
• May 8th 2010, 04:23 PM
ihavvaquestion
thanks...i was not sure how to go about posting in symbols...i will check it out...thanks to you both for your help
|
{}
|
## Ray
In geometry, a ray is a line-like figure that starts at some point - called the "endpoint" or "vertex" - and continues indefinitely in one direction. It is described in formal notation using 2 points with an arrow above it pointing in the direction of the ray, for example: $\overrightarrow{AB}$.
The order of the two points describes the direction of the ray; it cannot be reversed.
$\overrightarrow{AB} \neq \overrightarrow{BA}$.
As an example, this is the ray $\overrightarrow{AB}$ in ASCII art: A ---- B ->
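A small computational sketch of this directionality (illustrative only; the function name below is made up for the example): a point lies on the ray $\overrightarrow{AB}$ only if it is on the line through A and B and on B's side of A, so swapping A and B gives a different ray.

```python
def on_ray(A, B, P, eps=1e-9):
    """True if the 2D point P lies on the ray from A through B."""
    dx, dy = B[0] - A[0], B[1] - A[1]
    px, py = P[0] - A[0], P[1] - A[1]
    cross = dx * py - dy * px      # 0   => P is on the line through A and B
    dot = dx * px + dy * py        # >=0 => P is on B's side of the endpoint A
    return abs(cross) < eps and dot >= -eps

A, B = (0, 0), (1, 0)
print(on_ray(A, B, (2, 0)))   # True: in the direction of the ray AB
print(on_ray(B, A, (2, 0)))   # False: the ray BA points the other way
```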
|
{}
|
PIRSA:22060054
# On the continuum limit of spin foams: graviton dynamics and area metric induced corrections
### APA
Kogios, A. (2022). On the continuum limit of spin foams: graviton dynamics and area metric induced corrections . Perimeter Institute. https://pirsa.org/22060054
### MLA
Kogios, Athanasios. On the continuum limit of spin foams: graviton dynamics and area metric induced corrections . Perimeter Institute, Jun. 22, 2022, https://pirsa.org/22060054
### BibTex
``` @misc{ pirsa_22060054,
doi = {10.48660/22060054},
url = {https://pirsa.org/22060054},
author = {Kogios, Athanasios},
keywords = {Other},
language = {en},
title = {On the continuum limit of spin foams: graviton dynamics and area metric induced corrections },
publisher = {Perimeter Institute},
year = {2022},
month = {jun},
note = {PIRSA:22060054 see, \url{https://pirsa.org}}
}
```
Athanasios Kogios Perimeter Institute for Theoretical Physics
Collection
Talk Type
Subject
## Abstract
The semi-classical limit of spin foams leads to the Area Regge action. It was long thought that this action leads to flatness and, in particular, does not allow for propagating gravitons. I will present the first systematic studies of the continuum limit of the Area Regge action, using different versions of regular hypercubic lattices. These studies have shown that, in its continuum limit, the Area Regge action leads at leading order to general relativity, and thus to propagating gravitons. The higher-order corrections depend on the choice of triangulation for the hypercubic lattice. However, there seems to be a preferred choice, for which the Area Regge action is not singular. In this case the correction term approximates the square of the Weyl curvature tensor, and can be interpreted to arise from an area metric dynamics. We therefore conjecture that the continuum limit of spin foams is described by an area metric theory.
|
{}
|
# Logistic Regression

Logistic regression is a regression model that is popularly used for classification tasks. In logistic regression, the probability that a **binary target is True** is modeled as a [logistic function](http://en.wikipedia.org/wiki/Logistic_function) of a linear combination of features. The following figure illustrates how logistic regression is used to train a 1-dimensional classifier. The training data consists of positive examples (depicted in blue) and negative examples (in orange). The decision boundary (depicted in pink) separates out the data into two classes.
##### Background
Given a set of features $x_i$ and a label $y_i \in \{0, 1\}$, logistic regression interprets the probability that the label is in one class as a logistic function of a linear combination of the features:

$$p(y_i = 1 \mid x_i, \theta) = \frac{1}{1 + \exp(-\theta^\top x_i)}$$

Analogous to linear regression, an intercept term is added by appending a column of 1's to the features, and L1 and L2 regularizers are supported. The composite objective being optimized for is the following:

$$\min_{\theta} \; \sum_{i} -\log p(y_i \mid x_i, \theta) \; + \; \lambda_1 \|\theta\|_1 \; + \; \lambda_2 \|\theta\|_2^2$$

where $\lambda_1$ is the L1_penalty and $\lambda_2$ is the L2_penalty.
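The following small sketch (plain NumPy, not GraphLab code) spells out these quantities: the predicted probability is the logistic function of the linear combination, and the training objective is the log loss plus the L1 and L2 penalties.

```python
import numpy as np

def predict_proba(theta, X):
    """P(y = 1 | x) = 1 / (1 + exp(-theta . x)) for each row of X."""
    return 1.0 / (1.0 + np.exp(-X @ theta))

def objective(theta, X, y, l1_penalty=0.0, l2_penalty=0.01):
    """Mean log loss plus L1/L2 regularization terms."""
    p = predict_proba(theta, X)
    log_loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    return log_loss + l1_penalty * np.abs(theta).sum() + l2_penalty * (theta ** 2).sum()

X = np.array([[1.0, 0.5], [1.0, -1.2], [1.0, 2.0]])   # first column = intercept
y = np.array([1, 0, 1])
theta = np.array([0.1, 0.8])
print(predict_proba(theta, X), objective(theta, X, y))
```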
##### Introductory Example
First, let's construct a binary target variable. In this example, we will predict if a restaurant is good or bad, with 1 and 2 star ratings indicating a bad business and 3-5 star ratings indicating a good one. We will use the following features.
• Average rating of a given business
• Average rating made by a user
• Number of reviews made by a user
• Number of reviews that concern a business
import graphlab as gl
data = gl.SFrame('https://static.turi.com/datasets/regression/yelp-data.csv')
# Restaurants with rating >=3 are good
data['is_good'] = data['stars'] >= 3
# Make a train-test split
train_data, test_data = data.random_split(0.8)
# Create a model.
model = gl.logistic_classifier.create(train_data, target='is_good',
                                      features=['user_avg_stars',
                                                'user_review_count',
                                                # business-level columns inferred from the feature list above
                                                'business_avg_stars',
                                                'business_review_count'])
# Save predictions (probability estimates) to an SArray
predictions = model.classify(test_data)
# Evaluate the model and save the results into a dictionary
results = model.evaluate(test_data)
Refer to the chapter on linear regression for the features that are common to both models.
We will now discuss some advanced features that are specific to logistic regression.
###### Making Predictions
Making predictions with a GraphLab Create classifier is easy. The classify() method provides a one-stop shop for all that you need from a classifier.
• A class prediction
• Probability/Confidence associated with that class prediction.
In the following example, the first prediction was class 1 with a 90.5% probability.
predictions = model.classify(test_data)
print predictions
+-------+----------------+
| class | probability |
+-------+----------------+
| 1 | 0.905618772131 |
| 1 | 0.941576249302 |
| 1 | 0.948254387657 |
| 0 | 0.996952633956 |
| 1 | 0.944229260472 |
| 1 | 0.951769966846 |
| 1 | 0.905561314917 |
| 1 | 0.957697429248 |
| 1 | 0.98527411871 |
| 1 | 0.973282185166 |
| ... | ... |
+-------+----------------+
[43018 rows x 2 columns]
Note: Only the head of the SFrame is printed.
You can use print_rows(num_rows=m, num_columns=n) to print more rows and columns.
###### Making detailed predictions
Logistic regression predictions can take one of three forms:
• Classes (default): Thresholds the probability estimate at 0.5 to predict a class label, i.e. 0/1.
• Probabilities: A probability estimate (in the range [0,1]) that the example is in the True class. Note that this is not the same as the probability estimate in the classify function.
• Margins : Distance to the linear decision boundary learned by the model. The larger the distance, the more confidence we have that it belongs to one class or the other.
GraphLab Create's logistic regression model can return predictions for any of these types:
pred_class = model.predict(test_data, output_type = "class") # Class
pred_prob_one = model.predict(test_data, output_type = 'probability') # Probability
pred_margin = model.predict(test_data, output_type = "margin") # Margins
###### Evaluating Results
We can also evaluate our predictions by comparing them to known ratings. The results are evaluated using two metrics: accuracy and a confusion matrix.
result = model.evaluate(test_data)
print "Accuracy : %s" % result['accuracy']
print "Confusion Matrix : \n%s" % result['confusion_matrix']
Accuracy : 0.860862092991
Confusion Matrix :
+--------------+-----------------+-------+
| target_label | predicted_label | count |
+--------------+-----------------+-------+
| 0 | 0 | 2348 |
| 0 | 1 | 4816 |
| 1 | 0 | 912 |
| 1 | 1 | 34942 |
+--------------+-----------------+-------+
[4 rows x 3 columns]
###### Working with imbalanced data
Many difficult real-world problems have imbalanced data, where at least one class is under-represented. GraphLab Create models can handle the imbalanced data by assigning asymmetric costs of misclassifying elements of different classes.
# The data can be downloaded using
# Label 'c' is edible
data['label'] = data['label'] == 'c'
# Make a train-test split
train_data, test_data = data.random_split(0.8)
# Create a model which weights classes based on frequency in the training data.
model = gl.logistic_classifier.create(train_data, target='label',
class_weights = 'auto')
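The exact weights chosen by class_weights = 'auto' are determined by the library; one common convention, sketched below for illustration only (GraphLab Create's exact formula may differ), is to weight each class by the inverse of its frequency so that rare classes contribute more to the loss.

```python
from collections import Counter

def auto_class_weights(labels):
    """Inverse-frequency class weights (one common 'auto' convention)."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * cnt) for cls, cnt in counts.items()}

print(auto_class_weights([1, 1, 1, 1, 0]))   # {1: 0.625, 0: 2.5}
```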
##### Multiclass Classification
Multiclass classification is the problem of classifying instances into one of many (i.e., more than two) possible classes. As an example, binary classification can be used to train a classifier that distinguishes between two classes, say "cat" or "dog", while multiclass classification can be used to train a classifier over a finite set of labels at the same time, say "cat", "dog", "rat", and "cow".
import graphlab as gl
data = gl.SFrame('https://static.turi.com/datasets/mnist/sframe/train6k-array')
# Make a train-test split
train_data, test_data = data.random_split(0.8)
# Create a model.
model = gl.logistic_classifier.create(train_data, target='label')
# Save predictions to an SFrame (class and corresponding class-probabilities)
predictions = model.classify(test_data)
# Top 5 predictions with probabilities, rank, and margin
top = model.predict_topk(test_data, output_type='probability', k = 5)
top = model.predict_topk(test_data, output_type='rank', k = 5)
top = model.predict_topk(test_data, output_type='margin', k = 5)
# Evaluate the model and save the results into a dictionary
results = model.evaluate(test_data)
###### Top-k predictions
Multiclass classification provides the top-k class predictions for each example. Each prediction is reported as a margin, a probability, or a rank for the predicted class. In the following example, we provide the top 5 predictions, ordered by class probability, for each data point in the test set.
top = model.predict_topk(test_data, output_type='probability', k = 3)
print top
Columns:
id str
class str
probability float
Rows: 3711
Data:
+-----+-------+------------------+
| id | class | probability |
+-----+-------+------------------+
| 0 | 5 | 0.94296406887 |
| 0 | 1 | 0.0140330526641 |
| 0 | 8 | 0.00636249767982 |
| 1 | 8 | 0.929146865934 |
| 1 | 9 | 0.0139581314344 |
| 1 | 1 | 0.00982837828507 |
| 2 | 1 | 0.937192457289 |
| 2 | 7 | 0.0106293228679 |
| 2 | 4 | 0.00910849289074 |
| 3 | 5 | 0.900146607924 |
| ... | ... | ... |
+-----+-------+------------------+
|
{}
|
## Elementary and Intermediate Algebra: Concepts & Applications (6th Edition)
$\lt$
On the number line, the value $-98$ appears to the $\text{ left }$ of the value $0$. Hence, \begin{array}{l}\require{cancel} -98 \lt 0 .\end{array}
|
{}
|
# zbMATH — the first resource for mathematics
Multigrid and adaptive algorithm for solving the nonlinear Schrödinger equation. (English) Zbl 0708.65111
An application of a multigrid and adaptive algorithm for solving the initial-boundary value problem $$iu_t-(\partial/\partial x)\,A(x)(\partial u/\partial x)+2|u|^2u+F(x,t)u=G(x,t),\quad u\in\mathbb{R}^m,$$ $$x_L<x<x_R,\quad u|_{x_L}=u|_{x_R}=0,\quad u|_{t=0}=u_0(x),$$ where A(x) is a real diagonal matrix, is considered. There are estimates for the solution of the difference scheme. Some numerical examples are presented and compared with corresponding results obtained by a traditional method. There is a discussion of the method and the numerical results.
Reviewer: L.P.Lebedev
##### MSC:
65Z05 Applications to the sciences
65N55 Multigrid methods; domain decomposition for boundary value problems involving PDEs
65M06 Finite difference methods for initial value and initial-boundary value problems involving PDEs
35Q55 NLS equations (nonlinear Schrödinger equations)
Full Text:
##### References:
[1] Chang, Q., Sci. sinica (ser. A), 16, 687, (1983)
[2] Chang, Q.; Xu, L., J. comput. math., 4, 191, (1986)
[3] Brandt, A., Math. comput., 31, 333, (1977)
[4] Hackbusch, W., Multigrid methods and applications, (1985), Springer-Verlag New York · Zbl 0577.65118
[5] Tana, T.R.; Ablowitz, M.J., J. comput. phys., 55, 203, (1984)
[6] Brandt, A., Multigrid techniques: guide with applications to fluid dynamics, GMD-stidier no. 85, (1984), Bonn · Zbl 0581.76033
[7] Sjoberg, A., J. math. anal. appl., 29, 569, (1970)
[8] Manikoff, A., Commun. pure appl. math., 25, 407, (1972)
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
|
{}
|
# Math Help - Sinusoidal function
1. ## Sinusoidal function
Hi all,
New to the forum, hope i posted this in the correct place.
Got a problem that a question that im struggling to answer:
draw a waveform over 1 cycle.
indicate clearly the amplitude, periodic time, and frequency of the waveform.
v = Vsinωt
V = 10
ω = 30pi
I understand that V is the amplitude of the wave.
But get confused with the whole omega = 30pi. Mostly because I find pi-radians hard to understand.
Any help would be much appreciated!!
Thanks,
Chris
2. Originally Posted by chrisa112
Hi all,
New to the forum, hope i posted this in the correct place.
Got a problem that a question that im struggling to answer:
draw a waveform over 1 cycle.
indicate clearly the amplitude, periodic time, and frequency of the waveform.
v = Vsinωt
V = 10
ω = 30pi
I understand that V is the amplitude of the wave.
But get confused with the whole omega = 30pi. Mostly because I find pi-radians hard to understand.
Any help would be much appreciated!!
Thanks,
Chris
If the frequency is $f$ hertz then your signal would be:
$v(t)=V\sin(2\pi f t)$
what you have is:
$v(t)=V\sin(\omega t)$
so: $f=\omega/(2\pi)$
CB
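For completeness, a short Python sketch (not part of the original thread) of the numbers involved and of one cycle of the waveform:

```python
import numpy as np
import matplotlib.pyplot as plt

V = 10.0                  # amplitude
omega = 30 * np.pi        # angular frequency, rad/s
f = omega / (2 * np.pi)   # frequency = 15 Hz
T = 1.0 / f               # periodic time = 1/15 s (about 66.7 ms)

t = np.linspace(0.0, T, 500)
v = V * np.sin(omega * t)

print(f, T)               # 15.0  0.0666...
plt.plot(t, v)
plt.xlabel("t (s)")
plt.ylabel("v")
plt.title("one cycle of v = 10 sin(30*pi*t)")
plt.show()
```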
|
{}
|
# Sky temperature maps
## Product description
Sky maps give the best estimate of the signal from the sky (unpolarised) after removal, as far as possible, of known systematic effects and of the dipole signals induced by the motion of the solar system in the CMB and of the Planck satellite in the solar system. In particular, they include the Zodiacal light emission (Zodi for short) and also the scattering from the far-side lobes of the beams (FSL). More on this below.
Sky maps are provided for the nominal Planck mission and for the first two single surveys, the third one being covered only for a small part during the nominal mission (REF). AMo: There is a table below that defines these coverage periods, but I believe that information should be in an earlier section that describes the mission and the events that interrupted it, the planet passages, the pointing, issues relating to timing, etc. That section should define a pointing period, an operational day, etc. There is some of that info in the HFI pre-processing section, but that may not be the best place for it. For characterization purposes, maps covering the nominal survey but using only half of the available data are also provided. These are the ringhalf_{1|2} maps, which are built using the first and second half of the stable pointing part of the data in each pointing period.
Missions and sky survey coverage periods

| Name | Ini_OD | Ini_Ring | Ini_ptgID | End_OD | End_ring | End_ptgID |
|------|--------|----------|-----------|--------|----------|-----------|
| Nominal | 91 | 240 | 00004200 | 563 | 14724 | 03180200 |
| HFI-Full | 91 | 240 | 00004200 | 993 | 27008 | 06344800 |
| LFI-Full | 91 | 240 | 00004200 | TBD | TBD | TBD |
| SCAN1 | 91 | 240 | 00004200 | 270 | 5720 | 01059820 |
| SCAN2 | 270 | 5721 | 01059830 | 456 | 11194 | 02114520 |
| SCAN3 | 456 | 11195 | 02114530 | 636 | 16691 | 03193660 |
| SCAN4 | 636 | 16692 | 03193670 | 807 | 21720 | 04243900 |
| SCAN5 | 807 | 21721 | 95000020 | 993 | 27008 | 06344800 |
| SCAN6 | 993 | ---- | 06344810 | 993 | ---- | TBD |
All sky maps are in Healpix format, with Nside of 2048 for HFI and of 1024 for LFI, in Galactic coordinates, and Nested ordering. The signal is given in units of Kcmb for 33-353 GHz, and of MJy/sr (for a constant $\nu F_\nu$ energy distribution ) for 545 and 857 GHz. Each sky map is packaged into a BINTABLE extension of a FITS file together with a hit-count map (or hit map, for short) and a variance map, and additional information is given in the FITS file header. The structure of the FITS file is given in the FITS file structure section below.
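For reference, a small healpy sketch (illustrative, not part of the pipeline) of what these Nside values imply for the pixelization:

```python
import healpy as hp

for nside in (2048, 1024):
    npix = hp.nside2npix(nside)                # 12 * nside**2 pixels
    res = hp.nside2resol(nside, arcmin=True)   # approximate pixel size
    print(nside, npix, round(res, 2))
# Nside 2048 -> 50331648 pixels (~1.7 arcmin); Nside 1024 -> 12582912 (~3.4 arcmin)
```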
### Types of maps
#### Full channel maps
Full channel maps are built using all the valid detectors of a frequency channel and cover the full mission (or the nominal mission for the 1st release). For HFI, the 143-8 and 545-3 bolometers are rejected entirely as they are seriously affected by RTS noise.
#### Single survey maps
Single survey maps are built using all valid detectors of a frequency channel, but cover separately the different sky surveys. The single sky surveys are defined in terms of the direction of the satellite's spin axis: the first survey covers from the beginning of the science observations (the First Light Survey) to the time when the spin axis has rotated by 180 degrees (to the nearest pointing period), the following ones cover from 180 to 360 degrees, and so on. In the case of the nominal mission, the process stops at the third survey, which is incomplete. In the case of the full mission the 4th survey was interrupted shortly before completing the 180 degree rotation (see LINK), in order to begin observing with a different scanning law. The HFI mission ended slightly before the natural end of the 5th survey; the LFI mission continued to the XXX survey. The coverage of each of these periods in terms of ring number, pointingID, and OD, is given in the table below. Note that the OD numbers only indicate during which OD the period boundary occurs.
#### Half-ring maps
Half-ring maps are built using only the first or the second half of the stable pointing period data. There are thus two half-ring maps per frequency channel, named ringhalf_1 and ringhalf_2 respectively. These maps are built for characterization purposes in order to perform null tests. In particular, the difference between the two half-ring maps at a given frequency gives a good estimate of the high frequency noise in the data.
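As an illustration of this use, the sketch below (Python with healpy; purely illustrative and not part of the DPC pipeline, and the file names are only examples following the pattern given in the Format section below) builds the half-ring half-difference map, whose pixel values sample the small-scale noise:

import healpy as hp

# Read the signal column of the two half-ring maps (example file names).
# healpy converts the NESTED maps to RING ordering on read by default.
m1 = hp.read_map("HFI_SkyMap_143_2048_R1.10_nominal_ringhalf_1.fits", field=0)
m2 = hp.read_map("HFI_SkyMap_143_2048_R1.10_nominal_ringhalf_2.fits", field=0)

# Half-difference: the sky signal cancels and the high-frequency noise remains.
noise_estimate = 0.5 * (m1 - m2)

Unobserved pixels carry the BAD_DATA value (equal to hp.UNSEEN) and should be masked before computing noise statistics.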
### Caveats and known issues
#### Map zero-level
• not corrected for HFI; estimates are ??? determined from ????
• corrected for LFI (values?)
#### The Zodiacal light and the Far-Side Lobes
Insert here how these are seen in the differences of the single survey maps
#### Artifacts near caustics of the scanning strategy
TBW if still an issue??
## Production process
Description of the Pipeline used to generate the product. In particular any limitations and approximations used in the data processing should be listed. Avoiding detailed descriptions of methods and referring to other parts of the ES and/or the relevant Planck papers for the details. References however should be quite detailed (i.e. it is not enough to direct the user to a paper, but the relevant section in the paper should be provided).

Sky maps are produced by combining appropriately the data of all working detectors in a frequency channel over some period of the mission. They give the best estimate of the signal from the sky (unpolarised) after removal, as far as possible, of known systematic effects and of the dipole signals induced by the motion of the solar system in the CMB and of the Planck satellite in the solar system. In particular, they include the Zodiacal light emission (Zodi for short) and also the scattering from the far-side lobes of the beams (FSL). More on this below.
### HFI processing
The inputs to the mapmaking are TOIs of signal that have been cleaned (as far as possible) of instrumental effects and calibrated in absorbed watts. While the processing involved is described in detail in the TOI processing section, we give a very brief summary here for convenience. That pipeline performs the following operations:
demodulation
this is performed around a variable level which is determined from the valid input data (a validity flag from a previous version of the processing is used for this purpose), and the data are converted to engineering units (V) using known conversion coefficients.
despiking
using the demodulated data converted to V (by the transfer function) the glitches are identified and fitted with templates. A glitch flag is produced that identifies the strongest part of the glitches, and a timeline of glitch tails is produced from the template fits, and subtracted from the demodulated timeline from step 1. Finally, the flagged ranges are replaced with data from an average over the pointing period (TBC)
dark template removal
the two dark bolometers are demodulated and despiked as above; the resulting timelines are then smoothed and used as an indicator of the overall temperature variations of the bolometer plate. Where the variations are consistent with each other, they are combined and removed from the bolometer signal timelines using appropriate coupling coefficients. The few percent of the data where they are not consistent are flagged on the timelines.
conversion to absorbed power
the timeline is converted to watts of absorbed power using the bolometer function. This includes a non-linearity correction.
removal of the 4K cooler lines
the electromagnetic interference of the 4K cooler with the bolometer readout wires induces some sharp lines in the signal power spectra at frequencies of the 4K cooler's fundamental and its multiples, folded by the signal modulations. Fourier coefficients of the relevant lines are determined on a per-ring basis, and then removed from the data. The quality of the removal depends on the bolometer.
deconvolution by the time transfer function
this is done to correct for the non-instantaneous time response of the bolometers. The function itself is modeled using 4 parameters which are adjusted primarily on the planet data and also from comparisons of the northward and southward scans of the Galactic Plane. It is then removed using Fourier techniques, which has the side-effect of increasing the noise at high frequencies.
jump correction
removes some (relatively rare: 0.3 jumps per bolometer per pointing period, on average) jumps in the signal baseline. The jumps are detected and characterized on smoothed TOIs, and corrected by adding a constant to part of the signal timeline. The origin of the jumps is not known.
The results of this processing are a timeline of signal (in absorbed watts) and a valid data flag timeline for each of the 50 valid bolometers processed; these timelines contain the full sky signal, i.e., including the solar and orbital dipoles, the Zodiacal light, and contributions from the Far-Side lobes. The dipoles are necessary for the flux calibration and are removed at the mapmaking stage. The remaining two bolometers (143-8 and 545-3) show semi-random jumps in the signal level, typically jumping over 2-5 different pseudo-baseline levels, a behavior known as Random Telegraphic Signal, so that these are commonly called the RTS bolometers. Finally, ring-level statistics of different types (mean, median, rms, kurtosis, etc.) are determined on a per-ring basis for all timelines, and a selection based on these statistics is used to discard anomalous rings, which are recorded in a ring-level flag for each valid bolometer timeline.
Throughout this processing, bright planets (Mars, Jupiter, Saturn, Uranus) and bright asteroids are masked in the timeline in order to avoid ringing effects in the processing. Since they move on the sky, the portion of the sky masked during one survey is observed during another, and no hole is left in the final map. In parallel, the planet data are processed in a similar way, but with different parameters for the despiking step, and without the final jump correction step. These results are processed separately to determine the beam shapes and the focal plane geometry.
The pointing is determined starting from the AHF produced by MOC, which gives the direction and orientation of the LOS of a fiducial position in the focal plane at frequencies of 8Hz during stable pointing and 4 Hz during maneuvers (TBC for details, reference). This is interpolated to the times of data observation (ref to method), corrected for the wobble and other time-dependent offsets determined from the observed positions of a large number of sources around the sky, and finally converted to the LOS of each detector using the quaternions in the IMO (which are determined from observations of bright planets - see the Focal plane reconstruction pipeline).
The mapmaking pipeline is described in detail in the Map-making section, and a brief summary is given here for convenience.
The cleaned TOIs of signal of each detector, together with their flags, produced by the TOI processing pipeline, and the TOIs of pointing (quaternions), described in Detectors pointing and beams, are the inputs to the mapmaking step.
The input signal TOIs are expressed in Watts from the sky absorbed by the bolometer, and their associated flags are used to determine which samples or full rings to discard. Discarded are periods of unstable pointing and pointing maneuvers in general, glitched data, and transits over bright planets (since they move, the hole flagged during one survey is covered during another sky survey); full rings are discarded if their noise properties differ significantly from the nominal value, as are the few rings of duration longer than 90 min, since the pointing is not sufficiently stable over such long periods (details in the Discarded rings section). The preparation of input pointing TOIs is described in Detectors pointing and beams. In brief, the STR (StarTracker) pointing produced by Flight Dynamics is interpolated to the detector sampling frequency in order to obtain a tuple of pointing quaternions for each sample and corrected for certain known effects. The angular offset between the STR line of sight and that of each bolometer is reflected in the Focal Plane Geometry, which is determined from the observation of bright planets. Also, the STR pointing timeline is corrected for slowly varying offsets between the STR and the HFI focal plane using observations of all planets and of other (fixed) bright sources.
Using the pointing TOIs, the signal TOIs are first used to build Healpix rings using the nearest grid point method; each ring containing the combined data of one pointing period. These are then calibrated in brightness, cleaned of the dipole signals, and projected onto Healpix maps as explained in the following sections.
The cleaned TOIs must be calibrated in astrophysical units. At 100-353 GHz, the flux calibration gains are determined for each pointing period (or ring) from the solar-motion dipole after removal of the small dipole induced by the motion of the Planck satellite in the solar system. The solar-motion dipole from WMAP (REF) is used for this purpose. This gain by ring is then smoothed with a window of width 50 rings, which reveals an apparent variation of ~1-2% on a scale of 100s to 1000s of rings for the 100-217 GHz channels, and is applied to the Watt data. At 353GHz, where the solar motion dipole is weaker compared to the signal, no gain variation is detected, and a single fixed gain is applied to all rings. At 545 and 857 GHz the gain is determined from the observation of Uranus and Neptune (Jupiter is not used because its brightness produced some non-linearity in the bolometer response) and comparison to recent models (REF) made explicitly for this mission. A single gain is applied to all rings at these frequencies. Prior to projecting the Healpix rings (HPRs) onto a map, a destriping approach is used to remove low-frequency noise. The noise is modelled as the sum of a white noise component and a constant, or offset, per pointing period which represents the low frequency 1/f noise. The offsets are determined by minimizing the differences between HPRs at their crossings. After subtracting these offsets, calibrated data are projected onto Healpix maps, with the data of each bolometer weighted by a factor of 1/NET of that bolometer, and accounting for the slight different band transmission profiles of the bolometers in each band.
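The destriping step can be illustrated with a toy example. The sketch below (Python/NumPy, purely illustrative and not the DPC implementation; all names are invented for the example) alternates between binning offset-corrected samples into a simple sky map and re-estimating one offset per ring from the residuals, which is the essence of the offset model described above:

import numpy as np

def toy_destripe(signal, ring_id, pix_id, n_iter=50):
    """Estimate one offset per ring and a binned sky map (illustrative only).

    signal  : 1-D array of calibrated samples
    ring_id : integer ring (pointing-period) index of each sample
    pix_id  : integer Healpix pixel index of each sample
    """
    offsets = np.zeros(ring_id.max() + 1)
    n_pix = pix_id.max() + 1
    for _ in range(n_iter):
        # Bin the offset-corrected samples into a simple average sky map.
        cleaned = signal - offsets[ring_id]
        hits = np.bincount(pix_id, minlength=n_pix)
        sky = np.bincount(pix_id, weights=cleaned, minlength=n_pix) / np.maximum(hits, 1)
        # Re-estimate the offsets as the mean residual of each ring.
        resid = signal - sky[pix_id]
        offsets = (np.bincount(ring_id, weights=resid) /
                   np.maximum(np.bincount(ring_id), 1))
        offsets -= offsets.mean()   # remove the undetermined global constant
    return offsets, sky

The actual pipeline solves for the offsets by minimizing the ring-to-ring differences at crossing points with proper noise weighting; the iteration above only conveys the idea of separating a per-ring offset from a fixed sky.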
These maps provide the main mission products. A second, reduced, set of maps, cleaned of the Zodiacal emission and of the FSL leakage, is also produced for the nominal mission and the two single surveys, but not for the half-rings (since the contribution would be the same for the two halves of each ring). For this purpose, the Zodiacal emission and the FSL contamination, which are not fixed on the sky, are modeled separately at HPR-level, and subtracted from the signal HPRs before projecting them onto the maps.
Together with signal maps, hit count and variance maps are also produced. The hit maps give the (integer) number of valid TOI-level samples that contribute to the signal of each pixel. All valid samples are counted in the same way, i.e., there is no weighting factor applied. The variance maps project the white noise estimate, provided by the NETs, in the sky domain.
### LFI processing
LFI processing is covered in Sect. 4.5
## Inputs
A list (and brief description to the extent possible) of the input data used to generate this product (down to file names), as well as any external ancillary data sets which were used.
## Related products
A description of other products that are related and share some commonalities with the product being described here. E.g. if the description is of a generic product (e.g. frequency maps), all the products falling into that type should be listed and referenced.
see section Input test.
## Format
A detailed description of the data format of each file, including header keywords for fits files, extension names, column names, formats….
### File names
The FITS filenames are of the form {H|L}FI_SkyMap_fff_nnnn_R1.nn_{coverage}_{type}.fits, where fff are three digits to indicate the Planck frequency band, nnnn is the Healpix Nside of the map, coverage indicates which part of the mission is covered, and the optional type indicates the subset of input data used. A full list of products, by their names, is given in the List of products below.
The list of products containing sky maps are given below, grouped by type
Outstanding: link to archive objects / LFI to fill in their products
Full channel maps
LFI maps ….
HFI_SkyMap_100_2048_R1.nn_nominal.fits
HFI_SkyMap_143_2048_R1.nn_nominal.fits
HFI_SkyMap_217_2048_R1.nn_nominal.fits
HFI_SkyMap_353_2048_R1.nn_nominal.fits
HFI_SkyMap_545_2048_R1.nn_nominal.fits
HFI_SkyMap_857_2048_R1.nn_nominal.fits
Single survey maps
LFI maps ….
HFI_SkyMap_100_2048_R1.nn_survey_1.fits
HFI_SkyMap_143_2048_R1.nn_survey_1.fits
HFI_SkyMap_217_2048_R1.nn_survey_1.fits
HFI_SkyMap_353_2048_R1.nn_survey_1.fits
HFI_SkyMap_545_2048_R1.nn_survey_1.fits
HFI_SkyMap_857_2048_R1.nn_survey_1.fits
HFI_SkyMap_100_2048_R1.nn_survey_2.fits
HFI_SkyMap_143_2048_R1.nn_survey_2.fits
HFI_SkyMap_217_2048_R1.nn_survey_2.fits
HFI_SkyMap_353_2048_R1.nn_survey_2.fits
HFI_SkyMap_545_2048_R1.nn_survey_2.fits
HFI_SkyMap_857_2048_R1.nn_survey_2.fits
Half-ring maps
LFI maps ….
HFI_SkyMap_100_2048_R1.nn_nominal_ringhalf_1.fits
HFI_SkyMap_143_2048_R1.nn_nominal_ringhalf_1.fits
HFI_SkyMap_217_2048_R1.nn_nominal_ringhalf_1.fits
HFI_SkyMap_353_2048_R1.nn_nominal_ringhalf_1.fits
HFI_SkyMap_545_2048_R1.nn_nominal_ringhalf_1.fits
HFI_SkyMap_857_2048_R1.nn_nominal_ringhalf_1.fits
HFI_SkyMap_100_2048_R1.nn_nominal_ringhalf_2.fits
HFI_SkyMap_143_2048_R1.nn_nominal_ringhalf_2.fits
HFI_SkyMap_217_2048_R1.nn_nominal_ringhalf_2.fits
HFI_SkyMap_353_2048_R1.nn_nominal_ringhalf_2.fits
HFI_SkyMap_545_2048_R1.nn_nominal_ringhalf_2.fits
HFI_SkyMap_857_2048_R1.nn_nominal_ringhalf_2.fits
Zodi and Far-side-lobes corrected maps
HFI_SkyMap_100_2048_R1.nn_nominal_ZodiCorrected.fits
HFI_SkyMap_143_2048_R1.nn_nominal_ZodiCorrected.fits
HFI_SkyMap_217_2048_R1.nn_nominal_ZodiCorrected.fits
HFI_SkyMap_353_2048_R1.nn_nominal_ZodiCorrected.fits
HFI_SkyMap_545_2048_R1.nn_nominal_ZodiCorrected.fits
HFI_SkyMap_857_2048_R1.nn_nominal_ZodiCorrected.fits
HFI_SkyMap_100_2048_R1.nn_survey_1_ZodiCorrected.fits
HFI_SkyMap_143_2048_R1.nn_survey_1_ZodiCorrected.fits
HFI_SkyMap_217_2048_R1.nn_survey_1_ZodiCorrected.fits
HFI_SkyMap_353_2048_R1.nn_survey_1_ZodiCorrected.fits
HFI_SkyMap_545_2048_R1.nn_survey_1_ZodiCorrected.fits
HFI_SkyMap_857_2048_R1.nn_survey_1_ZodiCorrected.fits
HFI_SkyMap_100_2048_R1.nn_survey_2_ZodiCorrected.fits
HFI_SkyMap_143_2048_R1.nn_survey_2_ZodiCorrected.fits
HFI_SkyMap_217_2048_R1.nn_survey_2_ZodiCorrected.fits
HFI_SkyMap_353_2048_R1.nn_survey_2_ZodiCorrected.fits
HFI_SkyMap_545_2048_R1.nn_survey_2_ZodiCorrected.fits
HFI_SkyMap_857_2048_R1.nn_survey_2_ZodiCorrected.fits
### Detector-set maps
NOT FOR 1st RELEASE … put detset table elsewhere ????
Detector-set (detset) maps are built for the full (or nominal) mission using a minimal set of detectors. This concept is applicable to polarization maps, which are built using two PSB pairs at the proper orientations. The HFI polarized channels are designed to provide two detsets (or quads) each, namely:
100–ds1: 100-1a,100-1b,100-4a,100-4b 100–ds2: 100-2a,100-2b,100-3a,100-3b 143–ds1: 143-1a,143-1b,143-3a,143-3b 143–ds2: 143-2a,143-2b,143-4a,143-4b 217–ds1: 217-5a,217-5b,217-7a,217-7b 217–ds2: 217-6a,217-6b,217-8a,217-8b 353–ds1: 353-5a,353-5b,353-3a,353-3b 353–ds2: 353-6a,353-6b,353-4a,353-4b
The LFI Detector-set maps are built using pairs of horns in the same scanning row, namely:
18_23: 18M,18S,23M,23S 19_22: 19M,19S,22M,22S 20_21: 20M,20S,21M,21S 24: 24M,24S 25_26: 25M,25S,26M,26S
### Extensions
FITS file structure
The FITS files for the sky maps contain a simple primary header with no data, and a BINTABLE extension (EXTENSION 1, EXTNAME = 'FREQ-MAP') containing the data. The structure is shown schematically in the figure at right. The primary header has the form
;-----------------------------------------------------------------------------
; EXTENSION 0:
;-----------------------------------------------------------------------------
MRDFITS: Null image, NAXIS=0
SIMPLE = T /Dummy Created by MWRFITS v1.11
BITPIX = 8 /Dummy primary header created by MWRFITS
NAXIS = 0 /No data is associated with this header
EXTEND = T /Extensions may (will!) be present
END
The FREQ-MAP BINTABLE extension contains the data. The table contains 3 columns that contain the signal, variance, and hit-count maps in Healpix format. The number of rows is 50331648 for HFI and 12582912 for LFI, corresponding to the number of pixels in a Healpix map of Nside = 2048 and 1024, respectively (N.B: Npix = 12 Nside^2). The 3 columns are I_STOKES for the intensity (or temperature) signal, II_COV for the variance, and HITS for the hit-count. The exact order of the columns in the figure is indicative only, and the details can be found in the keywords. Keywords also indicate the coordinate system (GALACTIC), the Healpix ordering scheme (NESTED), the units (K_cmb or MJy/sr) of each column, and of course the frequency channel (FREQ). The COMMENT fields give a one-line summary of the product, and some other information useful for traceability within the DPCs. The original filename is also given in the FILENAME keyword as is the md5 checksum for the extension. The BAD_DATA keyword gives the value used by Healpix to indicate pixels for which no signal is present (these will also have a hit-count value of 0).
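For users of the released files, this extension can be read with any standard FITS library. The following sketch (Python with astropy; the file name is only an example) extracts the three columns and the main Healpix keywords:

from astropy.io import fits
import numpy as np

with fits.open("HFI_SkyMap_217_2048_R1.10_nominal_ZodiCorrected.fits") as hdul:
    ext = hdul["FREQ-MAP"]                      # EXTENSION 1
    nside = ext.header["NSIDE"]                 # 2048 for HFI, 1024 for LFI
    ordering = ext.header["ORDERING"]           # 'NESTED'
    signal = np.ravel(ext.data["I_STOKES"])     # map in K_CMB or MJy/sr
    hits = np.ravel(ext.data["HITS"])           # integer hit counts
    variance = np.ravel(ext.data["II_COV"])     # white-noise variance map

Pixels with a hit count of 0 carry the BAD_DATA value and should be masked before any analysis.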
### Keywords
A typical header for the data extension of an intensity only map is:
;-----------------------------------------------------------------------------
; EXTENSION 1: FREQ-MAP
;-----------------------------------------------------------------------------
MRDFITS: Binary table. 3 columns by 1 rows.
XTENSION= 'BINTABLE' /Written by IDL: Thu Jan 31 11:03:21 2013
BITPIX = 8 /
NAXIS = 2 /Binary table
NAXIS1 = 603979776 /Number of bytes per row
NAXIS2 = 1 /Number of rows
PCOUNT = 0 /Random parameter count
GCOUNT = 1 /Group count
TFIELDS = 3 /Number of columns
COMMENT
COMMENT *** End of mandatory fields ***
COMMENT
EXTVER = 1 /Extension version
DATE = '2013-01-31' /Creation date
COMMENT
COMMENT *** Column names ***
COMMENT
TTYPE1 = 'I_STOKES' /
TTYPE2 = 'HITS ' /
TTYPE3 = 'II_COV ' /
COMMENT
COMMENT *** Column formats ***
COMMENT
TFORM1 = '50331648E' /
TFORM2 = '50331648J' /
TFORM3 = '50331648E' /
COMMENT
COMMENT *** Column units ***
COMMENT
TUNIT1 = 'K_CMB ' /
TUNIT2 = ' ' /
TUNIT3 = 'K_CMB^2' /
COMMENT
COMMENT *** Planck params ***
COMMENT
EXTNAME = 'FREQ-MAP' / Extension name
COORSYS = 'GALACTIC' / Coordinate system
ORDERING= 'NESTED ' / Healpix ordering
NSIDE = 2048 / Healpix Nside
FIRSTPIX= 0 / First pixel # (0 based)
LASTPIX = 50331647 / Last pixel # (0 based)
FILENAME= 'HFI_SkyMap_217_2048_R1.10_nominal_ZodiCorrected.fits' / FITS filename
CHECKSUM= '3aIB6Z993aGA3Y99' / HDU checksum created 2013-01-31T10:03:22
FREQ = '217 ' / reference frequency
PROCVER = 'DX9_Delta' / Product version
COMMENT
COMMENT ------------------------------------------------------------------------
COMMENT Full channel sky map: nominal mission, corrected for Zodi & FSL
COMMENT ------------------------------------------------------------------------
COMMENT Link to description in Planck Explanatory Supplement:
COMMENT http://www.sciops.esa.int/wikiSI/planckpla/index.php?title=
COMMENT Frequency_Maps&instance=Planck_PLA_ES
COMMENT ------------------------------------------------------------------------
COMMENT HFI-DMC objects:
COMMENT in-group: MAP_v53_noZodi_2048_GALACTIC_0240_27008/
COMMENT Creation date - object name
COMMENT 13-01-03 18:33 - 217GHz_W_TauDeconv_nominal_I
COMMENT 13-01-03 18:33 - 217GHz_W_TauDeconv_nominal_H
COMMENT 13-01-03 18:33 - 217GHz_W_TauDeconv_nominal_II
COMMENT ------------------------------------------------------------------------
END
The same structure applies to all SkyMap products, independent of whether they are full channel, survey or half-ring maps. The distinction between the types of maps is present in the FITS filename (and in the traceability comment fields).
### Numerical Format
TBW
A comment from E. Keihänen:
Here is a list of things that should go into this section:
Insert a table of maps delivered:
*file name
*PID/OD range
*resolution
*(-polarization included or not)
*sky coverage
*baseline length
*-reference to input toi objects)
Explain the format of the files, what is in what column, in what units.
Information common to all LFI maps:
*LFI maps were constructed with the Madam map-making code (version 3.7.4).
Maps are in Healpix format, in nested pixeling scheme, in K_cmb units, and in galactic coordinate system.
*Unobserved pixels are marked by the special value -1.6375e30.
EK's comment ends ------
Cosmic Microwave background
(Planck) High Frequency Instrument
Operation Day: its definition is driven by geometric visibility, as it runs from the start of a DTCP (satellite Acquisition Of Signal) to the start of the next DTCP. Given the different ground stations and which station tracks the spacecraft for how long, the OD duration varies, but it is basically one per day.
(Planck) Low Frequency Instrument
To be defined / determined
Flexible Image Transfer Specification
random telegraphic signal
To be confirmed
sudden change of the baseline level inside a ring
Attitude History File
[ESA's] Mission Operation Center [Darmstadt, Germany]
Line Of Sight
Star TRacker
Noise Equivalent Temperature
Planck Legacy Archive
Data Management Component, the databases used at the HFI and LFI DPCs
|
{}
|
# Find permutations with constraints without using Permutations
EDIT: To clarify, the bottleneck right now is available RAM, so any answer should keep that in mind (I cannot store all T! lists of length T and filter out those that satisfy the condition a posteriori.)
I want to find all permutations of the elements of Range[0,T-1] that satisfy a condition, but where T may be too large for Permutations to be useable: first generating and storing all permutations simply consumes too much RAM. The condition is always such that cond = {c[1],c[2],...,c[T]} means that the first element of the permutation must be larger than or equal to c[1], the second element must be larger than or equal to c[2] etc. The condition is sorted in increasing order, and we can assume that the condition is not so strict that no permutations survive.
I have managed to implement what I want, but in a very procedural way using a recursive function (the details here are not that important):
recuPerm[level_] :=
If[level == 0,
res[[1]] = Total[avail];
Sow[res],
((res[[level + 1]] = #; avail[[First[#] + 1]] = 0;
recuPerm[level - 1];
avail[[
First[#] + 1]] = #) & /@ ({(allow[[level + 1]].avail)} /.
Plus -> Sequence));
]
and I call it from the wrapper function:
listPerm[T_, cond_] :=
Block[
{a, avail, allow, res = ConstantArray[1, T], rip},
avail = a /@ Range[0, T - 1];
allow =
Table[PadLeft[ConstantArray[1, T - cond[[i]]], T], {i,
T}];
rip = Reap[recuPerm[T - 1]][[2]];
If[rip == {}, {}, rip[[1]]]
]
(The dummy head a is simply there so I can use Total and Dot in order to pick out allowed elements.)
Do you know of an approach that is more functional in nature and/or can better take advantage of the strengths of Mathematica? If it's more memory efficient (or faster) than my (unelegant) attempt then that's of course a bonus!
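For reference, the same backtracking idea can be written compactly outside Mathematica. The Python generator below is only an illustrative sketch (it is not one of the Mathematica implementations discussed here); it fills positions from the most constrained end first and never keeps more than one permutation in memory at a time:

from typing import Iterator, List

def constrained_perms(T: int, cond: List[int]) -> Iterator[List[int]]:
    """Yield permutations p of range(T) with p[i] >= cond[i] for every i."""
    used = [False] * T
    perm = [0] * T

    def fill(pos: int) -> Iterator[List[int]]:
        # Fill from the last (most constrained) position down to 0.
        if pos < 0:
            yield perm.copy()
            return
        for v in range(cond[pos], T):
            if not used[v]:
                used[v] = True
                perm[pos] = v
                yield from fill(pos - 1)
                used[v] = False

    yield from fill(T - 1)

# Example: list(constrained_perms(3, [0, 0, 1])) gives the four permutations
# of {0, 1, 2} whose last element is at least 1.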
• Could you give us an order of magnitude for T ? – A.G. Feb 24 '15 at 22:17
• Well, the number of permutations grows so fast with T right, so I have only dared to try T about 11 or 12. But I am also mainly interested in constrains that are such that, if I subtract the constraint from an allowed permutation, I would get something whose Total has the same order of magnitude as T. Take for instance T = 8 and cond = {0,1,2,2,3,3,4,5}. Contrast this to no constraints, cond = ConstantArray[0,T], then any perm has Total[perm - cond] = T(T-1)/2. – Marius Ladegård Meyer Feb 25 '15 at 6:44
• Thank you djp and rasher for some nice answers! I started from rashers solution and made a compiled versjon which was naturally even a bit faster. – Marius Ladegård Meyer Mar 10 '15 at 8:23
This is pretty functional:
f = Module[{comps, r = Reverse@Range[#2, #1 - 1]},
comps[l1_, l2_] := Join @@ Map[Thread[{Sequence @@ #, Complement[l2, #]}] &, l1];
Reverse /@ Fold[comps, Transpose@{First@r}, Rest@r]] &;
This is about 10-15% faster, very slightly higher memory use (but still far below your current solution):
fz = With[{r = Reverse@Range[#2, #1 - 1]},
{#1, Outer[Complement, {#2}, #1, 1][[1]]}]) &,
Transpose@{r[[1]]}, Rest@r][[All, -1 ;; 1 ;; -1]]] &;
Comparing and including djp's interesting solution:
t = 11;
c = cond = {0, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6};
lp = listPerm[t, c]; // Timing // First
Timing[
possibleElements = Range[cond, Length@cond - 1];
fn[{{last_}}] := {{last}};
fn[{most__List, last_List}] := Table[{fn[{most} /. i -> Sequence[]], i}, {i, last}];
intermediate = fn[possibleElements];
result = intermediate //. {x : {{__Integer} ..}, i_Integer} :> Sequence @@ (Append[#, i] & /@ x);
] // First
fr = f[t, c]; // Timing // First
fzr = fz[t, c]; // Timing // First
(lp /. a[x_] :> x) == result == fr == fzr
(*
8.704856
6.661243
2.839218
2.464816
True
*)
Timings on an old netbook, but ~3X faster. Memory utilization s/b close to optimal: it never grows the intermediate results list to more than the ultimate results list length. Fails gracefully - if restrictions have no results, it returns no permutations (your current code, I'm sure you're aware, goes bonkers ;-} ). This s/b easy to adapt to a staggered restriction range, that is, differing lower and upper bounds for each position, should you so desire.
Neat puzzle, BTW...
Side note: You'll beat these by doing things iteratively... bodging up a function generator that builds such a solution based on parameters was 60% faster than my own fastest on some quick tests...
• This would probably deserve a question in itself, but how would you go about commenting/documenting the code so as to make it better understandable? – A.G. Mar 3 '15 at 16:12
• @a.g.. Something like "(* Given cap and list of restrictions, produces all permutations with restructions *)". In all seriouslness, though, were it my code for my use, I'd do nothing - it's self-evident. If I knew others would be using/reading/trying to understand it, something between a version of the terse comment and a step-by-step explanation. Terse Mathematica is nothing like terse APL... – ciao Mar 3 '15 at 20:45
• @A.G. please ask it. I have struggled with this dozens of times --- my collaborators probably have sought counselling over some of my more opaque code. – djp Mar 4 '15 at 2:09
This is a question of finding a suitable algorithm rather than use of Mathematica. The challenge is to generate only the permutations that will be used, rather than generate all permutations and filter those that satisfy a criterion. Fun problem. Here's my solution:
cond = {0, 0, 1};
possibleElements = Range[cond, Length@cond-1];
(* {{0, 1, 2}, {0, 1, 2}, {1, 2}} *)
So we need to generate all series {a,b,c,...} where each element appears precisely once.
I wrote a recursive function fn to take the last element off the end of the list, and remove it from the preceding elements. That is:
fn[{a,b,c}, {a,b,c}, {b,c}] ->
{
{fn[{a, c}, {a, c}], b},
{fn[{a, b}, {a, b}], c}
}
fn[{most__List, last_List}] :=
Table[{fn[{most} /. i -> Sequence[]], i}, {i, last}]
We also want to terminate the recursion:
fn[{{a}}] -> {{a}}
fn[{{last_}}] := {{last}};
The output of fn looks like this:
intermediate = fn[possibleElements]
(* {{{{{{2}}, 0}, {{{0}}, 2}}, 1}, {{{{{1}}, 0}, {{{0}}, 1}}, 2}} *)
To reduce this, we need to make the transformation, and do so repeatedly.
{ x:{Lists of Integers}, i_Integer } -> {{x1, i}, {x2, i}, ...}
result = intermediate //. {x : {{__Integer} ..}, i_Integer} :>
Sequence @@ (Append[#, i] & /@ x)
All together:
cond = {0, 1, 2, 3, 4, 5, 6, 6, 8, 8, 10, 11, 12, 13, 14};
possibleElements = Range[cond, Length@cond-1]
fn[{{last_}}] := {{last}};
fn[{most__List, last_List}] :=
Table[{fn[{most} /. i -> Sequence[]], i}, {i, last}]
intermediate = fn[possibleElements]
result = intermediate //. {x : {{__Integer} ..}, i_Integer} :>
Sequence @@ (Append[#, i] & /@ x)
It isn't a function and doesn't fail gracefully (thanks @rasher), but you can add those if you like. I think @rasher's algorithm is the same as mine.
• Good use of MMA patterns, but as written returns trash if conditions result in no valid permutations... – ciao Mar 3 '15 at 8:42
• @rasher Thanks. I think your algorithm is much the same? – djp Mar 3 '15 at 9:45
• Yes, there's no magic bullet (a la factorial numbering systems for "normal" perms), so any algo. is going to have a similar tack. How it's done is where memory/performance bennies might be found. – ciao Mar 3 '15 at 20:39
|
{}
|
Timezone: »
Conditional gradient-based method for bilevel optimization with convex lower-level problem
Ruichen Jiang · Nazanin Abolfazli · Aryan Mokhtari · Erfan Yazdandoost Hamedani
In this paper, we study simple bilevel optimization problems, where we minimize a smooth objective function over the optimal solution set of another convex constrained optimization problem. Several iterative methods have been developed for tackling this class of problems. Alas, their convergence guarantees are not satisfactory as they are either asymptotic for the upper-level objective, or the convergence rates are slow and sub-optimal. To address this issue, in this paper, we introduce a conditional gradient-based (CG-based) method to solve the considered problem. The main idea is to locally approximate the solution set of the lower-level problem via a cutting plane, and then run a CG-type update to decrease the upper-level objective. When the upper-level objective is convex, we show that our method requires ${\mathcal{O}}(\max\{1/\epsilon_f,1/\epsilon_g\})$ iterations to find a solution that is $\epsilon_f$-optimal for the upper-level objective and $\epsilon_g$-optimal for the lower-level objective. Moreover, when the upper-level objective is non-convex, our method requires ${\mathcal{O}}(\max\{1/\epsilon_f^2,1/(\epsilon_f\epsilon_g)\})$ iterations to find an $(\epsilon_f,\epsilon_g)$-optimal solution. To the best of our knowledge, our method achieves the best-known iteration complexity for the considered bilevel problem.
|
{}
|
# Using the Method of Integration, Find the Area of the Triangular Region Whose Vertices Are (2, -2), (4, 3) and (1, 2). - Mathematics and Statistics
Using the method of integration, find the area of the triangular region whose vertices are (2, -2), (4, 3) and (1, 2).
#### Solution
Equation of line AB : -
y+2=(2+3)/2(x-2)
⇒ 2y = 5x - 14
Equation of line BC : -
y-3=1/3(x-4)
⇒ 3y = x + 5
Equation of line CA : -
(y - 2) = - 4 (x - 1)
4x + y = 6
∴ ar (ΔABC)
=int_(-2)^3 (2y+14)/5 dy - int_2^3 (3y-5) dy - int_(-2)^2 (6-y)/4 dy
=75/5-5/2-24/4
=(300-120-50)/20=130/20
=13/2
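As a quick cross-check (not part of the original solution), the same area follows directly from the shoelace formula applied to the given vertices:
Area = (1/2)|x_A(y_B - y_C) + x_B(y_C - y_A) + x_C(y_A - y_B)|
= (1/2)|2(3 - 2) + 4(2 - (-2)) + 1(-2 - 3)|
= (1/2)|2 + 16 - 5| = 13/2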
Concept: Area of the Region Bounded by a Curve and a Line
Is there an error in this question or solution?
|
{}
|
# Re: [code] [textadept] open file and latex lexer
From: Mitchell <m.att.foicica.com>
Date: Fri, 3 Jan 2014 09:31:48 -0500 (Eastern Standard Time)
Hi Oliver,
On Thu, 2 Jan 2014, Olivier Guibé wrote:
> I use very often LaTeX, so I tried compilation in Textadept
> menu. Editing a standalone LaTeX file (with error), compiling
> and clicking on the error in the message's buffer (or c+a+e)
> opens a new buffer for my LaTeX file.
> It seems that textadept considers that ~/filename and
> ~/./filename are different and then opens the file twice
> (it is possible to open another with ~/Documents/../filename).
> I tested this in a terminal. I should add that my system
> is Debian Sid 64-bit, and that Textadept 7.1 and 7.2alpha
> show the same behavior.
A variation of this issue came up recently. Textadept cannot handle
relative paths very well because the question "relative to what?" comes
up. However, in your examples, you illustrate something that can be
improved: Textadept can do better to handle './' and '../' sequences. I
will work on this. Thanks for the report.
> As far as the LaTeX lexer is concerned, \begin {environment}
> is not recognized due to the space between "\begin" and "{environment}".
> It is not recommended but LaTeX compilation is OK.
> Please find a modified version of latex.lua lexer which takes
> into account this possibility.
> Another remark is that folding sub-math environment like
> \begin{split}\end{split} is possible while folding math environment
> is not set. It is not very important.
> I tried also to "recognize" $...$ group but my lua and programming
> knowledge is not sufficient : it needs more than the very simple use of
> lpeg.{P,S,R}
Thanks for your contribution! I will commit this.
Cheers,
Mitchell
--
You are subscribed to code.att.foicica.com.
To change subscription settings, send an e-mail to code+help.att.foicica.com.
To unsubscribe, send an e-mail to code+unsubscribe.att.foicica.com.
Received on Fri 03 Jan 2014 - 09:31:48 EST
This archive was generated by hypermail 2.2.0 : Sat 04 Jan 2014 - 06:42:48 EST
|
{}
|
# \Verb{} in fvextra: Highlighting \Verb{} text to be able to break at the end of line
Searching through various packages' documentation and this site, I have found how to write inline verbatim text that is (when necessary) broken at the end of a line. For this feature, I am using the \Verb{} command provided by the package fvextra. This works perfectly well, but I would also like to highlight the verbatim text while still preserving the automatic line-breaking.
With text that doesn't require too much escaping, I am effectively using the command \texttt{}, which can be highlighted by the command \hl{} provided by the soul package.
Can the same effect also be achieved for text set with the \Verb{} command?
MWE:
\documentclass [a4paper, 12pt, twoside, openright] {scrbook}
\usepackage{fontspec}
\usepackage [left=2.5cm, right=2cm, bottom=3cm, headheight=15.3pt] {geometry}
\usepackage[dvipsnames,x11names,svgnames,table]{xcolor}
\usepackage{fvextra}
\fvinlineset{breaklines,%
breakafter=\space ,
breakanywhere
}
\usepackage[htt]{hyphenat}
\usepackage{soul}
\usepackage{soulutf8}
\sethlcolor{Snow2}
\begin{document}
Test of \Verb{\Verb{}} command working at the end of line \Verb{text that should go on as long as it is forced to be broken at the end of line}.
Test of \Verb{\texttt{}} command working at the end of line, that is also highlighted \hl{\texttt{text that should go on as long as it is forced to be broken at the end of line}}.
There is some text to make sure that the argument of the command \Verb{\Verb{}} will be broken \hl{\Verb{Text in Verb}}, but the highlighting with soul doesn't work.
\end{document}
|
{}
|
## How closely can I copy a game without getting in trouble? [duplicate]
• How closely can a game legally resemble another? 11 answers
I’m making a clone game of Zelda, my favorite franchise, and am wondering if what I’m doing will still earn me a cease order.
Obviously I'm not using any of the names from the original series, and my game allows character creation with clothing that will let you look slightly like Link if you unlock it, with the skin names being something like "Woodland warrior shirt/hat/boots".
I’m also copying the UI for links awakening pretty closely and the way dungeons look is about the same.
However, I’m making all my own textures/assets from scratch.
Will I be allowed to release this game, with the title “Legend of Dungeons …”?
Main concerns are: similar UI, Font, Some skins resemble characters slightly, and textures, even though they’re all made by my hand?
## Trouble implementing outflow boundary condition when trying to solve a pde using NDSolve
Trying to solve the following pde: $$\partial_{t}y + c\,\partial_{x}y = 0$$ (for simplicity $$c=1$$).
For the initial data I am using a Gaussian. The problem arises when I am trying to implement the outflow boundary condition as it was suggested to me, namely $$\frac{\partial{y}}{\partial{t}} =0$$ at $$x =0$$.
So far my code is pretty simple:
v = 1; L = 2;
With[{y = y[t, x]},
  eq = D[y, t] + v D[y, x] == 0;
  ic = y == Exp[-x^2] /. t -> 0;
  bc = {D[y, x] == 0 /. x -> 0}];
mol[n_Integer, o_: "Pseudospectral"] := {"MethodOfLines",
   "SpatialDiscretization" -> {"TensorProductGrid", "MaxPoints" -> n,
     "MinPoints" -> n, "DifferenceOrder" -> o}};
sol = NDSolveValue[{eq, ic, bc}, y, {t, 0, 1}, {x, 0, L}, Method -> mol[100, 4]];
{t0, tend} = sol["Domain"][[1]];
Manipulate[Plot[sol[t, x], {x, 0, L}, PlotRange -> {-10, 10}], {t, 0, tend}];
It stems from answers to questions previously asked here, and I intend later to test finite difference methods (BTCS/FTCS) as done in Schemes for nonlinear advection equation.
However, I am not able to evolve the equation due to confusion when trying to implement the BC; I get the following:
NDSolveValue: Boundary condition $$y^{(0,1)}[t,0]$$ should have derivatives of order lower than the differential order of the partial differential equation.
This is expected as I am not sure what would be the best way to impose BC on the problem.
If anyone has any suggestions, they would be welcome.
Thanks.
## Having trouble proving a language is NP-complete from specific problem [closed]
I am asked to prove that XS = {⟨S1,…,Sn, k⟩ | S1,…,Sn are finite sets and there exist k of them that pairwise intersect (i.e., are not disjoint)} is NP-complete.
I am trying to prove it by a reduction from 3SAT via Clique; is that the right approach?
How should I prove it?
Thanks, everyone
## Trouble reading FITS with Mathematica 12
I’ve been using Mathematica to process FITS for years, but it has changed in version 12. Import now yields an Association rather than an Image. OK, extract item 1 from the Association, that’s an Image. But it isn’t right.
ImageData[Import["test.fits"][[1]]]
yields a matrix full of tiny numbers like 2.81374*10^-307. This is impossible. The header has:
BITPIX = 32 / array data type
BSCALE = 1
BZERO = 2147483648
This is the conventional declaration of 32 bit unsigned integers. Apparently, these are misinterpreted as floating point.
Is this a bug, or have I missed another change?
## Trouble with or statements and short circuiting
I’ve my code as follows:
class Main {
    public static void main(String[] args) {
        boolean s;
        boolean x = true;
        System.out.println(x || s);
    }
}
I’ve learned that or statements in java short-circuit once the computer finds any value to be true. Here, I’ve declared but not initialized s, but I’ve done both with x. I put an or statement with x at the front, but the computer displays an error, citing that s hasn’t been initialized. Why’s this occurring? Shouldn’t it automatically display a true once it realizes that x is true, and satisfies the or statement? Thanks.
## Trouble while running gem5 in full system mode [on hold]
I am not able to run gem5 in full system mode. I have followed the commands below:
rohit@rohit:~$ mkdir full_system_images
rohit@rohit:~$ cd full_system_images
rohit@rohit:~/full_system_IMAGES$ wget http://www.m5sim.org/dist/current/x86/x86-system.tar.bz2
rohit@rohit:~/full_system_IMAGES$ tar jxf x86-system.tar.bz2
rohit@rohit:~/full_system_IMAGES$ echo "export M5_PATH=/home/rohit/full_system_images" >> ~/.bashrc
rohit@rohit:~/full_system_IMAGES$ gedit ~/.bashrc
rohit@rohit:~/full_system_IMAGES$ echo $M5_PATH
rohit@rohit:~/full_system_IMAGES$ cd
rohit@rohit:~$ cd gem5
## Having trouble understanding the use of a label in Assembly
I am currently having trouble understanding what this label means in Assembly, as it has no size directive attached to it. In the following program, which declares several variables at stack offsets, the label in question is named SCMP_VARSIZE. I have seen many other labels with a VARSIZE postfix attached to them and can't understand why it is used in programs.
/Stack Usage:
        OFFSET 0
SCMP_RETVAL   DS.B 1    ; Return value
SCMP_VARSIZE
SCMP_PRY      DS.W 1    ; Preserve Register Y
SCMP_PRX      DS.W 1    ; Preserve Register X
SCMP_RA       DS.W 1    ; return address
SCMP_STR1     DS.W 1    ; address of first string
SCMP_STR2     DS.W 1    ; address of first string

strcmp: pshx            ; preserve registers
        pshy
        leas  -SCMP_VARSIZE,sp
        clr   SCMP_RETVAL,sp
        ...
The program compares two strings but that is not important here. I just don’t understand what the VARSIZE label is used for in assembly programs.
## Having trouble with getting a image to a fit on a face of a cube
I have a little problem with a Unity project that I'm working on. I'm making a 3D art gallery which will later be developed into a VR application. In the 3D scene I created a paint canvas as a game object (a cube), but I am having trouble getting the images to fit across the whole canvas. When I try to apply an image as a material, it is treated as a texture and only a small proportion of the image is visible on the canvas.
I am also planning to make a UI so that customers can upload pictures and the application automatically gets the images and renders the gallery. So I'm kind of stuck on the image-to-object part.
## notebooks couple, double trouble
I'd really like a solution for my problem. I have two notebooks: one is an Asus f540sa-xx220T and the other is a Compaq 15-s004nl. The Compaq was working very well but doesn't start anymore because of a recent hard drive problem. The Asus starts but works so slowly that you can't do anything with it anymore. The thing is, I don't want to throw them away; I really want to save at least one. So the idea was to open them and maybe exchange the internal parts, making a kind of super notebook 2.0: taking the hard drive to replace the broken one and maybe upgrading the RAM, you know. The problem is the parts don't really seem compatible. For example, I didn't find a place for the RAM and so on in the other computer, and everything is so different. Is my idea even possible? It's really different from working on a desktop PC. If an upgrade is not possible, then what could I do? The notebooks have been open on the table for a day already. Some advice? P.S. I'd prefer not to spend money if possible =))
|
{}
|
Efficient Turbulent Compressible Convection in the Deep Stellar Atmosphere
# Efficient Turbulent Compressible Convection in the Deep Stellar Atmosphere
## Abstract
This paper reports an application of the gas-kinetic BGK scheme to the computation of turbulent compressible convection in the stellar interior. After incorporating the Sub-grid Scale (SGS) turbulence model into the BGK scheme, we tested the effects of numerical parameters on the quantitative relationships among the thermodynamic variables, their fluctuations and correlations in a very deep, initially gravity-stratified stellar atmosphere. The comparison indicates that the thermal properties and the dynamic properties are dominated by different aspects of the numerical models. An adjustable Deardorff constant in the SGS model and an amplitude of artificial viscosity in the gas-kinetic BGK scheme are appropriate for the current study. We also calculated the density-weighted auto- and cross-correlation functions in Xiong's ([1977]) turbulent stellar convection theory, based on which the gradient-type models of the non-local transport and the anisotropy of the turbulence are studied in a preliminary way. No universal relations or constant parameters were found for these models.
convection — hydrodynamic — turbulence — method: numerical — stars: atmosphere
\volnopage
Vol.0 (200x) No.0, 000–000
## 1 Introduction
Turbulent convection is closely related to the unsolved problems in the theory of stellar structure and evolution, especially for massive stars (Deng et al. [1996a]; [1996b] and references therein). These problems cannot be settled by purely analytical means. With the development of computing science, numerical simulation has become a powerful tool for investigating the hydrodynamic properties of astrophysical flows. It is widely used in studies of cluster formation, accretion disks and the evolution of galaxies. Convection in the stellar interior has also been studied by many authors with numerical experiments. Owing to the difficulties of this problem, the progress made in this field is limited. However, it is generally believed that numerical testing of analytical models and local high-resolution simulations can contribute significantly to our understanding of stellar convection. So far, the numerical hydrodynamic schemes applied to stellar convection include the Lax-Wendroff scheme (Graham [1975], etc.), the alternating direction implicit method on staggered mesh (ADISM) (Chan et al. [1982]; [1986], etc.), the pseudo-spectral scheme (Hossain & Mullan [1990]; [1991], etc.), the piecewise-parabolic method (PPM) (Porter & Woodward [1994]; [2000], etc.), upwind schemes and so on. At present, the most suitable numerical scheme for turbulent flow is the spectral method, but it cannot handle discontinuities in the motion of fluids.
The gas-kinetic BGK scheme is a recently matured method for computational fluid dynamics and has attracted much attention in practical problems (Xu [2001]). It is accurate and robust for computing supersonic unsteady flows. However, its application to astrophysical flows is not yet widespread, owing to its complexity. Theoretical analysis shows that near the surface of the stellar envelope the motion of the fluid becomes supersonic, which cannot be self-consistently treated by the traditional mixing-length theory (MLT) (Deng & Xiong [2001]). Our original aim is to simulate the supersonic turbulent convection in the outer region of yellow giants with the gas-kinetic BGK scheme, which involves efforts in many different directions. We have already extended the BGK scheme to include the gravitational acceleration (Tian et al. [2007]). Before using it to compute the supersonic stellar convection, the turbulence model, the radiation transfer model and realistic input physics must be correctly implemented.
Restricted by the capacity of digital computers, we cannot afford very high resolution numerical experiments for the stellar type of turbulent convection. Large eddy simulation (LES), which calculates the large eddies explicitly while mimicking the sub-grid eddies with models, may be the most feasible approach at the current stage. In the current paper we implement the SGS turbulence model (Smagorinsky [1963]; Deardorff [1971]) into the BGK scheme and validate the three-dimensional BGK code by calculating the turbulent compressible convection in a deep stellar atmosphere. For very high Reynolds numbers, the behavior of the turbulent flows is greatly affected by the numerical and physical dissipation in the scheme. An investigation of these effects is necessary before the code is applied in practice. By varying the Deardorff number in the SGS model and the artificial viscosity parameter introduced to capture shocks, we constructed three models which are similar to those studied by Chan & Sofia ([1989], hereafter CS89; [1996], hereafter CS96). The empirical relations derived by them were re-examined. A study of the density-weighted auto- and cross-correlation functions, the anisotropy of turbulence and diffusive-type models of the non-local turbulent transports in the turbulent stellar convection theory of Xiong ([1977]) was conducted as well.
In the next section, we give a description of gas-kinetic BGK scheme and mainly focus on the incorporation of SGS model. The computed physical models are formulated in Sect. 3. The numerical results are shown in Sect. 4 where the discussions are also presented. The conclusions are summarized in the last section.
## 2 Gas-kinetic BGK Scheme
General numerical method for hydrodynamic problems is to directly discretize the Navier-Stokes equations,
$$\frac{\partial \rho}{\partial t} = -\nabla\cdot(\rho\vec{v}), \qquad (1)$$
$$\frac{\partial (\rho\vec{v})}{\partial t} = -\nabla\cdot(\rho\vec{v}\vec{v}) - \nabla p + \nabla\cdot\vec{\Sigma} + \rho\vec{g}, \qquad (2)$$
$$\frac{\partial E}{\partial t} = -\nabla\cdot\left[(E+p)\vec{v} - \vec{v}\cdot\vec{\Sigma} + \vec{F}_d\right] + \rho\vec{v}\cdot\vec{g}, \qquad (3)$$
where $\rho$ is the density, $\vec{v}$ is the velocity, $p$ is the pressure, $\vec{g}$ is the gravitational acceleration and $E$ is the summation of internal energy and kinetic energy.
$$\vec{\Sigma} = 2\mu\vec{\sigma} + \varsigma(\nabla\cdot\vec{v})\vec{I}$$
is the viscous stress tensor, where $\vec{\sigma}$ is the strain rate tensor, $\vec{I}$ is the identity tensor, and $\mu$ and $\varsigma$ are the dynamical and bulk viscosity coefficients, respectively. $\vec{F}_d$ is the diffusive type of energy flux.
$$\frac{\partial f}{\partial t} + \vec{c}\cdot\nabla f + \vec{g}\cdot\nabla_{\vec{c}} f = \frac{f^{eq} - f}{\tau}, \qquad (4)$$
which is an approximation of the Boltzmann equation (Bhatnagar et al. [1954]). In the above expression, $f$ is the gas distribution function in the phase space, $\vec{c}$ is the particle velocity, $\tau$ is the collision time, and $\nabla_{\vec{c}}$ denotes the gradient with respect to the particle velocity. The right-hand side of equation (4) is the so-called relaxation model, which is a simplification of the complicated collision term in the Boltzmann equation (Vincent et al. [1965]). It physically means that the initially non-equilibrium distribution will approach the equilibrium state after the particles collide once. A larger $\tau$ corresponds to a state further from $f^{eq}$, i.e., stronger non-equilibrium transport effects, such as viscosity and conduction. In our study, the equilibrium state, $f^{eq}$, in equation (4) is taken to be the Maxwellian distribution. It can be proved mathematically that the solutions of the Navier-Stokes equations (1)-(3) are automatically obtained through solving the BGK equation with the following definitions of dissipative coefficients:
$$\mu = \tau p, \qquad \varsigma = \frac{2}{3}\frac{N}{N+3}\,\tau p, \qquad \kappa = \frac{N+5}{2}\frac{k}{m}\,\tau p, \qquad (5)$$
where $\kappa$ is the thermal conductivity, $N$ is the internal degree of freedom for particles, $m$ is the molecule mass and $k$ is the Boltzmann constant.
The basic idea of the BGK method is to find a local approximation to the non-linear equation (4), then evaluate the macroscopic quantities (e.g., fluxes) using the microscopic distribution function $f$, and finally update the cell-averaged values according to the conservation laws (finite volume method). In our first attempt, the BGK method (Xu [2001]) was extended to include the external force. A detailed description of the three-dimensional gas-kinetic BGK scheme for the Navier-Stokes equations under gravitational fields was given by Tian et al. ([2007]). Here, we only outline the new ingredient, i.e., the incorporation of the turbulence model.
In the study of convection, the combined effects of viscosity, heat conduction and temperature difference on the instability of flows are measured by the Rayleigh number, where $\alpha$ is the thermal expansion coefficient and $\nu$ is the kinematic viscosity. For the polytropic gas defined in Sect. 3, the Rayleigh number can be written as:
Missing or unrecognized delimiter for \left (6)
where $R$ is the gas constant, $d$ is the depth of the computational domain and $\gamma$ is the ratio of specific heats; the meaning of the other symbols can be found in Sect. 3. In the gas-kinetic BGK scheme, the viscosity is controlled by collisions of particles. We can relate the Rayleigh number to the collision time in the following way. Suppose that during each time-step $\Delta t$ the particles collide $\beta$ times. Then we have
$$\Delta t = \beta\tau = \frac{\delta\,\Delta x}{c_s + v}, \qquad (7)$$
where $\delta$ is the Courant number, $\Delta x$ is the spatial resolution, $v$ is the speed of the fluid and $c_s$ is the sound speed. From equations (5), (6) and (7), we get
$$Ra \approx \frac{N^2\,Pr\,Z^2(1+Ma)^2}{\delta^2}\left[1-(\gamma-1)n\right](n+1)\,\beta^2, \qquad (8)$$
where $Ma$ is the Mach number, $N$ is the number of vertical grid zones and $Pr$ is the Prandtl number. For the current study, we have , , , , and . In efficient turbulent convection, the energy transfer by heat conduction is negligible, which means the Prandtl number is very large. In all of our simulations, the molecular Prandtl number is set to a very large value. Therefore, approximately we have . Similarly, we have the Reynolds number . While in the typical stellar convection, . Hence, is needed when we do a direct numerical simulation (DNS) of stellar convective flows where . These values can be reduced by increasing the number of grid points, which is very expensive for the present generation of hardware. At the same time, a very small would introduce large computational error. An alternative way is the LES, which simulates the large eddies directly and approximates the small eddies with models. There are a lot of approaches to perform the LES; the simplest may be the SGS model (Smagorinsky [1963]; Deardorff [1971]).
In the BGK scheme, the viscosity is introduced through the collisions of particles. The natural way of implementing the SGS model is to modify the collision time. Chen et al. ([2003]) included the renormalization group $k$-$\epsilon$ large eddy model into the BGK equation, where $k$ is the turbulent kinetic energy and $\epsilon$ is the turbulent dissipation. In that model, two additional equations need to be solved. For the sake of simplicity, we consider the Smagorinsky ([1963]) model. In the current study, the collision time is defined as
$$\tau_{tot} = \tau + C_2\frac{|p_l - p_r|}{|p_l + p_r|}\Delta t + \frac{\mu_{sgs}}{p}, \qquad (9)$$
where the first term on the right-hand side represents the molecular viscosity and the second term is introduced to increase the numerical dissipation when there is a jump in pressure around the control volume boundary. $C_2$ is an adjustable constant, and $p_l$ and $p_r$ are the reconstructed pressures at the left and right side of a cell interface (see sketch in Fig. 1). In the strongly supersonic region, the additional dissipation caused by this term is essential to stabilize the computation.
The last term in right-hand side of equation (9) is implemented to account for the SGS viscosity,
$$\mu_{sgs} = \rho\,(c_\mu\Delta)^2\left(2\vec{\sigma}:\vec{\sigma}\right)^{1/2}, \qquad (10)$$
where $c_\mu$ is an adjustable constant, usually with a value of 0.1 to 0.2, the filter width $\Delta$ is taken to be the local resolution, and the colon stands for the contraction of tensors. The above model is called the Smagorinsky model or sometimes the Smagorinsky-Lilly model. In our calculations, the eddy viscosity is computed in the control volume by a staggered mesh strategy and then interpolated to the cell interface. In the BGK scheme, there is intrinsic diffusion caused by particle collisions. In the current study, we are only interested in the turbulent properties, so the molecular Prandtl number is set very large, and the following diffusive flux (CS96) is implemented explicitly into the BGK scheme,
$$\vec{F}_d = -C_T\nabla T - C_S\nabla S, \qquad (11)$$
where $S$ is the specific entropy, $c_p$ is the specific heat at constant pressure and $\nabla_{ad}$ is the adiabatic gradient. In the stable layer, $C_T$ is set to make the diffusion carry out the input energy flux; in the convection zone, $C_T$ is very close to zero. $C_S$ represents the turbulent diffusion and is set to zero in the stable region; it is specified through the effective Prandtl number of the SGS turbulence.
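To make the SGS term concrete, the sketch below (Python/NumPy) is an illustration of equation (10) evaluated on a uniform grid; it is not the actual code, in which the strain rate is computed per control volume on a staggered mesh, and the value c_mu = 0.15 is only an example within the 0.1 to 0.2 range quoted above:

import numpy as np

def smagorinsky_viscosity(rho, vx, vy, vz, dx, c_mu=0.15):
    """Eddy viscosity mu_sgs = rho (c_mu*Delta)^2 (2 sigma:sigma)^{1/2}, eq. (10)."""
    # Velocity gradients on a uniform grid of spacing dx; grads[i][j] = dv_i/dx_j.
    grads = [np.gradient(v, dx) for v in (vx, vy, vz)]
    # Accumulate sigma:sigma with sigma_ij = (dv_i/dx_j + dv_j/dx_i) / 2.
    sigma2 = np.zeros_like(vx)
    for i in range(3):
        for j in range(3):
            s_ij = 0.5 * (grads[i][j] + grads[j][i])
            sigma2 += s_ij * s_ij
    return rho * (c_mu * dx) ** 2 * np.sqrt(2.0 * sigma2)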
## 3 Physical Models
Our physical problem is very similar to those studied by CS89 and CS96. An ideal gas () in a rectangular box is considered with gravity in the vertical direction. The side boundaries are periodic and the top and bottom boundaries are impenetrable and stress free. In order to avoid boundary effects, a very thin stable layer is placed below the upper boundary and the diffusive flux is gradually enhanced near the lower boundary to make it carry out total flux at the lower boundary. A constant energy flux is fed at the bottom. At the top, the entropy is fixed. The system is initially static:
$$T = \left(1 + Z(d-z)/d\right)T_t, \qquad (12)$$
$$\rho = (T/T_t)^{n}\,\rho_t, \qquad (13)$$
$$p = (T/T_t)^{n+1}\,p_t, \qquad (14)$$
$$g = (n+1)\,p_t Z/(\rho_t d), \qquad (15)$$
where $Z$ is the normalized temperature contrast parameter, and $n$ is the polytropic gas index. The gravitational acceleration follows from the requirement of hydrostatic equilibrium, $dp/dz = -\rho g$. The subscripts $t$ and $b$ denote top and bottom values, respectively. The above solutions to the Navier-Stokes equations are not stable against small perturbations. In all of our calculations, the velocity field is slightly perturbed initially. After a long thermodynamic relaxation, the system reaches a statistical steady state. We defined a series of runs to test the effects of the numerical parameters. The numerical effects of a variety of parameters were studied by Chan & Sofia ([1986], hereafter CS86) and CS89 in detail. The effects of changing the turbulent Prandtl number were tested by Singh & Chan ([1993]). Here, we just focus on the important parameter $c_\mu$ in the SGS model and the new parameter $C_2$ appearing in the BGK scheme. The details are given in the second line of Table 1.
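A minimal sketch of this initial setup (Python/NumPy; the parameter values passed as defaults are only illustrative, since the actual values used in the runs are not all reproduced in the text above) builds the static stratification of equations (12)-(15) on a vertical grid:

import numpy as np

def polytrope(nz=100, d=1.0, Z=10.0, n=1.5, Tt=1.0, rhot=1.0, pt=1.0):
    """Static polytropic stratification of eqs. (12)-(15); z runs from 0 (bottom) to d (top)."""
    z = np.linspace(0.0, d, nz)
    T = (1.0 + Z * (d - z) / d) * Tt          # eq. (12)
    rho = (T / Tt) ** n * rhot                # eq. (13)
    p = (T / Tt) ** (n + 1) * pt              # eq. (14)
    g = (n + 1) * pt * Z / (rhot * d)         # eq. (15), constant gravity
    return z, T, rho, p, g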
All the cases we computed use a mesh. The vertical grid decreases smoothly with height (about grids per PSH) and the horizontal grid is uniform. The aspect ratio (width/depth) of the box is .
## 4 Results
In this part, we show the results from the numerical simulations. All the runs were evolved for a large number of numerical time-steps before the statistical analysis was performed. The statistical steady state is indicated by the balance between the input energy flux from the bottom and the outgoing energy flux through the top. In our calculations, the spatial variation of the averaged total energy flux is small (see the solid line in Fig. 2). Another criterion is the averaged vertical mass flux, which is very small everywhere in all cases. Hence, the system will not undergo substantial adjustment any more. The statistical average covers a further stretch of numerical time-steps.
Except for the instantaneous velocity fields, all the other quantities investigated here are mean values. For an arbitrary quantity $q$, $\langle q\rangle$ represents its combined horizontal and temporal mean, $q'$ denotes the deviation from $\langle q\rangle$, and $q''$ stands for the root-mean-square (rms) fluctuation about $\langle q\rangle$.
In some figures, the integral pressure scale height (PSH) is shown by the vertical lines. For example, in Fig. 2, the vertical solid line denotes the location of the stable-unstable interface near the upper boundary. The second dashed line on the left side of the solid line is 2 PSHs away from the upper stable-unstable interface. Although the numerical parameters differ from case A to case C, their relaxed thermal structures are nearly equivalent and the discrepancy between the integral PSH locations is very small, so we can plot the results from all runs in the same figures.
### 4.1 Velocity Fields
In CS86, the three-dimensional turbulent flow structure was well depicted by pseudo-streamlines. For the sake of clarity, Fig. 3 shows the velocity fields projected onto the x-z plane and the x-y plane. From the left panel of Fig. 3, it is evident that high-speed motions exist in the top region and are associated with the downward streams. In our calculations, the maximum Mach number occasionally exceeds one. In CS96, the SGS viscosity was enhanced by a factor to suppress the shocks occurring in the top region, which would easily trigger instability of the numerical computation, especially during the early thermal relaxation. Based on flux-splitting (see Fig. 1), such an implementation can be avoided in the gas-kinetic BGK scheme. If the nonlinear van Leer limiter is replaced by central interpolation, the supersonic motion cannot be handled correctly. Numerical tests also show that too small a Rayleigh number smooths out the turbulence and the circulations become laminar, while the solutions are almost unchanged when it is larger. In the current study, we adopt a sufficiently large value. The networks of downward streams can be seen in the right panel of Fig. 3.
### 4.2 Approximate Relations
Our BGK code has been extensively tested for laminar flows (Tian et al. [2007]). In order to validate the incorporation of the SGS model, we re-estimate quantitatively some approximate relations among the thermodynamic variables and their fluctuations. The results are given in Table 1, where the results from CS89 are also listed for comparison. Our computational models are partially similar to CS89 and partially to CS96: models of CS89 undergo substantial adjustment near the boundaries, and we cannot afford the high resolution desired in CS96. During the data analysis, the correlation function of a pair of quantities is defined in the usual way as their covariance normalized by the rms fluctuations. The standard deviations of these approximations are given in the brackets. The deviation from CS89 is calculated as a relative difference. We only concentrate on the middle region of the convection zone, namely, 1 PSH from the bottom and 2 PSHs from the upper stable-unstable interface. The investigated layer spans about 3 PSHs.
The approximate relations and correlations in Table 1 are classified roughly into four categories, indicated by Roman numerals in the last column, according to the goodness of fit and the discrepancy from CS89. The categories are defined by progressively looser thresholds on the fitting quality and on the percentage deviation from CS89, from Category I (best) to Category IV (worst).
Category I contains the best relations, in which mainly the thermal variables and their fluctuations are involved. These relations should be weakly affected by the numerical scheme and the details of the computational models. In the work of Kim et al. ([1995]), where a realistic equation of state (EOS) was used, these relations are different from those of the current study. Therefore, Category I may be dominantly determined by the EOS.
Most relations of Category II are functions of the pressure, the velocity and their fluctuations. The amplitude of the fluctuations of the velocity components in our computation is lower than in CS89 (Fig. 1 therein) and CS96 (Fig. 2 therein), while the amplitudes of the relative fluctuations of the thermal quantities are around two times larger than those in CS89 (see Fig. 2 therein) and in Singh's work ([1993]). Hence, when the fluctuations of thermal variables are expressed in terms of the velocity fluctuations, the coefficients are nearly doubled. This can also be seen in Fig. 3a of CS96.
The ratios in Category III do not deviate from CS89 very much, but the fitting is not accurate. This may be caused by the effects of the boundary and transition layers. It is obvious from Fig. 4 that the corresponding height distribution is not as flat as in CS89. There is a small hump below the upper transition layer (one PSH from the unstable-stable interface), which can also be found in CS96. We believe that the unstable-stable transition is responsible for this. The large differences in Category IV should be caused by the high-order powers of the velocity fluctuations.
Several of these relations are the ones commonly approximated in MLT. In the current study, these relations are fitted very well by linear approximations. However, the slopes are different from those of CS86 and CS96. One case can be explained as above. For the other, the reason may lie in the fact that, in the upper convective region, the corresponding amplitude in CS96 is larger than ours and therefore we need a smaller coefficient. CS96 obtained a smaller value (0.78) than CS89; in the current study it is even smaller, which implies a smaller super-adiabatic gradient.
The cause of most of the discrepancies between the current study and CS89 may lie in the different relative sizes of the enthalpy and kinetic fluxes. In the traditional theory of stellar convection, the enthalpy flux is proportional to the product of the fluctuations of temperature and velocity, and the kinetic flux is totally neglected. When the kinetic flux is comparable to the total flux, this kind of proportional relation no longer holds exactly. Based on the following facts:
1. , , (Singh [1993], Fig. 1 and Fig. 5);
2. , , (CS89, Fig. 1 and Fig. 2);
3. , , (CS96, Fig. 2);
4. , , (current, Fig. 4);
5. , , (Kim [1995], Fig. 7 and Fig. 8),
we can conclude that the enthalpy and kinetic fluxes are related to the velocity fluctuations in a nonlinear way. Note that the current study uses a different numerical scheme and Kim adopted a realistic EOS. It is beyond the scope of the current study to perform a quantitative analysis of such relations.
The above comparison shows that the dynamical and thermal properties of stellar-type convection may be affected separately by different aspects of the numerical models. The thermal structure is mainly determined by physical parameters, while the dynamic motions can be easily affected by numerical parameters.
### 4.3 Anisotropic Turbulence
Xiong's non-local time-dependent stellar convection theory is based on the Reynolds stress method. It is a dynamic theory of the auto- and cross-correlation functions of turbulent velocity and temperature fluctuations. These fluctuations are defined as the deviation from the density-weighted average, i.e.,
$$u'_i = v_i - \frac{\langle\rho v_i\rangle}{\langle\rho\rangle}, \qquad \tilde{T}' = T - \frac{\langle\rho T\rangle}{\langle\rho\rangle}. \qquad (16)$$
The starting point of Xiong's theory is a set of partial differential equations for
$$\chi^2 = \langle w'_i w'_i\rangle/3, \qquad Z = \langle\tilde{T}'^2\rangle/\langle\tilde{T}\rangle^2, \qquad V = \langle\tilde{T}' w'_i\rangle/\langle\tilde{T}\rangle, \qquad (17)$$
where the summation convention is adopted. The numerical results of these quantities are given in Fig. 5. In the closure models of Xiong's theory, three adjustable parameters are introduced, one of which is used to describe the anisotropic turbulent motions. In Deng's ([2006]) work, this anisotropy parameter was related to the turbulent velocity in the fully unstable zone; in the upper overshooting region, they proposed that it is independent of the turbulent velocity, while in the lower overshooting zone it decreases as the turbulent velocity decreases. There is not enough room for overshooting in our models. The relevant anisotropy ratio from the numerical simulations is given in Fig. 6, from which we can see that this ratio depends only slightly on $c_\mu$ and $C_2$. Near the upper efficient-inefficient convection interface (about 1 PSH from the unstable-stable interface), the ratio takes a characteristic value. It reaches its maximum at the location where the turbulent convection starts to become inefficient near the bottom, and this maximum is affected evidently by $c_\mu$ and $C_2$. The current study cannot give a definite solution for the anisotropic turbulence. Here, we present preliminary suggestions. If the relation of Deng et al. is assumed to hold in the current models, we can conclude that the anisotropy parameter is infinitely large at the upper efficient-inefficient interface and decreases as the distance from the boundaries increases, approaching a finite minimum in the deep convective region.
### 4.4 Non-local Transport Models
Generally, a Reynolds stress method suffers from the so-called closure problem. CS96 numerically studied the popular closures and found them to be poor. Some third-order moments representing the non-local transport effects were approximated by gradient models in Xiong's work ([1989]), i.e.,
$$\mathrm{NLT1} = \langle u'_k w'_i w'_i\rangle = -\chi\, l_1\, \nabla_k \langle w'_i w'_i\rangle, \qquad (18)$$
$$\mathrm{NLT2} = \langle u'_k \tilde{T}'^2\rangle/\langle\tilde{T}\rangle^2 = -\chi\, l_3\, \nabla_k\bigl(\langle\tilde{T}'^2\rangle/\langle\tilde{T}\rangle^2\bigr), \qquad (19)$$
$$\mathrm{NLT3} = \langle u'_k w'_i \tilde{T}'\rangle/\langle\tilde{T}\rangle = -\chi\, l_5\, \nabla_k\bigl(\langle w'_i \tilde{T}'\rangle/\langle\tilde{T}\rangle\bigr), \qquad (20)$$
with coefficients $l_1$, $l_3$ and $l_5$ of the order of the Lagrangian integral length scale of the turbulence. The non-local transports and coefficients from the numerical simulations for all cases are shown in Fig. 7. From panel (a) of Fig. 7, we can see that the three non-local transports differ from one another by one to three orders of magnitude. Hence, the non-local transports are dominated by the turbulent kinetic energy. Panels (b)-(d) of Fig. 7 show that the coefficients gradually increase with the distance from the top boundary and change this trend near the interface where the turbulent convection becomes inefficient. The variation of the coefficient for the kinetic-energy transport, $l_1$, is slow in the lower half of the convection zone, staying around the value of 1.8, but it is noticeably affected by $c_\mu$ and $C_2$. The other two coefficients are much larger and vary rapidly, which suggests that there may not exist universal constants for these closure models.
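To illustrate how a gradient coefficient such as $l_1$ in equation (18) can be extracted from the horizontally averaged profiles, a minimal sketch is given below; the profile arrays, the vertical grid and the finite-difference choice are assumptions for illustration, not the authors' analysis code.

```c
/* Sketch: estimate the gradient-model coefficient l1 of eq. (18),
 *   <u'_z w'_i w'_i> = -chi * l1 * d<w'_i w'_i>/dz,
 * from horizontally averaged vertical profiles.
 * flux[k] = <u'_z w'_i w'_i>, ke[k] = <w'_i w'_i>, z[k] = cell height,
 * and chi = sqrt(ke/3) as in eq. (17).  Illustrative only. */
#include <math.h>

void estimate_l1(const double *z, const double *flux, const double *ke,
                 int nz, double *l1)
{
    for (int k = 1; k < nz - 1; ++k) {
        double dkedz = (ke[k+1] - ke[k-1]) / (z[k+1] - z[k-1]); /* central difference */
        double chi   = sqrt(ke[k] / 3.0);
        double denom = chi * dkedz;
        l1[k] = (denom != 0.0) ? -flux[k] / denom : 0.0;
    }
    l1[0]    = l1[1];        /* copy boundary values */
    l1[nz-1] = l1[nz-2];
}
```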
### 4.5 Effects of Numerical Parameters: cμ and C2
In the current LES, the local effective grid Reynolds number is adjusted by the SGS model, whose amplitude is controlled by the Deardorff constant $c_\mu$. An inadequately small $c_\mu$ would cause a build-up of kinetic energy at the two-grid level and make the computation crash. If $c_\mu$ is too large, the turbulent motions will be damped down. The proper value of $c_\mu$ is resolution- and method-dependent, since the numerical dissipation also plays an important role in the behavior of the turbulence. Deardorff ([1971]) suggested a value that was used in CS89; in CS96, this value was increased.
In the flux-splitting method, a discontinuity is introduced at the cell interface (see Fig. 1) by the limiters. Additional dissipation (the second term on the right-hand side of equation (9)) is employed to handle the strong shock waves near sharp jumps in pressure. The typical value of $C_2$ is 1. In the smooth region this kind of discontinuity should be very small. However, our simulations have lower resolution and the computing zone extends over about seven PSHs, so it is necessary to check whether these values are adequate for studying the stellar type of convection.
The relevant relations in Table 1 show that the effects of changing these parameters on the thermal structure are very slight. They both affect the eddy properties with very small amplitude, which can be seen from Fig. 8, where the auto-correlations of the vertical velocity are shown. Their profiles are nearly symmetrical except in the upper stable and transition zones. The half width at half maximum (HWHM) of these profiles should be sensitive to the viscosity (CS86). Two of the entries in Table 1 are the HWHMs measured along the two horizontal directions; they should be nearly equal to each other for isotropic turbulent flows. In our study, the discrepancy between them is clear for case A, becomes slight for case B and still slighter for case C. This kind of anisotropy comes from the initial perturbation and can also be found in Fig. 7 and Fig. 11 of CS86. So it seems that the larger value of $c_\mu$ is more suitable for the current study. In deep stellar convection the shock waves are mild; hence the flux-splitting may be enough to handle them, as in our tests. We suggest that it is better to make $C_2$ as small as possible, because it enhances the anisotropy, which can be diminished by a larger $c_\mu$.
## 5 Conclusions
In this paper, we present a preliminary application of the gas-kinetic BGK scheme to the simulation of turbulent convection in a stellar atmosphere. The approximate relations among the thermodynamic variables, their fluctuations and correlations were examined. The anisotropy and the diffusive models of non-local transport were investigated as well. The effects of varying the numerical parameters were also tested. The main conclusions are summarized as follows:
1. The behavior of the thermal variables and the dynamic variables are affected by different aspects of the models and the numerical scheme. For example, the fluctuations of density and pressure are dominantly determined by the physical models, while the fluctuations of velocity depend sensitively on numerical parameters such as $c_\mu$ and $C_2$.
2. There is no constant anisotropy ratio for the turbulence in the current models. We suggest that it takes an infinite value at the boundary and approaches its minimum in the deep convective region.
3. The diffusive models for non-local transport are not applicable, since the coefficients for different quantities are dramatically different. The best situation is the turbulent transport of turbulent kinetic energy, for which a roughly flat region exists for the corresponding coefficient $l_1$.
4. For the current resolutions, the larger $c_\mu$ is better than the smaller one, and $C_2$ should be set as small as possible in any case. A flux-splitting technique is needed to stabilize the shock waves near the top. Too small a Rayleigh number may smear out the turbulent motions.
Our simulations may suffer from the low resolution, small aspect ratio and limited number of test cases, but they are enough for the purpose of validation and for obtaining some preliminary results. A further study will be performed in the future.
###### Acknowledgements.
We wish to acknowledge the contributions of K. Xu to the development of the hydrodynamic code. We also thank the Department of Astronomy at Peking University for providing computer time on their SGI Altix 330 system. This work was partially funded by the Chinese National Natural Science Foundation (CNNSF) through 10573022, 10773029 and by the national 973 program through 2007CB815406. KLC thanks Hong Kong RGC for support.
### References
1. Bhatnagar P.L., Gross E.P., Krook M., 1954, Phys. Rev., 94, 511
2. Chan K.L., Wolff C.L., 1982, J. Comput. Phys., 47, 109
3. Chan K.L., Sofia S., 1986, ApJ, 307, 222 (CS86)
4. Chan K.L., Sofia S., 1989, ApJ, 336, 1022 (CS89)
5. Chan K.L., Sofia S., 1996, ApJ, 466, 372 (CS96)
6. Chen H., Kandasamy S., Orszag S., Shock R., Succi S., Yakhot V., 2003, Science, 301, 633
7. Deardorff J.W., 1971, J. Comput. Phys., 7, 120
8. Deng L., Bressan A., Chiosi C., 1996a, A&A, 313, 145
9. Deng L., Bressan A., Chiosi C., 1996b, A&A, 313, 159
10. Deng L., Xiong D.R., 2001, ChJAA, 1, 50
11. Deng L., Xiong D.R., Chan K.L., 2006, ApJ, 643, 426
12. Graham E., 1975, J. Fluid Mech., 70, 689
13. Hossain M., Mullan D.J., 1990, ApJ, 354, L33
14. Hossain M., Mullan D.J., 1991, ApJ, 380, 631
15. Kim Y.-C., Fox P.A., Sofia S., Demarque P., 1995, ApJ, 442, 422
16. Porter D.H., Woodward P.R., 1994, ApJS, 93, 309
17. Porter D.H., Woodward P.R., 2000, ApJS, 127, 159
18. Singh H.P., Chan K.L., 1993, A&A, 279, 107
19. Smagorinsky J.S., 1963, Mon. Weather Rev., 91, 99
20. Tian C.L., Xu K., Chan K.L., Deng L.C., 2007, J. Comput. Phys., 226, 2003
21. Xiong D.R., 1977, Acta Astron. Sinica, 18, 86
22. Xiong D.R., 1985, A&A, 150, 133
23. Xiong D.R., 1989, A&A, 209, 126
24. Xu K., 2001, J. Comput. Phys., 171, 289
25. Vincenti W.G., Kruger C.H. Jr., 1965, Introduction to Physical Gas Dynamics, New York, Wiley
|
{}
|
## C Specification
The VkPhysicalDeviceSurfaceInfo2KHR structure is defined as:
// Provided by VK_KHR_get_surface_capabilities2
typedef struct VkPhysicalDeviceSurfaceInfo2KHR {
VkStructureType sType;
const void* pNext;
VkSurfaceKHR surface;
} VkPhysicalDeviceSurfaceInfo2KHR;
## Members
• sType is the type of this structure.
• pNext is NULL or a pointer to a structure extending this structure.
• surface is the surface that will be associated with the swapchain.
## Description
The members of VkPhysicalDeviceSurfaceInfo2KHR correspond to the arguments to vkGetPhysicalDeviceSurfaceCapabilitiesKHR, with sType and pNext added for extensibility.
Additional capabilities of a surface may be available to swapchains created with different full-screen exclusive settings - particularly if exclusive full-screen access is application controlled. These additional capabilities can be queried by adding a VkSurfaceFullScreenExclusiveInfoEXT structure to the pNext chain of this structure when used to query surface properties. Additionally, for Win32 surfaces with application controlled exclusive full-screen access, chaining a VkSurfaceFullScreenExclusiveWin32InfoEXT structure may also report additional surface capabilities. These additional capabilities only apply to swapchains created with the same parameters included in the pNext chain of VkSwapchainCreateInfoKHR.
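As a usage illustration (not part of the specification text), the following sketch fills in this structure and queries the surface capabilities through it; the physicalDevice and surface handles are assumed to have been created elsewhere, the VK_KHR_get_surface_capabilities2 extension is assumed to be enabled, and on some loaders the entry point may need to be fetched with vkGetInstanceProcAddr.

// Illustrative sketch: querying surface capabilities via
// VkPhysicalDeviceSurfaceInfo2KHR (assumptions noted above).
#include <vulkan/vulkan.h>

VkResult query_surface_caps(VkPhysicalDevice physicalDevice,
                            VkSurfaceKHR surface,
                            VkSurfaceCapabilitiesKHR *outCaps)
{
    VkPhysicalDeviceSurfaceInfo2KHR surfaceInfo = {
        .sType   = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SURFACE_INFO_2_KHR,
        .pNext   = NULL,   /* chain e.g. VkSurfaceFullScreenExclusiveInfoEXT here */
        .surface = surface,
    };
    VkSurfaceCapabilities2KHR caps2 = {
        .sType = VK_STRUCTURE_TYPE_SURFACE_CAPABILITIES_2_KHR,
        .pNext = NULL,     /* chain extended capability output structures here */
    };
    VkResult result = vkGetPhysicalDeviceSurfaceCapabilities2KHR(
        physicalDevice, &surfaceInfo, &caps2);
    if (result == VK_SUCCESS)
        *outCaps = caps2.surfaceCapabilities;  /* same data as the non-2 query */
    return result;
}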
Valid Usage
• VUID-VkPhysicalDeviceSurfaceInfo2KHR-pNext-02672
If the pNext chain includes a VkSurfaceFullScreenExclusiveInfoEXT structure with its fullScreenExclusive member set to VK_FULL_SCREEN_EXCLUSIVE_APPLICATION_CONTROLLED_EXT, and surface was created using vkCreateWin32SurfaceKHR, a VkSurfaceFullScreenExclusiveWin32InfoEXT structure must be included in the pNext chain
• VUID-VkPhysicalDeviceSurfaceInfo2KHR-pSurfaceInfo-06526
When passed as the pSurfaceInfo parameter of vkGetPhysicalDeviceSurfaceCapabilities2KHR, if the VK_GOOGLE_surfaceless_query extension is enabled and the pNext chain of the pSurfaceCapabilities parameter includes VkSurfaceProtectedCapabilitiesKHR, then surface can be VK_NULL_HANDLE. Otherwise, surface must be a valid VkSurfaceKHR handle
• VUID-VkPhysicalDeviceSurfaceInfo2KHR-pSurfaceInfo-06527
When passed as the pSurfaceInfo parameter of vkGetPhysicalDeviceSurfaceFormats2KHR, if the VK_GOOGLE_surfaceless_query extension is enabled, then surface can be VK_NULL_HANDLE. Otherwise, surface must be a valid VkSurfaceKHR handle
• VUID-VkPhysicalDeviceSurfaceInfo2KHR-pSurfaceInfo-06528
When passed as the pSurfaceInfo parameter of vkGetPhysicalDeviceSurfacePresentModes2EXT, if the VK_GOOGLE_surfaceless_query extension is enabled, then surface can be VK_NULL_HANDLE. Otherwise, surface must be a valid VkSurfaceKHR handle
Valid Usage (Implicit)
• VUID-VkPhysicalDeviceSurfaceInfo2KHR-sType-sType
sType must be VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SURFACE_INFO_2_KHR
• VUID-VkPhysicalDeviceSurfaceInfo2KHR-pNext-pNext
Each pNext member of any structure (including this one) in the pNext chain must be either NULL or a pointer to a valid instance of VkSurfaceFullScreenExclusiveInfoEXT or VkSurfaceFullScreenExclusiveWin32InfoEXT
• VUID-VkPhysicalDeviceSurfaceInfo2KHR-sType-unique
The sType value of each struct in the pNext chain must be unique
• VUID-VkPhysicalDeviceSurfaceInfo2KHR-surface-parameter
If surface is not VK_NULL_HANDLE, surface must be a valid VkSurfaceKHR handle
|
{}
|
# Full one-loop electro-weak corrections to three-jet observables at the Z pole and beyond
@article{Calame2009FullOE,
title={Full one-loop electro-weak corrections to three-jet observables at the Z pole and beyond},
author={C. Calame and S. Moretti and F. Piccinini and D. Ross},
journal={Journal of High Energy Physics},
year={2009},
volume={2009},
pages={047-047}
}
We describe the impact of the full one-loop EW terms of O(alpha_s alpha_EM^3) entering the electron-positron annihilation into three-jet cross-section from \sqrt{s}=M_Z to TeV scale energies. We include both factorisable and non-factorisable virtual corrections and photon bremsstrahlung, but not the real emission of W and Z bosons. Their importance for the measurement of alpha_S from jet rates and shape variables is explained qualitatively and illustrated quantitatively.
|
{}
|
# Multivariable Limit Problem: Find Values of k That Make Limit Exist
## Homework Statement:
f(x,y,z) = z^k(e^(x^2+y^2) - 1)/(x^2+y^2+z^2)^k when (x,y,z) does not equal (0,0,0). f(x,y,z) = 0 otherwise.
a) Find all the real, positive values of k where the function is continuous at the origin. Thus, find the values of k where the limit at (x,y,z) -> (0,0,0) equals zero.
b) Find the values of k where each of the partials, fx, fy, fz evaluated at (0,0,0) exist. Find the values of these partials
## Relevant Equations:
L'Hopital?
Parameterization?
fx(0,0) = lim t->0 [f(t,0) - f(0,0)]/t
(a) I thought perhaps a parameterization would be the place to begin given all the squared terms.
x=rcos(u)sin(v)
y=rsin(u)sin(v)
z=rcos(v)
That would yield: r^k(cos(v))^k*(e^(r^2*(sin(v))^2))/(r^(2k))
Canceling a r^k at each level: (cos(v))^k*(e^(r^2*(sin(v))^2))/(r^(k))
I'm not sure how important the trig terms are, since they are bounded between (-1,1), but they may be necessary to keep. My focus is drawn to the e^(r^2)/r^k term. When I graph that, regardless of exponent, that term seems to tend toward infinity as r->0, which makes sense. So I'm not sure what form of analysis or technique I should use to further simplify this.
Intuition from 1-D Calc says maybe L'Hopital? But a little unsure how to apply with MV, or if it's even relevant. w.r.t. 'r' the top would become (cos(v))^k*(e^(r^2*(sin(v))^2))*(2r*(sin(v))^2) and the bottom would become (k-1)*r^(k-1). I'm not sure how that helps, because then it's still effectively (0)(infinity)/(infinity).
(b) I have a feeling this part would depend on part (a), and I should use the limit definition formula.
Office_Shredder
Staff Emeritus
Gold Member
For part (I) at least, it seems like it would be a simpler first step to use cylindrical coordinates instead. Then it might help to think about the two cases when z=0 and r goes to zero, vs r=0 and z goes to zero.
Ooh, yeah, thanks. Substituting those parameters gives me:
z^k(e^r^2 -1)/(r^2+z^2)^k
So for the limit (r=0, z->0) would be z^k(0)/z^(2k)= 0/z^k, since (e^(r^2)-1)=0 at r=0. As z->0, isn't this of the form 0/0? I would think L'Hopital, but am a little unsure because there isn't really a function left on top.
On the other hand, for the limit(r->0, z=0), we'd get (e^(r^2)-1)/r^(2k). I would imagine the exponential term minus 1 would more strongly go to zero, independent of k though. I graphed this function for some pretty large k values and it still seems to tend to infinity.
But my feeling is that isn't quite right, because I'm not finding k dependence.
As z->0, we have .
Office_Shredder
Staff Emeritus
Gold Member
For ##\frac{e^{r^2}-1}{r^{2k}}## you should try writing out the Taylor polynomial of the exponential. Remember, in the limit at r goes to zero only the smallest degree term matters. Or you can use L'hopital's rule, it's the same thing as using Taylor polynomial expansions.
For the case where r=0, you are correct. The numerator is just 0, so the limit is zero. This is actually pretty helpful, as if you want the limit to exist you need the limit to be zero when ##r\neq0## as well.
Thanks once again! The numerator then becomes (1+r^2)-1 = r^2. So the expression reduces to r^2/r^(2k). In order for the limit to go to zero, the numerator should have the higher degree. So, the expression equals r^(2-2k), and when k<1, the limit would equal zero (very small number to a positive exponent). If that makes sense, I feel good about part (a) and might try exploring the limit definition of the derivative for part (b), trying similar techniques.
Wait, now I'm doubting myself and thinking I made a mistake with the z=0, r -> 0 part. Won't the numerator actually become 0(e^(r^2)-1)/r^(2k), which is again 0 and independent of k?
Office_Shredder
Staff Emeritus
Gold Member
Whoops good point. I should have suggested thinking about the case where z=r! (Then you're going to want to use the Taylor polynomial expansion)
OK cool! So I see it this way then: If z=r, it simplifies to r^k(e^(r^2)-1)/(4r^(2k)). Using the expansion e^(r^2) ≈ 1 + r^2, we get the expression to become r^k·r^2/(4r^(2k)) = (1/4)r^(2-k), where as long as 2-k>0, the limit is zero.
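For reference, a compact way to see the same ##k##-dependence along an arbitrary approach direction (a sketch consistent with the special paths above, not part of the original posts): with ##x=\rho\sin v\cos u##, ##y=\rho\sin v\sin u##, ##z=\rho\cos v## and ##e^t-1\approx t## for small ##t##,
$$f=\frac{z^k\left(e^{x^2+y^2}-1\right)}{(x^2+y^2+z^2)^k}
\;\approx\;\frac{\rho^k\cos^k v\cdot\rho^2\sin^2 v}{\rho^{2k}}
\;=\;\rho^{\,2-k}\cos^k v\,\sin^2 v .$$
Since the angular factor is bounded (where ##z^k## is defined), the limit as ##\rho\to 0## is ##0## exactly when ##2-k>0##, i.e. ##k<2##.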
benorin
Homework Helper
Gold Member
Nice problem! I wish it would have been in my book when I took this class 20 yrs ago lol. I'll be around if u need any help with the next part.
|
{}
|
• Computer Science
• Published 2002
# A NEW PERMEABILITY FOR PERMANENT MAGNETS AND ANOTHER THEOREM OF REFRACTION IN ISOTROPIC MATERIALS WITH PERMANENT MAGNETIZATION
```@inproceedings{Bere2002ANP,
title={A NEW PERMEABILITY FOR PERMANENT MAGNETS AND ANOTHER THEOREM OF REFRACTION IN ISOTROPIC MATERIALS WITH PERMANENT MAGNETIZATION},
author={Ioan Bere},
year={2002}
}```
A new relative magnetic permeability is defined for permanent magnets, which advantageously allows to approach the non-linearity of demagnetization curve of permanent magnets. Also, using the defined quantity, we have demonstrated another form of the theorem of refraction for the surface of separation between two isotropic materials with permanent magnetization. A practical example where the defined quantities are used is presented.
|
{}
|
vignettes/bsseq_analysis.Rnw
9804152f %\VignetteIndexEntry{Analyzing WGBS with bsseq} %\VignetteDepends{bsseq} %\VignetteDepends{bsseqData} %\VignettePackage{bsseq} c5d31d05 %\VignetteEngine{knitr::knitr} 9804152f \documentclass[12pt]{article} c5d31d05 <>= 81449250 BiocStyle::latex() @ 9804152f \title{Analyzing WGBS with the bsseq package} 81449250 \author{Kasper Daniel Hansen\\ \texttt{kasperdanielhansen@gmail.com}} \date{Modified: June 10, 2015. Compiled: \today} 9804152f \begin{document} \maketitle \section*{Introduction} c5d31d05 This document discusses the ins and outs of an analysis of a whole-genome shotgun bisulfite sequencing (WGBS) dataset, using the BSmooth algorithm, which was first used in \cite{Hansen:2011} and more formally presented and evaluated in \cite{Hansen:2012}. The intention with the document is to focus on analysis-related tasks and questions. Basic usage of the \Rpackage{bsseq} package is covered in The bsseq user's guide''. It may be useful to consult the user's guide while reading this analysis guide. 9804152f c5d31d05 In this vignette we analyze chromosome 21 and 22 from \cite{Hansen:2011}. This is primary data from 3 patients with colon cancer. For each patient we have normal colon tissue as well as cancer colon. The samples were run on ABI SOLiD and we generated 50bp single-end reads. The reads were aligned using the Merman aligner in the BSmooth suite (\url{http://www.rafalab.org/bsmooth}). See the primary publication for more details \cite{Hansen:2011}. 9804152f This data is contained in the \Rpackage{bsseqData} c5d31d05 <>= 9804152f library(bsseq) library(bsseqData) @ c5d31d05 The \Rpackage{bsseqData} contains a script, \texttt{inst/script/create\_BS.cancer.R}, describing how this data is created from the Merman alignment output (also contained in the package). Note that the current version of the BSmooth pipeline uses a slightly different alignment output format. 9804152f The following object contains the unsmoothed raw'' summarized alignment data. <>= a1916248 data(BS.cancer.ex) BS.cancer.ex <- updateObject(BS.cancer.ex) 9804152f BS.cancer.ex pData(BS.cancer.ex) @ 7a8bad99 If you use this package, please cite our BSmooth paper \cite{Hansen:2012}. 7d0d20f5 9804152f \section*{Smoothing} The first step of the analysis is to smooth the data <>= BS.cancer.ex.fit <- BSmooth(BS.cancer.ex, mc.cores = 1, verbose = TRUE) @ This particular piece of code is not being run when the vignette is being created. It takes roughly 2 minutes per sample. If you have 6 cores available, use \Rcode{mc.cores = 6} and the total run time will be roughly 2 minutes. Note that setting \Rcode{mc.cores} to a value greater than 1 is not support on MS Windows due to a limitation of the operating system. For ease of use, the \Rpackage{bsseqData} includes the result of this command: <>= data(BS.cancer.ex.fit) a1916248 BS.cancer.ex.fit <- updateObject(BS.cancer.ex.fit) 9804152f BS.cancer.ex.fit @ c5d31d05 This step uses parallelization where each sample is run on a separate core using \Rcode{mclapply} from the \Rpackage{parallel} package. This form of parallelization is built into \Rpackage{bsseq}, and (as written) requires access to a machine with 6 cores and enough RAM. The smoothing step is being done completely independently on each sample, so if you have a lot of samples (or other circumstances), an alternative is to split the computations manually. A later subsection shows some example code for doing that. 9804152f c5d31d05 Let us discuss coverage and representation. 
The \Rcode{BS.cancer.ex} object contains all annotated CpGs on human chromosome 21 and 22, whether or not there is actual data. Since we have multiple samples, we can roughly divide the genome into 3 categories: CpGs where all samples have data, CpGs where none of the samples have data and CpGs where some, but not all, of the samples have data. Examining the object at hand, we get 9804152f <>= ## The average coverage of CpGs on the two chromosomes round(colMeans(getCoverage(BS.cancer.ex)), 1) ## Number of CpGs in two chromosomes length(BS.cancer.ex) ## Number of CpGs which are covered by at least 1 read in all 6 samples sum(rowSums(getCoverage(BS.cancer.ex) >= 1) == 6) ## Number of CpGs with 0 coverage in all samples sum(rowSums(getCoverage(BS.cancer.ex)) == 0) @ The CpG coverage is roughly 4x, so we would expect many zero coverage CpGs by chance. although that should not necessarily occur in all 6 samples at the same CpG. If we assume that coverage genome-wide is Poisson distributed with a parameter (lambda) of 4, we would expect <>= logp <- ppois(0, lambda = 4, lower.tail = FALSE, log.p = TRUE) round(1 - exp(6 * logp), 3) @ of the CpGs to have at least one sample with zero coverage. There are roughly 130k CpGs with no data at all in any of the 6 samples. This can happen either because of chance (although that is unlikely) or because the CpG is unmappable. Since we are dealing with bisulfite converted reads, the unmappable portion of the genome is greater than with normal DNA-sequencing. For this experiment we only used 50bp single-end reads (in our experience using 100bp paired-end reads greatly increases the mappable percentage of the genome). These CpGs (with zero coverage in all samples) are in some sense easy to deal with: one should of course be careful drawing conclusions about CpGs with no data. We have roughly $959 - 573 - 136 = 250$k CpGs where some (but not all) of the samples have zero coverage, and these are in some sense harder to deal with. Since we have very low coverage to begin with, it may happen just by chance that a single sample have zero coverage, and it may be too restrictive to just exclude these CpGs from an analysis. Smoothing is done separately for each sample, only using the data where the coverage (for that sample) is non-zero. This estimates a genome-wide methylation profile, which is then \emph{evaluated} in all CpGs in the \Rclass{BSseq} object. As a result, after smoothing, every CpG in the object has an estimated methylation value. This is very nice for the situation where you want to compare a single CpG across multiple samples, but one or two of the samples have zero coverage by chance. But note that these smoothed methylation profiles makes less sense in the parts of the genome where there are no covered CpGs nearby. We fix this by removing these CpGs after smoothing, see below. Other arguments to the \Rcode{BSmooth} function are \Rcode{mc.cores}, \Rcode{mc.preschedule}, \Rcode{parallelBy} which controls the parallelization built into the function as well as \Rcode{ns}, \Rcode{h}, \Rcode{maxGap} which controls the smoothing. \Rcode{ns} is the minimum number of CpGs contained in each window, \Rcode{h} is half the minimum window with (the actual window width is either 2 times \Rcode{h} or wide enough to contain \Rcode{ns} covered CpGs, whichever is greater). 
Note that the window width is different at each position in the genome and may also be different for different samples at the same position, since it depends on how many nearby CpGs with non-zero coverage. Per default, a smoothing cluster is a whole chromosome. By cluster'' we mean a set of CpGs which are processed together. This means that even if there is a large distance between two CpGs, we borrow strength between them. By setting \Rcode{maxGap} this can be prevented since the argument describes the longest distance between two CpGs before a cluster is broken up into two clusters. \subsubsection*{Manually splitting the smoothing computation} An example, only showing sample 1 and 2 for brevity, is (this example is not being run when the vignette is being created): <>= ## Split datag BS1 <- BS.cancer.ex[, 1] save(BS1, file = "BS1.rda") BS2 <- BS.cancer.ex[, 2] save(BS1, file = "BS1.rda") ## done splitting ## Do the following on each node ## node 1 load("BS1.rda") BS1.fit <- BSmooth(BS1) save(BS1.fit) save(BS1.fit, file = "BS1.fit.rda") ## done node 1 ## node 2 load("BS2.rda") BS2.fit <- BSmooth(BS2) save(BS2.fit, file = "BS2.fit.rda") ## done node 2 ## join; in a new R session load("BS1.fit.rda") load("BS2.fit.rda") BS.fit <- combine(BS1.fit, BS2.fit) @ c5d31d05 This still requires that you have one node with enough RAM to hold all samples in memory. 9804152f \section*{Computing t-statistics} c5d31d05 Before computing t-statistics, we will remove CpGs with little or no coverage. If this is not done, you may find many DMRs in areas of the genome with very little coverage, which are most likely false positives. It is open to personal preferences exactly which CpGs to remove, but for this analysis we will only keep CpGs where at least 2 cancer samples and at least 2 normal samples have at least 2x in coverage. For readability, we store the coverage in a separate matrix (this is just due to line breaks in Sweave) 9804152f <>= BS.cov <- getCoverage(BS.cancer.ex.fit) 464849d3 keepLoci.ex <- which(rowSums(BS.cov[, BS.cancer.ex$Type == "cancer"] >= 2) >= 2 & rowSums(BS.cov[, BS.cancer.ex$Type == "normal"] >= 2) >= 2) 9804152f length(keepLoci.ex) BS.cancer.ex.fit <- BS.cancer.ex.fit[keepLoci.ex,] @ c5d31d05 (the \Rcode{keepLoci.ex} is also available for direct inspection in the \Rpackage{bsseqData} package.) 9804152f We are now ready to compute t-statistics, by <>= BS.cancer.ex.tstat <- BSmooth.tstat(BS.cancer.ex.fit, group1 = c("C1", "C2", "C3"), group2 = c("N1", "N2", "N3"), estimate.var = "group2", local.correct = TRUE, verbose = TRUE) BS.cancer.ex.tstat @ c5d31d05 (the \Rcode{BS.cancer.ex.tstat} is also available for direct inspection in the \Rpackage{bsseqData} package.) 9804152f c5d31d05 The arguments to \Rcode{BSmooth.tstat} are simple. \Rcode{group1} and \Rcode{group2} contain the sample names of the two groups being compared (it is always group1 - group2), and indices may be used instead of sample names. \Rcode{estimate.var} describes which samples are being used to estimate the variability. Because this is a cancer dataset, and cancer have higher variability than normals, we only use the normal samples to estimate the variability. Other choices of \Rcode{estimate.var} are \Rcode{same} (assume same variability in each group) and \Rcode{paired} (do a paired t-test). The argument \Rcode{local.correct} describes whether we should use a large-scale (low-frequency) mean correction. 
This is especially important in cancer where we have found many large-scale methylation differences between cancer and normals. 9804152f We can look at the marginal distribution of the t-statistic by c5d31d05 <>= 9804152f plot(BS.cancer.ex.tstat) @ c5d31d05 The blocks'' of hypomethylation are clearly visible in the marginal distribution of the uncorrected t-statistics. 9804152f c5d31d05 Even in comparisons where we do not observe these large-scale methylation differences, it often improves the marginal distribution of the t-statistics to locally correct them (improves'' in the sense of making them more symmetric). 9804152f \section*{Finding DMRs} Once t-statistics have been computed, we can compute differentially methylated regions (DMRs) by thresholding the t-statistics. Here we use a cutoff of $4.6$, which was chosen by looking at the ed78e79e quantiles of the t-statistics (for the entire genome)\footnote{See \url{https://support.bioconductor.org/p/78227/} for further discussion on the choice of cutoff. }. 9804152f <>= dmrs0 <- dmrFinder(BS.cancer.ex.tstat, cutoff = c(-4.6, 4.6)) dmrs <- subset(dmrs0, n >= 3 & abs(meanDiff) >= 0.1) nrow(dmrs) head(dmrs, n = 3) @ Here, we filter out DMRs that do not have at least 3 CpGs in them and at least a mean difference (across the DMR) in methylation between normal and cancers of at least 0.1. While the exact values of these two filters can be debated, it is surely a good idea to use something like this. Other arguments to \Rcode{dmrFinder} are \Rcode{qcutoff} which chooses a quantile-based cutoff (for example \Rcode{qcutoff = c(0.01, 0.99)}) and \Rcode{maxGap} which makes sure that a DMR is being split if there are two CpGs with more than \Rcode{maxGap} between them (default of 300bp). We rank DMRs by the column \Rcode{areaStat} which is the sum of the t-statistics in each CpG. This is kind of the area of the DMR, except that it is weighted by the number of CpGs and not by genomic length. This is currently the best statistic we know, although it is far from perfect (we would like to do something better). \section*{Plotting} It is \emph{always} a good idea to look at the DMRs. One way of encoding standard plotting parameters like \Rcode{col}, \Rcode{lty}, and \Rcode{lwd} is to add columns to the \Rcode{pData}, like <>= pData <- pData(BS.cancer.ex.fit) pData\$col <- rep(c("red", "blue"), each = 3) pData(BS.cancer.ex.fit) <- pData @ Once this is setup, we can plot a single DMR like c5d31d05 <>= 9804152f plotRegion(BS.cancer.ex.fit, dmrs[1,], extend = 5000, addRegions = dmrs) @ c5d31d05 \Rcode{extend} tells us how many bp to extend to either side of the plotting region. \Rcode{addRegions} is a \Rclass{data.frame} or \Rcode{GRanges} listing additional regions that should be highlighted. Typically, we plot hundreds of DMRs in a single PDF file and use external tools to look at them. For this purpose, \Rcode{plotManyRegions} is very useful since it is much faster than plotting individual DMRs with \Rcode{plotRegion}. An example (not run) is 9804152f <>= pdf(file = "dmrs_top200.pdf", width = 10, height = 5) plotManyRegions(BS.cancer.ex.fit, dmrs[1:200,], extend = 5000, addRegions = dmrs) dev.off() @ which plots the top200. \section*{Question and answers} \textbf{1. The BSmooth algorithm is supposed to give smooth methylation estimates. Yet, when I plot the smoothed values, I see jagged lines, which do not look smooth to me.} We estimate a genome-wide methylation profile that is a smooth function of the genomic position. 
However, this profile is not stored in the \Rclass{BSseq} objects. Instead, we evaluate this smooth profile in the methylation loci in the object. An example (made-up values) is \begin{verbatim} pos meth 1 0.1 3 0.1 5 0.1 200 0.6 203 0.6 205 0.6 \end{verbatim} For plotting we do linear interpolation between this points. The end result is that the methylation profile may appear jagged especially if there is a big'' distance between two CpGs (between pos \texttt{5} and \texttt{200} above). If we wanted to plot truly smooth profiles we would have to store the methylation profile evaluated at a regular grid across the genome. This takes up a lot of space and would add complications to the internal data structures. %% %% Backmatter %% \bibliography{bsseq} \section*{SessionInfo} c5d31d05 <>= 9804152f toLatex(sessionInfo()) @ \end{document} % Local Variables: % LocalWords: LocalWords bisulfite methylation methylated CpG CpGs DMR bsseq bp % LocalWords: DMRs differentially ABI SOLiD dataset WGBS BSmooth % LocalWords: parallelization Bioconductor hypomethylation genomic pos PDF thresholding % LocalWords: mappable unmappable indices normals quantile % End:
|
{}
|
creation.matrix: Creation Matrix In matrixcalc: Collection of functions for matrix calculations
Description
This function returns the order n creation matrix, a square matrix with the sequence 1, 2, ..., n - 1 on the sub-diagonal below the principal diagonal.
Usage
creation.matrix(n)
Arguments
n a positive integer greater than 1
Details
The order n creation matrix is also called the derivation matrix and is used in numerical mathematics and physics. It arises in the solution of linear dynamical systems. The form of the matrix is
$$\left[ \begin{array}{cccccc} 0 & 0 & 0 & \cdots & 0 & 0 \\ 1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 2 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 3 & \ddots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & n-1 & 0 \end{array} \right].$$
Value
An order n matrix.
Note
If the argument n is not an integer that is greater than 1, the function presents an error message and stops.
Author(s)
Frederick Novomestky fnovomes@poly.edu
References
Aceto, L. and D. Trigiante (2001). Matrices of Pascal and Other Greats, American Mathematical Monthly, March 2001, 108(3), 232-245.
Weinberg, S. (1995). The Quantum Theory of Fields, Cambridge University Press.
Examples
H <- creation.matrix( 10 )
print( H )
Example output
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] 0 0 0 0 0 0 0 0 0 0
[2,] 1 0 0 0 0 0 0 0 0 0
[3,] 0 2 0 0 0 0 0 0 0 0
[4,] 0 0 3 0 0 0 0 0 0 0
[5,] 0 0 0 4 0 0 0 0 0 0
[6,] 0 0 0 0 5 0 0 0 0 0
[7,] 0 0 0 0 0 6 0 0 0 0
[8,] 0 0 0 0 0 0 7 0 0 0
[9,] 0 0 0 0 0 0 0 8 0 0
[10,] 0 0 0 0 0 0 0 0 9 0
matrixcalc documentation built on May 2, 2019, 1:45 p.m.
|
{}
|
diophantus
A Faddeev Calculation for Pentaquark $\Theta^+$ in Diquark Picture with Nambu-Jona-Lasinio Type Interaction
09 Apr 2007 nucl-th, hep-ph arxiv.org/abs/0704.1072
Abstract. A Bethe-Salpeter-Faddeev (BSF) calculation is performed for the pentaquark $\Theta^+$ in the diquark picture of Jaffe and Wilczek in which $\Theta^+$ is a diquark-diquark-${\bar s}$ three-body system. Nambu-Jona-Lasinio (NJL) model is used to calculate the lowest order diagrams in the two-body scatterings of ${\bar s}D$ and $D D$. With the use of coupling constants determined from the meson sector, we find that ${\bar s}D$ interaction is attractive while $DD$ interaction is repulsive, and there is no bound $\frac 12^+$ pentaquark state. A bound pentaquark $\Theta^+$ can only be obtained with unphysically strong vector mesonic coupling constants.
|
{}
|
# Approximate orbital period for earth's moon
Does anyone know what would be the approximate orbital period (time for one complete orbit) for an apple placed in orbit around Earth at the moon’s distance from Earth?
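A rough way to estimate it (a sketch; for a small body the period depends only on Earth's mass and the orbital radius, so an apple at the Moon's distance has essentially the Moon's period):
$$T = 2\pi\sqrt{\frac{a^3}{GM_\oplus}}
\approx 2\pi\sqrt{\frac{(3.84\times 10^{8}\,\mathrm{m})^3}{(6.67\times 10^{-11}\,\mathrm{m^3\,kg^{-1}\,s^{-2}})(5.97\times 10^{24}\,\mathrm{kg})}}
\approx 2.4\times 10^{6}\,\mathrm{s}\approx 27\ \text{days},$$
close to the Moon's sidereal period of about 27.3 days (strictly, the Moon's own mass makes its actual period very slightly shorter than a test particle's would be).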
|
{}
|
# Mechanism for oxidation of primary alcohols to carboxylic acids
My teacher told me that the following is the mechanism for oxidation of primary alcohols to carboxylic acids:
I've searched in books and online and didn't find a similar mechanism. For example, in Wikipedia:
Which is correct?
Clayden et al., Organic Chemistry (2ed), p. 545 gives the reaction pathway as:
Here's some evidence that supports this pathway. One of the most well-known methods to selectively oxidise primary alcohols to aldehydes, without further oxidation to the carboxylic acid, is by using pyridinium chlorochromate in dichloromethane as solvent. This presumably works because water is excluded, which prevents the hydrate from being formed. Clayden writes:
Aqueous methods like the Jones oxidation [n.b.: the Jones oxidation is $\ce{CrO3}/\text{aq. }\ce{H2SO4}$] are no good for this, since the aldehyde that forms is further oxidized to acid via its hydrate. The oxidizing agent treats the hydrate as an alcohol, and oxidizes it to the acid. The key thing is to avoid water, so PCC in dichloromethane works quite well. The related reagent PDC (pyridinium dichromate) is particularly suitable for oxidation to aldehydes.
If the reaction pathway was as your teacher taught you, then there would be no point in excluding water, since in that pathway water is not needed for the over-oxidation to the carboxylic acid.
The actual mechanism for the oxidation step is as follows (Clayden, p. 195):
If water is present, then the aldehyde product simply forms the hydrate and the mechanism for oxidation to the carboxylic acid is exactly the same, except that one of the hydrogens is replaced with an $\ce{-OH}$. Note that you need an $\ce{-OH}$ group on the starting material to form a chromate ester - that means that aldehydes will not undergo oxidation, but their hydrates (which are geminal diols) will.
|
{}
|
Back to Classical Cryptosystems and Core Concepts
## Learner Reviews & Feedback for Classical Cryptosystems and Core Concepts by University of Colorado System
4.6
stars
383 ratings
## About the Course
Welcome to Introduction to Applied Cryptography. Cryptography is an essential component of cybersecurity. The need to protect sensitive information and ensure the integrity of industrial control processes has placed a premium on cybersecurity skills in today’s information technology market. Demand for cybersecurity jobs is expected to rise 6 million globally by 2019, with a projected shortfall of 1.5 million, according to Symantec, the world’s largest security software vendor. According to Forbes, the cybersecurity market is expected to grow from $75 billion in 2015 to$170 billion by 2020. In this specialization, you will learn basic security issues in computer communications, classical cryptographic algorithms, symmetric-key cryptography, public-key cryptography, authentication, and digital signatures. These topics should prove especially useful to you if you are new to cybersecurity Course 1, Classical Cryptosystems, introduces you to basic concepts and terminology related to cryptography and cryptanalysis. It is recommended that you have a basic knowledge of computer science and basic math skills such as algebra and probability....
## Top Reviews
AS
Aug 12, 2020
Interesting course on an Interesting topic taken in a pleasant, enjoyable, and simple manner.
Never did I feel bored or out of my depth even though I am a stranger to the topic.
MR
Nov 6, 2017
The lectures were educational, and interesting without speaking down to you. I learned a lot without being overwhelmed
Filter by:
## 1 - 25 of 82 Reviews for Classical Cryptosystems and Core Concepts
Mar 27, 2020
perfect and the tutor is absolutely amazing, got good information from the videos and different links provided by them... feeling good to spend my time over this course i think this specialization can give good knowledge in terms of Cryptography
by Fubara T F
Jun 26, 2020
Honestly did not expect to enjoy the class as well as I did. Such a wealth of information. Truly worth it.
by Jonathan A F
Jun 30, 2020
This course was very interesting and enlightening. The concepts have been explained very well.
by Ahmed A
May 6, 2018
too pooring , not interactive , it will be better if it integrate solving problem with programming
by Tianxiang X
May 24, 2020
An excellent introduction to the field of cryptography, from historical ciphers to ones commonly used today. Moving from weaker ciphers to stronger ones, and pointing out the weaknesses that led to the development of increasingly strong ciphers, lays out the motivation for continual cryptographic research very well.
by Daniela B O
Aug 18, 2020
This was a great introduction course to cryptography basics. It provided me with so much knowledge, from some history and how this field began, to different types of ciphers and hash functions, and even teaching me about modern uses of cryptography. I'm left with excitement and drive to learn more about it.
by Ajaykrishnan E
Aug 13, 2020
Interesting course on an Interesting topic taken in a pleasant, enjoyable, and simple manner.
Never did I feel bored or out of my depth even though I am a stranger to the topic.
by Slavisa D
Dec 31, 2019
Interesting course and a good overview about cryptography. The last week is a little bit too fast-paced if you are complete newbie.
by Manuel A D R
Jul 14, 2019
Excellent course that has strengthened my knowledge of the world of cryptology. Regards, Manuel Antonio Diaz Ricalde
by Michael J R
Nov 7, 2017
The lectures were educational, and interesting without speaking down to you. I learned a lot without being overwhelmed
by Sangeeth S V
Sep 23, 2020
Nice delivery of concepts. Interesting historical concepts was available which made it very engaging and educating.
by Dr. P V L
Jun 20, 2020
I teach cryptosystems in my University. I found it very useful, more than a textbook content. Nice lectures.
by Krishna S M
Apr 13, 2020
Everything was crystal clear but have to keep up with everything the professor says can't miss anything.
by Atharva S B
May 5, 2020
very nice course, i was not having any tech background , but learned loads of good things.
Jan 14, 2019
Thank you for taking the time to upload your videos. I hope you create a capstone course.
by Michal Š
Jun 1, 2020
Previous course I have done on cryptography I liked much more, but it was also fine.
by Girish K
Apr 19, 2020
Very nice course! Learned a lot about cryptosystems and cryptography fundamentals.
by Kritesh S R
Sep 14, 2020
The course and materials are really good. The concepts are explained very well
by Ranil A
May 26, 2021
Excellent introduction to cryptography, cryptanalysis and hash functions.
by MAALOLAN K 1
Jun 8, 2020
The basics were explained very clearly and the References were also nice!
by Doug S
Jan 16, 2021
Very Good instructor Really tries to explain it so anyone can understand
by Jai V
Nov 25, 2017
Cover every little details of Cryptography specially hash functions
by Maryam J
Feb 10, 2022
Great Learning.
I learned a lot without being overwhelmed
Thank you.
by Mouath m a m
Dec 7, 2021
it is very beautiful course and I learn so much thank you so much
by Alejandro S
Jun 18, 2020
Well explained, good overall insight into real world cryptography
|
{}
|
Earthquake Reaearch in China 2018, Vol. 32 Issue (4): 510-520
Research on the Crustal Velocity Model of the Yunnan Region and Its Application
Yang Jingqiong, Yang Zhousheng, Zhang Huiyuan
Yunnan Earthquake Agency, Kunming 650203, China
Abstract: This paper selects the records of 7, 412 earthquakes, each recorded by more than 10 stations in Yunnan between 2009 and 2014 to acquire the traveltime curves. Meanwhile, for improving precision, linear analysis, reduced traveltime curve and interval stability analysis are conducted focusing on the records of 83 earthquakes with ML ≥ 3.0 recorded each by ≥ 80% of the stations, and by combining predecessors' research results, the initial crustal velocity model of the study area is obtained. By selecting 200 earthquakes with M ≥ 3.0 occurring in Yunnan between 2010 and 2014, using the Hyposat batch location processing method to iterate the initial velocity model, and performing fitting to S waves layered velocity structure, we obtain the crustal velocity model for the Yunnan region, namely, the 2015 Yunnan model, with:vP1=6.01km/s, vP2=6.60km/s, vPn=7.89km/s, H1=20km, H2=21km, vS1=3.52km/s, vS2=3.86km/s, vSn=4.43km/s. Analysis on earthquake relocations based on the new model shows that most earthquakes occurring in Yunnan are at a depth of 10km-20km of the upper crust. The March 10, 2011 MS5.8 Yingjiang and August 3, 2014 MS6.5 Ludian earthquakes are relocated, and the focal depths determined with the new model are respectively close to the precise positioning result and hypocentral distance to the strong motion stations at the epicenters, indicating that the new one-dimensional velocity model can better reflect the average velocity structure of the study area.
Key words: The Yunnan area Crustal velocity model Earthquake location
INTRODUCTION
Yunnan is located on the southeast margin of the Qinghai-Tibetan Plateau, on the east side of the collision between the Indian plate and the Eurasian plate, where the geological structure is complex and a series of fault zones, such as the Nujiang fault, Ailaoshan-Honghe fault and Xiaojiang fault, have formed and developed (Deng Qidong et al., 1994; Cai Linsun et al., 2002). The seismic activity is noted for its great intensity, high frequency, severe disasters and wide distribution, and Yunnan is one of the provinces with the most frequent destructive earthquakes and the most serious geological disasters in China. Accurate determination of the earthquake epicenter and focal depth has always been a hot issue for seismologists (Zhu Yuanqing et al., 1997a), and a suitable crustal velocity model can help seismologists improve the precision of seismic parameters (Zhu Yuanqing et al., 1997b; Zhang Tianzhong et al., 2007).
For a long time, many studies have been carried out in this area by geoscientists, such as artificial seismic sounding (Hu Hongxiang et al., 1986, 1993, 1994; Zhang Xiankang et al., 2002; Wang Fuyun et al., 2005; Zhang Zhongjie et al., 2005a; Wang Chunyong et al., 2009), body wave and surface wave tomography (Wang Chunyong et al., 2002; Bai Zhiming et al., 2003; Huang Jinli et al., 2002; He Zhengqin et al., 2005; Zhang Zhongjie et al., 2011; Huang Jing et al., 2012) and receiver function inversion (Hu Jiafu et al., 2003; Li Yonghua et al., 2009), which provide important data for studying the tectonic background and crustal velocity model of Yunnan.
In this paper, the velocity model is measured and corrected using abundant observation data from the Yunnan Digital Seismic Network, and the results are applied to the routine operation of the seismic network, in order to improve the quality of earthquake location.
1 ESTABLISHMENT OF REGIONAL VELOCITY MODEL 1.1 Data Selection
The 49 fixed stations of the Yunnan Digital Seismic Network are distributed evenly over the land area of Yunnan Province, about 50km-200km apart. Among them, stations are relatively densely distributed in Kunming in central Yunnan and Dali in western Yunnan, where they are about 50km-100km apart, and relatively sparse in Zhaotong in northeast Yunnan, Shangri-la in northwest Yunnan and southwest Yunnan, where they are about 100km-200km apart.
The 7,412 earthquakes recorded by at least 10 seismic stations during 2009-2014, since the establishment of the digital network under the 10th "Five-Year Plan", are selected for this research, and the epicenter distribution is shown in Fig. 1. Among them, 83 earthquakes with ML greater than 3.0 are recorded by more than 80% of these stations.
Fig. 1 Distribution of selected earthquakes
1.2 Determination of Initial Crustal Velocity Model
The 95,116 Pg seismic phases, 1,813 Pb seismic phases, 8,674 Pn seismic phases, 98,570 Sg seismic phases, 19 Sb seismic phases and 573 Sn seismic phases extracted from the selected data of the 7,412 earthquakes, together with their travel times and epicentral distances, are used for linear fitting to acquire the corresponding fitting velocities, as below.
$T = {\text{A}} + \left({\Delta /v} \right)$ (1)
where T denotes the seismic phase travel time, Δ the epicentral distance, v the fitting velocity, and A a constant.
Velocities fitted by curves of Pg, Pb, Pn, Sg, Sb and Sn seismic phase travel time are respectively vP1(P-wave velocity in the upper crust)=6.04km/s, vP2(P-wave velocity in the lower crust)=6.68km/s, vPn(P-wave velocity at the top of upper mantle)=7.89km/s, vS1(S-wave velocity in the upper crust)=3.53km/s, vS2(S-wave velocity in the lower crust)=3.93km/s and vSn(S-wave velocity at the top of upper mantle)=4.51km/s.
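As an illustration of this fit (a minimal sketch with hypothetical arrays, not the authors' processing code), the slope of the least-squares line in Eq.(1) gives 1/v directly:
import numpy as np

def fit_phase_velocity(delta_km, t_s):
    slope, intercept = np.polyfit(delta_km, t_s, 1)   # T ≈ slope*Δ + A
    return 1.0 / slope, intercept                     # v = 1/slope, A = intercept

# e.g. v_pg, a_pg = fit_phase_velocity(pg_delta_km, pg_time_s)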
To improve the accuracy, seismic data with high positioning accuracy, that is, the 83 earthquakes with ML≥3.0 each recorded by more than 80% of the stations, are selected for further analysis, and the corresponding velocities fitted by the Pg, Pb and Pn seismic phase travel-time curves are vP1=6.03km/s, vP2=6.68km/s and vPn=7.91km/s (Fig. 2).
Fig. 2 Travel time curves
According to the velocity results of the straight-line fitting, the reduced travel-time method is used to adjust the initial model and determine the trial-and-error scope of the model, that is, to provide the variation range of the crustal velocity model parameters and ensure that the true crustal model parameters are included in that range.
Reduced travel time is calculated according to Pg, Pb and Pn seismic phases of the 83 earthquakes.
${T_Z} = {T_{\rm{G}}} - \left({\Delta /v} \right)$ (2)
where TZ represents the reduced travel time, TG the observed travel time, and v the wave velocity.
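For example (hypothetical numbers, not taken from the data set), a Pn arrival observed at Δ=300km with TG≈45.6s reduces to TZ=45.6-300/7.91≈7.7s, which is roughly the delay accumulated in the crust.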
In order to investigate how well the actual travel times fit the theoretical travel times implied by the fitted velocities vP1, vP2 and vPn at different focal depths, focal depths of 5km, 10km, 15km and 20km are selected and the upper crust thickness H1 and lower crust thickness H2 are adjusted. It is found that the focal depths of earthquakes in Yunnan are basically concentrated in 5km-20km, the theoretical travel times of the Pg, Pb and Pn seismic phases are roughly parallel to the observed travel times, and the data points are concentrated. Studies of seismic location based on small and moderate earthquakes (Wu Jianping et al., 2004) also show that earthquake focuses in this region are generally located in the brittle middle-upper crust, less than 20km deep. The initial crustal velocity model of Yunnan thus obtained is shown in Table 1.
Fig. 3 Reduced travel time curves
Table 1 Initial model
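For such a two-layer crust over the mantle, the theoretical Pn travel time used in the comparison above can be written in closed form. A minimal sketch (assuming a surface focus and flat layers; the numerical values follow the velocities and thicknesses quoted in the text, and this is not the authors' code):
import numpy as np

def pn_traveltime(delta_km, v1=6.03, v2=6.68, vn=7.91, h1=20.0, h2=21.0):
    # head-wave time: offset term plus the delay accumulated in each crustal
    # layer along the down-going and up-going legs
    intercept = 2 * h1 * np.sqrt(1 / v1**2 - 1 / vn**2) \
              + 2 * h2 * np.sqrt(1 / v2**2 - 1 / vn**2)
    return delta_km / vn + intercept

# pn_traveltime(300.0) is about 45.6 s at 300 km epicentral distance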
1.3 Analysis of Velocity Stability
Velocity stability of the initial model is analyzed. The stability of vP1, vP2 and vPn under the following conditions is investigated (Fig. 4): ① changes as epicentral distance increases; ② changes in different area coverage (150km as window and 50km as step length); ③ changes in different area coverage (200km as window and 50km as step length). It can be seen from Fig. 4 that vP1, vP2 and vPn are relatively stable in different area coverage.
Fig. 4 Changes of vP1, vP2 and vPn in different area coverage (a) Changes as epicentral distance increases. (b) Changes in different area coverage (150km as window and 50km as step length). (c) Changes in different area coverage (200km as window and 50km as step length)
1.4 Determination of Optimal Velocity Model
The 200 earthquake events with M≥3.0 in Yunnan between 2010 and 2014 are selected, and the radiation coverage of the selected earthquakes and seismic stations is shown in Fig. 5. The Hyposat batch location method is used to iterate on the initial velocity model, with the following computation limits: vP1=5.8-6.4km/s, vP2=6.5-6.9km/s, vPn=7.8-8.2km/s, depth of the Conrad discontinuity=16-24km and depth of the Moho discontinuity=38-42km. The calculation results are given in Table 2.
Fig. 5 Distribution of P-wave propagation paths
Table 2 Results obtained by 2 iterations with Hyposat batch processing
The result obtained by the second computation is taken as the optimal model, which, together with the daily catalogue model, is used to locate the 200 earthquake events with the Hyposat batch location approach (Fig. 6 and Fig. 7). The RMS of the average travel-time residual of the locations is 0.459 for the optimal model and 0.528 for the catalogue model. Only 3 earthquakes differ in epicenter location by more than 10km between the two models, and a recheck shows that these earthquakes with large errors have fewer seismic phases and large station gap angles.
Fig. 6 Residual comparison between optimal model and catalog model after batch processing
Fig. 7 Comparison of epicenter difference between optimal model and catalog model after batch processing
1.5 Establishment of a Complete Regional Velocity Model
The S-wave velocity obtained by the method of a uniform wave velocity ratio mainly reflects the S-wave velocity in the upper crust (vS1). In order to further obtain the S-wave velocities in the upper crust, lower crust and the top of the upper mantle (vS1, vS2 and vSn), while keeping the epicenter locations within the original error range, the obtained P-wave layered velocities and layer thicknesses, as well as the same seismic phase data used in the original batch processing, are used to continuously adjust the wave velocity ratio of each layer and determine vS1, vS2 and vSn of the stratified S-wave model. The complete crustal velocity model of Yunnan (the 2015 Yunnan model), obtained through further trial calculation, is shown in Table 3.
Table 3 Crustal velocity model of Yunnan (the 2015 Yunnan model)
2 EARTHQUAKE RELOCATION ANALYSIS BASED ON THE NEW MODEL 2.1 Analysis of Focal Depth
The 2015 Yunnan model is adopted to remeasure the depths of 322 ML≥4.0 earthquakes that occurred in Yunnan and its adjacent areas during 2008-2016 using the PTD method (Zhu Yuanqing et al., 1990), and the results are compared with the distribution of focal depths in catalogue reports (Fig. 8).
Fig. 8 Distribution of focal depths measured by new and old models (a) Three-dimensional distribution of focal depths; (b) Focal depth projection along latitude; (c) Focal depth projection along longitude
Fig. 8(b) and Fig. 8(c) show projections of focal depths along latitude and longitude, respectively. It can be seen from the figure that most of the focal depth values calculated from the catalogue model are concentrated around 5km and 15km and are obviously distributed in two bands, while depth values calculated from the 2015 Yunnan model show a wide distribution and are uniformly distributed within 20km. The depth distribution measured by the new model is therefore more reasonable.
2.2 Analysis of Typical Earthquake Location
(1) The new model (the 2015 Yunnan model) is used to locate the Yingjiang M5.8 earthquake on March 10, 2011, and the result is compared with catalogue results and fine positioning results (Fang Lihua et al., 2011) (Table 4). Obviously, the positioning depth of the new model is the closest to fine positioning results.
Table 4 Comparison of measurement results for the Yingjiang earthquake
(2) The new model is adopted to locate the Ludian MS6.5 earthquake of August 3, 2014 using the PTD and Hyposat methods respectively, and the measured depths are 7km and 8.1km. Both are smaller than the hypocentral distance of 8.3km to the Longtoushan strong motion station in the epicentral area (Table 5, Fig. 9), indicating that the new one-dimensional velocity model can better reflect the average velocity structure of the study region. The Longtoushan station of the Yunnan strong motion network recorded this earthquake well (Table 6, Fig. 10).
Table 5 Comparison of measurement results for the Ludian earthquake
Fig. 9 PTD relocating for the Ludian earthquake
Table 6 Parameters of Longtoushan station
Fig. 10 Waveform records at Longtoushan station
3 DISCUSSION AND CONCLUSION
(1) The 7,412 earthquakes recorded by at least 10 seismic stations in Yunnan during 2009-2014 are used for linear fitting. Meanwhile, to improve accuracy, seismic data with high positioning accuracy, from the 83 earthquakes with ML≥3.0 each recorded by more than 80% of the stations, are selected for further analysis, and their theoretical and actual reduced travel times are calculated. By selecting different focal depths and adjusting the thicknesses of the upper and lower crust, it is found that the focal depths of earthquakes in Yunnan are basically concentrated in 5km-20km. The theoretical travel times of the Pg, Pb and Pn seismic phases are roughly parallel to the observed travel times, and the data points are concentrated; thus the initial crustal velocity model of Yunnan is obtained: vP1=6.03km/s, vP2=6.68km/s, vPn=7.91km/s, H1=20km, H2=21km. By setting different sliding windows, it is observed that vP1, vP2 and vPn of the initial model are relatively stable.
(2) The 200 earthquake events with M≥3.0 in Yunnan are selected and the Hyposat batch location method is used for iteration for the initial velocity model, and better results are obtained and taken as the optimal model. In the meantime, by continuously adjusting the wave velocity ratio of each layer to try out vS1, vS2 and vSn of S-wave in stratified layers, the complete crustal velocity model of Yunnan is obtained through further testing in trial calculation (the 2015 Yunnan model): vP1=6.01km/s, vP2=6.60km/s, vPn=7.89km/s, H1=20km, H2=21km, vS1=3.52km/s, vS2=3.86km/s, vSn=4.43km/s.
(3) Based on the 2015 Yunnan model obtained in this study, 322 M≥4.0 earthquakes in Yunnan between 2008 and 2016 are relocated using the PTD method. The obtained depths are no longer distributed in two bands around 5km and 15km, but are evenly distributed within 20km; the depth distribution measured by the new model is more reasonable. The new model is adopted to relocate the Yingjiang MS5.8 earthquake of March 10, 2011, and the focal depth is calculated to be 14.8km, the closest to the fine positioning result of 13.4km. The new model is also adopted to locate the Ludian MS6.5 earthquake of August 3, 2014 using the PTD and Hyposat methods, and the measured depths are 7km and 8.1km respectively. Both are smaller than the hypocentral distance of 8.3km to the Longtoushan strong motion station in the epicentral area, indicating that the new one-dimensional velocity model (the 2015 Yunnan model) can better reflect the average velocity structure of the study region, which provides a basis for more detailed work in the future.
ACKNOWLEDGEMENTS
The author is grateful to Research Professor Zhu Yuanqing from Shanghai Earthquake Agency and the project team of Construction and Promotion of Nationwide One-dimensional Velocity Model for their vigorous support for this study.
REFERENCES
Bai Zhiming, Wang Chunyong. Tomographic investigation of the upper crustal structure and seismotectonic environments in Yunnan Province[J]. Acta Seismologica Sinica, 2003, 25(2): 117–127 (in Chinese with English abstract).
Cai Linsun, Li Xinglin. Geology of Yunnan Province[M]. In: Ma Lifang (Editor). Beijing: Geological Publishing House, 2002. 293-300 (in Chinese).
Deng Qidong, Xu Xiwei, Yu Guihua. The Zoning Characteristics and Genesis of Active Faults in Chinese Mainland[A]. In: Seismological Geology Specialized Committee, China Seismological Society (Editor). Study on Active Faults of China[G]. Beijing: Seismological Press, 1994. 1-14 (in Chinese).
Fang Lihua, Wu Jianping, Zhang Tianzhong, Huang Jing, Wang Changzai, Yang Ting. Relocation of mainshock and aftershocks of the 2011 Yingjiang MS5.8 earthquake in Yunnan[J]. Acta Seismologica Sinica, 2011, 33(2): 262–267 (in Chinese with English abstract).
He Zhengqin, Ye Tailan, Su Wei. 3-D velocity structure of the middle and upper crust in the Yunnan region, China[J]. Pure and Applied Geophysics, 2005, 162(12): 2355–2368. DOI:10.1007/s00024-005-2780-x.
Hu Hongxiang, Lu Hanxing, Wang Chunyong, He Zhengqin, Zhu Liangbao, Yan Qizhong, Fan Yuexin, Zhang Guoqing, Deng Ying'e. Explosion investigation of the crustal structure in western Yunnan Province[J]. Acta Geophysica Sinica, 1986, 29(2): 133–144 (in Chinese with English abstract).
Hu Hongxiang, Gao Shiyu. The investigation of fine velocity structure of the basement layer of earth's crust in western Yunnan Region[J]. Earthquake Research in China, 1993, 19(4): 356–363 (in Chinese with English abstract).
Hu Hongxiang, Li Xueqing. Base Velocity Fine Structure of Menglian-Simao-Malong Section in the Yunnan Region[M]. In: Chen Yuntai, Kan Rongju, Teng Jiwen (Editors). Beijing: Maritime Press, 1994. 100-106 (in Chinese).
Hu Jiafu, Su Youjin, Zhu Xiongguan, Chen Yun. S-wave velocity and Poisson's ratio structure of crust in Yunnan and its implication[J]. Science in China (Series D: Earth Sciences), 2005, 48(2): 210–218. DOI:10.1360/03yd0062.
Huang Jinli, Zhao Dapeng, Zheng Sihua. Lithospheric structure and its relationship to seismic and volcanic activity in Southwest China[J]. Journal of Geophysical Research, 2002, 107(B10): 2255.
Huang Jing, Liu Xuejun, Su Youjin, Wang Baoshan. Imaging 3-D crustal P-wave velocity structure of western Yunnan with bulletin data[J]. Earthquake Science, 2012, 25(2): 151–160.
Li Yonghua, Wu Qingju, Tian Xiaobo, Zhang Ruiqing, Pan Jiatie, Zeng Rongsheng. Crustal structure in the Yunnan region determined by modeling receiver functions[J]. Chinese Journal of Geophysics, 2009, 52(1): 67–80 (in Chinese with English abstract).
Wang Chunyong, Mooney W.D., Wang Xili, Wu Jianping, Lou Hai, Wang Fei. Study on 3-D velocity structure of crust and upper mantle in Sichuan-Yunnan region, China[J]. Acta Seismologica Sinica, 2002, 24(1): 1–16 (in Chinese with English abstract).
Wang Fuyun, Zhang Xiankang, Chen Qifu, Chen Yong, Zhao Jinren, Yang Zhuoxin, Pan Suzhen. Fine tomographic inversion of the upper crust 3-D structure around Beijing[J]. Chinese Journal of Geophysics, 2005, 48(2): 359–366 (in Chinese with English abstract).
Wang Chunyong, Lou Hai, Wang Xili, Qin Jiazheng, Yang Runhai, Zhao Jinming. Crustal structure in Xiaojiang fault zone and its vicinity[J]. Earthquake Science, 2009, 22(4): 347–356. DOI:10.1007/s11589-009-0347-0.
Wu Jianping, Ming Yuehong, Wang Chunyong. Source mechanism of small-to-moderate earthquakes and tectonic stress field in Yunnan Province[J]. Acta Seismologica Sinica, 2004, 26(5): 457–465 (in Chinese with English abstract).
Zhang Xiankang, Zhao Jinren, Zhang Chengke, Ren Qingfang, Nie Wenying, Cheng Shuangxi, Pan Suzhen, Tang Zhouqiong. Crustal structure at the northeast side of the Pamirs[J]. Chinese Journal of Geophysics, 2002, 45(5): 665–671 (in Chinese with English abstract).
Zhang Zhongjie, Bai Zhiming, Wang Chunyong, Teng Jiwen, Lü Qingtian, Li Jiliang, Sun Shanxue, Wang Xinzheng. Crustal structure of Gondwana- and Yangtze-typed blocks: an example by wide-angle seismic profile from Menglian to Malong in western Yunnan[J]. Science in China (Series D: Earth Sciences), 2005a, 48(11): 1828–1836. DOI:10.1360/03yd0547.
Zhang Tianzhong, Wu Bater, Huang Yuan, Jiang Changsheng. Effect of the data recorded at nearby stations on earthquake relative location[J]. Chinese Journal of Geophysics, 2007, 50(4): 1123–1130 (in Chinese with English abstract).
Zhang Zhongjie, Deng Yangfan, Teng Jiwen, Wang Chunyong, Gao Rui, Chen Yun, Fan Weiming. An overview of the crustal structure of the Tibetan plateau after 35 years of deep seismic soundings[J]. Journal of Asian Earth Sciences, 2011, 40(4): 977–989. DOI:10.1016/j.jseaes.2010.03.010.
Zhu Yuanqing, Xia Congjun, Li Ping. A PTD method: a new method for determining focal depth and its application[J]. Seismological and Geomagnetic Observation and Research, 1997a, 18(3): 21–29 (in Chinese with English abstract).
Zhu Yuanqing, Zhao Zhonghe. Research on the new method to raise earthquake location accuracy[J]. Seismological and Geomagnetic Observation and Research, 1997b, 18(5): 59–67 (in Chinese with English abstract).
PLY - Maple Help
PLY (.ply) File Format
PLY file format
Description
• PLY (Polygon File Format or Stanford Triangle Format) is a file format for 3-D computer-aided design and modeling.
• This format has both text-based and binary variants.
• It represents geometric data as a collection of lines and polygons.
• The plottools[importplot] and plottools[exportplot] commands can be used for data exchange between this format and Maple 3-D plots. Both the text-based and binary variants are fully supported.
• The general-purpose commands Import and Export also support this format.
• With both the plottools[exportplot] and Export commands, you can specify whether the text-based or binary format is desired using the encoding option.
Examples
Import a geometric object from a PLY file as a 3-D plot.
> Import("example/gear.ply", base = datadir)
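For reference, a minimal text-based PLY file describing a single triangle (a hypothetical hand-written example, not produced by Maple) looks like this:
ply
format ascii 1.0
element vertex 3
property float x
property float y
property float z
element face 1
property list uchar int vertex_indices
end_header
0 0 0
1 0 0
0 1 0
3 0 1 2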
# Deep learning with Tensorflow: training with big data sets
Goal
I am trying to build a neural network that recognizes multiple labels within a given image. I started with a database composed of 1,800 images (each image is an array of shape (204,204,3)). I trained my model and concluded that the data wasn't enough to build a good model (with respect to the chosen metric). So I decided to apply a data augmentation technique in order to get more images. I managed to get 25,396 images (all of shape (204,204,3)) and stored them in arrays. I obtained (X, Y), where X contains the training examples (an array of shape (25396,204,204,3)) and Y the labels (an array of shape (25396,39); the number 39 refers to the possible labels in a given image).
Issues
My data (X, Y) weighs approximately 26 gigabytes. I can load and use it successfully. However, when I try to do manipulations (like permutations) I get a MemoryError in Python. Example:
1. I start Jupyter and successfully import my data (X, Y):
x = np.load('x.npy')
y = np.load('y.npy')
output: x is an np.array of shape (25396,204,204,3) and y is an np.array of shape (25396,39).
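(As a quick sanity check on the ~26GB figure, assuming float64 storage: 25396 x 204 x 204 x 3 elements x 8 bytes is roughly 25.4GB, so a single extra copy of X, which is exactly what fancy indexing such as X[permutation,:] creates, roughly doubles the memory footprint.)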
2. I divide my dataset into training and test sets using scikit-learn's built-in function train_test_split:
from sklearn.model_selection import train_test_split
X_train, X_valid, Y_train, Y_valid = train_test_split(x, y, test_size=0.3, random_state=42)
output
-------------testing size of different elements and topology:
-------------x size: (25396, 204, 204, 3)
-------------y size: (25396, 39)
-------------X_train size: (17777, 204, 204, 3)
-------------X_valid size: (7619, 204, 204, 3)
-------------Y_train size: (17777, 39)
-------------Y_valid size: (7619, 39)
3. I create a list of random batches extracted from (X, Y) and then iterate over the batches to complete the learning process for a given epoch; this operation is repeated in each epoch of training. Here is the function used to create the list of random batches:
import numpy as np
from math import floor

def random_mini_batches(X, Y, mini_batch_size=64, seed=0):
    """
    Creates a list of random minibatches from (X, Y).

    Arguments:
    X -- input data, of shape (number of examples, 204, 204, 3)
    Y -- label matrix, of shape (number of examples, 39)
    mini_batch_size -- size of the mini-batches, integer

    Returns:
    mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
    """
    np.random.seed(seed)
    m = X.shape[0]
    mini_batches = []

    # Step 1: Shuffle (X, Y)
    permutation = list(np.random.permutation(m))
    shuffled_X = X[permutation, :]    # NOTE: fancy indexing makes a full copy of X
    shuffled_Y = Y[permutation, :]

    # Step 2: Partition (shuffled_X, shuffled_Y), minus the end case.
    num_complete_minibatches = floor(m / mini_batch_size)
    for k in range(0, num_complete_minibatches):
        mini_batch_X = shuffled_X[k * mini_batch_size : (k + 1) * mini_batch_size, :]
        mini_batch_Y = shuffled_Y[k * mini_batch_size : (k + 1) * mini_batch_size, :]
        mini_batches.append((mini_batch_X, mini_batch_Y))

    # Handle the end case (last mini-batch < mini_batch_size)
    if m % mini_batch_size != 0:
        mini_batch_X = shuffled_X[num_complete_minibatches * mini_batch_size:, :]
        mini_batch_Y = shuffled_Y[num_complete_minibatches * mini_batch_size:, :]
        mini_batches.append((mini_batch_X, mini_batch_Y))

    return mini_batches
## 4. I create a loop of 4 iterations and test the random_mini_batches function in each iteration. At the end of each iteration I assign None to the list of mini-batches in order to free memory before recreating it in the next iteration. These lines of code work fine and I get no memory issues:
minibatch_size=32
seed=2
for i in range(4):
seed=seed+1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
minibatches=None
minibatches_valid=create_mini_batches(X_valid, Y_valid, minibatch_size)
print(i)
minibatches_valid=None
## 5. If I add iteration over the different batches, then I get a memory issue. In other words, if I run this code I get an error:
minibatch_size=32
seed=2
for i in range(4):
seed=seed+1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
#added code: iteration over mini_batches
for minibatch in minibatches:
print('batch training number ')
#end of added code
minibatches=None
minibatches_valid=create_mini_batches(X_valid, Y_valid, minibatch_size)
print(i)
minibatches_valid=None
MemoryError Traceback (most recent call last)
<ipython-input-13-9c1942cdf0bc> in <module>()
3 for i in range(4):
4 seed=seed+1
----> 5 minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
6
7 for minibatch in minibatches:
<ipython-input-3-2056fee14def> in random_mini_batches(X, Y, mini_batch_size, seed)
23
---> 24 shuffled_X = X[permutation,:]
25 shuffled_Y = Y[permutation,:]
26
MemoryError:
Does anyone know what the issue is with np.arrays? And why does simply adding a loop (iterating over the list of batches) result in a memory error?
Questions
1. Is it a good idea to load the whole dataset and then proceed to training? (I need to create random batches in each epoch, so I don't see how to do that if the data is not preloaded. You take random mini-batches from preloaded data, right?)
2. Are there any possible solutions?
shuffled_X = X[permutation,:] makes copies, so it will allocate a new array each time you do a permutation and blow up your memory.
If you don't have problems storing the whole dataset in memory, you should be fine if you create batches just by using random indices instead of shuffling the entire data matrix (np.random.choice is your friend).
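A minimal sketch of that index-based approach (a hypothetical helper, assuming X and Y are the in-memory arrays above):
import numpy as np

def index_mini_batches(X, Y, batch_size=32, seed=0):
    np.random.seed(seed)
    m = X.shape[0]
    order = np.random.permutation(m)        # permute indices, not the data itself
    for start in range(0, m, batch_size):
        idx = order[start:start + batch_size]
        # fancy indexing copies only one batch (~batch_size rows) at a time,
        # instead of duplicating the full ~25GB array
        yield X[idx], Y[idx]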
If your data fits into memory, then yes. You might want to learn what to do when that's not the case, though - I personally find the utilities in keras.preprocessing.image useful for that (at least for loading images).
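For example (an illustrative sketch rather than a full training script; the parameter values are arbitrary), Keras can batch and shuffle in-memory arrays for you:
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator()          # augmentation options could also go here
train_gen = datagen.flow(X_train, Y_train, batch_size=32, shuffle=True, seed=2)
# model.fit_generator(train_gen, steps_per_epoch=len(X_train) // 32, epochs=10)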
• "What if at each time I delete the array allocated" - try del Python keyword. I'm not entirely sure it would work though because of garbage collector. In general you probably shouldn't do that because some arrays you use layer are subarrays of the array you want to delete, so you may lose them as well. – Jakub Bartczuk May 29 '18 at 15:31
Representation Theory and Harmonic Analysis on Semisimple Lie Groups
Edited by: Paul J. Sally, Jr. and David A. Vogan, Jr.
Mathematical Surveys and Monographs
1989; 350 pp; hardcover
Volume: 31
ISBN-10: 0-8218-1526-1
ISBN-13: 978-0-8218-1526-7
List Price: US$124 Member Price: US$99.20
Order Code: SURV/31
This book brings together five papers that have been influential in the study of Lie groups. Though published more than 20 years ago, these papers made fundamental contributions that deserve much broader exposure. In addition, the subsequent literature that has subsumed these papers cannot replace the originality and vitality they contain. The editors have provided a brief introduction to each paper, as well as a synopsis of the major developments which have occurred in the area covered by each paper.
Included here are the doctoral theses of Arthur, Osborne, and Schmid. Arthur's thesis is closely related to Trombi's paper insofar as both deal with harmonic analysis on real semisimple Lie groups, and, in particular, analysis on the Schwartz space of Harish-Chandra. Arthur's thesis is concerned with the image under the Fourier transform of the Schwartz space of a semisimple Lie group of real rank one, while Trombi's paper provides an expository account of the harmonic analysis associated to the decomposition of the Schwartz space under the regular representation. In his thesis, Osborne extends the Atiyah-Bott fixed point theorem for elliptic complexes to obtain a fixed point formula for complexes that are not elliptic. Schmid proves a generalization of the Borel-Weil theorem concerning an explicit and geometric realization of the irreducible representations of a compact, connected semisimple Lie group. Langlands's fundamental paper provides a classification of irreducible, admissible representations of real reductive Lie groups.
# Solve:
Solve:
y=-3x-6
3x-y=8
Guest Apr 17, 2017
#1
Plug in -3x-6 for y: 3x-(-3x-6)=8. Simplifying, you should get 6x+6=8; solving that gives you 2/6, or 1/3, for x. Plug that back in for x in the first equation: y = -3(1/3)-6 = -7. Hope this helps
Guest Apr 17, 2017
#2
Thank you! :D
Guest Apr 17, 2017
#3
\(y = -3x - 6\\ -y = 3x + 6\\ 3x - y = 8\\ 3x + (3x + 6) = 8\\ 6x + 6 = 8\\ x +1 = \dfrac{4}{3}\\ x = \dfrac{1}{3}\\ y = -3(\dfrac{1}{3})-6 = -7\\ \therefore x = \dfrac{1}{3};y=-7\)
MaxWong Apr 18, 2017
## flumech2: How do you find the velocity of fluid flow at one end of a vertical pipe due to a pressure difference?
1. primeralph: Bernoulli's.
2. primeralph: h is the height difference.
3. primeralph: [hand-drawn sketch]
4. primeralph: [hand-drawn sketch]
5. primeralph: just the vertical height difference, irrespective of shape.
6. primeralph: post the full question.
7. primeralph: pressure difference in both systems.
8. agent0smith: iirc, the change in pressure is equal to ρgh and equal to 0.5ρv², you just need to find v: $\Delta P = \rho g h = \frac{1}{2} \rho v^2$
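For a concrete (hypothetical) case: with a vertical height difference of h = 2m of water, ΔP = ρgh = 1000 × 9.81 × 2 ≈ 19.6kPa, and v = √(2ΔP/ρ) = √(2gh) = √(2 × 9.81 × 2) ≈ 6.3m/s.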
• #### Correction to: Monosynaptic inference via finely-timed spikes
Correction to: Journal of Computational Neuroscience
in Journal of Computational Neuroscience on May 01, 2021 12:00 AM.
### Abstract
The principle of constraint-induced therapy is widely practiced in rehabilitation. In hemiplegic cerebral palsy (CP) with impaired contralateral corticospinal projection due to unilateral injury, function improves after imposing a temporary constraint on limbs from the less affected hemisphere. This type of partially-reversible impairment in motor control by early brain injury bears a resemblance to the experience-dependent plastic acquisition and modification of neuronal response selectivity in the visual cortex. Previously, such mechanism was modeled within the framework of BCM (Bienenstock-Cooper-Munro) theory, a rate-based synaptic modification theory. Here, we demonstrate a minimally complex yet sufficient neural network model which provides a fundamental explanation for inter-hemispheric competition using a simplified spike-based model of information transmission and plasticity. We emulate the restoration of function in hemiplegic CP by simulating the competition between cells of the ipsilateral and contralateral corticospinal tracts. We use a high-speed hardware neural simulation to provide realistic numbers of spikes and realistic magnitudes of synaptic modification. We demonstrate that the phenomenon of constraint-induced partial reversal of hemiplegia can be modeled by simplified neural descending tracts with 2 layers of spiking neurons and synapses with spike-timing-dependent plasticity (STDP). We further demonstrate that persistent hemiplegia following unilateral cortical inactivation or deprivation is predicted by the STDP-based model but is inconsistent with BCM model. Although our model is a highly simplified and limited representation of the corticospinal system, it offers an explanation of how constraint as an intervention can help the system to escape from a suboptimal solution. This is a display of an emergent phenomenon from the synaptic competition.
in Journal of Computational Neuroscience on May 01, 2021 12:00 AM.
• #### Editorial: new article type “perspective”
in Journal of Computational Neuroscience on May 01, 2021 12:00 AM.
### Abstract
Pain is a complex, multidimensional experience that involves dynamic interactions between sensory-discriminative and affective-emotional processes. Pain experiences have a high degree of variability depending on their context and prior anticipation. Viewing pain perception as a perceptual inference problem, we propose a predictive coding paradigm to characterize evoked and non-evoked pain. We record the local field potentials (LFPs) from the primary somatosensory cortex (S1) and the anterior cingulate cortex (ACC) of freely behaving rats—two regions known to encode the sensory-discriminative and affective-emotional aspects of pain, respectively. We further use predictive coding to investigate the temporal coordination of oscillatory activity between the S1 and ACC. Specifically, we develop a phenomenological predictive coding model to describe the macroscopic dynamics of bottom-up and top-down activity. Supported by recent experimental data, we also develop a biophysical neural mass model to describe the mesoscopic neural dynamics in the S1 and ACC populations, in both naive and chronic pain-treated animals. Our proposed predictive coding models not only replicate important experimental findings, but also provide new prediction about the impact of the model parameters on the physiological or behavioral read-out—thereby yielding mechanistic insight into the uncertainty of expectation, placebo or nocebo effect, and chronic pain.
in Journal of Computational Neuroscience on May 01, 2021 12:00 AM.
### Abstract
An important problem in systems neuroscience is to understand how information is communicated among brain regions, and it has been proposed that communication is mediated by neuronal oscillations, such as rhythms in the gamma band. We sought to investigate this idea by using a network model with two components, a source (sending) and a target (receiving) component, both built to resemble local populations in the cerebral cortex. To measure the effectiveness of communication, we used population-level correlations in spike times between the source and target. We found that after correcting for a response time that is independent of initial conditions, spike-time correlations between the source and target are significant, due in large measure to the alignment of firing events in their gamma rhythms. But, we also found that regular oscillations cannot produce the results observed in our model simulations of cortical neurons. Surprisingly, it is the irregularity of gamma rhythms, the absence of internal clocks, together with the malleability of these rhythms and their tendency to align with external pulses — features that are known to be present in gamma rhythms in the real cortex — that produced the results observed. These findings and the mechanistic explanations we offered are our primary results. Our secondary result is a mathematical relationship between correlations and the sizes of the samples used for their calculation. As improving technology enables recording simultaneously from increasing numbers of neurons, this relationship could be useful for interpreting results from experimental recordings.
in Journal of Computational Neuroscience on May 01, 2021 12:00 AM.
• #### Correction to: A modeling study of spinal motoneuron recruitment regulated by ionic channels during fictive locomotion
The authors find several printing errors in the equations in the final versions online and in the print proof. However, there were no such errors in the submitted proof.
in Journal of Computational Neuroscience on May 01, 2021 12:00 AM.
### Abstract
Excitatory synaptic signaling in cortical circuits is thought to be metabolically expensive. Two fundamental brain functions, learning and memory, are associated with long-term synaptic plasticity, but we know very little about energetics of these slow biophysical processes. This study investigates the energy requirement of information storing in plastic synapses for an extended version of BCM plasticity with a decay term, stochastic noise, and nonlinear dependence of neuron’s firing rate on synaptic current (adaptation). It is shown that synaptic weights in this model exhibit bistability. In order to analyze the system analytically, it is reduced to a simple dynamic mean-field for a population averaged plastic synaptic current. Next, using the concepts of nonequilibrium thermodynamics, we derive the energy rate (entropy production rate) for plastic synapses and a corresponding Fisher information for coding presynaptic input. That energy, which is of chemical origin, is primarily used for battling fluctuations in the synaptic weights and presynaptic firing rates, and it increases steeply with synaptic weights, and more uniformly though nonlinearly with presynaptic firing. At the onset of synaptic bistability, Fisher information and memory lifetime both increase sharply, by a few orders of magnitude, but the plasticity energy rate changes only mildly. This implies that a huge gain in the precision of stored information does not have to cost large amounts of metabolic energy, which suggests that synaptic information is not directly limited by energy consumption. Interestingly, for very weak synaptic noise, such a limit on synaptic coding accuracy is imposed instead by a derivative of the plasticity energy rate with respect to the mean presynaptic firing, and this relationship has a general character that is independent of the plasticity type. An estimate for primate neocortex reveals that a relative metabolic cost of BCM type synaptic plasticity, as a fraction of neuronal cost related to fast synaptic transmission and spiking, can vary from negligible to substantial, depending on the synaptic noise level and presynaptic firing.
in Journal of Computational Neuroscience on May 01, 2021 12:00 AM.
### Abstract
An inverse procedure is developed and tested to recover functional and structural information from global signals of brain activity. The method assumes a leaky integrate-and-fire model with excitatory and inhibitory neurons, coupled via a directed network. Neurons are endowed with a heterogenous current value, which sets their associated dynamical regime. By making use of a heterogenous mean-field approximation, the method seeks to reconstruct from global activity patterns the distribution of in-coming degrees, for both excitatory and inhibitory neurons, as well as the distribution of the assigned currents. The proposed inverse scheme is first validated against synthetic data. Then, time-lapse acquisitions of a zebrafish larva recorded with a two-photon light sheet microscope are used as an input to the reconstruction algorithm. A power law distribution of the in-coming connectivity of the excitatory neurons is found. Local degree distributions are also computed by segmenting the whole brain in sub-regions traced from an annotated atlas.
in Journal of Computational Neuroscience on May 01, 2021 12:00 AM.
### Abstract
Observations of finely-timed spike relationships in population recordings have been used to support partial reconstruction of neural microcircuit diagrams. In this approach, fine-timescale components of paired spike train interactions are isolated and subsequently attributed to synaptic parameters. Recent perturbation studies strengthen the case for such an inference, yet the complete set of measurements needed to calibrate statistical models is unavailable. To address this gap, we study features of pairwise spiking in a large-scale in vivo dataset where presynaptic neurons were explicitly decoupled from network activity by juxtacellular stimulation. We then construct biophysical models of paired spike trains to reproduce the observed phenomenology of in vivo monosynaptic interactions, including both fine-timescale spike-spike correlations and firing irregularity. A key characteristic of these models is that the paired neurons are coupled by rapidly-fluctuating background inputs. We quantify a monosynapse’s causal effect by comparing the postsynaptic train with its counterfactual, when the monosynapse is removed. Subsequently, we develop statistical techniques for estimating this causal effect from the pre- and post-synaptic spike trains. A particular focus is the justification and application of a nonparametric separation of timescale principle to implement synaptic inference. Using simulated data generated from the biophysical models, we characterize the regimes in which the estimators accurately identify the monosynaptic effect. A secondary goal is to initiate a critical exploration of neurostatistical assumptions in terms of biophysical mechanisms, particularly with regards to the challenging but arguably fundamental issue of fast, unobservable nonstationarities in background dynamics.
in Journal of Computational Neuroscience on May 01, 2021 12:00 AM.
• #### Deep neural network-based generalized sidelobe canceller for dual-channel far-field speech recognition
Publication date: Available online 19 April 2021
Source: Neural Networks
Author(s): Guanjun Li, Shan Liang, Shuai Nie, Wenju Liu, Zhanlei Yang
in Neural Networks on April 20, 2021 06:00 PM.
• #### In This Issue [This Week in PNAS]
ENGINEERING: When stimulated, each electrocyte of the electric eel can generate 150 mV via ion transportation through highly selective ion channels on cell membranes. "Printable power inspired by eels": printed energy storage devices can provide power to wearable devices. Lu Yang et al. used electric eels, which rely on sodium...
in PNAS on April 20, 2021 03:09 PM.
• #### Vessel network extraction and analysis of mouse pulmonary vasculature via X-ray micro-computed tomographic imaging
by Eric A. Chadwick, Takaya Suzuki, Michael G. George, David A. Romero, Cristina Amon, Thomas K. Waddell, Golnaz Karoubi, Aimy Bazylak
In this work, non-invasive high-spatial resolution three-dimensional (3D) X-ray micro-computed tomography (μCT) of healthy mouse lung vasculature is performed. Methodologies are presented for filtering, segmenting, and skeletonizing the collected 3D images. Novel methods for the removal of spurious branch artefacts from the skeletonized 3D image are introduced, and these novel methods involve a combination of distance transform gradients, diameter-length ratios, and the fast marching method (FMM). These new techniques of spurious branch removal result in the consistent removal of spurious branches without compromising the connectivity of the pulmonary circuit. Analysis of the filtered, skeletonized, and segmented 3D images is performed using a newly developed Vessel Network Extraction algorithm to fully characterize the morphology of the mouse pulmonary circuit. The removal of spurious branches from the skeletonized image results in an accurate representation of the pulmonary circuit with significantly less variability in vessel diameter and vessel length in each generation. The branching morphology of a full pulmonary circuit is characterized by the mean diameter per generation and number of vessels per generation. The methods presented in this paper lead to a significant improvement in the characterization of 3D vasculature imaging, allow for automatic separation of arteries and veins, and for the characterization of generations containing capillaries and intrapulmonary arteriovenous anastomoses (IPAVA).
in PLoS Computational Biology on April 20, 2021 02:00 PM.
• #### How accurately can we assess zoonotic risk?
by Michelle Wille, Jemma L. Geoghegan, Edward C. Holmes
Identifying the animal reservoirs from which zoonotic viruses will likely emerge is central to understanding the determinants of disease emergence. Accordingly, there has been an increase in studies attempting zoonotic “risk assessment.” Herein, we demonstrate that the virological data on which these analyses are conducted are incomplete, biased, and rapidly changing with ongoing virus discovery. Together, these shortcomings suggest that attempts to assess zoonotic risk using available virological data are likely to be inaccurate and largely only identify those host taxa that have been studied most extensively. We suggest that virus surveillance at the human–animal interface may be more productive.
in PLoS Biology on April 20, 2021 02:00 PM.
• #### Magnetic induction inspires a schematic theory for crosstalk-driven relaxation dynamics in cells
Author(s): Kevin R. Pilkiewicz and Michael L. Mayo
Establishing formal mathematical analogies between disparate physical systems can be a powerful tool, allowing for the well studied behavior of one system to be directly translated into predictions about the behavior of another that may be harder to probe. In this paper we lay the foundation for suc...
[Phys. Rev. E 103, 042417] Published Tue Apr 20, 2021
in Physical Review E: Biological physics on April 20, 2021 10:00 AM.
• #### Caring for People With Dementia Under COVID-19 Restrictions: A Pilot Study on Family Caregivers
Introduction
The present pilot study examined to what extent the COVID-19 lockdown affected the behavioral and psychological symptoms of dementia (BPSD) in people with dementia and worsened their family caregivers’ distress. The associations between changes in the BPSD of relatives with dementia (RwD) and in their caregivers’ distress, and sense of social and emotional loneliness, and resilience were also investigated.
Materials and Methods
Thirty-five caregivers of RwD attending formal healthcare services before the COVID-19 lockdown volunteered for the study, and were interviewed by phone during the lockdown. Caregivers completed the NeuroPsychiatric Inventory (NPI) to assess their care recipients’ BPSD and their own distress, and two questionnaires assessing their social and emotional loneliness, and their resilience.
Results
No clear changes emerged in either the BPSD of the RwD or the caregivers’ distress during lockdown compared with before the pandemic. Caregivers reporting more frequent and severe BPSD in their RwD before the lockdown scored higher on emotional loneliness. Those reporting more frequent and severe BPSD under lockdown, especially men and those taking care of RwD with more advanced dementia, scored higher on both social and emotional loneliness. A significant negative correlation also emerged between caregivers’ resilience and changes in their level of distress due to the lockdown, with female caregivers reporting greater resilience.
Discussion
Our findings offer preliminary insight on the effects of loneliness and resilience, and on the influence of individual characteristics on the experience and consequences of informal caregiving for RwD in times of restrictions imposed by a pandemic.
in Frontiers in Ageing Neuroscience on April 20, 2021 05:02 AM.
• #### What can human minimal videos tell us about dynamic recognition models?. (arXiv:2104.09447v1 [cs.CV])
In human vision objects and their parts can be visually recognized from purely spatial or purely temporal information but the mechanisms integrating space and time are poorly understood. Here we show that human visual recognition of objects and actions can be achieved by efficiently combining spatial and motion cues in configurations where each source on its own is insufficient for recognition. This analysis is obtained by identifying minimal videos: these are short and tiny video clips in which objects, parts, and actions can be reliably recognized, but any reduction in either space or time makes them unrecognizable. State-of-the-art deep networks for dynamic visual recognition cannot replicate human behavior in these configurations. This gap between humans and machines points to critical mechanisms in human dynamic vision that are lacking in current models.
in arXiv: Quantitative Biology: Neurons and Cognition on April 20, 2021 01:30 AM.
• #### Modeling the Nervous System as An Open Quantum System. (arXiv:2104.09424v1 [q-bio.NC])
We propose a neural model of a multi-spin interacting system simulating neurons that interact with each other through their surroundings. In this open-system approach, we consider the neurons coupled with their surroundings, including the axons connecting the neurons and the glial cells distributed around them. The surrounding environment is physically modeled as a collection of all kinds of vibrational modes. By solving the dynamics of the neurons, we analyze their collective behavior. The action potential is taken into account to simulate the environmental effect from the surroundings, which mimics the neural firing mechanism. We find that this model can generate random neuron-neuron interactions and is appropriate for describing the process of information transmission in the nervous system. The physical meaning behind the scene can also be explained.
in arXiv: Quantitative Biology: Neurons and Cognition on April 20, 2021 01:30 AM.
• #### Activity stabilization in a population model of working memory by sinusoidal and noisy inputs. (arXiv:2104.09218v1 [q-bio.NC])
According to mechanistic theories of working memory (WM), information is retained as persistent spiking activity of cortical neural networks. Yet, how this activity is related to changes in the oscillatory profile observed during WM tasks remains an open issue. We explore joint effects of input gamma-band oscillations and noise on the dynamics of several firing rate models of WM. The considered models have a metastable active regime, i.e. they demonstrate long-lasting transient post-stimulus firing rate elevation. We start from a single excitatory-inhibitory circuit and demonstrate that either gamma-band or noise input could stabilize the active regime, thus supporting WM retention. We then consider a system of two circuits with excitatory intercoupling. We find that fast coupling allows for better stabilization by common noise compared to independent noise and stronger amplification of this effect by in-phase gamma inputs compared to anti-phase inputs. Finally, we consider a multi-circuit system comprised of two clusters, each containing a group of circuits receiving a common noise input and a group of circuits receiving independent noise. Each cluster is associated with its own local gamma generator, so all its circuits receive gamma-band input in the same phase. We find that gamma-band input differentially stabilizes the activity of the "common-noise" groups compared to the "independent-noise" groups. If the inter-cluster connections are fast, this effect is more pronounced when the gamma-band input is delivered to the clusters in the same phase rather than in the anti-phase. Assuming that the common noise comes from a large-scale distributed WM representation, our results demonstrate that local gamma oscillations can stabilize the activity of the corresponding parts of this representation, with stronger effect for fast long-range connections and synchronized gamma oscillations.
in arXiv: Quantitative Biology: Neurons and Cognition on April 20, 2021 01:30 AM.
• #### A devil's advocate view on 'self-organized' brain criticality. (arXiv:2104.09188v1 [q-bio.NC])
Stationarity of the constituents of the body and of its functionalities is a basic requirement for life, being equivalent to survival in the first place. Assuming that the resting state activity of the brain serves essential functionalities, stationarity entails that the dynamics of the brain needs to be regulated on a time-averaged basis. The combination of recurrent and driving external inputs must therefore lead to a non-trivial stationary neural activity, a condition which is fulfilled for afferent signals of varying strengths only close to criticality. In this view, the benefits of working in the vicinity of a second-order phase transition, such as signal enhancements, are not the underlying evolutionary drivers, but side effects of the requirement to keep the brain functional in the first place. It is hence more appropriate to use the term 'self-regulated' in this context, instead of 'self-organized'.
in arXiv: Quantitative Biology: Neurons and Cognition on April 20, 2021 01:30 AM.
• #### An Uncertainty-aware Hierarchical Probabilistic Network for Early Prediction, Quantification and Segmentation of Pulmonary Tumour Growth. (arXiv:2104.08789v1 [cs.CV])
Early detection and quantification of tumour growth would help clinicians to prescribe more accurate treatments and provide better surgical planning. However, the multifactorial and heterogeneous nature of lung tumour progression hampers identification of growth patterns. In this study, we present a novel method based on a deep hierarchical generative and probabilistic framework that, according to radiological guidelines, predicts tumour growth, quantifies its size and provides a semantic appearance of the future nodule. Unlike previous deterministic solutions, the generative characteristic of our approach also allows us to estimate the uncertainty in the predictions, especially important for complex and doubtful cases. Results of evaluating this method on an independent test set reported a tumour growth balanced accuracy of 74%, a tumour growth size MAE of 1.77 mm and a tumour segmentation Dice score of 78%. These surpassed the performances of equivalent deterministic and alternative generative solutions (i.e. probabilistic U-Net, Bayesian test dropout and Pix2Pix GAN) confirming the suitability of our approach.
in arXiv: Computer Science: Neural and Evolutionary Computing on April 20, 2021 01:30 AM.
• #### Monte Carlo Elites: Quality-Diversity Selection as a Multi-Armed Bandit Problem. (arXiv:2104.08781v1 [cs.NE])
A core challenge of evolutionary search is the need to balance between exploration of the search space and exploitation of highly fit regions. Quality-diversity search has explicitly walked this tightrope between a population's diversity and its quality. This paper extends a popular quality-diversity search algorithm, MAP-Elites, by treating the selection of parents as a multi-armed bandit problem. Using variations of the upper-confidence bound to select parents from under-explored but potentially rewarding areas of the search space can accelerate the discovery of new regions as well as improve its archive's total quality. The paper tests an indirect measure of quality for parent selection: the survival rate of a parent's offspring. Results show that maintaining a balance between exploration and exploitation leads to the most diverse and high-quality set of solutions in three different testbeds.
in arXiv: Computer Science: Neural and Evolutionary Computing on April 20, 2021 01:30 AM.
• #### ARCH-Elites: Quality-Diversity for Urban Design. (arXiv:2104.08774v1 [cs.NE])
This paper introduces ARCH-Elites, a MAP-Elites implementation that can reconfigure large-scale urban layouts at real-world locations via a pre-trained surrogate model instead of costly simulations. In a series of experiments, we generate novel urban designs for two real-world locations in Boston, Massachusetts. Combining the exploration of a possibility space with real-time performance evaluation creates a powerful new paradigm for architectural generative design that can extract and articulate design intelligence.
in arXiv: Computer Science: Neural and Evolutionary Computing on April 20, 2021 01:30 AM.
• #### A Novel Non-population-based Meta-heuristic Optimizer Inspired by the Philosophy of Yi Jing. (arXiv:2104.08564v1 [cs.NE])
Drawing inspiration from the philosophy of Yi Jing, Yin-Yang pair optimization (YYPO) has been shown to achieve competitive performance in single-objective optimization. Besides, it has the advantage of low time complexity compared to other population-based optimization methods. As a conceptual extension of YYPO, we propose the novel Yi optimization (YI) algorithm as one of the best non-population-based optimizers. Incorporating both the harmony and reversal concepts of Yi Jing, we replace the Yin-Yang pair with a Yi-point, in which we utilize the Levy flight to update the solution and balance the effort of exploration and exploitation in the optimization process. As a conceptual prototype, we examine YI with the IEEE CEC 2017 benchmark and compare its performance with a Levy flight-based optimizer CV1.0, the state-of-the-art dynamical Yin-Yang pair optimization in the YYPO family, and a few classical optimizers. According to the experimental results, YI shows highly competitive performance while keeping low time complexity. Hence, the results of this work have implications for enhancing meta-heuristic optimizers using the philosophy of Yi Jing, which deserves research attention.
in arXiv: Computer Science: Neural and Evolutionary Computing on April 20, 2021 01:30 AM.
• #### Emergence of Lie symmetries in functional architectures learned by CNNs. (arXiv:2104.08537v1 [q-bio.NC])
In this paper we study the spontaneous development of symmetries in the early layers of a Convolutional Neural Network (CNN) during learning on natural images. Our architecture is built in such a way to mimic the early stages of biological visual systems. In particular, it contains a pre-filtering step $\ell^0$ defined in analogy with the Lateral Geniculate Nucleus (LGN). Moreover, the first convolutional layer is equipped with lateral connections defined as a propagation driven by a learned connectivity kernel, in analogy with the horizontal connectivity of the primary visual cortex (V1). The layer $\ell^0$ shows a rotational symmetric pattern well approximated by a Laplacian of Gaussian (LoG), which is a well-known model of the receptive profiles of LGN cells. The convolutional filters in the first layer can be approximated by Gabor functions, in agreement with well-established models for the profiles of simple cells in V1. We study the learned lateral connectivity kernel of this layer, showing the emergence of orientation selectivity w.r.t. the learned filters. We also examine the association fields induced by the learned kernel, and show qualitative and quantitative comparisons with known group-based models of V1 horizontal connectivity. These geometric properties arise spontaneously during the training of the CNN architecture, analogously to the emergence of symmetries in visual systems thanks to brain plasticity driven by external stimuli.
in arXiv: Computer Science: Neural and Evolutionary Computing on April 20, 2021 01:30 AM.
• #### Exact imposition of boundary conditions with distance functions in physics-informed deep neural networks. (arXiv:2104.08426v1 [math.NA])
In this paper, we introduce a new approach based on distance fields to exactly impose boundary conditions in physics-informed deep neural networks. The challenges in satisfying Dirichlet boundary conditions in meshfree and particle methods are well-known. This issue is also pertinent in the development of physics-informed neural networks (PINN) for the solution of partial differential equations. We introduce geometry-aware trial functions in artificial neural networks to improve the training in deep learning for partial differential equations. To this end, we use concepts from constructive solid geometry (R-functions) and generalized barycentric coordinates (mean value potential fields) to construct $\phi$, an approximate distance function to the boundary of a domain. To exactly impose homogeneous Dirichlet boundary conditions, the trial function is taken as $\phi$ multiplied by the PINN approximation, and its generalization via transfinite interpolation is used to a priori satisfy inhomogeneous Dirichlet (essential), Neumann (natural), and Robin boundary conditions on complex geometries. In doing so, we eliminate modeling error associated with the satisfaction of boundary conditions in a collocation method and ensure that kinematic admissibility is met pointwise in a Ritz method. We present numerical solutions for linear and nonlinear boundary-value problems over domains with affine and curved boundaries. Benchmark problems in 1D for linear elasticity, advection-diffusion, and beam bending; and in 2D for the Poisson equation, biharmonic equation, and the nonlinear Eikonal equation are considered. The approach extends to higher dimensions, and we showcase its use by solving a Poisson problem with homogeneous Dirichlet boundary conditions over the 4D hypercube. This study provides a pathway for meshfree analysis to be conducted on the exact geometry without domain discretization.
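The core construction can be illustrated in one dimension: the trial solution is written as a boundary interpolant plus the approximate distance function times the network output, so the Dirichlet data are satisfied exactly regardless of the network. The sketch below uses a placeholder function in place of a trained PINN; all names and values are illustrative.

```python
import numpy as np

def phi(x, a=0.0, b=1.0):
    # Simple distance-like function that vanishes on the boundary {a, b}.
    return (x - a) * (b - x)

def g(x, ua=0.0, ub=1.0, a=0.0, b=1.0):
    # Boundary interpolant carrying the Dirichlet data u(a)=ua, u(b)=ub.
    return ua + (ub - ua) * (x - a) / (b - a)

def network(x):
    # Placeholder for the PINN approximation.
    return np.sin(3.0 * x)

x = np.linspace(0.0, 1.0, 5)
u = g(x) + phi(x) * network(x)       # trial function: exact BCs by construction
print(u[0], u[-1])                   # prints the boundary values 0.0 and 1.0
```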
in arXiv: Computer Science: Neural and Evolutionary Computing on April 20, 2021 01:30 AM.
• #### I Only Have Eyes for You: The Impact of Masks On Convolutional-Based Facial Expression Recognition. (arXiv:2104.08353v1 [cs.CV])
The current COVID-19 pandemic has shown us that we are still facing unpredictable challenges in our society. The necessary constraint on social interactions has heavily affected how we envision and prepare the future of social robots and artificial agents in general. Adapting current affective perception models towards constrained perception based on the hard separation between facial perception and affective understanding would help us to provide robust systems. In this paper, we perform an in-depth analysis of how recognizing affect from persons with masks differs from general facial expression perception. We evaluate how the recently proposed FaceChannel adapts towards recognizing facial expressions from persons with masks. In our analysis, we evaluate different training and fine-tuning schemes to better understand the impact of masked facial expressions. We also perform specific feature-level visualization to demonstrate how the inherent capabilities of the FaceChannel to learn and combine facial features change in a constrained social interaction scenario.
in arXiv: Computer Science: Neural and Evolutionary Computing on April 20, 2021 01:30 AM.
• #### Deep Transformer Networks for Time Series Classification: The NPP Safety Case. (arXiv:2104.05448v2 [cs.LG] UPDATED)
A challenging part of dynamic probabilistic risk assessment for nuclear power plants is the need for large numbers of temporal simulations given various initiating events and branching conditions, from which representative feature extraction becomes complicated for subsequent applications. Artificial Intelligence techniques have been shown to be powerful tools in time-dependent sequential data processing to automatically extract and yield complex features from large data. An advanced temporal neural network referred to as the Transformer is used in a supervised learning fashion to model the time-dependent NPP simulation data and to infer whether a given sequence of events leads to core damage or not. The training and testing datasets for the Transformer are obtained by running 10,000 RELAP5-3D NPP blackout simulations with the list of variables obtained from the RAVEN software. Each simulation is classified as "OK" or "CORE DAMAGE" based on the consequence. The results show that the Transformer can learn the characteristics of the sequential data and yield promising performance with approximately 99% classification accuracy on the testing dataset.
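As a rough illustration of the kind of model the abstract describes, here is a minimal Transformer-encoder classifier for multivariate time series in PyTorch; the layer sizes, the mean-pooling over time, and the two-class head are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TimeSeriesTransformer(nn.Module):
    def __init__(self, n_features, d_model=64, nhead=4, num_layers=2, n_classes=2):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                      # x: (batch, seq_len, n_features)
        h = self.embed(x).transpose(0, 1)      # -> (seq_len, batch, d_model)
        h = self.encoder(h).mean(dim=0)        # average-pool over time steps
        return self.head(h)                    # logits: "OK" vs. "CORE DAMAGE"

logits = TimeSeriesTransformer(n_features=20)(torch.randn(8, 100, 20))
print(logits.shape)                            # torch.Size([8, 2])
```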
in arXiv: Computer Science: Neural and Evolutionary Computing on April 20, 2021 01:30 AM.
• #### Evolutionary Variational Optimization of Generative Models. (arXiv:2012.12294v2 [stat.ML] UPDATED)
We combine two popular optimization approaches to derive learning algorithms for generative models: variational optimization and evolutionary algorithms. The combination is realized for generative models with discrete latents by using truncated posteriors as the family of variational distributions. The variational parameters of truncated posteriors are sets of latent states. By interpreting these states as genomes of individuals and by using the variational lower bound to define a fitness, we can apply evolutionary algorithms to realize the variational loop. The used variational distributions are very flexible and we show that evolutionary algorithms can effectively and efficiently optimize the variational bound. Furthermore, the variational loop is generally applicable ("black box") with no analytical derivations required. To show general applicability, we apply the approach to three generative models (we use noisy-OR Bayes Nets, Binary Sparse Coding, and Spike-and-Slab Sparse Coding). To demonstrate effectiveness and efficiency of the novel variational approach, we use the standard competitive benchmarks of image denoising and inpainting. The benchmarks allow quantitative comparisons to a wide range of methods including probabilistic approaches, deep deterministic and generative networks, and non-local image processing methods. In the category of "zero-shot" learning (when only the corrupted image is used for training), we observed the evolutionary variational algorithm to significantly improve the state-of-the-art in many benchmark settings. For one well-known inpainting benchmark, we also observed state-of-the-art performance across all categories of algorithms although we only train on the corrupted image. In general, our investigations highlight the importance of research on optimization methods for generative models to achieve performance improvements.
in arXiv: Computer Science: Neural and Evolutionary Computing on April 20, 2021 01:30 AM.
• #### Beyond Rescorla-Wagner: the ups and downs of learning. (arXiv:2004.05069v2 [q-bio.QM] UPDATED)
We check the robustness of a recently proposed dynamical model of associative Pavlovian learning that extends the Rescorla-Wagner (RW) model in a natural way and predicts progressively damped oscillations in the response of the subjects. Using the data of two experiments, we compare the dynamical oscillatory model (DOM) with an oscillatory model made of the superposition of the RW learning curve and oscillations. Not only do data clearly show an oscillatory pattern, but they also favor the DOM over the added oscillation model, thus pointing out that these oscillations are the manifestation of an associative process. The latter is interpreted as the fact that subjects make predictions on trial outcomes more extended in time than in the RW model, but with more uncertainty.
in arXiv: Quantitative Biology: Neurons and Cognition on April 20, 2021 01:30 AM.
• #### Probabilistic Tools for the Analysis of Randomized Optimization Heuristics. (arXiv:1801.06733v5 [cs.DS] UPDATED)
This chapter collects several probabilistic tools that proved to be useful in the analysis of randomized search heuristics. This includes classic material like Markov, Chebyshev and Chernoff inequalities, but also lesser known topics like stochastic domination and coupling or Chernoff bounds for geometrically distributed random variables and for negatively correlated random variables. Most of the results presented here have appeared previously, some, however, only in recent conference publications. While the focus is on collecting tools for the analysis of randomized search heuristics, many of these may be useful as well in the analysis of classic randomized algorithms or discrete random structures.
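For reference, the standard textbook forms of three of the classical inequalities the chapter collects are reproduced below; the chapter itself also covers refinements (e.g. for geometrically distributed or negatively correlated variables) that are not restated here.

```latex
% Markov's inequality (X >= 0, a > 0):
\Pr[X \ge a] \le \frac{\mathbb{E}[X]}{a}
% Chebyshev's inequality (a > 0):
\Pr\bigl[\,|X - \mathbb{E}[X]| \ge a\,\bigr] \le \frac{\operatorname{Var}[X]}{a^{2}}
% Multiplicative Chernoff bounds for X = X_1 + \dots + X_n with independent
% X_i \in \{0,1\}, \mu = \mathbb{E}[X], and 0 < \delta < 1:
\Pr[X \ge (1+\delta)\mu] \le e^{-\delta^{2}\mu/3},
\qquad
\Pr[X \le (1-\delta)\mu] \le e^{-\delta^{2}\mu/2}
```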
in arXiv: Computer Science: Neural and Evolutionary Computing on April 20, 2021 01:30 AM.
• #### Author Correction: MINSTED fluorescence localization and nanoscopy
Nature Photonics, Published online: 20 April 2021; doi:10.1038/s41566-021-00816-9
Author Correction: MINSTED fluorescence localization and nanoscopy
in Nature Photonics on April 20, 2021 12:00 AM.
• #### Acoustic sensing with light
Nature Photonics, Published online: 20 April 2021; doi:10.1038/s41566-021-00804-z
Optical acoustic sensors have gained interest for use in photoacoustic imaging systems, but can they dethrone conventional piezoelectric sensors altogether?
in Nature Photonics on April 20, 2021 12:00 AM.
• #### How deep ocean-land coupling controls the generation of secondary microseism Love waves
Nature Communications, Published online: 20 April 2021; doi:10.1038/s41467-021-22591-5
The authors here study the origin of seismic Love waves induced by ocean waves. The study finds Love waves to originate along steep bathymetry and underlying geological interfaces, particularly sedimentary basins, yielding spatio-temporal information about ocean-land coupling in deep water.
in Nature Communications on April 20, 2021 12:00 AM.
• #### An integrative analysis of the age-associated multi-omic landscape across cancers
Nature Communications, Published online: 20 April 2021; doi:10.1038/s41467-021-22560-y
Our understanding of the age-related molecular alterations in cancer is still limited. Here, the authors perform a pan-cancer analysis of age-associated genomic, transcriptomic, and epigenetic alterations, linking age-related gene expression changes to age-related DNA methylation alterations.
in Nature Communications on April 20, 2021 12:00 AM.
• #### Epigenomic landscape of human colorectal cancer unveils an aberrant core of pan-cancer enhancers orchestrated by YAP/TAZ
Nature Communications, Published online: 20 April 2021; doi:10.1038/s41467-021-22544-y
The role of epigenetic deregulation in colorectal cancer (CRC) is not fully understood yet. Here the authors use patient-derived organoids, epigenomics and single-cell RNA-seq to reveal that YAP/TAZ are key regulators that bind to active enhancers in CRC and promote tumour survival.
in Nature Communications on April 20, 2021 12:00 AM.
• #### Suppression of pancreatic ductal adenocarcinoma growth and metastasis by fibrillar collagens produced selectively by tumor cells
Nature Communications, Published online: 20 April 2021; doi:10.1038/s41467-021-22490-9
Pancreatic ductal adenocarcinoma has a collagen-rich dense extracellular matrix that promotes malignancy of cancer cells. Here, the authors show that fibrillar collagen that is cancer-cell-derived, but not stroma-derived, selectively restrains tumor growth under control of their pC-proteinase, BMP1.
in Nature Communications on April 20, 2021 12:00 AM.
• #### SMART transfer method to directly compare the mechanical response of water-supported and free-standing ultrathin polymeric films
Nature Communications, Published online: 20 April 2021; doi:10.1038/s41467-021-22473-w
Intrinsic mechanical properties of sub-100 nm thin films are markedly difficult to obtain, yet an ever-growing necessity for emerging fields such as soft organic electronics. Here, the authors present a shear motion assisted transfer technique for fabricating free-standing sub-100 nm thin films and measuring their inherent structural–mechanical properties.
in Nature Communications on April 20, 2021 12:00 AM.
• #### ARIH1 signaling promotes anti-tumor immunity by targeting PD-L1 for proteasomal degradation
Nature Communications, Published online: 20 April 2021; doi:10.1038/s41467-021-22467-8
The regulation of PD-L1 via proteasomal degradation is unclear. Here, the authors show that EGFR inhibition activates GSK3α to promote PD-L1 phosphorylation, which leads to PD-L1 ubiquitination and proteasome-mediated degradation by the ARIH1 E3 ligase.
in Nature Communications on April 20, 2021 12:00 AM.
• #### Prostaglandin in the ventromedial hypothalamus regulates peripheral glucose metabolism
Nature Communications, Published online: 20 April 2021; doi:10.1038/s41467-021-22431-6
The ventromedial hypothalamus regulates systemic glucose metabolism. Here the authors show that cytosolic phospholipase A2 mediated phospholipid metabolism contributes to this regulation in healthy animals but exert deteriorating effects on glucose homeostasis under high-fat-diet feeding.
in Nature Communications on April 20, 2021 12:00 AM.
• #### Loss of α2-6 sialylation promotes the transformation of synovial fibroblasts into a pro-inflammatory phenotype in arthritis
Nature Communications, Published online: 20 April 2021; doi:10.1038/s41467-021-22365-z
Dysregulation of synovial fibroblasts is thought to be an important step in the pathogenesis of rheumatoid arthritis. Here the authors implicate α2-6 sialylation in this process by studying the glycome of these cells in patients and in a mouse model of inflammatory joint disease.
in Nature Communications on April 20, 2021 12:00 AM.
• #### Association of sleep duration in middle and old age with incidence of dementia
Nature Communications, Published online: 20 April 2021; doi:10.1038/s41467-021-22354-2
Sleep dysregulation has been linked to dementia, but it is unknown whether sleep duration earlier in life is associated with dementia risk. Here, the authors show higher dementia risk associated with short sleep duration (six hours or less) in a longitudinal study of middle and older age adults.
in Nature Communications on April 20, 2021 12:00 AM.
• #### Flying a helicopter on Mars: NASA's Ingenuity
Nature, Published online: 20 April 2021; doi:10.1038/d41586-021-01060-5
First powered flight on another planet opens the door for a new era of exploration
in Nature on April 20, 2021 12:00 AM.
• #### Elusive cancer cells dissected using developmental-biology toolkit
Nature, Published online: 20 April 2021; doi:10.1038/d41586-021-01029-4
Unpicking how cancer stem cells divide and spread could help to explain how tumours grow and evade treatments.
in Nature on April 20, 2021 12:00 AM.
• #### Sharing COVID data? Check these recommendations and guidelines
Nature, Published online: 20 April 2021; doi:10.1038/d41586-021-01028-5
Sharing COVID data? Check these recommendations and guidelines
in Nature on April 20, 2021 12:00 AM.
• #### China should track impact of pollution on health and the environment
Nature, Published online: 20 April 2021; doi:10.1038/d41586-021-01027-6
China should track impact of pollution on health and the environment
in Nature on April 20, 2021 12:00 AM.
• #### China’s publications: fewer but better
Nature, Published online: 20 April 2021; doi:10.1038/d41586-021-01026-7
China’s publications: fewer but better
in Nature on April 20, 2021 12:00 AM.
• #### Support Myanmar’s embattled scientists and students
Nature, Published online: 20 April 2021; doi:10.1038/d41586-021-01025-8
Support Myanmar’s embattled scientists and students
in Nature on April 20, 2021 12:00 AM.
• #### Blanket bans on fossil-fuel funds will entrench poverty
Nature, Published online: 20 April 2021; doi:10.1038/d41586-021-01020-z
Africa needs reliable energy infrastructure, not rich-world hypocrisy.
in Nature on April 20, 2021 12:00 AM.
• #### Cnr2 Is Important for Ribbon Synapse Maturation and Function in Hair Cells and Photoreceptors
The role of the cannabinoid receptor 2 (CNR2) is still poorly described in sensory epithelia. We found strong cnr2 expression in hair cells (HCs) of the inner ear and the lateral line (LL), a superficial sensory structure in fish. Next, we demonstrated that sensory synapses in HCs were severely perturbed in larvae lacking cnr2. Appearance and distribution of presynaptic ribbons and calcium channels (Cav1.3) were profoundly altered in mutant animals. Clustering of membrane-associated guanylate kinase (MAGUK) in post-synaptic densities (PSDs) was also heavily affected, suggesting a role for cnr2 in maintaining the sensory synapse. Furthermore, vesicular trafficking in HCs was strongly perturbed, suggesting a retrograde action of the endocannabinoid system (ECS) via cnr2 that modulates HC mechanotransduction. We found similar perturbations in retinal ribbon synapses. Finally, we showed that larval swimming behaviors after sound and light stimulations were significantly different in mutant animals. Thus, we propose that cnr2 is critical for the processing of sensory information in the developing larva.
in Frontiers in Molecular Neuroscience on April 20, 2021 12:00 AM.
• #### Functional 3-Dimensional Retinal Organoids: Technological Progress and Existing Challenges
Stem cell scientists have developed methods for the self-formation of artificial organs, often referred to as organoids. Organoids can be used as model systems for research in multiple biological disciplines. Yoshiki Sasai’s innovation for deriving mammalian retinal tissue from in vitro stem cells has had a large impact on the study of the biology of vision. New developments in retinal organoid technology provide avenues for in vitro models of human retinal diseases, studies of pathological mechanisms, and development of therapies for retinal degeneration, including electronic retinal implants and gene therapy. Moreover, these innovations have played key roles in establishing models for large-scale drug screening, studying the stages of retinal development, and providing a human model for personalized therapeutic approaches, like cell transplants to replace degenerated retinal cells. Here, we first discuss the importance of human retinal organoids to the biomedical sciences. Then, we review various functional features of retinal organoids that have been developed. Finally, we highlight the current limitations of retinal organoid technologies.
in Frontiers in Neuroscience: Neurodegeneration on April 20, 2021 12:00 AM.
• #### Vascular Senescence: A Potential Bridge Between Physiological Aging and Neurogenic Decline
The adult mammalian brain contains distinct neurogenic niches harboring populations of neural stem cells (NSCs) with the capacity to sustain the generation of specific subtypes of neurons during the lifetime. However, their ability to produce new progeny declines with age. The microenvironment of these specialized niches provides multiple cellular and molecular signals that condition NSC behavior and potential. Among the different niche components, vasculature has gained increasing interest over the years due to its undeniable role in NSC regulation and its therapeutic potential for neurogenesis enhancement. NSCs are uniquely positioned to receive both locally secreted factors and adhesion-mediated signals derived from vascular elements. Furthermore, studies of parabiosis indicate that NSCs are also exposed to blood-borne factors, sensing and responding to the systemic circulation. Both structural and functional alterations occur in the vasculature with age at the cellular level that can affect the proper extrinsic regulation of NSCs. Additionally, blood exchange experiments in heterochronic parabionts have revealed that age-associated changes in blood composition also contribute to adult neurogenesis impairment in the elderly. Although the mechanisms of vascular- or blood-derived signaling in aging are still not fully understood, a general feature of organismal aging is the accumulation of senescent cells, which act as sources of inflammatory and other detrimental signals that can negatively impact neighboring cells. This review focuses on the interactions between vascular senescence, circulating pro-senescence factors and the decrease in NSC potential during aging. Understanding the mechanisms of NSC dynamics in the aging brain could lead to new therapeutic approaches, potentially including senolysis, to target age-dependent brain decline.
in Frontiers in Neuroscience: Neurodegeneration on April 20, 2021 12:00 AM.
• #### Lipoprotein-Associated Phospholipase A2 Is a Risk Factor for Patients With Parkinson’s Disease
Objective
To explore the association between lipoprotein-related phospholipase A2 (Lp-PLA2) and the risk of Parkinson’s disease (PD).
Methods
A case-control study involving 58 hospitalized PD patients and 60 healthy controls was carried out. Serum Lp-PLA2 levels were measured. According to disease course and severity, PD patients were subdivided to analyze the clinical value of Lp-PLA2. The relationship between Lp-PLA2 and PD risk was analyzed by logistic regression. The diagnostic value of Lp-PLA2 in PD patients was investigated using receiver operating characteristic curves.
Results
Lp-PLA2 level was significantly higher in the PD patients compared with the controls, and was significantly and positively correlated with the Hoehn-Yahr (H&Y) stage. The serum Lp-PLA2 level and H&Y stage of PD patients with a longer disease course were significantly higher than those of patients with a shorter disease course. PD patients with milder conditions had significantly lower serum Lp-PLA2 levels than patients with severe conditions. Multivariable logistic regression analysis indicated that a higher Lp-PLA2 level was an independent risk factor for PD. Moreover, the area under the curve for Lp-PLA2 was 0.703, which was between those of homocysteine and serum amylase A.
Conclusion
To our knowledge, this is the first study to show that increased level of Lp-PLA2 is associated with the risk of PD. Lp-PLA2 may be used for early detection of PD, and provides an effective intervention target for clinical treatment of PD.
in Frontiers in Neuroscience: Neurodegeneration on April 20, 2021 12:00 AM.
• #### Voxel-Wise Feature Selection Method for CNN Binary Classification of Neuroimaging Data
Voxel-wise group analysis is presented as a novel feature selection (FS) technique for a deep learning (DL) approach to brain imaging data classification. The method, based on a voxel-wise two-sample t-test and denoted as t-masking, is integrated into the learning procedure as a data-driven FS strategy. t-Masking has been introduced in a convolutional neural network (CNN) for the test bench of binary classification of very-mild Alzheimer’s disease vs. normal control, using a structural magnetic resonance imaging dataset of 180 subjects. To better characterize the t-masking impact on CNN classification performance, six different experimental configurations were designed. Moreover, the performances of the presented FS method were compared to those of similar machine learning (ML) models that relied on different FS approaches. Overall, our results show an enhancement of about 6% in performance when t-masking was applied. Moreover, the reported performance enhancement was higher with respect to similar FS-based ML models. In addition, evaluation of the impact of t-masking on various selection rates has been provided, serving as a useful characterization for future insights. The proposed approach is also highly generalizable to other DL architectures, neuroimaging modalities, and brain pathologies.
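The t-masking step itself is conceptually simple; below is a hedged sketch of a voxel-wise two-sample t-test used to keep only discriminative voxels before classification. The p-value threshold, array shapes, and synthetic data are illustrative assumptions, and the paper integrates the mask into the CNN training procedure rather than applying it once as done here.

```python
import numpy as np
from scipy import stats

def t_mask(group_a, group_b, p_threshold=0.01):
    # group_a, group_b: (n_subjects, n_voxels) arrays; returns a boolean
    # mask keeping voxels whose group difference passes the threshold.
    _, p = stats.ttest_ind(group_a, group_b, axis=0)
    return p < p_threshold

rng = np.random.default_rng(0)
controls = rng.normal(0.0, 1.0, (30, 5000))
patients = rng.normal(0.2, 1.0, (30, 5000))
mask = t_mask(controls, patients)
print(mask.sum(), "of", mask.size, "voxels retained")
```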
in Frontiers in Neuroscience: Brain Imaging Methods on April 20, 2021 12:00 AM.
• #### Ventral Striatal Activation During Reward Anticipation of Different Reward Probabilities in Adolescents and Adults
in Frontiers in Human Neuroscience on April 20, 2021 12:00 AM.
• #### Lost in Translation: Simple Steps in Experimental Design of Neurorehabilitation-Based Research Interventions to Promote Motor Recovery Post-Stroke
Stroke continues to be a leading cause of disability. Basic neurorehabilitation research is necessary to inform the neuropathophysiology of impaired motor control, and to develop targeted interventions with potential to remediate disability post-stroke. Despite knowledge gained from basic research studies, the effectiveness of research-based interventions for reducing motor impairment has been no greater than standard of practice interventions. In this perspective, we offer suggestions for overcoming translational barriers integral to experimental design, to augment traditional protocols, and re-route the rehabilitation trajectory toward recovery and away from compensation. First, we suggest that researchers consider modifying task practice schedules to focus on key aspects of movement quality, while minimizing the appearance of compensatory behaviors. Second, we suggest that researchers supplement primary outcome measures with secondary measures that capture emerging maladaptive compensations at other segments or joints. Third, we offer suggestions about how to maximize participant engagement, self-direction, and motivation, by embedding the task into a meaningful context, a strategy more likely to enable goal-action coupling, associated with improved neuro-motor control and learning. Finally, we remind the reader that motor impairment post-stroke is a multidimensional problem that involves central and peripheral sensorimotor systems, likely influenced by chronicity of stroke. Thus, stroke chronicity should be given special consideration for both participant recruitment and subsequent data analyses. We hope that future research endeavors will consider these suggestions in the design of the next generation of intervention studies in neurorehabilitation, to improve translation of research advances to improved participation and quality of life for stroke survivors.
in Frontiers in Human Neuroscience on April 20, 2021 12:00 AM.
• #### BCCT: A GUI Toolkit for Brain Structural Covariance Connectivity Analysis on MATLAB
The brain structural covariance network (SCN) can delineate synchronized brain alterations over long time periods. It has been used in research on cognition and neuropsychiatric disorders. Recently, causal analysis of structural covariance networks (CaSCN), the winner-take-all and cortex–subcortex covariance network (WTA-CSSCN), and modulation analysis of structural covariance networks (MOD-SCN) have expanded the technical breadth of SCN. However, the lack of user-friendly software has limited the further application of SCN in research. In this work, we developed a graphical user interface (GUI) toolkit for brain structural covariance connectivity on the MATLAB platform. The software contains the analysis of SCN, CaSCN, MOD-SCN, and WTA-CSSCN. Group comparison and result-display modules are also included in the software. Furthermore, a simple demonstration on a demo dataset is presented. We hope that the toolkit will help researchers, especially clinical researchers, to carry out brain covariance connectivity analysis more easily in future work.
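As background for readers unfamiliar with the underlying quantity, a basic structural covariance network is simply the region-by-region correlation of a morphometric measure across subjects; the toy sketch below illustrates that computation (the subject and region counts are placeholders, and this is not code from the BCCT toolkit).

```python
import numpy as np

rng = np.random.default_rng(1)
gm = rng.normal(size=(100, 90))          # 100 subjects x 90 regional grey matter values
scn = np.corrcoef(gm, rowvar=False)      # 90 x 90 structural covariance (correlation) matrix
np.fill_diagonal(scn, 0.0)               # drop self-connections
print(scn.shape)
```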
in Frontiers in Human Neuroscience on April 20, 2021 12:00 AM.
• #### Pathophysiological Bases of Comorbidity in Migraine
Although it is commonly accepted that migraine is a disorder of the nervous system with a prominent genetic basis, it is comorbid with a plethora of medical conditions. Several studies have found bidirectional comorbidity between migraine and different disorders including neurological, psychiatric, cardio- and cerebrovascular, gastrointestinal, metaboloendocrine, and immunological conditions. Each of these has its own genetic load and shares some common characteristics with migraine. The bidirectional mechanisms that are likely to underlie this extensive comorbidity between migraine and other diseases are manifold. Comorbid pathologies can induce and promote thalamocortical network dysexcitability, a multi-organ transient or persistent pro-inflammatory state, and disproportionate energetic needs in a variable combination, which in turn may be causative mechanisms of the activation of an ample defensive system which includes the trigeminovascular system in conjunction with the neuroendocrine hypothalamic system. This strategy is designed to maintain brain homeostasis by regulating homeostatic needs, such as normal subcortico-cortical excitability, energy balance, osmoregulation, and emotional response. In this light, the treatment of migraine should always involve a multidisciplinary approach, aimed at identifying and, if necessary, eliminating possible risk and comorbidity factors.
in Frontiers in Human Neuroscience on April 20, 2021 12:00 AM.
• #### Application of the Mirror Technique for Three-Dimensional Electron Microscopy of Neurochemically Identified GABA-ergic Dendrites
In the nervous system, synaptic input arrives chiefly on dendrites, and their type and distribution have been assumed to be pivotal in signal integration. We have developed an immunohistochemistry (IH)-correlated electron microscopy (EM) method – the "mirror" technique – by which synaptic input to entire dendrites of neurochemically identified interneurons (INs) can be mapped while preserving high-fidelity tissue ultrastructure. Hence, this approach allows quantitative assessment of morphometric parameters of synaptic inputs along the whole length of dendrites originating from the parent soma. The method exploits the fact that adjoining sections have truncated or cut cell bodies which appear on the common surfaces in a mirror fashion. In one of the sections, the histochemical marker of the GABAergic subtype, calbindin, was revealed in cell bodies, whereas in the other section the remaining parts of the very same cell bodies were subjected to serial-section EM to trace and reconstruct the synaptology of entire dendrites. Here, we provide exemplary data on the synaptic coverage of two dendrites belonging to the same calbindin-D28K immunopositive IN and determine the spatial distribution of asymmetric and symmetric synapses, surface area and volume of the presynaptic boutons, morphometric parameters of synaptic vesicles, and area extent of the active zones.
in Frontiers in Neuroanatomy on April 20, 2021 12:00 AM.
• #### Being the Family Caregiver of a Patient With Dementia During the Coronavirus Disease 2019 Lockdown
Background: Family caregivers of patients with dementia are at high risk of stress and burden, and quarantine due to the coronavirus disease 2019 (COVID-19) pandemic may have increased the risk of psychological disturbances in this population. The current study was carried out during the national lockdown declared in March 2020 by the Italian government as a containment measure of the first wave of the coronavirus pandemic and is the first nationwide survey on the impact of COVID-19 lockdown on the mental health of dementia informal caregivers.
Methods: Eighty-seven dementia centers evenly distributed on the Italian territory enrolled 4,710 caregiver–patient pairs. Caregivers underwent a telephone interview assessing classical symptoms of caregiver stress and concern for the consequences of COVID-19 infection on patient’s health. We calculated prevalence of symptoms and regressed them on various potential stress risk factors: caregivers’ sociodemographic characteristics and lifestyle, patients’ clinical features, and lockdown-related elements, like discontinuity in medical care.
Results: Approximately 90% of caregivers reported at least one symptom of stress, and nearly 30% reported four or more symptoms. The most prevalent symptoms were concern for consequences of COVID-19 on patient’s health (75%) and anxiety (46%). The main risk factors for stress were identified as a conflicting relationship with the patient and discontinuity in assistance, but caregiver’s female sex, younger age, lower education, and cohabitation with the patient also had an impact. Availability of help from institutions or private individuals showed a protective effect against sense of abandonment but a detrimental effect on concern about the risk for the patient to contract COVID-19. The only protective factor was mild dementia severity, which was associated with a lower risk of feeling isolated and abandoned; type of dementia, on the other hand, did not affect stress risk.
Conclusion: Our results demonstrate the large prevalence of stress in family caregivers of patients with dementia during the COVID-19 pandemic and have identified both caregivers and situations at a higher risk of stress, which should be taken into account in the planning of interventions in support of quarantined families and patients.
in Frontiers in Ageing Neuroscience on April 20, 2021 12:00 AM.
• #### Co-registration Analysis of Fluorodopa and Fluorodeoxyglucose Positron Emission Tomography for Differentiating Multiple System Atrophy Parkinsonism Type From Parkinson's Disease
It is difficult to differentiate between Parkinson's disease and multiple system atrophy parkinsonian subtype (MSA-P) because of the overlap of their signs and symptoms. Enormous efforts have been made to develop positron emission tomography (PET) imaging to differentiate these diseases. This study aimed to investigate the co-registration analysis of 18F-fluorodopa and 18F-fluorodeoxyglucose PET images to visualize the difference between Parkinson's disease and MSA-P. We enrolled 29 Parkinson's disease patients, 28 MSA-P patients, and 10 healthy controls, who underwent both 18F-fluorodopa and 18F-fluorodeoxyglucose PET scans. Patients with Parkinson's disease and MSA-P exhibited reduced bilateral striatal 18F-fluorodopa uptake (p < 0.05, vs. healthy controls). Both regional specific uptake ratio analysis and statistical parametric mapping analysis of 18F-fluorodeoxyglucose PET revealed hypometabolism in the bilateral putamen of MSA-P patients and hypermetabolism in the bilateral putamen of Parkinson's disease patients. There was a significant positive correlation between 18F-fluorodeoxyglucose uptake and 18F-fluorodopa uptake in the contralateral posterior putamen of MSA-P patients (rs = 0.558, p = 0.002). Both 18F-fluorodeoxyglucose and 18F-fluorodopa PET images showed that the striatum was rabbit-shaped in the healthy control group segmentation analysis. A defective rabbit-shaped striatum was observed in the 18F-fluorodopa PET image of patients with Parkinson's disease and MSA-P. In the segmentation analysis of the 18F-fluorodeoxyglucose PET image, an intact rabbit-shaped striatum was observed in Parkinson's disease patients, whereas a defective rabbit-shaped striatum was observed in MSA-P patients. These findings suggest that there were significant differences in the co-registration analysis of 18F-fluorodeoxyglucose and 18F-fluorodopa PET images, which could be used in individual analyses to differentiate Parkinson's disease from MSA-P.
in Frontiers in Ageing Neuroscience on April 20, 2021 12:00 AM.
• #### Elevated Neutrophil to Lymphocyte Ratio Associated With Increased Risk of Recurrent Vascular Events in Older Minor Stroke or TIA Patients
Background
The risk of recurrent stroke following a minor stroke or transient ischemic attack (TIA) is high, when inflammation might play an important role. We aimed to evaluate the value of neutrophil to lymphocyte ratio (NLR) in predicting composite cardiovascular events in patients with minor stroke and TIA.
Methods
Consecutive patients with acute minor stroke or TIA admitted within 24 h of symptom onset during a 5-year period in a prospective stroke registry were analyzed. We calculated the NLR by dividing the absolute neutrophil count by the absolute lymphocyte count measured within 24 h of admission. An NLR ≥ the 4th quartile was defined as a high NLR. A composite outcome was defined as stroke, acute coronary syndrome or vascular death within 1 year. We investigated associations between NLR and the composite outcome in univariate and multivariate analyses, among all patients and in those aged over 60 years (i.e., older patients).
Results
Overall, 841 patients (median age 68 years; 60.4% males) were recruited. No significant independent association was found between NLR and the composite outcome in multivariate analysis in the overall cohort. Among the 612 older patients (median age 73 years; 59.2% males), the median NLR was 2.76 (interquartile range 1.96−4.00) and 148 (24.2%) patients had high NLR. The composite outcome occurred in 77 (12.6%) older patients, who were more likely to have a high NLR (39.0% versus 22.1%; p = 0.001) than those without a composite outcome. In multivariate logistic regression, high NLR (adjusted odds ratio 2.00; 95% confidence interval 1.07−3.75; p = 0.031) was independently associated with the composite outcome in older patients.
Conclusion
In older (aged ≥60 years) patients with acute minor stroke or TIA, a higher NLR, a marker of systemic inflammation that can be easily obtained in routine blood tests, is an independent predictor of subsequent cardiovascular events.
in Frontiers in Ageing Neuroscience on April 20, 2021 12:00 AM.
• #### Potential Fluid Biomarkers and a Prediction Model for Better Recognition Between Multiple System Atrophy-Cerebellar Type and Spinocerebellar Ataxia
Objective
This study screened potential fluid biomarkers and developed a prediction model based on the easily obtained information at initial inspection to identify ataxia patients more likely to have multiple system atrophy-cerebellar type (MSA-C).
Methods
We established a retrospective cohort with 125 ataxia patients from southwest China between April 2018 and June 2020. Demographic and laboratory variables obtained at the time of hospital admission were screened using Least Absolute Shrinkage and Selection Operator (LASSO) regression and logistic regression to construct a diagnosis score. The receiver operating characteristic (ROC) and decision curve analyses were performed to assess the accuracy and net benefit of the model. Also, independent validation using 25 additional ataxia patients was carried out to verify the model efficiency. Then the model was translated into a visual and operable web application using the R studio and Shiny package.
Results
From 47 indicators, five variables were selected and integrated into the prediction model, including the age of onset (AO), direct bilirubin (DBIL), aspartate aminotransferase (AST), eGFR, and synuclein-alpha. The prediction model exhibited an area under the curve (AUC) of 0.929 for the training cohort and an AUC of 0.917 for the testing cohort. The decision curve analysis (DCA) plot displayed a good net benefit for this model, and external validation confirmed its reliability. The model also was translated into a web application that is freely available to the public.
Conclusion
The prediction model that was developed based on laboratory and demographic variables obtained from ataxia patients at admission to the hospital might help improve the ability to differentiate MSA-C from spinocerebellar ataxia clinically.
in Frontiers in Ageing Neuroscience on April 20, 2021 12:00 AM.
• #### Therapeutic inhibition of keratinocyte TRPV3 sensory channel by local anesthetic dyclonine
The multimodal sensory channel transient receptor potential vanilloid-3 (TRPV3) is expressed in epidermal keratinocytes and implicated in chronic pruritus, allergy, and inflammation-related skin disorders. Gain-of-function mutations of TRPV3 cause hair growth disorders in mice and Olmsted Syndrome in human. We here report that mouse and human TRPV3 channel is targeted by the clinical medication dyclonine that exerts a potent inhibitory effect. Accordingly, dyclonine rescued cell death caused by gain-of-function TRPV3 mutations and suppressed pruritus symptoms in vivo in mouse model. At the single-channel level, dyclonine inhibited TRPV3 open probability but not the unitary conductance. By molecular simulations and mutagenesis, we further uncovered key residues in TRPV3 pore region that could toggle the inhibitory efficiency of dyclonine. The functional and mechanistic insights obtained on dyclonine-TRPV3 interaction will help to conceive updated therapeutics for skin inflammation.
in eLife on April 20, 2021 12:00 AM.
• #### Bacterial-fungal interactions in the neonatal gut influence asthma outcomes later in life
Bacterial members of the infant gut microbiota and bacterial-derived short-chain fatty acids (SCFAs) have been shown to be protective against childhood asthma, but a role for the fungal microbiota in asthma etiology remains poorly defined. We recently reported an association between overgrowth of the yeast Pichia kudriavzevii in the gut microbiota of Ecuadorian infants and increased asthma risk. In the present study, we replicated these findings in Canadian infants and investigated a causal association between early life gut fungal dysbiosis and later allergic airway disease (AAD). In a mouse model, we demonstrate that overgrowth of P. kudriavzevii within the neonatal gut exacerbates features of type-2 and -17 inflammation during AAD later in life. We further show that P. kudriavzevii growth and adherence to gut epithelial cells are altered by SCFAs. Collectively, our results underscore the potential for leveraging inter-kingdom interactions when designing putative microbiota-based asthma therapeutics.
in eLife on April 20, 2021 12:00 AM.
• #### Insights from a Pan India Sero-Epidemiological survey (Phenome-India Cohort) for SARS-CoV2
To understand the spread of SARS-CoV2, in August and September 2020 the Council of Scientific and Industrial Research (India) conducted a sero-survey across its constituent laboratories and centers across India. Of 10,427 volunteers, 1058 (10.14%) tested positive for SARS CoV2 anti-nucleocapsid (anti-NC) antibodies, 95% of which had surrogate neutralization activity. Three-fourths of these recalled no symptoms. Repeat serology tests at 3 (n=607) and 6 (n=175) months showed stable anti-NC antibodies but declining neutralization activity. Local sero-positivity was higher in densely populated cities and was inversely correlated with a 30-day change in regional test positivity rates (TPR). Regional seropositivity above 10% was associated with declining TPR. Personal factors associated with higher odds of sero-positivity were high-exposure work (Odds Ratio, 95% CI, p value; 2.23, 1.92–2.59, <0.0001), use of public transport (1.79, 1.43–2.24, <0.0001), not smoking (1.52, 1.16–1.99, 0.0257), non-vegetarian diet (1.67, 1.41–1.99, <0.0001), and B blood group (1.36, 1.15–1.61, 0.001).
in eLife on April 20, 2021 12:00 AM.
• #### Nutrient dominance governs the assembly of microbial communities in mixed nutrient environments
A major open question in microbial community ecology is whether we can predict how the components of a diet collectively determine the taxonomic composition of microbial communities. Motivated by this challenge, we investigate whether communities assembled in pairs of nutrients can be predicted from those assembled in every single nutrient alone. We find that although the null, naturally additive model generally predicts well the family-level community composition, there exist systematic deviations from the additive predictions that reflect generic patterns of nutrient dominance at the family level. Pairs of more-similar nutrients (e.g. two sugars) are on average more additive than pairs of more dissimilar nutrients (one sugar–one organic acid). Furthermore, sugar–acid communities are generally more similar to the sugar than the acid community, which may be explained by family-level asymmetries in nutrient benefits. Overall, our results suggest that regularities in how nutrients interact may help predict community responses to dietary changes.
in eLife on April 20, 2021 12:00 AM.
• #### Sensitivity of ID NOW and RT-PCR for detection of SARS-CoV-2 in an ambulatory population
Diagnosis of SARS-CoV-2 (COVID-19) requires confirmation by Reverse-Transcription Polymerase Chain Reaction (RT-PCR). Abbott ID NOW provides fast results but has been criticized for low sensitivity. Here we determine the sensitivity of ID NOW in an ambulatory population presenting for testing. The study enrolled 785 symptomatic patients, 21 of whom were positive by both ID NOW and RT-PCR, and 2 only by RT-PCR. All 189 asymptomatic patients tested negative. The positive percent agreement between the ID NOW assay and the RT-PCR assay was 91.3%, and negative percent agreement was 100%. The results from the current study were included into a larger systematic review of literature where at least 20 subjects were simultaneously tested using ID NOW and RT-PCR. The overall sensitivity for ID NOW assay was calculated at 84% (95% CI 55- 96%) and had the highest correlation to RT-PCR at viral loads most likely to be associated with transmissible infections.
in eLife on April 20, 2021 12:00 AM.
• #### Plant-associated CO2 mediates long-distance host location and foraging behaviour of a root herbivore
Insect herbivores use different cues to locate host plants. The importance of CO2 in this context is not well understood. We manipulated CO2 perception in western corn rootworm (WCR) larvae through RNAi and studied how CO2 perception impacts their interaction with their host plant. The expression of a carbon dioxide receptor, DvvGr2, is specifically required for dose-dependent larval responses to CO2. Silencing CO2 perception or scrubbing plant-associated CO2 has no effect on the ability of WCR larvae to locate host plants at short distances (<9 cm), but impairs host location at greater distances. WCR larvae preferentially orient and prefer plants that grow in well-fertilized soils compared to plants that grow in nutrient-poor soils, a behaviour that has direct consequences for larval growth and depends on the ability of the larvae to perceive root-emitted CO2. This study unravels how CO2 can mediate plant–herbivore interactions by serving as a distance-dependent host location cue.
in eLife on April 20, 2021 12:00 AM.
• #### Characterization and functional analysis of cathelicidin-MH, a novel frog-derived peptide with anti-septicemic properties
Antimicrobial peptides form part of the innate immune response and play a vital role in host defense against pathogens. Here we report a new antimicrobial peptide belonging to the cathelicidin family, cathelicidin-MH (cath-MH), from the skin of the Microhyla heymonsivogt frog. Cath-MH has a single α-helical structure in membrane-mimetic environments and is antimicrobial against fungi and bacteria, especially Gram-negative bacteria. In contrast to other cathelicidins, cath-MH suppresses coagulation by affecting the enzymatic activities of tissue plasminogen activator, plasmin, β-tryptase, elastase, thrombin, and chymase. Cath-MH protects against lipopolysaccharide (LPS)- and cecal ligation and puncture-induced sepsis, effectively ameliorating multiorgan pathology and inflammatory cytokine levels through its antimicrobial, LPS-neutralizing, and coagulation-suppressing effects, as well as suppression of MAPK signaling. Taken together, these data suggest that cath-MH is an attractive candidate therapeutic agent for the treatment of septic shock.
in eLife on April 20, 2021 12:00 AM.
• #### A cross-sectional study of functional and metabolic changes during aging through the lifespan in male mice.
Aging is associated with distinct phenotypical, physiological, and functional changes, leading to disease and death. The progression of aging-related traits varies widely among individuals, influenced by their environment, lifestyle, and genetics. In this study, we conducted physiologic and functional tests cross-sectionally throughout the entire lifespan of male C57BL/6N mice. In parallel, metabolomics analyses in serum, brain, liver, heart, and skeletal muscle were also performed to identify signatures associated with frailty and age-dependent functional decline. Our findings indicate that declines in gait speed as a function of age and frailty are associated with a dramatic increase in the energetic cost of physical activity and decreases in working capacity. Aging and functional decline prompt organs to rewire their metabolism and substrate selection towards redox-related pathways, mainly in the liver and heart. Collectively, the data provide a framework to further understand and characterize processes of aging at the individual organism and organ levels.
in eLife on April 20, 2021 12:00 AM.
• #### In vitro proteasome processing of neo-splicetopes does not predict their presentation in vivo
Proteasome-catalyzed peptide splicing (PCPS) of cancer-driving antigens could generate attractive neoepitopes to be targeted by TCR-based adoptive T cell therapy. Based on a spliced peptide prediction algorithm, TCRs were generated against putative KRASG12V- and RAC2P29L-derived neo-splicetopes with high HLA-A*02:01 binding affinity. TCRs generated in mice with a diverse human TCR repertoire specifically recognized the respective target peptides with high efficacy. However, we failed to detect any neo-splicetope-specific T cell response when testing the in vivo neo-splicetope generation and obtained no experimental evidence that the putative KRASG12V- and RAC2P29L-derived neo-splicetopes were naturally processed and presented. Furthermore, only the putative RAC2P29L-derived neo-splicetope was generated by in vitro PCPS. The experiments raise serious questions about the notion that available algorithms or the in vitro PCPS reaction reliably simulate in vivo splicing and argue against the general applicability of an algorithm-driven 'reverse immunology' pipeline for the identification of cancer-specific neo-splicetopes.
in eLife on April 20, 2021 12:00 AM.
• #### The biphasic and age-dependent impact of Klotho on hallmarks of aging and skeletal muscle function
Aging is accompanied by disrupted information flow, resulting from accumulation of molecular mistakes. These mistakes ultimately give rise to debilitating disorders including skeletal muscle wasting, or sarcopenia. To derive a global metric of growing 'disorderliness' of aging muscle, we employed a statistical physics approach to estimate the state parameter, entropy, as a function of genes associated with hallmarks of aging. Escalating network entropy reached an inflection point at old age, while structural and functional alterations progressed into oldest-old age. To probe the potential for restoration of molecular 'order' and reversal of the sarcopenic phenotype, we systemically overexpressed the longevity protein, Klotho, via AAV. Klotho overexpression modulated genes representing all hallmarks of aging in old and oldest-old mice, but pathway enrichment revealed directions of changes were, for many genes, age-dependent. Functional improvements were also age-dependent. Klotho improved strength in old mice, but failed to induce benefits beyond the entropic tipping point.
in eLife on April 20, 2021 12:00 AM.
• #### Robust and distributed neural representation of action values
Studies in rats, monkeys, and humans have found action-value signals in multiple regions of the brain. These findings suggest that action-value signals encoded in these brain structures bias choices toward higher expected rewards. However, previous estimates of action-value signals might have been inflated by serial correlations in neural activity and also by activity related to other decision variables. Here, we applied several statistical tests based on permutation and surrogate data to analyze neural activity recorded from the striatum, frontal cortex, and hippocampus. The results show that previously identified action-value signals in these brain areas cannot be entirely accounted for by concurrent serial correlations in neural activity and action value. We also found that neural activity related to action value is intermixed with signals related to other decision variables. Our findings provide strong evidence for broadly distributed neural signals related to action value throughout the brain.
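The statistical idea behind the analysis can be illustrated with a simple permutation test for a regression coefficient, shown below on synthetic data. This sketch is only the generic permutation ingredient; the paper additionally uses surrogate data that preserve serial correlations, which a plain permutation like this one does not.

```python
import numpy as np

def permutation_p(activity, value, n_perm=2000, seed=0):
    # Two-sided permutation p-value for the OLS slope of activity on value.
    rng = np.random.default_rng(seed)
    def slope(x, y):
        x = x - x.mean()
        return np.dot(x, y - y.mean()) / np.dot(x, x)
    observed = slope(value, activity)
    null = np.array([slope(rng.permutation(value), activity) for _ in range(n_perm)])
    return (np.sum(np.abs(null) >= abs(observed)) + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
value = rng.normal(size=200)                       # action value per trial (synthetic)
activity = 0.3 * value + rng.normal(size=200)      # neural activity per trial (synthetic)
print(permutation_p(activity, value))
```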
in eLife on April 20, 2021 12:00 AM.
• #### What canonical online and offline measures of statistical learning can and cannot tell us
Statistical learning (SL) allows individuals to rapidly detect regularities in the sensory environment. We replicated previous findings showing that adult participants become sensitive to the implicit structure in a continuous speech stream of repeating tri-syllabic pseudowords within minutes, as measured by standard tests in the SL literature: a target detection task and a 2AFC word recognition task. Consistent with previous findings, we found only a weak correlation between these two measures of learning, leading us to question whether there is overlap between the information captured by these two tasks. Representational similarity analysis on reaction times measured during the target detection task revealed that reaction time data reflect sensitivity to transitional probability, triplet position, word grouping, and duplet pairings of syllables. However, individual performance on the word recognition task was not predicted by similarity measures derived for any of these four features. We conclude that online detection tasks provide richer and multi-faceted information about the SL process, as compared with 2AFC recognition tasks, and may be preferable for gaining insight into the dynamic aspects of SL.
in bioRxiv: Neuroscience on April 20, 2021 12:00 AM.
• #### Pooling in a predictive model of V1 explains functional and structural diversity across species
Neurons in the primary visual cortex are selective to orientation with various degrees of selectivity to the spatial phase, from high selectivity in simple cells to low selectivity in complex cells. Various computational models have suggested a possible link between the presence of phase invariant cells and the existence of cortical orientation maps in higher mammals' V1. These models, however, do not explain the emergence of complex cells in animals that do not show orientation maps. In this study, we build a model of V1 based on a convolutional network called Sparse Deep Predictive Coding (SDPC) and show that a single computational mechanism, pooling, allows the SDPC model to account for the emergence of complex cells as well as cortical orientation maps in V1, as observed in distinct species of mammals. By using different pooling functions, our model developed complex cells in networks that exhibit orientation maps (e.g., like in carnivores and primates) or not (e.g., rodents and lagomorphs). The SDPC can therefore be viewed as a unifying framework that explains the diversity of structural and functional phenomena observed in V1. In particular, we show that orientation maps emerge naturally as the most cost-efficient structure to generate complex cells under the predictive coding principle.
in bioRxiv: Neuroscience on April 20, 2021 12:00 AM.
• #### Cellular and molecular mechanisms involved in LTP induced by mild theta-burst stimulation in hippocampal slices from young male rats: from weaning to adulthood.
Long-term potentiation (LTP) is a highly studied phenomenon, yet the essential vs. modulatory transduction and GABAergic pathways involved in LTP elicited by theta-burst stimulation (TBS) in the CA1 area of the hippocampus are still unclear, due to the use of different TBS intensities and patterns or of different rodent/cellular models. We now characterized the essential transduction and GABAergic pathways in mild TBS-induced LTP in the CA1 area of the rat hippocampus. LTP induced by TBS (5x4) (5 bursts of 4 pulses delivered at 100 Hz) lasted for up to 3 h and was increasingly greater from weaning to adulthood. Stronger TBS patterns – TBS (15x4) or three TBS (15x4) separated by 6 min – induced nearly maximal LTP and are therefore not the best choice for studying the value of LTP-enhancing drugs. LTP induced by TBS (5x4) was fully dependent on NMDA receptor and CaMKII activity but independent of PKA or PKC activity. In addition, it was partially dependent on GABAB receptor activation and was potentiated by GABAA receptor blockade and less by GAT-1 transporter blockade. AMPA GluA1 phosphorylation on Ser831 (CaMKII target) but not GluA1 Ser845 (PKA target) was essential for LTP expression. The phosphorylation of the Kv4.2 channel was observed at Ser438 (targeted by CaMKII) but not at Thr602 or Thr607 (ERK/MAPK pathway). This suggests that cellular kinases like PKA, PKC, or kinases of the ERK/MAPK family, although important modulators of TBS (5x4)-induced LTP, are not essential for its expression in the CA1 area of the hippocampus.
in bioRxiv: Neuroscience on April 20, 2021 12:00 AM.
• #### The Psychoacoustics of Automatic Speech Recognition
Automatic speech recognition (ASR) software has been suggested as a candidate model of the human auditory system thanks to dramatic improvements in performance in recent years. To test this hypothesis, we compared several state-of-the-art ASR systems to results from humans on a barrage of standard psychoacoustic experiments. While some systems showed qualitative agreement with humans in some tests, in others all tested systems diverged markedly from humans. In particular, none of the models used spectral invariance, temporal fine structure or speech periodicity in a similar way to humans. We conclude that none of the tested ASR systems are yet ready to act as a strong proxy for human speech recognition. However, we note that the more recent systems with better performance also tend to better match human results, suggesting that continued cross-fertilisation of ideas between human and automatic speech recognition may be fruitful. Our software is released as an open-source toolbox to allow researchers to assess future ASR systems or add additional psychoacoustic measures.
in bioRxiv: Neuroscience on April 20, 2021 12:00 AM.
• #### MRI-derived brain age as a biomarker of ageing in rats: validation using a healthy lifestyle intervention
MRI data can be used as input to machine learning models to accurately predict brain age in healthy human subjects. A large difference between predicted and chronological brain age (the so-called BrainAGE score) has been associated with disease and neurodegeneration, indicating the potential utility of neuroimaging-based ageing biomarkers. So far, most brain age prediction studies have been carried out on humans. However, it is important for such a biomarker to be validated on laboratory animals too, in order to better account for specific environmental or genetic factors within a more controlled laboratory framework. In this work, we developed a new algorithm for rat brain age prediction based on the combination of Gaussian process regression and a logistic regression classifier. The algorithm was trained on a cohort of 31 normal rats. High prediction accuracy was achieved using leave-one-out cross-validation (mean absolute error = 4.87 weeks, correlation between predicted and chronological age r = 0.92), supporting the validity and potential of the method. Furthermore, the trained model was tested on two independent groups of 24 rats each: a new normal control group and a "healthy lifestyle" group that underwent long-term environmental enrichment and dietary restriction (EEDR) between 3 and 17 months of age. After fitting a linear mixed-effects model, the BrainAGE values were found to increase more slowly with chronological age in the EEDR group than in the controls (slope = 0.52 vs. 0.61; p = 0.015 for the interaction term). When survival analysis was performed with a Cox regression model, the BrainAGE score at 5 months of age had a significant prediction power (p = 0.03). Our results demonstrate that BrainAGE, as computed by the proposed approach, is significantly modulated by EEDR intervention, hence it is a sensitive marker of biological ageing. These findings also support the potential of lifestyle-related prevention approaches to slow down the brain ageing process. Moreover, the results of the survival analysis further demonstrate that BrainAGE is indeed a predictor of ageing outcome.
in bioRxiv: Neuroscience on April 20, 2021 12:00 AM.
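As a rough illustration of the kind of pipeline described above (Gaussian process regression evaluated with leave-one-out cross-validation, with BrainAGE defined as predicted minus chronological age), the following scikit-learn sketch uses synthetic data in place of real MRI features; the kernel choice, feature construction, and numbers are placeholder assumptions, not the authors' code.

```python
# Hedged sketch of a GPR-based brain-age pipeline on synthetic data; the kernel
# and the stand-in "MRI features" are arbitrary assumptions for illustration.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n_rats, n_features = 31, 5
age_weeks = rng.uniform(10, 80, n_rats)                 # chronological age
age_norm = (age_weeks - age_weeks.mean()) / age_weeks.std()
loadings = rng.normal(size=n_features)
X = np.outer(age_norm, loadings) + rng.normal(scale=0.5, size=(n_rats, n_features))

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)

# Leave-one-out cross-validated age predictions.
predicted_age = cross_val_predict(gpr, X, age_weeks, cv=LeaveOneOut())

brain_age_gap = predicted_age - age_weeks               # the "BrainAGE" score
mae = mean_absolute_error(age_weeks, predicted_age)
r = np.corrcoef(age_weeks, predicted_age)[0, 1]
print(f"MAE = {mae:.2f} weeks, r = {r:.2f}, mean BrainAGE = {brain_age_gap.mean():.2f}")
```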
• #### Bone innervation and vascularization regulated by osteoclasts contribute to refractive pain-related behavior in the collagen antibody-induced arthritis model
Objective: Rheumatoid arthritis is often characterized by eroded joints and chronic pain that outlasts disease activity. Whilst several reports show strong associations between bone resorption and nociception, the underlying mechanisms remain to be unraveled. Here, we used the collagen antibody-induced arthritis (CAIA) model to examine the contribution of osteoclasts in pain regulation. The antinociceptive effects of osteoclasts inhibitors and their mechanisms of actions involving bone vascularization and innervation were also explored. Methods: BALB/c female mice were subjected to CAIA by intravenous injection of a collagen type-II antibody cocktail, followed by intraperitoneal injection of lipopolysaccharide. Degree of arthritis, bone resorption, mechanical hypersensitivity, vascularization and innervation in the ankle joint were assessed. Animals were treated with osteoclast inhibitors, zoledronate and cathepsin K inhibitor (T06), and netrin-1 neutralizing antibody. Potential pronociceptive factors were examined in primary osteoclast cultures. Results: CAIA induced local bone loss in the calcaneus with ongoing increased osteoclast activity during the inflammatory phase of the model, but not after inflammation has resolved. Mechanical hypersensitivity was reversed by zoledronate in late but not inflammatory phase CAIA. This effect was coupled to the ability of osteoclasts to modulate bone vascularization and innervation, which was inhibited by osteoclast inhibitors. CAIA-induced hypersensitivity in the late phase was also reversed by anti-netrin-1 antibody. Conclusion: Osteoclasts induce pain-like behavior in the CAIA model independent of inflammation via effects on bone vascularization and innervation. Keywords: pain, rheumatoid arthritis, osteoclast, vascularization, innervation
in bioRxiv: Neuroscience on April 20, 2021 12:00 AM.
• #### Improved post-stroke spontaneous recovery by astrocytic extracellular vesicles
Spontaneous recovery after a stroke accounts for a major part of the neurological recovery in patients. However limited, the spontaneous recovery is mechanistically driven by axonal restorative processes for which several molecular cues have been previously described. We report the acceleration of spontaneous recovery in a preclinical model of ischemia/reperfusion in rats via a single intracerebroventricular administration of extracellular vesicles released from primary cortical astrocytes. We used MRI, confocal and multiphoton microscopy to correlate the structural remodeling of the corpus callosum and striatocortical circuits with neurological performance over 21 days. We also evaluated the functionality of the corpus callosum by repetitive recordings of compound action potentials to show that the recovery facilitated by astrocytic extracellular vesicles was both anatomical and functional. Our data provide compelling evidence that astrocytes can hasten the basal recovery that naturally occurs post-stroke through the release of cellular mediators contained in extracellular vesicles.
in bioRxiv: Neuroscience on April 20, 2021 12:00 AM.
• #### Awareness-dependent normalization framework of visual bottom-up attention
Although bottom-up attention can improve visual performance with and without awareness, whether the two are governed by a common neural computation remains unclear. Using a modified Posner paradigm with backward masking, we found that the attention-triggered cueing effects both with and without awareness displayed a monotonic gradient profile (Gaussian-like). The scope of this profile, however, was significantly wider with than without awareness. Subsequently, for each subject, the stimulus size was set to their respective mean scopes with and without awareness while stimulus contrast was varied in a spatial cueing task. By measuring the gain pattern of contrast-response functions, we observed changes in the cueing effect consonant with changes in contrast gain for bottom-up attention with awareness and response gain for bottom-up attention without awareness. Our findings indicate an awareness-dependent normalization framework of visual bottom-up attention, placing a necessary constraint, namely, awareness, on our understanding of the neural computations underlying visual attention.
in bioRxiv: Neuroscience on April 20, 2021 12:00 AM.
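For readers unfamiliar with the distinction drawn in this abstract, contrast gain and response gain are commonly described with a Naka-Rushton contrast-response function: a manipulation that shifts the semi-saturation contrast changes contrast gain, while one that scales the maximum response changes response gain. The snippet below is a generic textbook illustration with made-up parameter values, not the study's fitting code.

```python
# Generic illustration of contrast gain vs. response gain using the
# Naka-Rushton contrast-response function R(c) = Rmax * c^n / (c^n + c50^n).
import numpy as np

def naka_rushton(contrast, r_max, c50, n=2.0):
    return r_max * contrast**n / (contrast**n + c50**n)

contrast = np.logspace(-2, 0, 7)            # 1% to 100% contrast
baseline = naka_rushton(contrast, r_max=1.0, c50=0.30)

# Contrast gain: the curve shifts leftward (lower c50), Rmax unchanged.
contrast_gain = naka_rushton(contrast, r_max=1.0, c50=0.15)

# Response gain: the curve scales multiplicatively (higher Rmax), c50 unchanged.
response_gain = naka_rushton(contrast, r_max=1.3, c50=0.30)

for c, b, cg, rg in zip(contrast, baseline, contrast_gain, response_gain):
    print(f"c={c:5.2f}  baseline={b:.3f}  contrast-gain={cg:.3f}  response-gain={rg:.3f}")
```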
• #### Dimensionality reduction for neural population decoding
Rapidly developing technology for large-scale neural recordings has allowed researchers to measure the activity of hundreds to thousands of neurons at single-cell resolution in vivo. Neural decoding analyses are a widely used tool for investigating what information is represented in this complex, high-dimensional neural population activity. Most population decoding methods assume that correlated activity between neurons has been estimated accurately. In practice, this requires large amounts of data, both across observations and across neurons. Unfortunately, most experiments are fundamentally constrained by practical variables that limit the number of times the neural population can be observed under a single stimulus and/or behavior condition. Therefore, new analytical tools are required to study neural population coding while taking into account these limitations. Here, we present a simple and interpretable method for dimensionality reduction that allows neural decoding metrics to be calculated reliably, even when experimental trial numbers are limited. We illustrate the method using simulations and compare its performance to standard approaches for dimensionality reduction and decoding by applying it to single-unit electrophysiological data collected from auditory cortex.
in bioRxiv: Neuroscience on April 20, 2021 12:00 AM.
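The problem the authors describe, decoding from many neurons with few trials per condition, can be made concrete with a standard baseline: reduce dimensionality with PCA before fitting a linear classifier, so that fewer covariance parameters must be estimated from limited trials. The sketch below shows only that conventional baseline on simulated data; it is not the method proposed in the paper, and all simulation parameters are invented.

```python
# Conventional baseline for trial-limited population decoding: PCA followed
# by a linear classifier on simulated neural responses (not the paper's method).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(1)
n_neurons, n_trials_per_stim, n_stimuli = 200, 15, 4   # few trials, many neurons

# Each stimulus drives a different mean response pattern plus trial-to-trial noise.
means = rng.normal(scale=1.0, size=(n_stimuli, n_neurons))
X = np.vstack([m + rng.normal(scale=3.0, size=(n_trials_per_stim, n_neurons))
               for m in means])
y = np.repeat(np.arange(n_stimuli), n_trials_per_stim)

decoder = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
scores = cross_val_score(decoder, X, y,
                         cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```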
• #### Correlated Functional Connectivity and Glucose Metabolism in Brain White Matter Revealed by Simultaneous MRI/PET
Blood oxygenation level-dependent (BOLD) signals in white matter (WM) have usually been ignored or undetected, consistent with the lower vascular density and metabolic demands in WM than in gray matter (GM). Despite converging evidence demonstrating the reliable detection of BOLD signals in WM evoked by neural stimulation and in a resting state, few studies have examined the relationship between BOLD functional signals and tissue metabolism in WM. By analyzing simultaneous recordings of MRI and PET data, we found that the correlations between low frequency resting state BOLD signals in WM are spatially correlated with local glucose uptake, which also covaried with the amplitude of spontaneous low frequency fluctuations in BOLD signals. These results provide further evidence that BOLD signals in WM reflect variations in metabolic demand associated with neural activity, and suggest they should be incorporated into more complete models of brain function.
in bioRxiv: Neuroscience on April 20, 2021 12:00 AM.
• #### Reply to Hopke and Dai: The correlation between PM2.5 and combustion-derived water is unlikely driven by local residential coal combustion [Physical Sciences]
Hopke and Dai (1) propose that the observed correlation between PM2.5 (concentration of particulate matter with an aerodynamic diameter ≤2.5 μm) concentration and the fraction of anthropogenic combustion-derived water (CDW) by Xing et al. (2) is likely due to local residential coal combustion (RCC). Here is our response. Hopke and...
in PNAS on April 19, 2021 07:06 PM.
• #### Why it makes sense that increased PM2.5 was correlated with anthropogenic combustion-derived water [Physical Sciences]
Xing et al. (1) report that over the period of 2016 to 2018 an average of 6.2% of the atmospheric moisture in Xi’an, China was combustion-derived water (CDW). They found correlations between CDW and PM2.5 (concentration of particulate matter with an aerodynamic diameter ≤2.5 μm) and with relative humidity during...
in PNAS on April 19, 2021 07:06 PM.
• #### Correction to Supporting Information for Rabouw et al., Small molecule ISRIB suppresses the integrated stress response within a defined window of activation [SI Correction]
CELL BIOLOGY Correction to Supporting Information for “Small molecule ISRIB suppresses the integrated stress response within a defined window of activation,” by Huib H. Rabouw, Martijn A. Langereis, Aditya A. Anand, Linda J. Visser, Raoul J. de Groot, Peter Walter, and Frank J. M. van Kuppeveld, which was first published...
in PNAS on April 19, 2021 07:04 PM.
• #### Correction for Dumbalska et al., A map of decoy influence in human multialternative choice [Corrections]
PSYCHOLOGICAL AND COGNITIVE SCIENCES, ECONOMIC SCIENCES Correction for “A map of decoy influence in human multialternative choice,” by Tsvetomira Dumbalska, Vickie Li, Konstantinos Tsetsos, and Christopher Summerfield, which was first published September 21, 2020; 10.1073/pnas.2005058117 (Proc. Natl. Acad. Sci. U.S.A. 117, 25169–25178). The authors note that that the affiliation for...
in PNAS on April 19, 2021 07:04 PM.
• #### Correction for Uebbing et al., Massively parallel discovery of human-specific substitutions that alter enhancer activity [Corrections]
EVOLUTION Correction for “Massively parallel discovery of human-specific substitutions that alter enhancer activity,” by Severin Uebbing, Jake Gockley, Steven K. Reilly, Acadia A. Kocher, Evan Geller, Neeru Gandotra, Curt Scharfe, Justin Cotney, and James P. Noonan, which was first published December 28, 2020; 10.1073/pnas.2007049118 (Proc. Natl. Acad. Sci. U.S.A. 118,...
in PNAS on April 19, 2021 07:04 PM.
• #### Correction for Bolmin et al., Nonlinear elasticity and damping govern ultrafast dynamics in click beetles [Corrections]
SYSTEMS BIOLOGY, APPLIED PHYSICAL SCIENCES Correction for “Nonlinear elasticity and damping govern ultrafast dynamics in click beetles,” by Ophelia Bolmin, John J. Socha, Marianne Alleyne, Alison C. Dunn, Kamel Fezzaa, and Aimy A. Wissa, which was first published January 19, 2021; 10.1073/pnas.2014569118 (Proc. Natl. Acad. Sci. U.S.A. 118, e2014569118). The...
in PNAS on April 19, 2021 07:04 PM.
• #### Temporally and spatially partitioned neuropeptide release from individual clock neurons [Neuroscience]
Neuropeptides control rhythmic behaviors, but the timing and location of their release within circuits is unknown. Here, imaging in the brain shows that synaptic neuropeptide release by Drosophila clock neurons is diurnal, peaking at times of day that were not anticipated by prior electrical and Ca2+ data. Furthermore, hours before...
in PNAS on April 19, 2021 07:04 PM.
• #### Reply to Hilborn: We agree that MPAs can improve fish catch in the South and Southeast Asia [Biological Sciences]
We appreciate Hilborn (1) for reminding us that South and Southeast Asia are the regions where marine protected areas (MPAs) could improve fisheries catches. We fully agree, and in fact, our results indicate that many areas in these regions could benefit from MPAs (figure 1 of ref. 2). We want...
in PNAS on April 19, 2021 07:04 PM.
• #### An introgressed gene causes meiotic drive in Neurospora sitophila [Evolution]
Meiotic drive elements cause their own preferential transmission following meiosis. In fungi, this phenomenon takes the shape of spore killing, and in the filamentous ascomycete Neurospora sitophila, the Sk-1 spore killer element is found in many natural populations. In this study, we identify the gene responsible for spore killing in...
in PNAS on April 19, 2021 07:04 PM.
• #### Increasing fisheries harvest with MPAs: Leaving South and Southeast Asia behind [Biological Sciences]
Cabral et al. (1) estimate that putting 5 to 90% of the global oceans in no-take marine protected areas (MPAs) could increase global marine capture food production by 20%. Their model assumes that closing a portion of a stock's range can increase yield if stocks are subject to overfishing, but...
in PNAS on April 19, 2021 07:04 PM.
• #### PIP2 corrects cerebral blood flow deficits in small vessel disease by rescuing capillary Kir2.1 activity [Neuroscience]
Cerebral small vessel diseases (SVDs) are a central link between stroke and dementia—two comorbidities without specific treatments. Despite the emerging consensus that SVDs are initiated in the endothelium, the early mechanisms remain largely unknown. Deficits in on-demand delivery of blood to active brain regions (functional hyperemia) are early manifestations of...
in PNAS on April 19, 2021 07:04 PM.
• #### Scavenging of soluble and immobilized CCL21 by ACKR4 regulates peripheral dendritic cell emigration [Immunology and Inflammation]
Leukocyte homing driven by the chemokine CCL21 is pivotal for adaptive immunity because it controls dendritic cell (DC) and T cell migration through CCR7. ACKR4 scavenges CCL21 and has been shown to play an essential role in DC trafficking at the steady state and during immune responses to tumors and...
in PNAS on April 19, 2021 07:04 PM.
• #### Capillary equilibrium of bubbles in porous media [Applied Physical Sciences]
In geologic, biologic, and engineering porous media, bubbles (or droplets, ganglia) emerge in the aftermath of flow, phase change, or chemical reactions, where capillary equilibrium of bubbles significantly impacts the hydraulic, transport, and reactive processes. There has previously been great progress in general understanding of capillarity in porous media, but...
in PNAS on April 19, 2021 07:04 PM.
• #### People have shaped most of terrestrial nature for at least 12,000 years [Environmental Sciences]
Archaeological and paleoecological evidence shows that by 10,000 BCE, all human societies employed varying degrees of ecologically transformative land use practices, including burning, hunting, species propagation, domestication, cultivation, and others that have left long-term legacies across the terrestrial biosphere. Yet, a lingering paradigm among natural scientists, conservationists, and policymakers is...
in PNAS on April 19, 2021 07:04 PM.
• #### Transcriptional profiling reveals signatures of latent developmental potential in Arabidopsis stomatal lineage ground cells [Plant Biology]
In many developmental contexts, cell lineages have variable or flexible potency to self-renew. What drives a cell to exit from a proliferative state and begin differentiation, or to retain the capacity to divide days or years later is not clear. Here we exploit the mixed potential of the stomatal lineage...
in PNAS on April 19, 2021 07:04 PM.
• #### GPR182 is an endothelium-specific atypical chemokine receptor that maintains hematopoietic stem cell homeostasis [Pharmacology]
G protein–coupled receptor 182 (GPR182) has been shown to be expressed in endothelial cells; however, its ligand and physiological role has remained elusive. We found GPR182 to be expressed in microvascular and lymphatic endothelial cells of most organs and to bind with nanomolar affinity the chemokines CXCL10, CXCL12, and CXCL13....
in PNAS on April 19, 2021 07:04 PM.
• #### Large ecosystem-scale effects of restoration fail to mitigate impacts of land-use legacies in longleaf pine savannas [Sustainability Science]
Ecological restoration is a global priority, with potential to reverse biodiversity declines and promote ecosystem functioning. Yet, successful restoration is challenged by lingering legacies of past land-use activities, which are pervasive on lands available for restoration. Although legacies can persist for centuries following cessation of human land uses such as...
in PNAS on April 19, 2021 07:04 PM.
• #### Life-course trajectories of body mass index from adolescence to old age: Racial and educational disparities [Social Sciences]
No research exists on how body mass index (BMI) changes with age over the full life span and social disparities therein. This study aims to fill the gap using an innovative life-course research design and analytic methods to model BMI trajectories from early adolescence to old age across 20th-century birth...
in PNAS on April 19, 2021 07:04 PM.
• #### Homeostatic regulation of T follicular helper and antibody response to particle antigens by IL-1Ra of medullary sinus macrophage origin [Immunology and Inflammation]
Hepatitis B virus (HBV) vaccines are composed of surface antigen HBsAg that spontaneously assembles into subviral particles. Factors that impede its humoral immunity in 5% to 10% of vaccinees remain elusive. Here, we showed that the low-level interleukin-1 receptor antagonist (IL-1Ra) can predict antibody protection both in mice and humans....
in PNAS on April 19, 2021 07:04 PM.
• #### Highly public anti-Black violence is associated with poor mental health days for Black Americans [Social Sciences]
Highly public anti-Black violence in the United States may cause widely experienced distress for Black Americans. This study identifies 49 publicized incidents of racial violence and quantifies national interest based on Google searches; incidents include police killings of Black individuals, decisions not to indict or convict the officer involved, and...
in PNAS on April 19, 2021 07:04 PM.
• #### A conserved folding nucleus sculpts the free energy landscape of bacterial and archaeal orthologs from a divergent TIM barrel family [Biophysics and Computational Biology]
The amino acid sequences of proteins have evolved over billions of years, preserving their structures and functions while responding to evolutionary forces. Are there conserved sequence and structural elements that preserve the protein folding mechanisms? The functionally diverse and ancient (βα)1–8 TIM barrel motif may answer this question. We mapped...
in PNAS on April 19, 2021 07:04 PM.
• #### Shortened tethering filaments stabilize presynaptic vesicles in support of elevated release probability during LTP in rat hippocampus [Neuroscience]
Long-term potentiation (LTP) is a cellular mechanism of learning and memory that results in a sustained increase in the probability of vesicular release of neurotransmitter. However, previous work in hippocampal area CA1 of the adult rat revealed that the total number of vesicles per synapse decreases following LTP, seemingly inconsistent...
in PNAS on April 19, 2021 07:04 PM.
• #### Assessment of battery utilization and energy consumption in the large-scale development of urban electric vehicles [Sustainability Science]
Electrifying transportation in the form of the large-scale development of electric vehicles (EVs) plays a pivotal role in reducing urban atmospheric pollution and alleviating fossil fuel dependence. However, the rising scale of EV deployment is exposing problems that were previously hidden in small-scale EV applications, and the lack of large-scale...
in PNAS on April 19, 2021 07:04 PM.
• #### Global wind patterns shape genetic differentiation, asymmetric gene flow, and genetic diversity in trees [Ecology]
Wind disperses the pollen and seeds of many plants, but little is known about whether and how it shapes large-scale landscape genetic patterns. We address this question by a synthesis and reanalysis of genetic data from more than 1,900 populations of 97 tree and shrub species around the world, using...
in PNAS on April 19, 2021 07:04 PM.
• #### Evidence from South Africa for a protracted end-Permian extinction on land [Earth, Atmospheric, and Planetary Sciences]
Earth’s largest biotic crisis occurred during the Permo–Triassic Transition (PTT). On land, this event witnessed a turnover from synapsid- to archosauromorph-dominated assemblages and a restructuring of terrestrial ecosystems. However, understanding extinction patterns has been limited by a lack of high-precision fossil occurrence data to resolve events on submillion-year timescales. We...
in PNAS on April 19, 2021 07:04 PM.
• #### Sensitivity of grassland carbon pools to plant diversity, elevated CO2, and soil nitrogen addition over 19 years [Ecology]
Whether the terrestrial biosphere will continue to act as a net carbon (C) sink in the face of multiple global changes is questionable. A key uncertainty is whether increases in plant C fixation under elevated carbon dioxide (CO2) will translate into decades-long C storage and whether this depends on other...
in PNAS on April 19, 2021 07:04 PM.
• #### Heme-binding protein CYB5D1 is a radial spoke component required for coordinated ciliary beating [Cell Biology]
Coordinated beating is crucial for the function of multiple cilia. However, the molecular mechanism is poorly understood. Here, we characterize a conserved ciliary protein CYB5D1 with a heme-binding domain and a cordon-bleu ubiquitin-like domain. Mutation or knockdown of Cyb5d1 in zebrafish impaired coordinated ciliary beating in the otic vesicle and...
in PNAS on April 19, 2021 07:04 PM.
• #### Phylogenetically diverse diets favor more complex venoms in North American pitvipers [Evolution]
The role of natural selection in the evolution of trait complexity can be characterized by testing hypothesized links between complex forms and their functions across species. Predatory venoms are composed of multiple proteins that collectively function to incapacitate prey. Venom complexity fluctuates over evolutionary timescales, with apparent increases and decreases...
in PNAS on April 19, 2021 07:04 PM.
• #### Human retroviral antisense mRNAs are retained in the nuclei of infected cells for viral persistence [Microbiology]
Human retroviruses, including human T cell leukemia virus type 1 (HTLV-1) and HIV type 1 (HIV-1), encode an antisense gene in the negative strand of the provirus. Besides coding for proteins, the messenger RNAs (mRNAs) of retroviral antisense genes have also been found to regulate transcription directly. Thus, it has...
in PNAS on April 19, 2021 07:04 PM.
• #### Bifurcations in a fractional-order BAM neural network with four different delays
Publication date: Available online 18 April 2021
Source: Neural Networks
Author(s): Chengdai Huang, Juan Wang, Xiaoping Chen, Jinde Cao
in Neural Networks on April 19, 2021 06:00 PM.
• #### Saturated impulsive control for synchronization of coupled delayed neural networks
Publication date: Available online 18 April 2021
Source: Neural Networks
Author(s): Shuchen Wu, Xiaodi Li, Yanhui Ding
in Neural Networks on April 19, 2021 06:00 PM.
• #### NMOSD: Therapeutic innovations and complex decision making
Annals of Neurology, Accepted Article.
in Annals of Neurology on April 19, 2021 02:36 PM.
• #### Systematic comparison and prediction of the effects of missense mutations on protein-DNA and protein-RNA interactions
by Yao Jiang, Hui-Fang Liu, Rong Liu
The binding affinities of protein-nucleic acid interactions could be altered due to missense mutations occurring in DNA- or RNA-binding proteins, therefore resulting in various diseases. Unfortunately, a systematic comparison and prediction of the effects of mutations on protein-DNA and protein-RNA interactions (these two mutation classes are termed MPDs and MPRs, respectively) is still lacking. Here, we demonstrated that these two classes of mutations could generate similar or different tendencies for binding free energy changes in terms of the properties of mutated residues. We then developed regression algorithms separately for MPDs and MPRs by introducing novel geometric partition-based energy features and interface-based structural features. Through feature selection and ensemble learning, similar computational frameworks that integrated energy- and nonenergy-based models were established to estimate the binding affinity changes resulting from MPDs and MPRs, but the selected features for the final models were different and therefore reflected the specificity of these two mutation classes. Furthermore, the proposed methodology was extended to the identification of mutations that significantly decreased the binding affinities. Extensive validations indicated that our algorithm generally performed better than the state-of-the-art methods on both the regression and classification tasks. The webserver and software are freely available at http://liulab.hzau.edu.cn/PEMPNI and https://github.com/hzau-liulab/PEMPNI.
in PLoS Computational Biology on April 19, 2021 02:00 PM.
• #### The landscape of metabolic pathway dependencies in cancer cell lines
by James H. Joly, Brandon T. L. Chew, Nicholas A. Graham
The metabolic reprogramming of cancer cells creates metabolic vulnerabilities that can be therapeutically targeted. However, our understanding of metabolic dependencies and the pathway crosstalk that creates these vulnerabilities in cancer cells remains incomplete. Here, by integrating gene expression data with genetic loss-of-function and pharmacological screening data from hundreds of cancer cell lines, we identified metabolic vulnerabilities at the level of pathways rather than individual genes. This approach revealed that metabolic pathway dependencies are highly context-specific such that cancer cells are vulnerable to inhibition of one metabolic pathway only when activity of another metabolic pathway is altered. Notably, we also found that no single metabolic pathway was universally essential, suggesting that cancer cells are not invariably dependent on any metabolic pathway. In addition, we confirmed that cell culture medium is a major confounding factor for the analysis of metabolic pathway vulnerabilities. Nevertheless, we found robust associations between metabolic pathway activity and sensitivity to clinically approved drugs that were independent of cell culture medium. Lastly, we used parallel integration of pharmacological and genetic dependency data to confidently identify metabolic pathway vulnerabilities. Taken together, this study serves as a comprehensive characterization of the landscape of metabolic pathway vulnerabilities in cancer cell lines.
in PLoS Computational Biology on April 19, 2021 02:00 PM.
• #### Sequence deeper without sequencing more: Bayesian resolution of ambiguously mapped reads
by Rohan N. Shah, Alexander J. Ruthenburg
in PLoS Computational Biology on April 19, 2021 02:00 PM.
• #### MAUI (MBI Analysis User Interface)—An image processing pipeline for Multiplexed Mass Based Imaging
by Alex Baranski, Idan Milo, Shirley Greenbaum, John-Paul Oliveria, Dunja Mrdjen, Michael Angelo, Leeat Keren
Mass Based Imaging (MBI) technologies such as Multiplexed Ion Beam Imaging by time of flight (MIBI-TOF) and Imaging Mass Cytometry (IMC) allow for the simultaneous measurement of the expression levels of 40 or more proteins in biological tissue, providing insight into cellular phenotypes and organization in situ. Imaging artifacts, resulting from the sample, assay or instrumentation, complicate downstream analyses and require correction by domain experts. Here, we present MBI Analysis User Interface (MAUI), a series of graphical user interfaces that facilitate this data pre-processing, including the removal of channel crosstalk, noise and antibody aggregates. Our software streamlines these steps and accelerates processing by enabling real-time and interactive parameter tuning across multiple images.
in PLoS Computational Biology on April 19, 2021 02:00 PM.
• #### Convolutional neural networks improve species distribution modelling by capturing the spatial structure of the environment
by Benjamin Deneu, Maximilien Servajean, Pierre Bonnet, Christophe Botella, François Munoz, Alexis Joly
Convolutional Neural Networks (CNNs) are statistical models suited for learning complex visual patterns. In the context of Species Distribution Models (SDM) and in line with predictions of landscape ecology and island biogeography, CNNs could grasp how local landscape structure affects prediction of species occurrence in SDMs. The prediction can thus reflect the signatures of entangled ecological processes. Although previous machine-learning based SDMs can learn complex influences of environmental predictors, they cannot acknowledge the influence of environmental structure in local landscapes (hence denoted “punctual models”). In this study, we applied CNNs to a large dataset of plant occurrences in France (GBIF), on a large taxonomical scale, to predict the ranked relative probability of species (by joint learning) at any geographical position. We examined the way local environmental landscapes improve prediction by training alternative CNN models deprived of information on landscape heterogeneity and structure (“ablation experiments”). We found that the landscape structure around a location crucially contributed to improving the predictive performance of CNN-SDMs. CNN models can classify the predicted distributions of many species, as other joint modelling approaches do, but they further prove efficient in identifying the influence of local environmental landscapes. CNNs can then represent signatures of spatially structured environmental drivers. The prediction gain is noticeable for rare species, which opens promising perspectives for biodiversity monitoring and conservation strategies. Therefore, the approach is of both theoretical and practical interest. We discuss how to test hypotheses on the patterns learnt by CNNs, which should be essential for further interpretation of the ecological processes at play.
in PLoS Computational Biology on April 19, 2021 02:00 PM.
• #### Hierarchical motor adaptations negotiate failures during force field learning
by Tsuyoshi Ikegami, Gowrishankar Ganesh, Tricia L. Gibo, Toshinori Yoshioka, Rieko Osu, Mitsuo Kawato
Humans have the amazing ability to learn the dynamics of the body and environment to develop motor skills. Traditional motor studies using arm reaching paradigms have viewed this ability as the process of ‘internal model adaptation’. However, the behaviors have not been fully explored in the case when reaches fail to attain the intended target. Here we examined human reaching under two force field types; one that induces failures (i.e., target errors), and the other that does not. Our results show the presence of a distinct failure-driven adaptation process that enables quick task success after failures, and before completion of internal model adaptation, but that can result in persistent changes to the undisturbed trajectory. These behaviors can be explained by considering a hierarchical interaction between internal model adaptation and the failure-driven adaptation of reach direction. Our findings suggest that movement failure is negotiated using hierarchical motor adaptations by humans.
in PLoS Computational Biology on April 19, 2021 02:00 PM.
• #### Estimating Transfer Entropy in Continuous Time Between Neural Spike Trains or Other Event-Based Data
by David P. Shorten, Richard E. Spinney, Joseph T. Lizier
Transfer entropy (TE) is a widely used measure of directed information flows in a number of domains including neuroscience. Many real-world time series for which we are interested in information flows come in the form of (near) instantaneous events occurring over time. Examples include the spiking of biological neurons, trades on stock markets and posts to social media, amongst myriad other systems involving events in continuous time throughout the natural and social sciences. However, there exist severe limitations to the current approach to TE estimation on such event-based data via discretising the time series into time bins: it is not consistent, has high bias, converges slowly and cannot simultaneously capture relationships that occur with very fine time precision as well as those that occur over long time intervals. Building on recent work which derived a theoretical framework for TE in continuous time, we present an estimation framework for TE on event-based data and develop a k-nearest-neighbours estimator within this framework. This estimator is provably consistent, has favourable bias properties and converges orders of magnitude more quickly than the current state-of-the-art in discrete-time estimation on synthetic examples. We demonstrate failures of the traditionally-used source-time-shift method for null surrogate generation. In order to overcome these failures, we develop a local permutation scheme for generating surrogate time series conforming to the appropriate null hypothesis in order to test for the statistical significance of the TE and, as such, test for the conditional independence between the history of one point process and the updates of another. Our approach is shown to be capable of correctly rejecting or accepting the null hypothesis of conditional independence even in the presence of strong pairwise time-directed correlations. This capacity to accurately test for conditional independence is further demonstrated on models of a spiking neural circuit inspired by the pyloric circuit of the crustacean stomatogastric ganglion, succeeding where previous related estimators have failed.
in PLoS Computational Biology on April 19, 2021 02:00 PM.
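The discrete-time approach criticized in this abstract, binning spike trains and estimating TE from plug-in probabilities, is easy to write down, which also makes its limitations (bin-size dependence, bias) easy to see. The sketch below implements that naive binned estimator with history length one on synthetic spike trains; it is a foil for, not an implementation of, the continuous-time estimator proposed in the paper.

```python
# Naive discrete-time (binned) transfer entropy estimator with history length 1,
# shown only to illustrate the approach this paper argues against.
import numpy as np

def binned_te(x, y, eps=1e-12):
    """Plug-in estimate of TE from binary series x to y (bits), history = 1 bin."""
    x_prev, y_prev, y_next = x[:-1], y[:-1], y[1:]
    te = 0.0
    for xp in (0, 1):
        for yp in (0, 1):
            for yn in (0, 1):
                p_joint = np.mean((x_prev == xp) & (y_prev == yp) & (y_next == yn))
                p_cond_xy = p_joint / (np.mean((x_prev == xp) & (y_prev == yp)) + eps)
                p_cond_y = (np.mean((y_prev == yp) & (y_next == yn))
                            / (np.mean(y_prev == yp) + eps))
                if p_joint > 0:
                    te += p_joint * np.log2((p_cond_xy + eps) / (p_cond_y + eps))
    return te

rng = np.random.default_rng(2)
x = (rng.random(20000) < 0.1).astype(int)             # source spike train (binned)
y = np.zeros_like(x)
# Target fires spontaneously and, with probability 0.5, one bin after a source spike.
y[1:] = ((rng.random(19999) < 0.05) | (x[:-1] & (rng.random(19999) < 0.5))).astype(int)

print(f"TE x->y: {binned_te(x, y):.4f} bits   TE y->x: {binned_te(y, x):.4f} bits")
```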
• #### The dinucleotide composition of the Zika virus genome is shaped by conflicting evolutionary pressures in mammalian hosts and mosquito vectors
by Jelke J. Fros, Imke Visser, Bing Tang, Kexin Yan, Eri Nakayama, Tessa M. Visser, Constantianus J. M. Koenraadt, Monique M. van Oers, Gorben P. Pijlman, Andreas Suhrbier, Peter Simmonds
Most vertebrate RNA viruses show pervasive suppression of CpG and UpA dinucleotides, closely resembling the dinucleotide composition of host cell transcriptomes. In contrast, CpG suppression is absent in both invertebrate mRNA and RNA viruses that exclusively infect arthropods. Arthropod-borne (arbo) viruses are transmitted between vertebrate hosts by invertebrate vectors and thus encounter potentially conflicting evolutionary pressures in the different cytoplasmic environments. Using a newly developed Zika virus (ZIKV) model, we have investigated how demands for CpG suppression in vertebrate cells can be reconciled with potentially quite different compositional requirements in invertebrates and how this affects ZIKV replication and transmission. Mutant viruses with synonymously elevated CpG or UpA dinucleotide frequencies showed attenuated replication in vertebrate cell lines, which was rescued by knockout of the zinc-finger antiviral protein (ZAP). Conversely, in mosquito cells, ZIKV mutants with elevated CpG dinucleotide frequencies showed substantially enhanced replication compared to wild type. Host-driven effects on virus replication attenuation and enhancement were even more apparent in mouse and mosquito models. Infections with CpG- or UpA-high ZIKV mutants in mice did not cause typical ZIKV-induced tissue damage and completely protected mice during subsequent challenge with wild-type virus, which demonstrates their potential as live-attenuated vaccines. In contrast, the CpG-high mutants displayed enhanced replication in Aedes aegypti mosquitoes and a larger proportion of mosquitoes carried infectious virus in their saliva. These findings show that mosquito cells are also capable of discriminating RNA based on dinucleotide composition. However, the evolutionary pressure on the CpG dinucleotides of viral genomes in arthropod vectors directly opposes the pressure present in vertebrate host cells, which provides evidence that an adaptive compromise is required for arbovirus transmission. This suggests that the genome composition of arbo flaviviruses is crucial to maintain the balance between high-level replication in the vertebrate host and persistent replication in the mosquito vector.
in PLoS Biology on April 19, 2021 02:00 PM.
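CpG and UpA suppression of the kind manipulated in this study is usually quantified as an observed/expected dinucleotide ratio, with values well below 1 indicating suppression. The short function below computes that standard ratio for an arbitrary RNA/DNA string; it is a generic illustration, not the scoring used by the authors, and the toy sequence is made up.

```python
# Generic observed/expected dinucleotide ratio, e.g. rho(CpG) = f(CG) / (f(C) * f(G)).
from collections import Counter

def dinucleotide_ratio(seq, dinucleotide):
    seq = seq.upper().replace("U", "T")            # treat RNA and DNA alike
    a, b = dinucleotide.upper().replace("U", "T")
    mono = Counter(seq)
    di = Counter(seq[i:i + 2] for i in range(len(seq) - 1))
    f_a = mono[a] / len(seq)
    f_b = mono[b] / len(seq)
    f_ab = di[a + b] / (len(seq) - 1)
    return f_ab / (f_a * f_b)

seq = "AUGGCGUACGAUCGAUAGCCGUAACGGAUUACGCGAUAGGCUAACGAUGGC"  # toy sequence
print(f"rho(CpG) = {dinucleotide_ratio(seq, 'CG'):.2f}")
print(f"rho(UpA) = {dinucleotide_ratio(seq, 'UA'):.2f}")
```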
• #### Bacterial persisters are a stochastically formed subpopulation of low-energy cells
by Sylvie Manuse, Yue Shan, Silvia J. Canas-Duarte, Somenath Bakshi, Wei-Sheng Sun, Hirotada Mori, Johan Paulsson, Kim Lewis
Persisters represent a small subpopulation of non- or slow-growing bacterial cells that are tolerant to killing by antibiotics. Despite their prominent role in the recalcitrance of chronic infections to antibiotic therapy, the mechanism of their formation has remained elusive. We show that sorted cells of Escherichia coli with low levels of energy-generating enzymes are better able to survive antibiotic killing. Using microfluidics time-lapse microscopy and a fluorescent reporter for in vivo ATP measurements, we find that a subpopulation of cells with a low level of ATP survives killing by ampicillin. We propose that these low ATP cells are formed stochastically as a result of fluctuations in the abundance of energy-generating components. These findings point to a general “low energy” mechanism of persister formation.
in PLoS Biology on April 19, 2021 02:00 PM.
• #### Signatures of optimal codon usage in metabolic genes inform budding yeast ecology
by Abigail Leavitt LaBella, Dana A. Opulente, Jacob L. Steenwyk, Chris Todd Hittinger, Antonis Rokas
Reverse ecology is the inference of ecological information from patterns of genomic variation. One rich, heretofore underutilized, source of ecologically relevant genomic information is codon optimality or adaptation. Bias toward codons that match the tRNA pool is robustly associated with high gene expression in diverse organisms, suggesting that codon optimization could be used in a reverse ecology framework to identify highly expressed, ecologically relevant genes. To test this hypothesis, we examined the relationship between optimal codon usage in the classic galactose metabolism (GAL) pathway and known ecological niches for 329 species of budding yeasts, a diverse subphylum of fungi. We find that optimal codon usage in the GAL pathway is positively correlated with quantitative growth on galactose, suggesting that GAL codon optimization reflects increased capacity to grow on galactose. Optimal codon usage in the GAL pathway is also positively correlated with human-associated ecological niches in yeasts of the CUG-Ser1 clade and with dairy-associated ecological niches in the family Saccharomycetaceae. For example, optimal codon usage of GAL genes is greater than 85% of all genes in the genome of the major human pathogen Candida albicans (CUG-Ser1 clade) and greater than 75% of genes in the genome of the dairy yeast Kluyveromyces lactis (family Saccharomycetaceae). We further find a correlation between optimization in the GALactose pathway genes and several genes associated with nutrient sensing and metabolism. This work suggests that codon optimization harbors information about the metabolic ecology of microbial eukaryotes. This information may be particularly useful for studying fungal dark matter—species that have yet to be cultured in the lab or have only been identified by genomic material.
in PLoS Biology on April 19, 2021 02:00 PM.
• #### The methodological quality of 176,620 randomized controlled trials published between 1966 and 2018 reveals a positive trend but also an urgent need for improvement
by Christiaan H. Vinkers, Herm J. Lamberink, Joeri K. Tijdink, Pauline Heus, Lex Bouter, Paul Glasziou, David Moher, Johanna A. Damen, Lotty Hooft, Willem M. Otte
Many randomized controlled trials (RCTs) are biased and difficult to reproduce due to methodological flaws and poor reporting. There is increasing attention for responsible research practices and implementation of reporting guidelines, but whether these efforts have improved the methodological quality of RCTs (e.g., lower risk of bias) is unknown. We, therefore, mapped risk-of-bias trends over time in RCT publications in relation to journal and author characteristics. Meta-information of 176,620 RCTs published between 1966 and 2018 was extracted. The risk-of-bias probability (random sequence generation, allocation concealment, blinding of patients/personnel, and blinding of outcome assessment) was assessed using a risk-of-bias machine learning tool. This tool was simultaneously validated using 63,327 human risk-of-bias assessments obtained from 17,394 RCTs evaluated in the Cochrane Database of Systematic Reviews (CDSR). Moreover, RCT registration and CONSORT Statement reporting were assessed using automated searches. Publication characteristics included the number of authors, journal impact factor (JIF), and medical discipline. The annual number of published RCTs substantially increased over 4 decades, accompanied by increases in authors (5.2 to 7.8) and institutions (2.9 to 4.8). The risk of bias remained present in most RCTs but decreased over time for allocation concealment (63% to 51%), random sequence generation (57% to 36%), and blinding of outcome assessment (58% to 52%). Trial registration (37% to 47%) and the use of the CONSORT Statement (1% to 20%) also rapidly increased. In journals with a higher impact factor (>10), the risk of bias was consistently lower with higher levels of RCT registration and the use of the CONSORT Statement. Automated risk-of-bias predictions had accuracies above 70% for allocation concealment (70.7%), random sequence generation (72.1%), and blinding of patients/personnel (79.8%), but not for blinding of outcome assessment (62.7%). In conclusion, the likelihood of bias in RCTs has generally decreased over the last decades. This optimistic trend may be driven by increased knowledge augmented by mandatory trial registration and more stringent reporting guidelines and journal requirements. Nevertheless, relatively high probabilities of bias remain, particularly in journals with lower impact factors. This emphasizes that further improvement of RCT registration, conduct, and reporting is still urgently needed.
in PLoS Biology on April 19, 2021 02:00 PM.
• #### Quantitative proteome comparison of human hearts with those of model organisms
by Nora Linscheid, Alberto Santos, Pi Camilla Poulsen, Robert W. Mills, Kirstine Calloe, Ulrike Leurs, Johan Z. Ye, Christian Stolte, Morten B. Thomsen, Bo H. Bentzen, Pia R. Lundegaard, Morten S. Olesen, Lars J. Jensen, Jesper V. Olsen, Alicia Lundby
Delineating human cardiac pathologies and their basic molecular mechanisms relies on research conducted in model organisms. Yet translating findings from preclinical models to humans present a significant challenge, in part due to differences in cardiac protein expression between humans and model organisms. Proteins immediately determine cellular function, yet their large-scale investigation in hearts has lagged behind those of genes and transcripts. Here, we set out to bridge this knowledge gap: By analyzing protein profiles in humans and commonly used model organisms across cardiac chambers, we determine their commonalities and regional differences. We analyzed cardiac tissue from each chamber of human, pig, horse, rat, mouse, and zebrafish in biological replicates. Using mass spectrometry–based proteomics workflows, we measured and evaluated the abundance of approximately 7,000 proteins in each species. The resulting knowledgebase of cardiac protein signatures is accessible through an online database: atlas.cardiacproteomics.com. Our combined analysis allows for quantitative evaluation of protein abundances across cardiac chambers, as well as comparisons of cardiac protein profiles across model organisms. Up to a quarter of proteins with differential abundances between atria and ventricles showed opposite chamber-specific enrichment between species; these included numerous proteins implicated in cardiac disease. The generated proteomics resource facilitates translational prospects of cardiac studies from model organisms to humans by comparisons of disease-linked protein networks across species.
in PLoS Biology on April 19, 2021 02:00 PM.
• #### Biomolecular response to hour-long ultralow field microwave radiation: An effective coarse-grained model simulation
Author(s): Anang Kumar Singh, P. S. Burada, and Anushree Roy
Various electronic devices, which we commonly use, radiate microwaves. Such external perturbation influences the functionality of biomolecules. In an ultralow field, the cumulative response of a molecule is expected only over a time scale of hours. To study the structural dynamics of biomolecules ov...
[Phys. Rev. E 103, 042416] Published Mon Apr 19, 2021
in Physical Review E: Biological physics on April 19, 2021 10:00 AM.
• #### Robustness and predictability of evolution in bottlenecked populations
Author(s): Osmar Freitas, Lindi M. Wahl, and Paulo R. A. Campos
Deterministic and stochastic evolutionary processes drive adaptation in natural populations. The strength of each component process is determined by the population size: deterministic components prevail in very large populations, while stochastic components are the driving mechanisms in small ones. ...
[Phys. Rev. E 103, 042415] Published Mon Apr 19, 2021
in Physical Review E: Biological physics on April 19, 2021 10:00 AM.
• #### ALF -- A Fitness-Based Artificial Life Form for Evolving Large-Scale Neural Networks. (arXiv:2104.08252v1 [cs.NE])
Machine Learning (ML) is becoming increasingly important in daily life. In this context, Artificial Neural Networks (ANNs) are a popular approach within ML methods to realize an artificial intelligence. Usually, the topology of ANNs is predetermined. However, there are problems where it is difficult to find a suitable topology. Therefore, Topology and Weight Evolving Artificial Neural Network (TWEANN) algorithms have been developed that can find ANN topologies and weights using genetic algorithms. A well-known downside for large-scale problems is that TWEANN algorithms often evolve inefficient ANNs and require long runtimes.
To address this issue, we propose a new TWEANN algorithm called Artificial Life Form (ALF) with the following technical advancements: (1) speciation via structural and semantic similarity to form better candidate solutions, (2) dynamic adaptation of the observed candidate solutions for better convergence properties, and (3) integration of solution quality into genetic reproduction to increase the probability of optimization success. Experiments on large-scale ML problems confirm that these approaches allow the fast solving of these problems and lead to efficient evolved ANNs.
in arXiv: Computer Science: Neural and Evolutionary Computing on April 19, 2021 01:30 AM.
• #### An exploration of asocial and social learning in the evolution of variable-length structures. (arXiv:2104.08239v1 [cs.NE])
We wish to explore the contribution that asocial and social learning might play as a mechanism for self-adaptation in the search for variable-length structures by an evolutionary algorithm. An extremely challenging, yet simple to understand problem landscape is adopted where the probability of randomly finding a solution is approximately one in a trillion. A number of learning mechanisms operating on variable-length structures are implemented and their performance analysed. The social learning setup, which combines forms of both social and asocial learning in combination with evolution is found to be most performant, while the setups exclusively adopting evolution are incapable of finding solutions.
in arXiv: Computer Science: Neural and Evolutionary Computing on April 19, 2021 01:30 AM.
• #### Altered connectedness of the brain chronnectome during the progression of Alzheimer's disease. (arXiv:2104.08175v1 [q-bio.NC])
Graph theory has been extensively used to investigate brain network topology and its changes in disease cohorts. However, many graph theoretic analysis-based brain network studies focused on the shortest paths or, more generally, cost-efficiency. In this work, we use two new concepts, connectedness and 2-connectedness, to measure different global properties compared to the previously widely adopted ones.
in arXiv: Quantitative Biology: Neurons and Cognition on April 19, 2021 01:30 AM.
• #### Explorative Data Analysis of Time Series based AlgorithmFeatures of CMA-ES Variants. (arXiv:2104.08098v1 [cs.NE])
In this study, we analyze behaviours of the well-known CMA-ES by extracting time-series features from its dynamic strategy parameters. An extensive experiment was conducted on twelve CMA-ES variants and 24 test problems taken from the BBOB (Black-Box Optimization Benchmarking) testbed, where we used two different cutoff times to stop those variants. We utilized the tsfresh package for extracting the features and performed the feature selection procedure using the Boruta algorithm, resulting in 32 features to distinguish either CMA-ES variants or the problems. After measuring the number of predefined targets reached by those variants, we contrive to predict those measured values on each test problem using the features. From our analysis, we saw that the features can classify the CMA-ES variants or the function groups decently, and show a potential for predicting the performance of those variants. We conducted a hierarchical clustering analysis on the test problems and noticed a drastic change in the clustering outcome when comparing the longer cutoff time to the shorter one, indicating a huge change in the search behaviour of the algorithm. In general, we found that with longer time series, the predictive power of the time-series features increases.
in arXiv: Computer Science: Neural and Evolutionary Computing on April 19, 2021 01:30 AM.
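The idea of characterizing an optimizer by time-series features of its internal dynamics can be previewed with a toy example: record the step-size trace of a simple (1+1)-ES on the sphere function and compute a few hand-rolled summary features. Everything below (the algorithm, the features, the problem) is a simplified stand-in; the study itself uses the tsfresh package on real CMA-ES variants.

```python
# Toy stand-in: step-size trace of a (1+1)-ES with a crude 1/5th-style success
# rule on the sphere function, summarized by a few hand-rolled time-series
# features (the study extracts features with tsfresh from CMA-ES runs).
import numpy as np

def one_plus_one_es(dim=10, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=dim)
    sigma, trace = 1.0, []
    for _ in range(iters):
        candidate = x + sigma * rng.normal(size=dim)
        success = np.sum(candidate**2) < np.sum(x**2)     # sphere: f(x) = ||x||^2
        if success:
            x = candidate
        sigma *= 1.5 if success else 0.9                  # step-size adaptation
        trace.append(sigma)
    return np.array(trace)

trace = one_plus_one_es()
log_trace = np.log(trace)
features = {
    "mean_log_sigma": log_trace.mean(),
    "std_log_sigma": log_trace.std(),
    "slope_log_sigma": np.polyfit(np.arange(len(trace)), log_trace, 1)[0],
    "lag1_autocorr": np.corrcoef(log_trace[:-1], log_trace[1:])[0, 1],
}
for name, value in features.items():
    print(f"{name:16s} {value: .4f}")
```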
• #### A Novel Surrogate-assisted Evolutionary Algorithm Applied to Partition-based Ensemble Learning. (arXiv:2104.08048v1 [cs.NE])
We propose a novel surrogate-assisted Evolutionary Algorithm for solving expensive combinatorial optimization problems. We integrate a surrogate model, which is used for fitness value estimation, into a state-of-the-art P3-like variant of the Gene-Pool Optimal Mixing Algorithm (GOMEA) and adapt the resulting algorithm for solving non-binary combinatorial problems. We test the proposed algorithm on an ensemble learning problem. Ensembling several models is a common Machine Learning technique to achieve better performance. We consider ensembles of several models trained on disjoint subsets of a dataset. Finding the best dataset partitioning is naturally a combinatorial non-binary optimization problem. Fitness function evaluations can be extremely expensive if complex models, such as Deep Neural Networks, are used as learners in an ensemble. Therefore, the number of fitness function evaluations is typically limited, necessitating expensive optimization techniques. In our experiments we use five classification datasets from the OpenML-CC18 benchmark and Support-vector Machines as learners in an ensemble. The proposed algorithm demonstrates better performance than alternative approaches, including Bayesian optimization algorithms. It manages to find better solutions using just several thousand fitness function evaluations for an ensemble learning problem with up to 500 variables.
in arXiv: Computer Science: Neural and Evolutionary Computing on April 19, 2021 01:30 AM.
• #### Evolving and Merging Hebbian Learning Rules: Increasing Generalization by Decreasing the Number of Rules. (arXiv:2104.07959v1 [cs.NE])
Generalization to out-of-distribution (OOD) circumstances after training remains a challenge for artificial agents. To improve the robustness displayed by plastic Hebbian neural networks, we evolve a set of Hebbian learning rules, where multiple connections are assigned to a single rule. Inspired by the biological phenomenon of the genomic bottleneck, we show that by allowing multiple connections in the network to share the same local learning rule, it is possible to drastically reduce the number of trainable parameters, while obtaining a more robust agent. During evolution, by iteratively using simple K-Means clustering to combine rules, our Evolve and Merge approach is able to reduce the number of trainable parameters from 61,440 to 1,920, while at the same time improving robustness, all without increasing the number of generations used. While optimization of the agents is done on a standard quadruped robot morphology, we evaluate the agents' performances on slight morphology modifications in a total of 30 unseen morphologies. Our results add to the discussion on generalization, overfitting and OOD adaptation. To create agents that can adapt to a wider array of unexpected situations, Hebbian learning combined with a regularising "genomic bottleneck" could be a promising research direction.
in arXiv: Computer Science: Neural and Evolutionary Computing on April 19, 2021 01:30 AM.
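The "merge" step described above, letting many connections share one local rule, can be illustrated with the clustering operation alone: treat each connection's rule coefficients as a point, cluster the points with K-Means, and replace every rule by its cluster centroid. The 5-parameter rule form and the (scaled-down) sizes in the sketch are illustrative assumptions, not the paper's setup; the abstract reports reducing the number of trainable parameters from 61,440 to 1,920.

```python
# Illustrative "merge" step: cluster per-connection Hebbian rule coefficients
# with K-Means so that every connection in a cluster shares the centroid rule.
# The (A, B, C, D, eta) rule form and sizes below are assumptions for the demo.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
n_connections, n_rule_params = 6_144, 5           # scaled down for a quick demo
rules = rng.normal(size=(n_connections, n_rule_params))

n_merged_rules = 192                              # target number of shared rules
kmeans = KMeans(n_clusters=n_merged_rules, n_init=3, random_state=0).fit(rules)

shared_rules = kmeans.cluster_centers_            # the merged rule set
assignment = kmeans.labels_                       # which shared rule each connection uses

print(f"{n_connections} per-connection rules merged into {len(shared_rules)} shared rules")

# A Hebbian update for connection i would then read its shared rule, e.g.
# A, B, C, D, eta = shared_rules[assignment[i]]
# dw = eta * (A * pre * post + B * pre + C * post + D)
```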
• #### Controlling extended criticality via modular connectivity. (arXiv:2104.07939v1 [q-bio.NC])
Criticality has been conjectured as an integral part of neuronal network dynamics. Operating at a critical threshold requires precise parameter tuning, and a corresponding mechanism remains an open question. Recent studies have suggested that topological features observed in brain networks give rise to a Griffiths phase, leading to power-laws in brain activity dynamics and the operational benefits of criticality in an extended parameter region. Motivated by growing evidence of neural correlates of different states of consciousness, we investigate how topological changes affect the expression of a Griffiths phase. We analyze the activity decay in modular networks using a Susceptible-Infected-Susceptible propagation model and find that we can control the extension of the Griffiths phase by altering intra- and intermodular connectivity. We find that by adjusting system parameters, we can counteract changes in critical behavior and maintain a stable critical region despite changes in network topology. Our results give insight into how structural network properties affect the emergence of a Griffiths phase and how its features are linked to established topological network metrics. We discuss how those findings can contribute to understanding the observed changes in functional brain networks. Finally, we indicate how our results could be useful in the study of disease spreading.
in arXiv: Quantitative Biology: Neurons and Cognition on April 19, 2021 01:30 AM.
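A minimal version of the kind of simulation described above is a discrete-time SIS process on a stochastic block model graph, where the intra- and inter-module connection probabilities are the knobs. The parameters below are arbitrary and the dynamics are far simpler than the study's (no Griffiths-phase analysis is attempted); the sketch only shows that modular connectivity controls whether activity persists or decays.

```python
# Minimal discrete-time SIS dynamics on a modular (stochastic block model) graph;
# intra-/inter-module probabilities are the tunable connectivity parameters.
import numpy as np
import networkx as nx

def sis_activity(p_intra, p_inter, beta=0.08, mu=0.2, steps=300, seed=4):
    sizes = [100, 100, 100, 100]                           # four modules
    probs = np.full((4, 4), p_inter)
    np.fill_diagonal(probs, p_intra)
    g = nx.stochastic_block_model(sizes, probs.tolist(), seed=seed)
    adj = nx.to_numpy_array(g)
    rng = np.random.default_rng(seed)

    infected = rng.random(len(g)) < 0.1                    # 10% initially active
    trajectory = []
    for _ in range(steps):
        pressure = adj @ infected                          # number of active neighbours
        p_on = 1.0 - (1.0 - beta) ** pressure              # activation probability
        new_active = (~infected) & (rng.random(len(g)) < p_on)
        deactivated = infected & (rng.random(len(g)) < mu)
        infected = (infected | new_active) & ~deactivated
        trajectory.append(infected.mean())
    return np.array(trajectory)

for p_intra, p_inter in [(0.10, 0.001), (0.02, 0.001)]:
    traj = sis_activity(p_intra, p_inter)
    print(f"p_intra={p_intra:.3f} p_inter={p_inter:.3f} "
          f"final activity={traj[-50:].mean():.3f}")
```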
• #### A New Pathway to Approximate Energy Expenditure and Recovery of an Athlete. (arXiv:2104.07903v1 [cs.NE])
This work proposes to use evolutionary computation as a pathway to allow a new perspective on the modeling of energy expenditure and recovery of an individual athlete during exercise. We revisit a theoretical concept called the "three component hydraulic model" which is designed to simulate metabolic systems during exercise and which is able to address recently highlighted shortcomings of currently applied performance models. This hydraulic model has not been entirely validated on individual athletes because it depends on physiological measures that cannot be acquired in the required precision or quantity. This paper introduces a generalized interpretation and formalization of the three component hydraulic model that removes its ties to concrete metabolic measures and allows to use evolutionary computation to fit its parameters to an athlete.
in arXiv: Computer Science: Neural and Evolutionary Computing on April 19, 2021 01:30 AM.
• #### Quantum Architecture Search via Deep Reinforcement Learning. (arXiv:2104.07715v1 [quant-ph])
Recent advances in quantum computing have drawn considerable attention to building realistic applications for, and making use of, quantum computers. However, designing a suitable quantum circuit architecture requires expert knowledge. For example, it is non-trivial to design a quantum gate sequence that generates a particular quantum state with as few gates as possible. We propose a quantum architecture search framework with the power of deep reinforcement learning (DRL) to address this challenge. In the proposed framework, the DRL agent can only access the Pauli-$X$, $Y$, $Z$ expectation values and a predefined set of quantum operations for learning the target quantum state, and is optimized by the advantage actor-critic (A2C) and proximal policy optimization (PPO) algorithms. We demonstrate a successful generation of quantum gate sequences for multi-qubit GHZ states without encoding any knowledge of quantum physics in the agent. The design of our framework is rather general and can be employed with other DRL architectures or optimization methods to study gate synthesis and compilation for many quantum states.
in arXiv: Computer Science: Neural and Evolutionary Computing on April 19, 2021 01:30 AM.
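To make the gate-synthesis setting in the entry above more tangible, here is a hedged, self-contained toy: a 3-qubit statevector simulation where an action sequence of gates is scored by its fidelity to the GHZ state, which is the kind of signal a DRL agent could use as reward. This is not the authors' framework; the gate set, qubit count and reward definition are illustrative.

```python
# Toy gate-synthesis environment sketch (illustrative, not the paper's code):
# actions append gates; reward = fidelity of the resulting state to |GHZ>.
import numpy as np

N = 3                                              # qubits
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)       # Hadamard

def apply_1q(state, gate, q):
    psi = np.moveaxis(state.reshape([2] * N), q, 0)
    return np.moveaxis(np.tensordot(gate, psi, axes=1), 0, q).reshape(-1)

def apply_cnot(state, control, target):
    psi = state.reshape([2] * N).copy()
    sel = [slice(None)] * N
    sel[control] = 1                               # act on the control = |1> slice
    axis = target if target < control else target - 1
    psi[tuple(sel)] = np.flip(psi[tuple(sel)], axis=axis)
    return psi.reshape(-1)

ghz = np.zeros(2 ** N, dtype=complex)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)

def episode_reward(actions):
    """actions: e.g. [('H', 0), ('CX', 0, 1), ('CX', 1, 2)] -> GHZ fidelity."""
    psi = np.zeros(2 ** N, dtype=complex)
    psi[0] = 1.0
    for a in actions:
        psi = apply_1q(psi, H, a[1]) if a[0] == 'H' else apply_cnot(psi, a[1], a[2])
    return abs(np.vdot(ghz, psi)) ** 2

print(episode_reward([('H', 0), ('CX', 0, 1), ('CX', 1, 2)]))  # ~1.0
```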
• #### Information cartography in association rule mining. (arXiv:2003.00348v2 [cs.NE] UPDATED)
Association Rule Mining is a machine learning method for discovering interesting relations between attributes in a huge transaction database. Typically, algorithms for Association Rule Mining generate a huge number of association rules, from which it is hard to extract structured knowledge and present it automatically in a form suitable for the user. Recently, information cartography has been proposed for creating structured summaries of information and visualizing them with a methodology called "metro maps". This has been applied to several problem domains where pattern mining was necessary. The aim of this study is to develop a method for the automatic creation of metro maps of information obtained by Association Rule Mining and, thus, to extend its applicability to other machine learning methods. Although the proposed method consists of multiple steps, its core is metro map construction, which is defined in the study as an optimization problem and solved using an evolutionary algorithm. Finally, the method was applied to four well-known UCI Machine Learning datasets and one sport dataset. Visualizing the resulting metro maps shows not only that they are a suitable tool for presenting structured knowledge hidden in data, but also that they can tell stories to users.
in arXiv: Computer Science: Neural and Evolutionary Computing on April 19, 2021 01:30 AM.
• #### From base pair to brain
Nature Neuroscience, Published online: 19 April 2021; doi:10.1038/s41593-021-00852-2
In new research, Smith et al. identify thousands of novel genetic associations with human brain structure and function, including those on the X chromosome, by analyzing ~4,000 MRI-derived traits measured in almost 40,000 individuals from the UK Biobank resource.
in Nature Neuroscience on April 19, 2021 12:00 AM.
• #### Multimodal neural recordings with Neuro-FITM uncover diverse patterns of cortical–hippocampal interactions
Nature Neuroscience, Published online: 19 April 2021; doi:10.1038/s41593-021-00841-5
Liu et al. present a flexible, insertable and transparent microelectrode (FITM) array termed Neuro-FITM. Multimodal recordings with Neuro-FITM reveal diverse and selective large-scale cortical activation patterns associated with hippocampal sharp-wave ripples.
in Nature Neuroscience on April 19, 2021 12:00 AM.
• #### Attractor dynamics gate cortical information flow during decision-making
Nature Neuroscience, Published online: 19 April 2021; doi:10.1038/s41593-021-00840-6
The flow of information in the brain is regulated over space and time. The authors show that mice can adaptively filter stimuli originating in the sensory cortex. The stimuli are gated by attractor dynamics in the frontal cortex, revealing a mechanism of gating of neural information.
in Nature Neuroscience on April 19, 2021 12:00 AM.
• #### Closing the gate to distractors during decision-making
Nature Neuroscience, Published online: 19 April 2021; doi:10.1038/s41593-021-00833-5
The act of remembering information or planning actions in short term memory can often be robust to distracting or conflicting information. Finkelstein et al. reveal the neural computations behind this robustness against distractors using a combination of optogenetics, behavior, neural recordings and neural network modelling.
in Nature Neuroscience on April 19, 2021 12:00 AM.
• #### An expanded set of genome-wide association studies of brain imaging phenotypes in UK Biobank
Nature Neuroscience, Published online: 19 April 2021; doi:10.1038/s41593-021-00826-4
The Elliott and Smith teams used imaging and genetics data from 40,000 volunteers in the UK Biobank healthcare study, discovering new genetic influences over brain structure and function, which are of relevance to both rare and common diseases.
in Nature Neuroscience on April 19, 2021 12:00 AM.
• #### A community effort to bring structure to disorder
Nature Methods, Published online: 19 April 2021; doi:10.1038/s41592-021-01123-5
With protein structure prediction recently getting a seismic boost in accuracy, hopes are also up to better predict unstructured protein regions that can adopt diverse conformations. CAID, a community effort to revive systematic benchmarking, should help.
in Nature Methods on April 19, 2021 12:00 AM.
• #### Critical assessment of protein intrinsic disorder prediction
Nature Methods, Published online: 19 April 2021; doi:10.1038/s41592-021-01117-3
Results are presented from the first Critical Assessment of protein Intrinsic Disorder prediction (CAID) experiment, a community-based blind test to determine the state of the art in predicting intrinsically disordered regions in proteins.
in Nature Methods on April 19, 2021 12:00 AM.
• #### The DANNCE of the rats: a new toolkit for 3D tracking of animal behavior
Nature Methods, Published online: 19 April 2021; doi:10.1038/s41592-021-01110-w
A new approach tracks animal movements in 3D from multiple camera views using volumetric triangulation, reconciling occlusions and ambiguities present in any one camera view.
in Nature Methods on April 19, 2021 12:00 AM.
• #### Geometric deep learning enables 3D kinematic profiling across species and environments
Nature Methods, Published online: 19 April 2021; doi:10.1038/s41592-021-01106-6
DANNCE enables robust 3D tracking of animals’ limbs and other features in naturalistic environments by making use of a deep learning approach that incorporates geometric reasoning. DANNCE is demonstrated on behavioral sequences from rodents, marmosets, and chickadees.
in Nature Methods on April 19, 2021 12:00 AM.
• #### Adjuvanting a subunit COVID-19 vaccine to induce protective immunity
Nature, Published online: 19 April 2021; doi:10.1038/s41586-021-03530-2
Adjuvanting a subunit COVID-19 vaccine to induce protective immunity
in Nature on April 19, 2021 12:00 AM.
• #### Author Correction: Slower decay of landfalling hurricanes in a warming world
Nature, Published online: 19 April 2021; doi:10.1038/s41586-021-03321-9
Author Correction: Slower decay of landfalling hurricanes in a warming world
in Nature on April 19, 2021 12:00 AM.
• #### Author Correction: Discovering the genes mediating the interactions between chronic respiratory diseases in the human interactome
Nature Communications, Published online: 19 April 2021; doi:10.1038/s41467-021-22939-x
Author Correction: Discovering the genes mediating the interactions between chronic respiratory diseases in the human interactome
in Nature Communications on April 19, 2021 12:00 AM.
• #### Phenological shifts in lake stratification under climate change
Nature Communications, Published online: 19 April 2021; doi:10.1038/s41467-021-22657-4
Stratification has a considerable influence on lake ecology, but there is little understanding of past or future changes in its seasonality. Here, the authors use modelling and empirical data to determine that between 1901–2099, climate change causes stratification to start earlier and end later.
in Nature Communications on April 19, 2021 12:00 AM.
• #### A time-domain phase diagram of metastable states in a charge ordered quantum material
Nature Communications, Published online: 19 April 2021; doi:10.1038/s41467-021-22646-7
Tracking the evolution of non-equilibrium phases requires measurements over a wide range of timescales. Here, using a combination of femtosecond spectroscopy and scanning tunneling microscopy, the authors map out a temporal phase diagram of metastable states in a charge-ordered material 1T-TaS2.
in Nature Communications on April 19, 2021 12:00 AM.
• #### Energy implications of the 21st century agrarian transition
Nature Communications, Published online: 19 April 2021; doi:10.1038/s41467-021-22581-7
The global agrarian transition is characterized by a rise in large-scale land acquisitions (LSLAs), whose energy impacts are unknown. Here, the authors assess how LSLAs change land use, finding that they necessitate greater investment in energy to meet demands, and greater greenhouse gas emissions.
in Nature Communications on April 19, 2021 12:00 AM.
• #### STING enhances cell death through regulation of reactive oxygen species and DNA damage
Nature Communications, Published online: 19 April 2021; doi:10.1038/s41467-021-22572-8
The endoplasmic reticulum-localized adaptor STING regulates the innate immune response through its ability to sense DNA damage. Here the authors reveal that STING functions as a regulator of cellular ROS homeostasis and tumor cell susceptibility to reactive oxygen dependent, DNA damaging agents.
in Nature Communications on April 19, 2021 12:00 AM.
• #### Chromatic micromaps in primary visual cortex
Nature Communications, Published online: 19 April 2021; doi:10.1038/s41467-021-22488-3
Stimulus feature maps are found in primary visual cortex of many species. Here the authors show color maps in trichromatic primates containing segregated ensembles of neurons with distinct chromatic signatures that associate with cortical modules known as blobs.
in Nature Communications on April 19, 2021 12:00 AM.
• #### Cytoplasmic condensation induced by membrane damage is associated with antibiotic lethality
Nature Communications, Published online: 19 April 2021; doi:10.1038/s41467-021-22485-6
The detailed mechanisms of action of bactericidal antibiotics remain unclear. Here, Wong et al. show that these antibiotics induce cytoplasmic condensation through membrane damage and outflow of cytoplasmic contents, as well as accumulation of reactive metabolic by-products and lipid peroxidation, as part of their lethality.
in Nature Communications on April 19, 2021 12:00 AM.
• #### Transcriptomic analysis to identify genes associated with selective hippocampal vulnerability in Alzheimer’s disease
Nature Communications, Published online: 19 April 2021; doi:10.1038/s41467-021-22399-3
Alzheimer’s disease (AD) is typically associated with hippocampal and cortical pathology, although hippocampal sparing and limbic predominant forms exist. The authors use transcriptomic analysis and neuropathology to identify genes associated with selective hippocampal vulnerability in AD.
in Nature Communications on April 19, 2021 12:00 AM.
• #### A central circadian oscillator confers defense heterosis in hybrids without growth vigor costs
Nature Communications, Published online: 19 April 2021; doi:10.1038/s41467-021-22268-z
There is frequently a trade-off between plant immunity and growth. Here the authors show that the epigenetic control of CCA1, encoding a core component of the circadian oscillator, simultaneously promotes heterosis for both defense and growth in hybrids under pathogen invasion.
in Nature Communications on April 19, 2021 12:00 AM.
• #### Daily briefing: Five questions about COVID vaccines and blood clots
Nature, Published online: 19 April 2021; doi:10.1038/d41586-021-01056-1
What scientists want to know about COVID vaccines and blood clots, the first flight on Mars and why a scientific career isn’t a pipeline, it’s a braided river.
in Nature on April 19, 2021 12:00 AM.
• #### ‘The damage is total’: fire rips through historic South African library and plant collection
Nature, Published online: 19 April 2021; doi:10.1038/d41586-021-01045-4
University of Cape Town risks losing ‘irreplaceable’ historical material on anthropology, ecology and politics.
in Nature on April 19, 2021 12:00 AM.
• #### Webcast: How to do a great peer review
Nature, Published online: 19 April 2021; doi:10.1038/d41586-021-01044-5
in Nature on April 19, 2021 12:00 AM.
• #### Charting ice from above
Nature, Published online: 19 April 2021; doi:10.1038/d41586-021-01030-x
Instrument engineer Cristina Sans Coll flies the polar skies to help measure climate change.
in Nature on April 19, 2021 12:00 AM.
• #### Born secret — the heavy burden of bomb physics
Nature, Published online: 19 April 2021; doi:10.1038/d41586-021-01024-9
How data restrictions shaped nuclear discovery, energy research and more.
in Nature on April 19, 2021 12:00 AM.
• #### A molecular connection hints at how a genetic risk factor drives Crohn’s disease
Nature, Published online: 19 April 2021; doi:10.1038/d41586-021-00979-z
Mutations of the NOD2 gene are risk factors for Crohn’s disease. Many aspects of how they contribute to the condition are unknown. The discovery of cell populations that are involved suggests new therapeutic options.
in Nature on April 19, 2021 12:00 AM.
• #### Lift off! First flight on Mars launches new way to explore worlds
Nature, Published online: 19 April 2021; doi:10.1038/d41586-021-00909-z
NASA’s Ingenuity helicopter successfully hovered for 40 seconds in Mars’s thin atmosphere.
in Nature on April 19, 2021 12:00 AM.
• #### Journal of Neurology
in Journal of Neurology on April 19, 2021 12:00 AM.
• #### Experimental Brain Research
in Experimental Brain Research on April 19, 2021 12:00 AM.
• #### Journal of Molecular Neuroscience
in Journal of Molecular Neuroscience on April 19, 2021 12:00 AM.
### Abstract
The present study aimed to detect the effect of tenascin C (TNC) on cell function and chemosensitivity to paclitaxel and phosphatidylinositol 3-kinase/protein kinase B (PI3K/AKT) signaling in glioma cells.
Human glioma cell lines U87, LN-229, T98G and U251 and normal human astrocytes were obtained, and TNC expression was detected in them. U87 and U251 cells were then infected with lentivirus for control overexpression, TNC overexpression, control knockdown, or TNC knockdown in functional experiments. Rescue experiments were then performed to evaluate the effect of the PI3K/AKT activator 740 Y-P on cell function and chemosensitivity to paclitaxel in TNC-knockdown U251 cells. TNC mRNA and protein expression was elevated in glioma cells, including U87, LN-229, U251 and T98G cells, compared to normal human astrocytes. In U87 and U251 cells, TNC promoted proliferation while inhibiting apoptosis. In addition, TNC upregulated PI3K and p-AKT protein expression in U87 and U251 cells. As for chemosensitivity, TNC increased relative viability in U251 cells treated with 400 ng/mL and 800 ng/mL paclitaxel. In terms of stemness, TNC increased the sphere number per 1000 cells, the CD44+CD133+ cell percentage and 1/stem cell frequency (assessed by extreme limiting dilution analysis) in U251 cells. In rescue experiments, 740 Y-P reduced the effect of TNC on proliferation, apoptosis, chemosensitivity to paclitaxel, and stemness in U251 cells.
TNC acts as an oncogenic factor by promoting cancer cell proliferation and stemness while inhibiting apoptosis and chemosensitivity to paclitaxel in glioma via modulation of PI3K/AKT signaling.
in Journal of Molecular Neuroscience on April 19, 2021 12:00 AM.
### Background
Electroencephalography (EEG) significantly contributes to the neuroprognostication after resuscitation from cardiac arrest. Recent studies suggest that the prognostic value of EEG is highest for continuous recording within the first days after cardiac arrest. Early continuous EEG, however, is not available in all hospitals. In this observational study, we sought to evaluate the predictive value of a ‘late’ EEG recording 5–14 days after cardiac arrest without sedatives.
### Methods
We retrospectively analyzed EEG data in consecutive adult patients treated at the medical intensive care units (ICU) of the Charité—Universitätsmedizin Berlin. Outcome was assessed as cerebral performance category (CPC) at discharge from ICU, with an unfavorable outcome being defined as CPC 4 and 5.
### Results
In 187 patients, a ‘late’ EEG recording was performed. Of these patients, 127 were without continuous administration of sedative agents for at least 24 h before the EEG recording. In this patient group, a continuously suppressed background activity < 10 µV predicted an unfavorable outcome with a sensitivity of 31% (95% confidence interval (CI) 20–45) and a specificity of 99% (95% CI 91–100). In patients with suppressed background activity and generalized periodic discharges (GPDs), sensitivity was 15% (95% CI 7–27) and specificity was 100% (95% CI 94–100). GPDs on unsuppressed background activity were associated with a sensitivity of 42% (95% CI 29–46) and a specificity of 92% (95% CI 82–97).
### Conclusions
A ‘late’ EEG performed 5 to 14 days after resuscitation from cardiac arrest can aid in prognosticating functional outcome. A suppressed EEG background activity in this time period indicates poor outcome.
in Journal of Neurology on April 19, 2021 12:00 AM.
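The sensitivity and specificity figures with 95% confidence intervals quoted above can be reproduced from raw 2x2 counts along the following lines. The counts shown are placeholders (the abstract does not report them), and the Clopper-Pearson interval is only one plausible choice of method.

```python
# Hedged sketch: proportion estimate with an exact (Clopper-Pearson) 95% CI,
# as used for sensitivity/specificity reporting. Counts below are placeholders.
from statsmodels.stats.proportion import proportion_confint

def rate_with_ci(successes, total):
    lo, hi = proportion_confint(successes, total, alpha=0.05, method="beta")
    return successes / total, (lo, hi)

# sensitivity = true positives / all patients with the unfavorable outcome
print(rate_with_ci(successes=17, total=55))   # placeholder counts only
```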
• #### Neurosensory dysphagia in a COVID-19 patient
in Journal of Neurology on April 19, 2021 12:00 AM.
### Abstract
Previous studies have reported (i) freezing-like posturographic correlates in response to viewing painful as compared to non-painful scenes (Lelard et al., Front Hum Neurosci 7:4, 2013) and (ii) an increase of this response during mental simulation as compared to passive viewing of the painful scenes (Lelard et al., Front Psychol 8:2012, 2017). The main objective of the present study was to explore the modulation of posturographic correlates of viewing painful scenes by the level of depicted pain, and the influence of mental simulation on this modulation. Thirty-six participants (36.3 ± 11.4 years old) were included in this study. During the experiment, participants stood on a posturographic platform. Three types of static visual stimuli, depicting different pain-level situations (no pain, low pain, high pain), were presented in random order. In a first run, participants watched these stimuli passively (passive condition); in a second run, they were asked to “imagine that they were personally experiencing the situations they were about to see” (mental simulation condition). For each picture, subjective ratings of displeasure and desire to avoid were recorded at the end of the posturographic session. Results support an approach-type behavior in response to high-pain stimuli in the passive condition, which becomes a withdrawal-type behavior in the mental simulation condition. Moreover, this withdrawal-type behavior is modulated by the level of depicted pain, and this modulation does not appear in the subjective data. In conclusion, these results are in accordance with those of previous studies showing modulation of posturographic correlates of pain perception by mental simulation and report, for the first time, modulation of this effect by the level of depicted pain. The dichotomy of this modulatory effect between subjective and objective data is discussed, as is the finding that an approach-type behavior towards passively viewed painful stimuli becomes a withdrawal-type behavior when mental simulation is applied to the same stimuli.
in Experimental Brain Research on April 19, 2021 12:00 AM.
### Abstract
The ability to perform individual finger movements, highly important in daily activities, involves visual monitoring and proprioception. We investigated the influence of vision on the spatial and temporal control of independent finger movements, for the dominant and non-dominant hand and in relation to sex. Twenty-six healthy middle-aged to old adults (M age = 61 years; range 46–79 years; females n = 13) participated. Participants performed cyclic flexion–extension movements at the metacarpophalangeal joint of one finger at a time while keeping the other fingers as still as possible. Movements were recorded using 3D optoelectronic motion technique (120 Hz). The movement trajectory distance; speed peaks (movement smoothness); Individuation Index (II; the degree a finger can move in isolation from the other fingers) and Stationarity Index (SI; how still a finger remains while the other fingers move) were extracted. The main findings were: (1) vision only improved the II and SI marginally; (2) longer trajectories were evident in the no-vision condition for the fingers of the dominant hand in the female group; (3) longer trajectories were specifically evident for the middle and ring fingers within the female group; (4) females had marginally higher II and SI compared with males; and (5) females had fewer speed peaks than males, particularly for the ring finger. Our results suggest that visual monitoring of finger movements marginally improves performance of our non-manipulative finger movement task. A consistent finding was that females showed greater independent finger control compared with males.
in Experimental Brain Research on April 19, 2021 12:00 AM.
• #### Proceedings of the Eighth Annual Deep Brain Stimulation Think Tank: Advances in Optogenetics, Ethical Issues Affecting DBS Research, Neuromodulatory Approaches for Depression, Adaptive Neurostimulation, and Emerging DBS Technologies
in Frontiers in Human Neuroscience on April 19, 2021 12:00 AM.
• #### Corrigendum: Vitamin D, Folate, and Cobalamin Serum Concentrations Are Related to Brain Volume and White Matter Integrity in Urban Adults
in Frontiers in Ageing Neuroscience on April 19, 2021 12:00 AM.
• #### Multiscale analysis of single and double maternal-zygotic Myh9 and Myh10 mutants during mouse preimplantation development
During the first days of mammalian development, the embryo forms the blastocyst, the structure responsible for implanting the mammalian embryo. Consisting of an epithelium enveloping the pluripotent inner cell mass and a fluid-filled lumen, the blastocyst results from a series of cleavage divisions, morphogenetic movements and lineage specification. Recent studies identified the essential role of actomyosin contractility in driving cytokinesis, morphogenesis and fate specification leading to the formation of the blastocyst. However, the preimplantation development of contractility mutants has not been characterized. Here, we generated single and double maternal-zygotic mutants of non-muscle myosin II heavy chains (NMHC) to characterize them with multiscale imaging. We find that Myh9 (NMHC II-A) is the major NMHC during preimplantation development as its maternal-zygotic loss causes failed cytokinesis, increased duration of the cell cycle, weaker embryo compaction and reduced differentiation, whereas Myh10 (NMHC II-B) maternal-zygotic loss is much less severe. Double maternal-zygotic mutants for Myh9 and Myh10 show a much stronger phenotype, failing most attempts at cytokinesis. We find that morphogenesis and fate specification are affected but nevertheless carry on in a timely fashion, regardless of the impact of the mutations on cell number. Strikingly, even when all cell divisions fail, the resulting single-celled embryo can initiate trophectoderm differentiation and lumen formation by accumulating fluid in increasingly large vacuoles. Therefore, contractility mutants reveal that fluid accumulation is a cell-autonomous process and that the preimplantation program carries on independently of successful cell division.
in eLife on April 19, 2021 12:00 AM.
• #### Characterizing the hum of hovering animals
The sounds of flying animals, such as the hum of a hummingbird as it hovers, are influenced by the unique forces generated by the flapping of their wings.
in eLife on April 19, 2021 12:00 AM.
• #### Genetic determinants facilitating the evolution of resistance to carbapenem antibiotics
In this era of rising antibiotic resistance, in contrast to our increasing understanding of mechanisms that cause resistance, our understanding of mechanisms that influence the propensity to evolve resistance remains limited. Here, we identified genetic factors that facilitate the evolution of resistance to carbapenems, the antibiotic of 'last resort,' in Klebsiella pneumoniae, the major carbapenem resistant species. In clinical isolates, we found that high-level transposon insertional mutagenesis plays an important role in contributing to high-level resistance frequencies in several major and emerging carbapenem-resistant lineages. A broader spectrum of resistance-conferring mutations for select carbapenems such as ertapenem also enables higher resistance frequencies and importantly, creates stepping-stones to achieve high-level resistance to all carbapenems. These mutational mechanisms can contribute to the evolution of resistance, in conjunction with the loss of systems that restrict horizontal resistance gene uptake, such as the CRISPR-Cas system. Given the need for greater antibiotic stewardship, these findings argue that in addition to considering the current efficacy of an antibiotic for a clinical isolate in antibiotic selection, considerations of future efficacy are also important. The genetic background of a clinical isolate and the exact antibiotic identity can and should also be considered as it is a determinant of a strain's propensity to become resistant. Together, these findings thus provide a molecular framework for understanding acquisition of carbapenem resistance in K. pneumoniae with important implications for diagnosing and treating this important class of pathogens.
in eLife on April 19, 2021 12:00 AM.
• #### Repeated introductions and intensive community transmission fueled a mumps virus outbreak in Washington State
In 2016/2017, Washington State experienced a mumps outbreak despite high childhood vaccination rates, with cases more frequently detected among school-aged children and members of the Marshallese community. We sequenced 166 mumps virus genomes collected in Washington and other US states, and traced mumps introductions and transmission within Washington. We uncover that mumps was introduced into Washington approximately 13 times, primarily from Arkansas, sparking multiple co-circulating transmission chains. Although age and vaccination status may have impacted transmission, our dataset could not quantify their precise effects. Instead, the outbreak in Washington was overwhelmingly sustained by transmission within the Marshallese community. Our findings underscore the utility of genomic data to clarify epidemiologic factors driving transmission, and pinpoint contact networks as critical for mumps transmission. These results imply that contact structures and historic disparities may leave populations at increased risk for respiratory virus disease even when a vaccine is effective and widely used.
in eLife on April 19, 2021 12:00 AM.
• #### Evolutionarily conserved sperm factors, DCST1 and DCST2, are required for gamete fusion
To trigger gamete fusion, spermatozoa need to activate the molecular machinery in which sperm IZUMO1 and oocyte JUNO (IZUMO1R) interaction plays a critical role in mammals. Although a set of factors involved in this process has recently been identified, no common factor that can function in both vertebrates and invertebrates has yet been reported. Here, we first demonstrate that the evolutionarily conserved factors dendrocyte expressed seven transmembrane protein domain-containing 1 (DCST1) and dendrocyte expressed seven transmembrane protein domain-containing 2 (DCST2) are essential for sperm–egg fusion in mice, as proven by gene disruption and complementation experiments. We also found that the protein stability of another gamete fusion-related sperm factor, SPACA6, is differently regulated by DCST1/2 and IZUMO1. Thus, we suggest that spermatozoa ensure proper fertilization in mammals by integrating various molecular pathways, including an evolutionarily conserved system that has developed as a result of nearly one billion years of evolution.
in eLife on April 19, 2021 12:00 AM.
• #### Competitive binding of STATs to receptor phospho-Tyr motifs accounts for altered cytokine responses
Cytokines elicit pleiotropic and non-redundant activities despite strong overlap in their usage of receptors, JAKs and STATs molecules. We use IL-6 and IL-27 to ask how two cytokines activating the same signaling pathway have different biological roles. We found that IL-27 induces more sustained STAT1 phosphorylation than IL-6, with the two cytokines inducing comparable levels of STAT3 phosphorylation. Mathematical and statistical modelling of IL-6 and IL-27 signaling identified STAT3 binding to GP130, and STAT1 binding to IL-27Rα, as the main dynamical processes contributing to sustained pSTAT1 levels by IL-27. Mutation of Tyr613 on IL-27Rα decreased IL-27-induced STAT1 phosphorylation by 80% but had limited effect on STAT3 phosphorylation. Strong receptor/STAT coupling by IL-27 initiated a unique gene expression program, which required sustained STAT1 phosphorylation and IRF1 expression and was enriched in classical Interferon Stimulated Genes. Interestingly, the STAT/receptor coupling exhibited by IL-6/IL-27 was altered in patients with systemic lupus erythematosus (SLE). IL-6/IL-27 induced a more potent STAT1 activation in SLE patients than in healthy controls, which correlated with higher STAT1 expression in these patients. Partial inhibition of JAK activation by sub-saturating doses of Tofacitinib specifically lowered the levels of STAT1 activation by IL-6. Our data show that receptor and STATs concentrations critically contribute to shape cytokine responses and generate functional pleiotropy in health and disease.
in eLife on April 19, 2021 12:00 AM.
• #### Structure of HIV-1 gp41 with its membrane anchors targeted by neutralizing antibodies
The HIV-1 gp120/gp41 trimer undergoes a series of conformational changes in order to catalyze gp41-induced fusion of viral and cellular membranes. Here, we present the crystal structure of gp41 locked in a fusion intermediate state by an MPER-specific neutralizing antibody. The structure illustrates the conformational plasticity of the six membrane anchors arranged asymmetrically with the fusion peptides and the transmembrane regions pointing into different directions. Hinge regions located adjacent to the fusion peptide and the transmembrane region facilitate the conformational flexibility that allows high affinity binding of broadly neutralizing anti-MPER antibodies. Molecular dynamics simulation of the MPER Ab-stabilized gp41 conformation reveals a possible transition pathway into the final post-fusion conformation with the central fusion peptides forming a hydrophobic core with flanking transmembrane regions. This suggests that MPER-specific broadly neutralizing antibodies can block final steps of refolding of the fusion peptide and the transmembrane region, which is required for completing membrane fusion.
in eLife on April 19, 2021 12:00 AM.
• #### Downregulation of glial genes involved in synaptic function mitigates Huntington's Disease pathogenesis
Most research on neurodegenerative diseases has focused on neurons, yet glia help form and maintain the synapses whose loss is so prominent in these conditions. To investigate the contributions of glia to Huntington's disease (HD), we profiled the gene expression alterations of Drosophila expressing human mutant Huntingtin (mHTT) in either glia or neurons and compared these changes to what is observed in HD human and HD mice striata. A large portion of conserved genes are concordantly dysregulated across the three species; we tested these genes in a high-throughput behavioral assay and found that downregulation of genes involved in synapse assembly mitigated pathogenesis and behavioral deficits. To our surprise, reducing dNRXN3 function in glia was sufficient to improve the phenotype of flies expressing mHTT in neurons, suggesting that mHTT's toxic effects in glia ramify throughout the brain. This supports a model in which dampening synaptic function is protective because it attenuates the excitotoxicity that characterizes HD.
in eLife on April 19, 2021 12:00 AM.
• #### Defective apoptotic cell contractility provokes sterile inflammation leading to liver damage and tumour suppression
Apoptosis is characterized by profound morphological changes, but their physiological purpose is unknown. To characterize the role of apoptotic cell contraction, ROCK1 was rendered caspase non-cleavable (ROCK1nc) by mutating Aspartate 1113, which revealed that ROCK1 cleavage was necessary for forceful contraction and membrane blebbing. When homozygous ROCK1nc mice were treated with the liver-selective apoptotic stimulus of diethylnitrosamine, ROCK1nc mice had more profound liver damage with greater neutrophil infiltration than wild-type mice. Inhibition of the damage associated molecular pattern protein HMGB1 or signalling by its cognate receptor TLR4 lowered neutrophil infiltration and reduced liver damage. ROCK1nc mice also developed fewer diethylnitrosamine-induced hepatocellular carcinoma (HCC) tumours, while HMGB1 inhibition increased HCC tumour numbers. Thus, ROCK1 activation and consequent cell contraction are required to limit sterile inflammation and damage amplification following tissue-scale cell death. Additionally, these findings reveal a previously unappreciated role for acute sterile inflammation as an efficient tumour suppressive mechanism.
in eLife on April 19, 2021 12:00 AM.
• #### Spontaneous and evoked activity patterns diverge over development
The immature brain is highly spontaneously active. Over development this activity must be integrated with emerging patterns of stimulus-evoked activity, but little is known about how this occurs. Here, we investigated this question by recording spontaneous and evoked neural activity in the larval zebrafish tectum from 4 to 15 days post fertilisation. Correlations within spontaneous and evoked activity epochs were comparable over development, and their neural assembly properties refined in similar ways. However, both the similarity between evoked and spontaneous assemblies and the geometric distance between spontaneous and evoked patterns decreased over development. At all stages of development, evoked activity was of higher dimension than spontaneous activity. Thus, spontaneous and evoked activity do not converge over development in this system, and these results do not support the hypothesis that spontaneous activity evolves to form a Bayesian prior for evoked activity.
in eLife on April 19, 2021 12:00 AM.
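One common way to quantify the "dimension" of population activity mentioned in the entry above is the participation ratio of the covariance eigenspectrum. The preprint's exact estimator is not specified in the abstract, so the following is only an illustrative choice with synthetic data.

```python
# Hedged sketch: participation-ratio dimensionality of population activity.
# This is one common estimator, not necessarily the one used in the preprint.
import numpy as np

def participation_ratio(X):
    """X: (neurons, timepoints). Returns an effective dimensionality."""
    lam = np.linalg.eigvalsh(np.cov(X))            # covariance eigenvalues
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(0)
spontaneous = rng.standard_normal((100, 500))
evoked = rng.standard_normal((100, 500)) * np.linspace(0.2, 1.0, 100)[:, None]
print(participation_ratio(spontaneous), participation_ratio(evoked))
```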
• #### DCC regulates astroglial development essential for telencephalic morphogenesis and corpus callosum formation
The forebrain hemispheres are predominantly separated during embryogenesis by the interhemispheric fissure (IHF). Radial astroglia remodel the IHF to form a continuous substrate between the hemispheres for midline crossing of the corpus callosum (CC) and hippocampal commissure (HC). DCC and NTN1 are molecules that have an evolutionarily conserved function in commissural axon guidance. The CC and HC are absent in Dcc and Ntn1 knockout mice, while other commissures are only partially affected, suggesting an additional aetiology in forebrain commissure formation. Here, we find that these molecules play a critical role in regulating astroglial development and IHF remodelling during CC and HC formation. Human subjects with DCC mutations display disrupted IHF remodelling associated with CC and HC malformations. Thus, axon guidance molecules such as DCC and NTN1 first regulate the formation of a midline substrate for dorsal commissures prior to their role in regulating axonal growth and guidance across it.
in eLife on April 19, 2021 12:00 AM.
• #### Hierarchical Computational Anatomy: Unifying the Molecular to Tissue Continuum via Measure Representations of the Brain
This paper presents a unified representation of the brain based on mathematical functional measures integrating the molecular and cellular scale descriptions with continuum tissue scale descriptions. We present a fine-to-coarse recipe for traversing the brain as a hierarchy of measures projecting functional description into stable empirical probability laws that unifies scale-space aggregation. The representation uses measure norms for mapping the brain across scales from different measurement technologies. Brainspace is constructed as a metric space with metric comparison between brains provided by a hierarchy of Hamiltonian geodesic flows of diffeomorphisms connecting the molecular and continuum tissue scales. The diffeomorphisms act on the brain measures via the 3D varifold action representing "copy and paste" so that basic particle quantities that are conserved biologically are combined with greater multiplicity and not geometrically distorted. Two applications are examined, the first histological and tissue scale data in the human brain for studying Alzheimer's disease, and the second the RNA and cell signatures of dense spatial transcriptomics mapped to the meso-scales of brain atlases. The representation unifies the classical formalism of computational anatomy for representing continuum tissue scale with non-classical generalized functions appropriate for molecular particle scales.
in bioRxiv: Neuroscience on April 19, 2021 12:00 AM.
• #### Neuromodulation-induced burst firing in parvalbumin interneurons of the basolateral amygdala mediates transition between fear-associated network and behavioral states
Network orchestration of behavioral states involves coordinated oscillations within and between brain regions. The network communication between the basolateral amygdala (BLA) and the medial prefrontal cortex (PFC) plays a critical role in fear expression. Neuromodulatory systems play an essential role in regulating changes between behavioral states; however, a mechanistic understanding of how amygdalar circuits mediate transitions between brain and behavioral states remains largely unknown. Here, we examine the role of Gq-mediated neuromodulation of parvalbumin (PV)-expressing interneurons in the BLA in coordinating network and behavioral states using combined chemogenetics, patch clamp and field potential recordings. We demonstrate that Gq-signaling via hM3D designer receptor and α1 adrenoreceptor activation shifts the pattern of activity of the PV interneurons from tonic to phasic by stimulating a previously unknown, highly stereotyped bursting pattern of activity. This, in turn, generates bursts of inhibitory postsynaptic currents (IPSCs) and phasic firing in BLA principal neurons. The Gq-induced transition from tonic to phasic firing in BLA PV interneurons suppressed amygdalo-frontal gamma oscillations in vivo, consistent with the critical role of tonic PV neuron activity in gamma generation. The suppression of gamma oscillations by hM3D and α1 receptor activation in BLA PV interneurons also facilitated fear memory recall, in line with the inhibitory effect of gamma on fear expression. Thus, our data reveal a BLA parvalbumin neuron-specific neuromodulatory mechanism that mediates the transition to a fear-associated brain network state via regulation of amygdalo-frontal gamma oscillations.
in bioRxiv: Neuroscience on April 19, 2021 12:00 AM.
• #### Spatial alignment between faces and voices improves selective attention to audio-visual speech
The ability to see a talker's face has long been known to improve speech intelligibility in noise. This perceptual benefit depends on approximate temporal alignment between the auditory and visual speech components. However, the practical role that cross-modal spatial alignment plays in integrating audio-visual (AV) speech remains unresolved, particularly when competing talkers are present. In a series of online experiments, we investigated the importance of spatial alignment between corresponding faces and voices using a paradigm that featured both acoustic masking (speech-shaped noise) and attentional demands from a competing talker. Participants selectively attended a Target Talker's speech, then identified a word spoken by the Target Talker. In Exp. 1, we found improved task performance when the talkers' faces were visible, but only when corresponding faces and voices were presented in the same hemifield (spatially aligned). In Exp. 2, we tested for possible influences of eye position on this result. In auditory-only conditions, directing gaze toward the distractor voice reduced performance as predicted, but this effect could not fully explain the cost of AV spatial misalignment. Finally, in Exp. 3 and 4, we show that the effect of AV spatial alignment changes with noise level, but this was limited by a floor effect: due to the use of closed-set stimuli, participants were able to perform the task relatively well using lipreading alone. However, comparison between the results of Exp. 1 and Exp. 3 suggests that the cost of AV misalignment is larger at high noise levels. Overall, these results indicate that spatial alignment between corresponding faces and voices is important for AV speech integration in attentionally demanding communication settings.
in bioRxiv: Neuroscience on April 19, 2021 12:00 AM.
• #### Central nervous system infusion of TrkB agonist, 7,8-dihydroxyflavone, is ineffective in promoting myelin repair in cuprizone and experimental autoimmune encephalomyelitis mouse models of multiple sclerosis
Small molecular weight functional mimetics of brain-derived neurotrophic factor (BDNF) which act via the TrkB receptor have been developed to overcome the pharmacokinetic limitations of BDNF as a therapeutic agent for neurological disease. Activation of TrkB signalling on oligodendrocytes has been identified as a potential strategy for promoting myelin repair in demyelinating conditions such as Multiple Sclerosis (MS). Here, we tested the efficacy of intracerebroventricular infusion of TrkB agonist 7,8-dihydroxyflavone (DHF) to promote myelin repair in the cuprizone model of de- and remyelination and alter the course of experimental autoimmune encephalomyelitis (EAE), after the onset of clinical signs. In these two distinct, but common mouse models used for the preclinical testing of MS therapeutics, we found that DHF infusion increased the percentage of myelin basic protein and density of oligodendrocyte progenitor cells (OPCs) in the corpus callosum of female C57BL/6 mice after cuprizone demyelination. However, DHF did not alter the percentage of axons myelinated or increase the density of post-mitotic oligodendrocytes in this model. Direct central nervous system infusion of DHF infusion also had no effect on the clinical course of EAE in male and female C57BL/6 mice, and examination of the lumbar spinal cord after 21 days of treatment revealed extensive demyelination, with active phagocytosis of myelin debris by Iba1+ macrophages/microglia. These results indicate that direct central nervous system infusion of DHF is ineffective at promoting myelin repair in toxin-induced and inflammatory models of demyelination.
in bioRxiv: Neuroscience on April 19, 2021 12:00 AM.
• #### Independent information from PET, CSF and plasma biomarkers of tau pathology in Alzheimer's disease
PET, CSF and plasma biomarkers of tau pathology may be differentially associated with Alzheimer's disease (AD)-related demographic, cognitive, genetic and neuroimaging markers. We examined 771 participants with normal cognition, mild cognitive impairment or dementia from BioFINDER-2 (n=400) and ADNI (n=371). All had tau-PET ([18F]RO948 in BioFINDER-2, [18F]flortaucipir in ADNI) and CSF p-tau181 biomarkers available. Plasma p-tau181 and plasma/CSF p-tau217 were available in BioFINDER-2 only. Concordance between PET, CSF and plasma tau biomarkers ranged between 66% and 95%. Across the whole group, ridge regression models showed that increased CSF and plasma p-tau181 and p-tau217 levels were associated, independently of tau-PET, with higher age, APOEε4 carriership and Aβ positivity, while increased tau-PET signal in the temporal cortex was associated with worse cognitive performance and reduced cortical thickness. We conclude that biofluid and neuroimaging markers of tau pathology convey partly independent information, with CSF and plasma p-tau181 and p-tau217 levels being more tightly linked with early markers of AD (especially Aβ pathology), while tau-PET shows the strongest associations with cognitive and neurodegenerative markers of disease progression.
in bioRxiv: Neuroscience on April 19, 2021 12:00 AM.
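The ridge-regression analysis mentioned above, relating biomarker levels to demographic, genetic, cognitive and imaging covariates, could look roughly like the following. The synthetic data, covariate set and coefficients are purely illustrative and are not the BioFINDER-2/ADNI analysis.

```python
# Hedged sketch: ridge regression relating a tau biomarker to AD-related
# covariates (illustrative synthetic data; not the study's actual analysis).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 400
X = np.column_stack([
    rng.normal(70, 8, n),          # age
    rng.integers(0, 2, n),         # APOE e4 carriership
    rng.integers(0, 2, n),         # amyloid-beta positivity
    rng.normal(0, 1, n),           # cognitive composite
    rng.normal(2.5, 0.2, n),       # cortical thickness
])
# toy outcome standing in for, e.g., plasma p-tau217
y = 0.02 * X[:, 0] + 0.5 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(0, 1, n)

model = Ridge(alpha=1.0).fit(StandardScaler().fit_transform(X), y)
print(dict(zip(["age", "APOE4", "Abeta+", "cognition", "thickness"], model.coef_)))
```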
• #### Structure-function interplay as signature for brain decoding and fingerprinting
Brain signatures of functional activity have shown promising results both in decoding brain states (i.e., determining whether a subject is at rest or performing a given task) and in fingerprinting (i.e., identifying individuals within a large group). Importantly, these brain signatures do not account for the underlying brain anatomy on which brain function takes place. Here, we leveraged brain structure-function coupling as a new imaging-based biomarker to characterize tasks and individuals. We used multimodal magnetic resonance imaging and the recently introduced Structural-Decoupling Index (SDI) to quantify regional structure-function interplay in 100 healthy volunteers from the Human Connectome Project, both during rest and during seven different tasks. SDI allowed accurate classifications for both decoding and fingerprinting, outperforming functional signatures. Further, SDI profiles in the resting state correlated with individual cognitive traits. These results show that brain structure-function interplay contains unique information which provides a new class of signatures of brain organization and cognition.
in bioRxiv: Neuroscience on April 19, 2021 12:00 AM.
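The Structural-Decoupling Index used in the entry above builds on graph signal processing: functional signals are expressed in the basis of structural-connectome harmonics and split into "coupled" (low graph frequency) and "decoupled" (high graph frequency) parts. The sketch below follows that general idea only; the Laplacian normalization, split point and norm used by the authors may differ.

```python
# Hedged sketch of an SDI-like measure: project functional time series onto the
# eigenvectors (harmonics) of the structural Laplacian, then compare the energy
# of decoupled (high-frequency) vs coupled (low-frequency) reconstructions.
import numpy as np

def structural_decoupling_index(SC, F, k_split=None):
    """SC: (n,n) structural connectivity; F: (n,t) functional time series."""
    n = SC.shape[0]
    L = np.diag(SC.sum(1)) - SC                 # graph Laplacian
    _, U = np.linalg.eigh(L)                    # structural harmonics (ascending)
    k = k_split or n // 2                       # assumed low/high split point
    F_hat = U.T @ F                             # graph Fourier transform
    coupled = U[:, :k] @ F_hat[:k]              # low-frequency reconstruction
    decoupled = U[:, k:] @ F_hat[k:]            # high-frequency reconstruction
    return np.linalg.norm(decoupled, axis=1) / np.linalg.norm(coupled, axis=1)

rng = np.random.default_rng(0)
SC = rng.random((90, 90)); SC = (SC + SC.T) / 2; np.fill_diagonal(SC, 0)
F = rng.standard_normal((90, 200))
sdi = structural_decoupling_index(SC, F)        # one value per brain region
```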
• #### Evidence for a general neural signature of face familiarity
We explored the neural signatures of face familiarity using cross-participant and cross-experiment decoding of event-related potentials, evoked by unknown and experimentally familiarized faces from a set of experiments with different participants, stimuli, and familiarization-types. Participants were either familiarized perceptually, via media exposure, or by personal interaction. We observed significant cross-experiment familiarity decoding involving all three experiments, predominantly over posterior and central regions of the right hemisphere in the 270 - 630 ms time window. This shared face familiarity effect was most prominent between the Media and Personal, as well as between the Perceptual and Personal experiments. Cross-experiment decodability makes this signal a strong candidate for a general neural indicator of face familiarity, independent of familiarization methods and stimuli. Furthermore, the sustained pattern of temporal generalization suggests that it reflects a single automatic processing cascade that is maintained over time.
in bioRxiv: Neuroscience on April 19, 2021 12:00 AM.
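Cross-experiment decoding with temporal generalization, as described in the entry above, amounts to training a classifier on ERP topographies from one experiment at each time point and testing it at every time point of another experiment. The following is a schematic version with synthetic data and an arbitrary classifier choice, not the authors' pipeline.

```python
# Hedged sketch: cross-experiment familiarity decoding over time (illustrative).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

def cross_decode(X_train, y_train, X_test, y_test):
    """X: (trials, channels, timepoints); returns accuracy per (train_t, test_t)."""
    n_t = X_train.shape[2]
    acc = np.zeros((n_t, n_t))
    for t_tr in range(n_t):
        clf = LDA().fit(X_train[:, :, t_tr], y_train)
        for t_te in range(n_t):
            acc[t_tr, t_te] = clf.score(X_test[:, :, t_te], y_test)
    return acc  # sustained off-diagonal decoding = temporal generalization

rng = np.random.default_rng(0)
X_a, y_a = rng.standard_normal((80, 64, 50)), rng.integers(0, 2, 80)
X_b, y_b = rng.standard_normal((80, 64, 50)), rng.integers(0, 2, 80)
gen_matrix = cross_decode(X_a, y_a, X_b, y_b)
```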
• #### Visualization of subcortical structures in non-human primates in vivo by Quantitative Susceptibility Mapping at 3T MRI
Magnetic resonance imaging (MRI) is now an essential tool in the field of neuroscience involving non-human primates (NHP). Structural MRI scanning using T1-weighted (T1w) or T2-weighted (T2w) images provides anatomical information, particularly for experiments involving deep structures such as the basal ganglia and cerebellum. However, for certain subcortical structures, T1w and T2w images fail to reveal important anatomical details. To better visualize such structures in the macaque brain, we applied a relatively new method called quantitative susceptibility mapping (QSM), which enhances tissue contrast based on the local tissue magnetic susceptibility. To evaluate the visualization of important structures, we quantified the contrast-to-noise ratios (CNRs) of the ventral pallidum (VP), globus pallidus external and internal segments (GPe and GPi), substantia nigra (SN) and subthalamic nucleus (STN) in the basal ganglia and the dentate nucleus (DN) in the cerebellum. For these structures, the QSM method significantly increased the CNR, and thus the visibility, beyond that in either the T1w or T2w images. In addition, QSM values of some structures were correlated with the age of the macaque subjects. These results indicate that the QSM method can enable the clear identification of certain subcortical structures that are invisible in more traditional scanning sequences.
in bioRxiv: Neuroscience on April 19, 2021 12:00 AM.
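One common way to compute the contrast-to-noise ratio quantified in the entry above is the absolute difference of mean values between a structure and a reference region, divided by the variability of the reference region. The preprint's exact definition is not given in the abstract, so treat the formula below as an assumption.

```python
# Hedged sketch: contrast-to-noise ratio (CNR) between a structure and a
# reference region in a quantitative map (common formulation; assumed here).
import numpy as np

def cnr(img, roi_mask, ref_mask):
    """img: 3-D map (e.g. QSM); masks: boolean arrays of the same shape."""
    return abs(img[roi_mask].mean() - img[ref_mask].mean()) / img[ref_mask].std()

rng = np.random.default_rng(0)
qsm = rng.normal(0, 0.01, (64, 64, 64))
roi = np.zeros_like(qsm, bool); roi[30:34, 30:34, 30:34] = True
ref = np.zeros_like(qsm, bool); ref[10:20, 10:20, 10:20] = True
qsm[roi] += 0.08   # iron-rich structure appears more paramagnetic
print(cnr(qsm, roi, ref))
```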
• #### The physiological basis for the computation of direction selectivity in the Drosophila OFF pathway
In Drosophila, direction-selective neurons implement a mechanism of motion computation similar to cortical neurons, using contrast-opponent receptive fields with ON and OFF subunits. It is not clear how the presynaptic circuitry of direction-selective neurons in the OFF pathway supports this computation, because all major inputs are OFF-rectified neurons. Here, we reveal the biological substrate for motion computation in the OFF pathway. Three interneurons, Tm2, Tm9 and CT1, also provide information about ON stimuli to the OFF direction-selective neuron T5 across its receptive field, supporting a contrast-opponent receptive field organization. Consistent with its prominent role in motion detection, variability in Tm9 receptive field properties is passed on to T5, and calcium decrements in Tm9 in response to ON stimuli are maintained across behavioral states, while spatial tuning is sharpened by active behavior. Together, our work shows how a key neuronal computation is implemented by its constituent neuronal circuit elements to ensure direction selectivity.
in bioRxiv: Neuroscience on April 19, 2021 12:00 AM.
• #### Speed tuning in head coordinates as an alternative explanation of depth selectivity from motion parallax in area MT
There are two distinct sources of retinal image motion: motion of objects in the world and movement of the observer. In cases where an object moves in a scene and the eyes also move, a coordinate transformation that involves smooth eye movements and retinal motion will be needed in order to estimate object motion in world coordinates. More recently, interactions between retinal and eye velocity signals have also been suggested to generate depth selectivity from motion parallax (MP) in the macaque middle temporal (MT) area. We explored whether the nature of the interaction between eye and retinal velocities in MT neurons favors one of these two possibilities or a mixture of both. We analyzed responses of MT neurons to retinal and eye velocities in a viewing context in which the observer translates laterally while maintaining visual fixation on a world-fixed target. In this scenario, the depth of an object can be inferred from the ratio between retinal velocity and eye velocity, according to the motion-pursuit law. Previous studies have shown that MT responses to retinal motion are gain-modulated by the direction of eye movement, suggesting a potential mechanism for depth tuning from MP. However, our analysis of the joint tuning profile for retinal and eye velocities reveals that some MT neurons show a partial coordinate transformation toward head coordinates. We formalized a series of computational models to predict neural spike trains as well as selectivity for depth, and we used factorial model comparisons to quantify the relative importance of each model component. Our findings for many MT neurons reveal that the data are equally well explained by gain modulation or a partial coordinate transformation toward head coordinates, although some responses can only be well fit by the coordinate transform model. Our results highlight the potential role of MT neurons in representing multiple higher-level sensory variables, including depth from MP and object motion in the world.
in bioRxiv: Neuroscience on April 19, 2021 12:00 AM.
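For context on the motion-pursuit law invoked in the entry above: in its commonly cited first-order form (an approximation drawn from the motion-parallax literature, not stated in this preprint), relative depth scales with the ratio of retinal image velocity to pursuit eye velocity.

```latex
% First-order motion/pursuit approximation (hedged; full formulations include
% higher-order correction terms):
%   d: depth of the object relative to the fixation point, f: fixation distance,
%   \dot{\theta}: retinal image velocity, \dot{\alpha}: smooth-pursuit eye velocity.
\[
  \frac{d}{f} \;\approx\; \frac{\dot{\theta}}{\dot{\alpha}}
\]
```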
• #### Post mortem mapping of connectional anatomy for the validation of diffusion MRI
Despite the impressive advances in diffusion MRI (dMRI) acquisition and analysis that have taken place during the Human Connectome era, dMRI tractography is still an imperfect source of information on the circuitry of the brain. In this review, we discuss methods for post mortem validation of dMRI tractography, fiber orientations, and other microstructural properties of axon bundles that are typically extracted from dMRI data. These methods include anatomic tracer studies, Klingler's dissection, myelin stains, label-free optical imaging techniques, and others. We provide an overview of the basic principles of each technique, its limitations, and what it has taught us so far about the accuracy of different dMRI acquisition and analysis approaches.
in bioRxiv: Neuroscience on April 19, 2021 12:00 AM.
• #### Time-oriented attention improves accuracy in a paced finger tapping task
Finger tapping is a task widely used in a variety of experimental paradigms, in particular to understand sensorimotor synchronization and time processing in the range of hundreds of milliseconds (millisecond timing). Normally, subjects don't receive any instruction about what to attend to and the results are seldom interpreted taking into account the possible effects of attention. In this work we show that attention can be oriented to the purely temporal aspects of a paced finger tapping task and that it affects performance. Specifically, time-oriented attention improves the accuracy in paced finger tapping and it also increases the resynchronization efficiency after a period perturbation. We use two markers of the attention level: auditory ERPs and subjective report of the mental workload. In addition, we propose a novel algorithm to separate the auditory, stimulus-related components from the somatosensory, response-related ones, which are naturally overlapped in the recorded EEG.
in bioRxiv: Neuroscience on April 19, 2021 12:00 AM.
• #### A functional topography within the cholinergic basal forebrain for processing sensory cues associated with reward and punishment
Basal forebrain cholinergic neurons (BFCNs) project throughout the cortex to regulate arousal, stimulus salience, plasticity, and learning. The basal forebrain features distinct connectivity along its anteroposterior axis that could impart regional differences in feature processing. Here, we simultaneously measured bulk BFCN activity from an anterior structure, the horizontal limb of the diagonal band (HDB), and from the posterior tail of the basal forebrain in globus pallidus and substantia innominata (GP/SI) over a 30-day period as mice learned a sensory reversal task. Although HDB and GP/SI responses were similar for many features, HDB more closely tracked fluctuations in pupil-indexed brain state and exhibited stronger responses to reward omission than to delivery of anticipated rewards. In GP/SI, BFCNs were strongly activated by sound, and this response was further enhanced for punishment-predicting - but not reward-predicting - cues. These results identify a functional topography that diversifies cholinergic modulatory signals broadcast to downstream brain regions.
in bioRxiv: Neuroscience on April 19, 2021 12:00 AM.
• #### Neural network approximation: Three hidden layers are enough
Publication date: Available online 17 April 2021
Source: Neural Networks
Author(s): Zuowei Shen, Haizhao Yang, Shijun Zhang
in Neural Networks on April 18, 2021 06:00 PM.
• #### To reduce the risk of dementia focus on the patient
Annals of Neurology, Accepted Article.
in Annals of Neurology on April 18, 2021 09:24 AM.
• #### A recurrent EIF2AK2 missense variant causes autosomal‐dominant isolated dystonia
Annals of Neurology, Accepted Article.
in Annals of Neurology on April 18, 2021 09:18 AM.
### Abstract
Prior studies have reported an association between visual evoked potentials (VEPs) and cognitive performance in people with multiple sclerosis (PwMS), but the specific mechanisms that account for this relationship remain unclear. We examined the relationship between VEP latency and cognitive performance in a large sample of PwMS, hypothesizing that VEP latency indexes not only visual system functioning but also general neural efficiency. Standardized performance index scores were obtained for the domains of memory, executive function, visual-spatial processing, verbal function, attention, information processing speed, and motor skills, as well as global cognitive performance (NeuroTrax battery). VEP P100 component latency was obtained using a standard checkerboard pattern-reversal paradigm. Prolonged VEP latency was significantly associated with poorer performance in multiple cognitive domains, and with the number of cognitive domains in which performance was ≥ 1 SD below the normative mean. Relationships between VEP latency and cognitive performance were significant for information processing speed, executive function, attention, motor skills, and global cognitive performance after controlling for disease duration, visual acuity, and inter-ocular latency differences. This study provides evidence that VEP latency delays index general neural inefficiency that is associated with cognitive disturbances in PwMS.
in Journal of Neurology on April 18, 2021 12:00 AM.
### Abstract
Hand use is a widespread act in many vertebrate lineages and subserves behaviors including locomotion, predation, feeding, nest construction, and grooming. In order to determine whether hands are similarly used in social behavior, the present paper describes hand use in the social play of rats. In the course of rough and tumble play sessions, rats are found to make as many as twenty different movements a minute with each hand for the purposes of manipulating a partner into a subordinate position or defending against a partner’s attack. The hand movements comprise signaling movements of touching, offensive manipulating of a partner to control a play engagement, and defensive hand movements directed toward blocking, pushing and pulling to parry an attack. For signaling, attack and defense, hand movements have a structure that is similar to the structure of hand movements used for other purposes including eating, but in their contact points on an opponent, they are tailored for partner control. Given the time devoted to play by rats, play is likely the social rat behavior with the most extensive use of hand movements. This extensive use of hand movements for social play is discussed in relation to the ubiquity of hand use in adaptive behavior, the evolution of hand use in the play of mammals, and in relation to extending the multifunctional theory of the purposes of play to include the education of skilled hand movements for various adult functions including feeding.
in Experimental Brain Research on April 18, 2021 12:00 AM.
• #### The entorhinal cortex modulates trace fear memory formation and neuroplasticity in the lateral amygdala via cholecystokinin
Although the neural circuitry underlying fear memory formation is important in fear-related mental disorders, it is incompletely understood. Here, we utilized trace fear conditioning to study the formation of trace fear memory. We identified the entorhinal cortex (EC) as a critical component of sensory signaling to the amygdala. Moreover, we used the loss of function and rescue experiments to demonstrate that release of the neuropeptide cholecystokinin (CCK) from the EC is required for trace fear memory formation. We discovered that CCK-positive neurons extend from the EC to the lateral nuclei of the amygdala (LA), and inhibition of CCK-dependent signaling in the EC prevented long-term potentiation of sensory signals to the LA and formation of trace fear memory. Altogether, we suggest a model where sensory stimuli trigger the release of CCK from EC neurons, which potentiates sensory signals to the LA, ultimately influencing neural plasticity and trace fear memory formation.
in bioRxiv: Neuroscience on April 18, 2021 12:00 AM.
• #### Activity-dependent alteration of early myelin ensheathment in a developing sensory circuit
Adaptive myelination has been reported in response to experimental manipulations of neuronal activity, but the links between sensory experience, corresponding neuronal activity, and resultant alterations in myelination require investigation. To study this, we used the Xenopus laevis tadpole, which is a classic model for studies of visual system development and function because it is translucent and visually responsive throughout the formation of this retinotectal system. Here, we report the timecourse of early myelin ensheathment in the Xenopus retinotectal system using immunohistochemistry of myelin basic protein (MBP) along with third-harmonic generation (THG) microscopy, a label-free structural imaging technique. Characterization of the myelination progression revealed an appropriate developmental window to address the effects of early patterned visual experience on myelin ensheathment. To alter patterned activity, we showed tadpoles stroboscopic stimuli and measured the calcium responses of retinal ganglion cell axon terminals. We identified strobe frequencies that elicited robust versus dampened calcium responses, reared animals in these strobe conditions for 7 d, and subsequently observed differences in the amount of early myelin ensheathment at the optic chiasm. This study provides evidence that it is not just the presence but also the specific temporal properties of sensory stimuli that are important for myelin plasticity.
in bioRxiv: Neuroscience on April 18, 2021 12:00 AM.
• #### Same Action, Different Meaning: Neural substrates of Semantic Goal Representation
Accurate control over everyday goal-directed actions is mediated by sensory-motor predictions of intended consequences and their comparison with actual outcomes. Such online comparisons of the expected and re-afferent, immediate, sensory feedback are conceptualized as internal forward models. Current predictive coding theories describing such models typically address the processing of immediate sensory-motor goals, yet voluntary actions are also oriented towards long-term conceptual goals and intentions, for which the sensory consequence is sometimes absent or cannot be fully predicted. Thus, the neural mechanisms underlying actions with distal conceptual goals are far from clear. Specifically, it is still unknown whether sensory-motor circuits also encode information regarding the global meaning of the action, detached from the immediate, movement-related goal. Therefore, using fMRI and behavioral measures, we examined identical actions (either right or left-hand button presses) performed for two different semantic intentions ('yes'/'no' response to questions regarding visual stimuli). Importantly, actions were devoid of differences in the immediate sensory outcome. Our findings revealed voxel patterns differentiating the two semantic goals in the frontoparietal cortex and visual pathways including the Lateral-occipital complex, in both hemispheres. Behavioral results suggest that these findings cannot be explained by kinetic differences such as force. To the best of our knowledge, this is the first evidence showing that semantic meaning is embedded in the neural representation of actions independent of immediate sensory outcome and kinetic differences.
in bioRxiv: Neuroscience on April 18, 2021 12:00 AM.
• #### Profiling the molecular signature of Satellite Glial Cells at the single cell level reveals high similarities between rodent and human
Peripheral sensory neurons located in dorsal root ganglia relay sensory information from the peripheral tissue to the brain. Satellite glial cells (SGC) are unique glial cells that form an envelope completely surrounding each sensory neuron soma. This organization allows for close bi-directional communication between the neuron and its surrounding glial coat. Morphological and molecular changes in SGC have been observed in multiple pathological conditions such as inflammation, chemotherapy-induced neuropathy, viral infection and nerve injuries. There is evidence that changes in SGC contribute to chronic pain by augmenting neuronal activity in various rodent pain models. SGC also play a critical role in axon regeneration. Whether findings made in rodent model systems are relevant to human physiology has not been investigated. Here we present a detailed characterization of the transcriptional profile of SGC in mouse, rat and human at the single cell level. Our findings suggest that key features of SGC in rodent models are conserved in human. Our study provides the potential to leverage rodent SGC properties and identify potential targets for promoting nerve repair and alleviating painful conditions.
in bioRxiv: Neuroscience on April 18, 2021 12:00 AM.
• #### NeuroMechFly, a neuromechanical model of adult Drosophila melanogaster
Animal behavior emerges from a seamless interaction between musculoskeletal elements, neural network dynamics, and the environment. Accessing and understanding the interplay between these intertwined systems requires the development of integrative neuromechanical simulations. Until now, there has been no such simulation framework for the widely studied model organism, Drosophila melanogaster. Here we present NeuroMechFly, a data-driven computational model of an adult female fly that is designed to synthesize rapidly growing experimental datasets and to test theories of neuromechanical behavioral control. NeuroMechFly combines a set of modules including an exoskeleton with articulating body parts--limbs, halteres, wings, abdominal segments, head, proboscis, and antennae--muscle models, and neural networks within a physics-based simulation environment. Using this computational framework, we (i) predict the minimal limb degrees-of-freedom needed for real Drosophila behaviors, (ii) estimate expected contact reaction forces, torques, and tactile signals during replayed Drosophila walking and grooming, and (iii) discover neural network and muscle parameters that can drive tripod walking. Thus, NeuroMechFly is a powerful testbed for building an understanding of how behaviors emerge from interactions between complex neuromechanical systems and their physical surroundings.
in bioRxiv: Neuroscience on April 18, 2021 12:00 AM.
• #### Multiscale modeling of presynaptic dynamics from molecular to mesoscale
Chemical synapses exhibit a diverse array of internal mechanisms that affect the dynamics of transmission efficacy. Many of these processes, such as release of neurotransmitter and vesicle recycling, depend strongly on activity-dependent influx and accumulation of Ca2+. Modelling how each of these processes may affect the processing of information in neural circuits, and how their dysfunction may lead to disease states, requires a computationally efficient modelling framework, capable of generating accurate phenomenology without incurring a heavy computational cost per synapse. Constructing a phenomenologically realistic model requires the precise characterization of the timing and probability of neurotransmitter release. Difficulties arise in that functional forms of instantaneous release rate can be difficult to extract from noisy data without running many thousands of trials, and in biophysical synapses, facilitation of per-vesicle release probability is confounded by depletion. To overcome this, we obtained traces of free Ca2+ concentration in response to various action potential stimulus trains from a molecular MCell model of a hippocampal mossy fiber axon. Ca2+ sensors were placed at varying distance from a voltage-dependent calcium channel (VDCC) cluster, and Ca2+ was buffered by calbindin. Then, using the calcium traces to drive deterministic state vector models of synaptotagmin 1 and 7 (Syt-1/7), which respectively mediate synchronous and asynchronous release in excitatory hippocampal synapses, we obtained high-resolution profiles of instantaneous release rate, to which we applied functional fits. Synchronous vesicle release occurred predominantly within half a micron of the source of spike-evoked Ca2+ influx, while asynchronous release occurred more consistently at all distances. Both fast and slow mechanisms exhibited multi-exponential release rate curves, whose magnitudes decayed exponentially with distance from the Ca2+ source. Profile parameters facilitate on different time scales according to a single, general facilitation function. These functional descriptions lay the groundwork for efficient mesoscale modelling of vesicular release dynamics. Author Summary: Most information transmission between neurons in the brain occurs via release of neurotransmitter from synaptic vesicles. In response to a presynaptic spike, calcium influx at the active zone of a synapse can trigger the release of neurotransmitter with a certain probability. These stochastic release events may occur immediately after a spike or with some delay. As calcium accumulates from one spike to the next, the probability of release may increase (facilitate) for subsequent spikes. This process, known as short-term plasticity, transforms the spiking code to a release code, underlying much of the brain's information processing. In this paper, we use an accurate, detailed model of presynaptic molecular physiology to characterize these processes at high precision in response to various spike trains. We then apply model reduction to the results to obtain a phenomenological model of release timing, probability, and facilitation, which can perform as accurately as the molecular model but with far less computational cost. This mesoscale model of spike-evoked release and facilitation helps to bridge the gap between microscale molecular dynamics and macroscale information processing in neural circuits. It can thus benefit large scale modelling of neural circuits, biologically inspired machine learning models, and the design of neuromorphic chips. (A minimal illustrative sketch of a distance-dependent, multi-exponential release-rate profile follows this entry.)
in bioRxiv: Neuroscience on April 18, 2021 12:00 AM.
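The entry above describes fitting instantaneous release-rate profiles that are multi-exponential in time, with component magnitudes decaying exponentially with distance from the Ca2+ source. The Python snippet below is a minimal sketch of one plausible functional form of that kind; the number of components, amplitudes, time constants, and length constant are placeholder values chosen for illustration, not the parameters fitted by the authors, and the authors' reduced model may differ in detail.

```python
import numpy as np

def release_rate(t, distance, amplitudes, taus, length_const):
    """Illustrative spike-evoked release-rate profile.

    Sum of exponentials in time (multi-exponential decay after a spike at
    t = 0), with each component's magnitude attenuated exponentially with
    distance from the Ca2+ source.
    """
    t = np.atleast_1d(t)
    amps = np.asarray(amplitudes) * np.exp(-distance / length_const)
    taus = np.asarray(taus)
    # shape (n_components, n_timepoints), summed over components
    return np.sum(amps[:, None] * np.exp(-t[None, :] / taus[:, None]), axis=0)

# Example with made-up parameters: one fast and one slow component,
# evaluated 0-20 ms after a spike at two distances from the Ca2+ source.
t_ms = np.linspace(0.0, 20.0, 201)
near = release_rate(t_ms, distance=0.1, amplitudes=[50.0, 5.0],
                    taus=[1.0, 10.0], length_const=0.5)   # 0.1 um away
far = release_rate(t_ms, distance=1.0, amplitudes=[50.0, 5.0],
                   taus=[1.0, 10.0], length_const=0.5)    # 1.0 um away
```

Spike-to-spike facilitation, which the abstract attributes to a single shared facilitation function, could be layered on top of such a profile by letting the amplitudes grow across successive spikes.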
• #### Causally informed activity flow models provide mechanistic insight into the emergence of cognitive processes from brain network interactions
Brain activity flow models estimate the movement of task-evoked activity over brain connections to help explain the emergence of task-related functionality. Activity flow estimates have been shown to accurately predict task-evoked brain activations across a wide variety of brain regions and task conditions. However, these predictions have had limited explanatory power, given known issues with causal interpretations of the standard functional connectivity measures used to parameterize activity flow models. We show here that functional/effective connectivity (FC) measures grounded in causal principles facilitate mechanistic interpretation of activity flow models. Starting from Pearson correlation (the current field standard), we progress from FC measures with poor to excellent causal grounding, demonstrating a continuum of causal validity using simulations and empirical fMRI data. Finally, we apply a causal FC method to a dorsolateral prefrontal cortex region, demonstrating causal network mechanisms contributing to its strong activation during a 2-back (relative to a 0-back) working memory task. Together, these results reveal the promise of parameterizing activity flow models using causal FC methods to identify network mechanisms underlying cognitive computations in the human brain. (A minimal sketch of the core activity flow prediction step follows this entry.)
Highlights:
- Activity flow models provide insight into how cognitive neural effects emerge from brain network interactions.
- Functional connectivity methods grounded in causal principles facilitate mechanistic interpretations of task activity flow models.
- Mechanistic activity flow models accurately predict task-evoked neural effects across a wide variety of brain regions and cognitive tasks.
in bioRxiv: Neuroscience on April 18, 2021 12:00 AM.
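As commonly formulated in the activity flow literature that the entry above builds on, a held-out region's task activation is predicted as the connectivity-weighted sum of all other regions' activations. The sketch below illustrates only that prediction step on synthetic data; the function name and the toy "connectivity" matrix are invented for the example, it is not the authors' code, and it says nothing about how a causally grounded FC matrix would actually be estimated.

```python
import numpy as np

def activity_flow_predict(task_activity, fc):
    """Predict each region's activation as the FC-weighted sum of the
    activations of all other regions (the target region is held out)."""
    n_regions = task_activity.shape[0]
    predicted = np.empty(n_regions)
    for j in range(n_regions):
        others = np.arange(n_regions) != j      # exclude the held-out region
        predicted[j] = task_activity[others] @ fc[others, j]
    return predicted

# Toy example: activations constructed so that part of each region's
# activity is inherited from its "connected" neighbors.
rng = np.random.default_rng(0)
n = 50
fc = rng.normal(scale=0.2, size=(n, n))
np.fill_diagonal(fc, 0.0)
seed_activity = rng.normal(size=n)
activity = seed_activity + fc.T @ seed_activity
predicted = activity_flow_predict(activity, fc)
accuracy = np.corrcoef(predicted, activity)[0, 1]  # predicted-vs-actual correlation
```

The predicted-versus-actual correlation is the kind of accuracy metric typically reported for activity flow mapping; here it only serves to show how the prediction would be scored.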
• #### A neuralized feature engineering method for entity relation extraction
Publication date: Available online 16 April 2021
Source: Neural Networks
Author(s): Yanping Chen, Weizhe Yang, Kai Wang, Yongbin Qin, Ruizhang Huang, Qinghua Zheng
in Neural Networks on April 17, 2021 06:00 PM.
• #### Learning dual-margin model for visual tracking
Publication date: Available online 16 April 2021
Source: Neural Networks
Author(s): Nana Fan, Xin Li, Zikun Zhou, Qiao Liu, Zhenyu He
in Neural Networks on April 17, 2021 06:00 PM.
• #### Cover Image, Volume 529, Issue 9
The cover image is based on the Research Article Structural basis for noradrenergic regulation of neural circuits in the mouse olfactory bulb by Sawa Horie et al., https://doi.org/10.1002/cne.25085.
in Journal of Comparative Neurology on April 17, 2021 11:19 AM.
• #### Cover Image, Volume 529, Issue 9
The cover image is based on the Research Article A comparative analysis of cone photoreceptor morphology in bowhead and beluga whales by Matthew A. Smith et al., https://doi.org/10.1002/cne.25101.
in Journal of Comparative Neurology on April 17, 2021 11:19 AM.
• #### The Journal of Comparative Neurology, Table of Content, Vol. 529, No. 6, April, 2021
Journal of Comparative Neurology, Volume 529, Issue 9, Page 2157-2158, June 2021.
in Journal of Comparative Neurology on April 17, 2021 11:19 AM.
• #### A comparative analysis of cone photoreceptor morphology in bowhead and beluga whales
Abstract The cetacean visual system is a product of selection pressures favoring underwater vision, yet relatively little is known about it across taxa. Previous studies report several mutations in the opsin genetic sequence in cetaceans, suggesting the complete or partial evolutionary loss of retinal cone photoreceptor function in mysticete and odontocete lineages, respectively. Despite this, limited anatomical evidence suggests cone structures are partially maintained but with absent outer and inner segments in the bowhead retina. The functional consequence and anatomical distributions associated with these unique cone morphologies remain unclear. The current study further investigates the morphology and distribution of cone photoreceptors in the bowhead whale and beluga retina and evaluates the potential functional capacity of these cells as an alternative to photoreception. Refined histological and advanced microscopic techniques revealed two additional cone morphologies in the bowhead and beluga retina that have not been previously described. Two proteins involved in magnetosensation were present in these cone structures, suggesting the possibility of an alternative functional role in responding to changes in geomagnetic fields. These findings highlight a revised understanding of the unique evolution of cone and gross retinal anatomy in cetaceans, and provide prefatory evidence of potential functional reassignment of these cells.
in Journal of Comparative Neurology on April 17, 2021 11:19 AM.
• #### Defining vitamin D receptor expression in the brain using a novel VDRCre mouse
We generated a novel vitamin D receptor (VDR) VDRCre knockin mouse for the interrogation of VDR anatomy in the brain. In this article, we demonstrate through immunohistochemistry that VDRCre‐driven reporter expression colocalizes with VDR mRNA throughout the brain. Additionally, we demonstrate that tdTomato (+) neurons respond rapidly to vitamin D, which does not occur in tdTomato (−) neurons. Abstract Vitamin D action has been linked to several diseases regulated by the brain including obesity, diabetes, autism, and Parkinson's. However, the location of the vitamin D receptor (VDR) in the brain is not clear due to conflicting reports. We found that two antibodies previously published as specific in peripheral tissues are not specific in the brain. We thus created a new knockin mouse with cre recombinase expression under the control of the endogenous VDR promoter (VDRCre). We demonstrated that the cre activity in the VDRCre mouse brain (as reported by a cre‐dependent tdTomato expression) is highly overlapping with endogenous VDR mRNAs. These VDR‐expressing cells were enriched in multiple brain regions including the cortex, amygdala, caudate putamen, and hypothalamus among others. In the hypothalamus, VDR partially colocalized with vasopressin, oxytocin, estrogen receptor‐α, and β‐endorphin to various degrees. We further functionally validated our model by demonstrating that the endogenous VDR agonist 1,25‐dihydroxyvitamin D activated all tested tdTomato+ neurons in the paraventricular hypothalamus but had no effect on neurons without tdTomato fluorescence. Thus, we have generated a new mouse tool that allows us to visualize VDR‐expressing cells and to characterize their functions.
in Journal of Comparative Neurology on April 17, 2021 11:19 AM.
• #### Identification and localization of a gonadotropin‐releasing hormone‐related neuropeptide in Biomphalaria, an intermediate host for schistosomiasis
Histological methods were used to localize a neuropeptide related to gonadotrophin‐releasing hormone in the central nervous system and periphery of Biomphalaria glabrata, an intermediate host for intestinal schistosomiasis. Abstract Freshwater snails of the genus Biomphalaria serve as obligatory hosts for the digenetic trematode Schistosoma mansoni, the causative agent for the most widespread form of intestinal schistosomiasis. Within Biomphalaria, S. mansoni larvae multiply and transform into the cercariae form that can infect humans. Trematode development and proliferation is thought to be facilitated by modifications of host behavior and physiological processes, including a reduction of reproduction known as “parasitic castration.” As neuropeptides participate in the control of reproduction across phylogeny, a neural transcriptomics approach was undertaken to identify peptides that could regulate Biomphalaria reproductive physiology. The present study identified a transcript in Biomphalaria alexandrina that encodes a peptide belonging to the gonadotropin‐releasing hormone (GnRH) superfamily. The precursor and the predicted mature peptide, pQIHFTPDWGNN‐NH2 (designated Biom‐GnRH), share features with peptides identified in other molluscan species, including panpulmonates, opisthobranchs, and cephalopods. An antibody generated against Biom‐GnRH labeled neurons in the cerebral, pedal, and visceral ganglia of Biomphalaria glabrata. GnRH‐like immunoreactive fiber systems projected to all central ganglia. In the periphery, immunoreactive material was detected in the ovotestis, oviduct, albumen gland, and nidamental gland. As these structures serve crucial roles in the production, transport, nourishment, and encapsulation of eggs, disruption of the GnRH system of Biomphalaria could contribute to reduced reproductive activity in infected snails.
in Journal of Comparative Neurology on April 17, 2021 11:19 AM.
• #### Acute Δ9‐tetrahydrocannabinol prompts rapid changes in cannabinoid CB1 receptor immunolabeling and subcellular structure in CA1 hippocampus of young adult male mice
Acute exposure to Δ‐9‐tetrahydrocannabinol (THC) significantly decreases cannabinoid type 1 (CB1) receptor density at inhibitory terminals and mitochondria. Besides, dendrites are significantly larger, have more spines and contain more mitochondria. CB1 receptors (arrows), inhibitory terminals (ter, red shading), mitochondria (m, purple shading), dendrites (blue shading), dendritic spines (sp, blue shading). Scale bars = 500 nm. Abstract The use and abuse of cannabis can be associated with significant pathophysiology, however, it remains unclear whether (1) acute administration of Δ‐9‐tetrahydrocannabinol (THC) during early adulthood alters the cannabinoid type 1 (CB1) receptor localization and expression in cells of the brain, and (2) THC produces structural brain changes. Here we use electron microscopy and a highly sensitive pre‐embedding immunogold method to examine CB1 receptors in the hippocampus cornu ammonis subfield 1 (CA1) 30 min after male mice were exposed to a single THC injection (5 mg/kg). The findings show that acute exposure to THC can significantly decrease the percentage of CB1 receptor immunopositive terminals making symmetric synapses, mitochondria, and astrocytes. The percentage of CB1 receptor‐labeled terminals forming asymmetric synapses was unaffected. Lastly, CB1 receptor expression was significantly lower at terminals of symmetric and asymmetric synapses as well as in mitochondria. Structurally, CA1 dendrites were significantly larger, and contained more spines and mitochondria following acute THC administration. The area of the dendritic spines, synaptic terminals, mitochondria, and astrocytes decreased significantly following acute THC exposure. Altogether, these results indicate that even a single THC exposure can have a significant impact on CB1 receptor expression, and can alter CA1 ultrastructure, within 30 min of drug exposure. These changes may contribute to the behavioral alterations experienced by young individuals shortly after cannabis intoxication.
in Journal of Comparative Neurology on April 17, 2021 11:19 AM.
• #### Neuroplasticity in N‐methyl‐d‐aspartic acid receptor signaling in subregions of the rat rostral ventrolateral medulla following sedentary versus physically active conditions
The mechanisms by which the brain undergoes neuroplasticity during sedentary versus active conditions are important in understanding how a sedentary lifestyle has become the leading cause of death due to modifiable risk factors. Neurons in the rostral ventrolateral medulla (RVLM) regulate ongoing sympathetic vasoconstriction and cardiac function under normal conditions, and their activity appears heightened in models of cardiovascular disease associated with sympathetic overactivity. The current study provides evidence of activity‐related neuroplasticity in the RVLM via increased glutamate levels and increased expression of N‐methyl‐d‐aspartic acid receptors in the RVLM of sedentary compared to active rats. These mechanisms produce a complex framework by which RVLM neurons increase their influence on sympathoexcitation in sedentary compared to physically active conditions. Abstract The rostral ventrolateral medulla (RVLM) is a brain region involved in normal regulation of the cardiovascular system and heightened sympathoexcitatory states of cardiovascular disease (CVD). Among major risk factors for CVD, sedentary lifestyles contribute to higher mortality than other modifiable risk factors. Previous studies suggest excessive glutamatergic excitation of presympathetic neurons in the RVLM occurs in sedentary animals. Therefore, the purpose of this study was to examine neuroplasticity in the glutamatergic system in the RVLM of sedentary and physically active rats. We hypothesized that relative to active rats, sedentary rats would exhibit higher expression of glutamate N‐methyl‐d‐aspartic acid receptor subunits (GluN), phosphoGluN1, and the excitatory scaffold protein postsynaptic density 95 (PSD95), while achieving higher glutamate levels. Male Sprague–Dawley rats (4 weeks old) were divided into sedentary and active (running wheel) conditions for 10–12 weeks. We used retrograde tracing/triple‐labeling techniques, western blotting, and magnetic resonance spectroscopy. We report in sedentary versus physically active rats: 1) fewer bulbospinal non‐C1 neurons positive for GluN1, 2) significantly higher expression of GluN1 and GluN2B but lower levels of phosphoGluN1 (pSer896) and PSD95, and 3) higher levels of glutamate in the RVLM. Higher GluN expression is consistent with enhanced sympathoexcitation in sedentary animals; however, a more complex neuroplasticity occurs within subregions of the ventrolateral medulla. Our results in rodents may also indicate that alterations in glutamatergic excitation of the RVLM contribute to the increased incidence of CVD in humans who lead sedentary lifestyles. Thus, there is a strong need to further pursue mechanisms of inactivity‐related neuroplasticity in the RVLM.
in Journal of Comparative Neurology on April 17, 2021 11:19 AM.
• #### Sex and age influence gonadal steroid hormone receptor distributions relative to estrogen receptor β‐containing neurons in the mouse hypothalamic paraventricular nucleus
Light and electron microscopic studies reveal that gonadal steroids may directly and indirectly influence PVN neurons via nuclear (a) and extranuclear (b) gonadal hormone receptors in a sex‐specific manner. AR, androgen receptor; ER, estrogen receptor; GPER, G‐protein–coupled estrogen receptor. Abstract Within the hypothalamic paraventricular nucleus (PVN), estrogen receptor (ER) β and other gonadal hormone receptors play a role in central cardiovascular processes. However, the influence of sex and age on the cellular and subcellular relationships of ERβ with ERα, G‐protein ER (GPER1), as well as progestin and androgen receptors (PR and AR) in the PVN is uncertain. In young (2‐ to 3‐month‐old) females and males, ERβ‐enhanced green fluorescent protein (EGFP) containing neurons were approximately four times greater than ERα‐labeled and PR‐labeled nuclei in the PVN. In subdivisions of the PVN, young females, compared to males, had: (1) more ERβ‐EGFP neurons in neuroendocrine rostral regions; (2) fewer ERα‐labeled nuclei in neuroendocrine and autonomic projecting medial subregions; and (3) more ERα‐labeled nuclei in an autonomic projecting caudal region. In contrast, young males, compared to females, had approximately 20 times more AR‐labeled nuclei, which often colocalized with ERβ‐EGFP in neuroendocrine (approximately 70%) and autonomic (approximately 50%) projecting subregions. Ultrastructurally, in soma and dendrites, PVN ERβ‐EGFP colocalized primarily with extranuclear AR (approximately 85% soma) and GPER1 (approximately 70% soma). Aged (12‐ to 24‐month‐old) males had more ERβ‐EGFP neurons in a rostral neuroendocrine subregion compared to aged females and females with accelerated ovarian failure (AOF) and in a caudal autonomic subregion compared to post‐AOF females. Late‐aged (18‐ to 24‐month‐old) females compared to early‐aged (12‐ to 14‐month‐old) females and AOF females had fewer AR‐labeled nuclei in neuroendrocrine and autonomic projecting subregions. These findings indicate that gonadal steroids may directly and indirectly influence PVN neurons via nuclear and extranuclear gonadal hormone receptors in a sex‐specific manner.
in Journal of Comparative Neurology on April 17, 2021 11:19 AM.
• #### Visual opsin expression and morphological characterization of retinal photoreceptors in the pouched lamprey (Geotria australis, Gray)
A multidisciplinary approach was applied to assign the expression of different visual opsin classes to specific photoreceptor subtypes in Geotria australis. Five discrete photoreceptor subtypes were identified in the retina, each containing one of five classes of visual opsin. This suggests that G. australis has the potential for complex color vision. Abstract Lampreys are extant members of the agnathan (jawless) vertebrates that diverged ~500 million years ago, during a critical stage of vertebrate evolution when image‐forming eyes first emerged. Among lamprey species assessed thus far, the retina of the southern hemisphere pouched lamprey, Geotria australis, is unique, in that it possesses morphologically distinct photoreceptors and expresses five visual photopigments. This study focused on determining the number of different photoreceptors present in the retina of G. australis and whether each cell type expresses a single opsin class. Five photoreceptor subtypes were identified based on ultrastructure and differential expression of one of each of the five different visual opsin classes (lws, sws1, sws2, rh1, and rh2) known to be expressed in the retina. This suggests, therefore, that the retina of G. australis possesses five spectrally and morphologically distinct photoreceptors, with the potential for complex color vision. Each photoreceptor subtype was shown to have a specific spatial distribution in the retina, which is potentially associated with changes in spectral radiance across different lines of sight. These results suggest that there have been strong selection pressures for G. australis to maintain broad spectral sensitivity for the brightly lit surface waters that this species inhabits during its marine phase. These findings provide important insights into the functional anatomy of the early vertebrate retina and the selection pressures that may have led to the evolution of complex color vision.
in Journal of Comparative Neurology on April 17, 2021 11:19 AM.
• #### Forebrain projection neurons target functionally diverse respiratory control areas in the midbrain, pons, and medulla oblongata
Respiratory motor activity can be modulated by higher brain inputs. We hypothesized that forebrain regions send descending inputs to the respiratory control areas in the midbrain, pons, and medulla. Our data imply that (a) volitional motor commands for vocalization are specifically relayed via the midbrain periaqueductal gray; (b) commands to coordinate breathing with other orofacial behaviors (e.g., sniffing, whisking, swallowing) target the pontine Kölliker‐Fuse nucleus, predominantly; and (c) limbic or autonomic (interoceptive) systems are connected to broadly distributed downstream brainstem respiratory networks, including the medullary Bötzinger complex (BötC), pre‐BötC, and caudal raphé nuclei. We provide a neural substrate to explain how volitional, state‐dependent, and emotional modulation of breathing is regulated by the forebrain. Abstract Eupnea is generated by neural circuits located in the ponto‐medullary brainstem, but can be modulated by higher brain inputs which contribute to volitional control of breathing and the expression of orofacial behaviors, such as vocalization, sniffing, coughing, and swallowing. Surprisingly, the anatomical organization of descending inputs that connect the forebrain with the brainstem respiratory network remains poorly defined. We hypothesized that descending forebrain projections target multiple distributed respiratory control nuclei across the neuroaxis. To test our hypothesis, we made discrete unilateral microinjections of the retrograde tracer cholera toxin subunit B in the midbrain periaqueductal gray (PAG), the pontine Kölliker‐Fuse nucleus (KFn), the medullary Bötzinger complex (BötC), pre‐BötC, or caudal midline raphé nuclei. We quantified the regional distribution of retrogradely labeled neurons in the forebrain 12–14 days postinjection. Overall, our data reveal that descending inputs from cortical areas predominantly target the PAG and KFn. Differential forebrain regions innervating the PAG (prefrontal, cingulate cortices, and lateral septum) and KFn (rhinal, piriform, and somatosensory cortices) imply that volitional motor commands for vocalization are specifically relayed via the PAG, while the KFn may receive commands to coordinate breathing with other orofacial behaviors (e.g., sniffing, swallowing). Additionally, we observed that the limbic or autonomic (interoceptive) systems are connected to broadly distributed downstream bulbar respiratory networks. Collectively, these data provide a neural substrate to explain how volitional, state‐dependent, and emotional modulation of breathing is regulated by the forebrain.
in Journal of Comparative Neurology on April 17, 2021 11:19 AM.
• #### N‐cadherin localization in taste buds of mouse circumvallate papillae
in Journal of Comparative Neurology on April 17, 2021 11:19 AM.
• #### Age, sex, and regional differences in scavenger receptor CD36 in the mouse brain: Potential relevance to cerebral amyloid angiopathy and Alzheimer's disease
Scavenger receptor CD36 contributes significantly to lipid homeostasis, inflammation, and amyloid deposition in the brain. Here, we characterized CD36 gene and protein expression in the brains of young and aged male and female C57BL/6J mice. We found age‐related increases in CD36 mRNA, and CD36 protein differed by region, age, and sex and was primarily expressed intravascularly. These variations may contribute to Aβ deposition and neuroinflammation in Alzheimer's disease. Abstract Scavenger receptor CD36 contributes significantly to lipid homeostasis, inflammation, and amyloid deposition, while CD36 deficiency is associated with restored cerebrovascular function in an Alzheimer's disease (AD) mouse model. Yet the distribution of CD36 has not been examined in the brain. Here, we characterized CD36 gene and protein expression in the brains of young, middle aged, aged, and elderly male and female C57BL/6J mice. Age‐related increases in CD36 mRNA expression were observed in the male hippocampus and female midbrain. Additionally, male mice had greater CD36 mRNA expression than females in the striatum, hippocampus, and midbrain. CD36 protein was primarily expressed intravascularly, and this expression differed by region, age, and sex in the mouse brain. Although male mice brains demonstrated an increase in CD36 protein with age in several cortices, basal ganglia, hippocampus, and midbrain, a decrease with age was observed in female mice in the same regions. These data suggest that distinctive age, region, and sex expression of CD36 in the brain may contribute to Aβ deposition and neuroinflammation in AD.
in Journal of Comparative Neurology on April 17, 2021 11:19 AM.
• #### Axo‐axonic synapses: Diversity in neural circuit function
Abstract The chemical synapse is the principal form of contact between neurons of the central nervous system. These synapses are typically configured as presynaptic axon terminations onto postsynaptic dendrites or somata, giving rise to axo‐dendritic and axo‐somatic synapses, respectively. Beyond these common synapse configurations are less‐studied, non‐canonical synapse types that are prevalent throughout the brain and significantly contribute to neural circuit function. Among these are the axo‐axonic synapses, which consist of an axon terminating on another axon or axon terminal. Here, we review evidence for axo‐axonic synapse contributions to neural signaling in the mammalian nervous system and survey functional neural circuit motifs enabled by these synapses. We also detail how recent advances in microscopy, transgenics, and biological sensors may be used to identify and functionally assay axo‐axonic synapses.
in Journal of Comparative Neurology on April 17, 2021 11:19 AM.
• #### Structural basis for noradrenergic regulation of neural circuits in the mouse olfactory bulb
We accomplished visualization and reconstruction of noradrenergic neurons projecting to the olfactory bulb (OB) by infection of adeno‐associated viruses into the locus coeruleus of dopamine beta hydroxylase‐Cre mice. An individual noradrenergic axon traveled while forming several branches and was distributed in multiple glomeruli that were located in different areas of the glomerular layer in the OB. Abstract Olfactory input is processed in the glomerulus of the main olfactory bulb (OB) and relayed to higher centers in the brain by projection neurons. Conversely, centrifugal inputs from other brain regions project to the OB. We have previously analyzed centrifugal inputs into the OB from several brain regions using single‐neuron labeling. In this study, we analyzed the centrifugal noradrenergic (NA) fibers derived from the locus coeruleus (LC), because their projection pathways and synaptic connections in the OB have not been clarified in detail. We analyzed the NA centrifugal projections by single‐neuron labeling and immunoelectron microscopy. Individual NA neurons labeled by viral infection were three‐dimensionally traced using Neurolucida software to visualize the projection pathway from the LC to the OB. Also, centrifugal NA fibers were visualized using an antibody for noradrenaline transporter (NET). NET immunoreactive (‐ir) fibers contained many varicosities and synaptic vesicles. Furthermore, electron tomography demonstrated that NET‐ir fibers formed asymmetrical synapses of varied morphology. Although these synapses were present at varicosities, the density of synapses was relatively low throughout the OB. The maximal density of synapses was found in the external plexiform layer; about 17% of all observed varicosities contained synapses. These results strongly suggest that NA‐containing fibers in the OB release NA from both varicosities and synapses to influence the activities of OB neurons. The present study provides a morphological basis for olfactory modulation by centrifugal NA fibers derived from the LC.
in Journal of Comparative Neurology on April 17, 2021 11:19 AM.
• #### Late onset of Synaptotagmin 2a expression at synapses relevant to social behavior
During brain development, synapses form and are refined through the accumulation of different proteins. Here we examined the dynamic distribution of Synaptotagmin 2a (Syt2a) in the developing forebrain of zebrafish larvae with a focus on regions associated with social behavior. We show that Syt2a colocalized with tyrosine hydroxylase, a protein important for behavior. Further, we found that Syt2a localizes to synapses onto neurons implicated in social behavior, with an increase of Syt2a expression coinciding with the emergence of social behavior. Abstract As they form, synapses go through various stages of maturation and refinement. These steps are linked to significant changes in synaptic function, potentially resulting in emergence and maturation of behavioral outputs. Synaptotagmins are calcium‐sensing proteins of the synaptic vesicle exocytosis machinery, and changes in Synaptotagmin proteins at synapses have significant effects on vesicle release and synaptic function. Here, we examined the distribution of the synaptic vesicle protein Synaptotagmin 2a (Syt2a) during development of the zebrafish nervous system. Syt2a is widely distributed throughout the midbrain and hindbrain early during larval development but very weakly expressed in the forebrain. Later in development, Syt2a expression levels in the forebrain increase, particularly in regions associated with social behavior, and most intriguingly, around the time social behavior becomes apparent. We provide evidence that Syt2a localizes to synapses onto neurons implicated in social behavior in the ventral forebrain and show that Syt2a is colocalized with tyrosine hydroxylase, a biosynthetic enzyme in the dopamine pathway. Our results suggest a developmentally important role for Syt2a in maturing synapses in the forebrain, coinciding with the emergence of social behavior.
in Journal of Comparative Neurology on April 17, 2021 11:19 AM.
• #### Corrigendum
Journal of Comparative Neurology, Volume 529, Issue 9, Page 2402-2403, June 2021.
in Journal of Comparative Neurology on April 17, 2021 11:19 AM.
• #### Functional, molecular and morphological heterogeneity of superficial interneurons in the larval zebrafish tectum
The Superficial Interneurons (SINs) of the zebrafish tectum comprise a diverse population. Here we demonstrate that despite their wide range of reported tuning properties and morphology, most SINs respond to light offset and possess large receptive fields. SINs were previously thought to be solely inhibitory, yet we report that a subset express glutamatergic markers. Finally, these cells are unlikely to be intrinsically photosensitive as removal of retinal input, eliminated nearly all light on and off responses in SINs. (Image: Eva Laurell) Abstract The superficial interneurons, SINs, of the zebrafish tectum, have been implicated in a range of visual functions, including size discrimination, directional selectivity, and looming‐evoked escape. This raises the question if SIN subpopulations, despite their morphological similarities and shared anatomical position in the retinotectal processing stream, carry out diverse, task‐specific functions in visual processing, or if they have simple tuning properties in common. Here we have further characterized the SINs through functional imaging, electrophysiological recordings, and neurotransmitter typing in two transgenic lines, the widely used Gal4s1156t and the recently reported LCRRH2‐RH2‐2:GFP. We found that about a third of the SINs strongly responded to changes in whole‐field light levels, with a strong preference for OFF over ON stimuli. Interestingly, individual SINs were selectively tuned to a diverse range of narrow luminance decrements. Overall responses to whole‐field luminance steps did not vary with the position of the SIN cell body along the depth of the tectal neuropil or with the orientation of its neurites. We ruled out the possibility that intrinsic photosensitivity of Gal4s1156t+ SINs contribute to the measured visual responses. We found that, while most SINs express GABAergic markers, a substantial minority express an excitatory neuronal marker, the vesicular glutamate transporter, expanding the possible roles of SIN function in the tectal circuitry. In conclusion, SINs represent a molecularly, morphologically, and functionally heterogeneous class of interneurons, with subpopulations that detect a range of specific visual features, to which we have now added narrow luminance decrements.
in Journal of Comparative Neurology on April 17, 2021 11:19 AM.
• #### Upregulation of eIF4E, but not other translation initiation factors, in dendritic spines during memory formation
in Journal of Comparative Neurology on April 17, 2021 09:01 AM.
• #### Author Correction: Synthesis and breakdown of universal metabolic precursors promoted by iron
Nature, Published online: 17 April 2021; doi:10.1038/s41586-021-03383-9
Author Correction: Synthesis and breakdown of universal metabolic precursors promoted by iron
in Nature on April 17, 2021 12:00 AM.
### Abstract
Dysregulation of the oxidant-antioxidant system contributes to the pathogenesis of cerebral stroke (CS). Epigenetic changes of redox homeostasis genes, such as glutamate-cysteine ligase (GCLM), glutathione-S-transferase-P1 (GSTP1), thioredoxin reductase 1 (TXNRD1), and myeloperoxidase (MPO), may be biomarkers of CS. In this study, we assessed the association of DNA methylation levels of these genes with CS and clinical features of CS. We quantitatively analyzed DNA methylation patterns in the promoter or regulatory regions of 4 genes (GCLM, GSTP1, TXNRD1, and MPO) in peripheral blood leukocytes of 59 patients with CS in the acute phase and in 83 relatively healthy individuals (controls) without cardiovascular and cerebrovascular diseases. We found that in both groups, the methylation level of CpG sites in genes TXNRD1 and GSTP1 was ≤ 5%. Lower methylation levels were registered at a CpG site (chr1:94,374,293, GRCh37 [hg19]) in GCLM in patients with ischemic stroke compared with the control group (9% [7%; 11.6%] (median and interquartile range) versus 14.7% [10.4%; 23%], respectively, p < 0.05). In the leukocytes of patients with CS, the methylation level of CpG sites in the analyzed region of MPO (chr17:56,356,470, GRCh37 [hg19]) on average was significantly lower (23.5% [19.3%; 26.7%]) than that in the control group (35.6% [30.4%; 42.6%], p < 0.05). We also found increased methylation of MPO in smokers with CS (27.2% [23.5%; 31.1%]) compared with nonsmokers with CS (21.7% [18.1%; 24.8%]). Thus, hypomethylation of CpG sites in GCLM and MPO in blood leukocytes is associated with CS in the acute phase.
in Journal of Molecular Neuroscience on April 17, 2021 12:00 AM.
### Objective
The long-duration response (LDR) to L-dopa is a sustained benefit deriving from chronic administration of therapy to patients with Parkinson’s disease (PD). Almost all patients with early PD may develop the LDR to L-dopa, even if some patients may not develop it at given dosages of the drug. The aim of this exploratory study is to investigate whether a neuroanatomical substrate may underlie the development of the LDR using structural magnetic resonance imaging (MRI) and voxel-based morphometry (VBM) analysis.
### Methods
Twenty-four drug-naïve PD patients were enrolled and underwent a baseline 3D T1-weighted structural brain MRI. Then, a treatment with 250/25 mg of L-dopa/carbidopa every 24 h was started and, after 2 weeks, LDR was evaluated by movement time recordings.
### Results
After 2 weeks of continuous therapy, 15 patients (62.5%) showed a sustained LDR (LDR +), while nine patients (37.5%) did not develop a sustained LDR (LDR −). VBM analysis of the MRI acquired before treatment showed gray matter changes in the precentral and middle frontal gyri of patients who subsequently developed a sustained LDR compared with those who did not.
### Conclusions
Parkinsonian patients who will develop an LDR to L-dopa may present, before starting treatment, distinct structural characteristics in cortical areas involved in motor control. Our exploratory study suggests that some cortical structural changes may predispose individual patients to developing the LDR to L-dopa.
in Journal of Neurology on April 17, 2021 12:00 AM.
### Objective
To define the neuropsychological and neuroimaging characteristics of classical infratentorial superficial siderosis (iSS), a rare but disabling disorder defined by hemosiderin deposition affecting the superficial layers of the cerebellum, brainstem and spinal cord, usually associated with a slowly progressive neurological syndrome of deafness, ataxia and myelopathy.
### Methods
We present the detailed neuropsychological and neuroimaging findings in 16 patients with iSS (mean age 57 years; 6 female).
### Results
Cognitive impairment was present in 8/16 (50%) of patients: executive dysfunction was the most prevalent (44%), followed by impairment of visual recognition memory (27%); other cognitive domains were largely spared. Disease symptom duration was significantly correlated with the number of cognitive domains impaired (r = 0.59, p = 0.011). Mood disorders were also common (anxiety 62%, depression 38%, both 69%) but not associated with disease symptom duration. MRI findings revealed siderosis was not only in infratentorial brain regions, but also in characteristic widespread symmetrical supratentorial brain regions, independent of disease duration and degree of cognitive impairment. The presence of small vessel disease markers was very low and did not account for the cognitive impairment observed.
### Conclusion
Neuropsychological disturbances are common in iSS and need to be routinely investigated. The lack of association between the anatomical extent of hemosiderin and cognitive impairment or disease duration suggests that hemosiderin itself is not directly neurotoxic. Additional biomarkers of iSS disease severity and progression are needed for future research and clinical trials.
in Journal of Neurology on April 17, 2021 12:00 AM.
### Abstract
Evidence for the influence of unaware signals on behaviour has been reported in both patient groups and healthy observers using the Redundant Signal Effect (RSE). The RSE refers to faster manual reaction times to the onset of multiple simultaneously presented targets than those to a single stimulus. These findings are robust and apply to unimodal and multi-modal sensory inputs. A number of studies on neurologically impaired cases have demonstrated that the RSE can be found even in the absence of conscious experience of the redundant signals. Here, we investigated behavioural changes associated with awareness in healthy observers by using Continuous Flash Suppression to render observers unaware of redundant targets. Across three experiments, we found an association between reaction times to the onset of a consciously perceived target and the reported level of visual awareness of the redundant target, with higher awareness being associated with faster reaction times. However, in the absence of any awareness of the redundant target, we found no evidence for speeded reaction times and even weak evidence for an inhibitory effect (slowing down of reaction times) on response to the seen target. These findings reveal marked differences between healthy observers and blindsight patients in how aware and unaware information from different locations is integrated in the RSE.
in Experimental Brain Research on April 17, 2021 12:00 AM.
• #### Transcranial direct current stimulation for balance and gait in repetitive mild traumatic brain injury in rats
Balance impairment and lack of postural orientation are serious problems in patients with repetitive mild traumatic brain injury (mTBI).
in BMC Neuroscience on April 17, 2021 12:00 AM.
• #### Issue Information
Annals of Neurology, Volume 89, Issue 5, Page i-viii, May 2021.
in Annals of Neurology on April 16, 2021 07:54 PM.
• #### Annals of Neurology: Volume 89, Number 5, May 2021
A photograph of the brain of a mouse perfused with blue latex and clarified to visualize the entire vascular system. This mouse had received an injection of a viral vector targeting vascular endothelium with a mutation of the KRAS gene that was recently implicated in the genesis of arteriovenous malformations (AVMs) in humans. The tangles of blue dye seen in multiple locations in the brain represent AVMs. This method allows the detailed investigation of the development of AVMs. See Park et al., (pp. 926–941, this issue) for details.
in Annals of Neurology on April 16, 2021 07:54 PM.
• #### Serum Glial Fibrillary Acidic Protein: A Neuromyelitis Optica Spectrum Disorder Biomarker
Objective Blood tests to monitor disease activity, attack severity, or treatment impact in neuromyelitis optica spectrum disorder (NMOSD) have not been developed. This study investigated the relationship between serum glial fibrillary acidic protein (sGFAP) concentration and NMOSD activity and assessed the impact of inebilizumab treatment. Methods N‐MOmentum was a prospective, multicenter, double‐blind, placebo‐controlled, randomized clinical trial in adults with NMOSD. sGFAP levels were measured by single‐molecule arrays (SIMOA) in 1,260 serial and attack‐related samples from 215 N‐MOmentum participants (92% aquaporin 4‐immunoglobulin G‐seropositive) and in control samples (from healthy donors and patients with relapsing–remitting multiple sclerosis). Results At baseline, 62 participants (29%) exhibited high sGFAP concentrations (≥170 pg/ml; ≥2 standard deviations above healthy donor mean concentration) and were more likely to experience an adjudicated attack than participants with lower baseline concentrations (hazard ratio [95% confidence interval], 3.09 [1.6–6.1], p = 0.001). Median (interquartile range [IQR]) concentrations increased within 1 week of an attack (baseline: 168.4, IQR = 128.9–449.7 pg/ml; attack: 2,160.1, IQR = 302.7–9,455.0 pg/ml, p = 0.0015) and correlated with attack severity (median fold change from baseline [FC], minor attacks: 1.06, IQR = 0.9–7.4; major attacks: 34.32, IQR = 8.7–107.5, p = 0.023). This attack‐related increase in sGFAP occurred primarily in placebo‐treated participants (FC: 20.2, IQR = 4.4–98.3, p = 0.001) and was not observed in inebilizumab‐treated participants (FC: 1.1, IQR = 0.8–24.6, p > 0.05). Five participants (28%) with elevated baseline sGFAP reported neurological symptoms leading to nonadjudicated attack assessments. Interpretation Serum GFAP may serve as a biomarker of NMOSD activity, attack risk, and treatment effects. ANN NEUROL 2021;89:895–910
in Annals of Neurology on April 16, 2021 07:54 PM.
• #### Noninvasive Mapping of Ripple Onset Predicts Outcome in Epilepsy Surgery
Objective Intracranial electroencephalographic (icEEG) studies show that interictal ripples propagate across the brain of children with medically refractory epilepsy (MRE), and the onset of this propagation (ripple onset zone [ROZ]) estimates the epileptogenic zone. It is still unknown whether we can map this propagation noninvasively. The goal of this study is to map ripples (ripple zone [RZ]) and their propagation onset (ROZ) using high‐density EEG (HD‐EEG) and magnetoencephalography (MEG), and to estimate their prognostic value in pediatric epilepsy surgery. Methods We retrospectively analyzed simultaneous HD‐EEG and MEG data from 28 children with MRE who underwent icEEG and epilepsy surgery. Using electric and magnetic source imaging, we estimated virtual sensors (VSs) at brain locations that matched the icEEG implantation. We detected ripples on VSs, defined the virtual RZ and virtual ROZ, and estimated their distance from icEEG. We assessed the predictive value of resecting virtual RZ and virtual ROZ for postsurgical outcome. Interictal spike localization on HD‐EEG and MEG was also performed and compared with ripples. Results We mapped ripple propagation in all patients with HD‐EEG and in 27 (96%) patients with MEG. The distance from icEEG did not differ between HD‐EEG and MEG when mapping the RZ (26–27mm, p = 0.6) or ROZ (22–24mm, p = 0.4). Resecting the virtual ROZ, but not virtual RZ or the sources of spikes, was associated with good outcome for HD‐EEG (p = 0.016) and MEG (p = 0.047). Interpretation HD‐EEG and MEG can map interictal ripples and their propagation onset (virtual ROZ). Noninvasively mapping the ripple onset may augment epilepsy surgery planning and improve surgical outcome of children with MRE. ANN NEUROL 2021;89:911–925
in Annals of Neurology on April 16, 2021 07:54 PM.
• #### ANA Investigates: Neurological Complications of COVID‐19 Vaccines
Annals of Neurology, Volume 89, Issue 5, Page 856-857, May 2021.
in Annals of Neurology on April 16, 2021 07:54 PM.
• #### Genetic Variation in WNT9B Increases Relapse Hazard in Multiple Sclerosis
Objective Many multiple sclerosis (MS) genetic susceptibility variants have been identified, but understanding disease heterogeneity remains a key challenge. Relapses are a core feature of MS and a common primary outcome of clinical trials, with prevention of relapses benefiting patients immediately and potentially limiting long‐term disability accrual. We aim to identify genetic variation associated with relapse hazard in MS by analyzing the largest study population to date. Methods We performed a genomewide association study (GWAS) in a discovery cohort and investigated the genomewide significant variants in a replication cohort. Combining both cohorts, we captured a total of 2,231 relapses occurring before the start of any immunomodulatory treatment in 991 patients. For assessing time to relapse, we applied a survival analysis utilizing Cox proportional hazards models. We also investigated the association between MS genetic risk scores and relapse hazard and performed a gene ontology pathway analysis. Results The low‐frequency genetic variant rs11871306 within WNT9B reached genomewide significance in predicting relapse hazard and replicated (meta‐analysis hazard ratio (HR) = 2.15, 95% confidence interval (CI) = 1.70–2.78, p = 2.07 × 10−10). A pathway analysis identified an association of the pathway “response to vitamin D” with relapse hazard (p = 4.33 × 10−6). The MS genetic risk scores, however, were not associated with relapse hazard. Interpretation Genetic factors underlying disease heterogeneity differ from variants associated with MS susceptibility. Our findings imply that genetic variation within the Wnt signaling and vitamin D pathways contributes to differences in relapse occurrence. The present study highlights these cross‐talking pathways as potential modulators of MS disease activity. ANN NEUROL 2021;89:884–894
in Annals of Neurology on April 16, 2021 07:54 PM.
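The WNT9B study above models time to first relapse with Cox proportional hazards. Purely as orientation for readers unfamiliar with that method, here is a minimal, self-contained Python sketch of such a fit using the lifelines package; the simulated per-allele effect, column names, and data are hypothetical placeholders, not the study's code or results.

```python
# Minimal sketch of a Cox proportional-hazards fit for time-to-first-relapse,
# loosely mirroring the kind of survival analysis described above.
# All data are simulated placeholders; 'dosage' stands in for a minor-allele count.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
dosage = rng.integers(0, 3, size=n)                      # hypothetical allele count (0/1/2)
baseline_hazard = 0.02
# simulate event times whose hazard doubles per allele copy (illustrative only)
times = rng.exponential(1.0 / (baseline_hazard * 2.0 ** dosage))
censoring = rng.exponential(60.0, size=n)                # independent censoring times
df = pd.DataFrame({
    "time": np.minimum(times, censoring),                # observed follow-up time
    "relapsed": (times <= censoring).astype(int),        # 1 = relapse observed, 0 = censored
    "dosage": dosage,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="relapsed")
cph.print_summary()   # exp(coef) for 'dosage' approximates the simulated per-allele hazard ratio
```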
• #### Electroencephalographic Abnormalities are Common in COVID‐19 and are Associated with Outcomes
Objective The aim was to determine the prevalence and risk factors for electrographic seizures and other electroencephalographic (EEG) patterns in patients with Coronavirus disease 2019 (COVID‐19) undergoing clinically indicated continuous electroencephalogram (cEEG) monitoring and to assess whether EEG findings are associated with outcomes. Methods We identified 197 patients with COVID‐19 referred for cEEG at 9 participating centers. Medical records and EEG reports were reviewed retrospectively to determine the incidence of and clinical risk factors for seizures and other epileptiform patterns. Multivariate Cox proportional hazards analysis assessed the relationship between EEG patterns and clinical outcomes. Results Electrographic seizures were detected in 19 (9.6%) patients, including nonconvulsive status epilepticus (NCSE) in 11 (5.6%). Epileptiform abnormalities (either ictal or interictal) were present in 96 (48.7%). Preceding clinical seizures during hospitalization were associated with both electrographic seizures (36.4% in those with vs 8.1% in those without prior clinical seizures, odds ratio [OR] 6.51, p = 0.01) and NCSE (27.3% vs 4.3%, OR 8.34, p = 0.01). A pre‐existing intracranial lesion on neuroimaging was associated with NCSE (14.3% vs 3.7%; OR 4.33, p = 0.02). In multivariate analysis of outcomes, electrographic seizures were an independent predictor of in‐hospital mortality (hazard ratio [HR] 4.07 [1.44–11.51], p < 0.01). In competing risks analysis, hospital length of stay increased in the presence of NCSE (30 day proportion discharged with vs without NCSE: HR 0.21 [0.03–0.33] vs 0.43 [0.36–0.49]). Interpretation This multicenter retrospective cohort study demonstrates that seizures and other epileptiform abnormalities are common in patients with COVID‐19 undergoing clinically indicated cEEG and are associated with adverse clinical outcomes. ANN NEUROL 2021;89:872–883
in Annals of Neurology on April 16, 2021 07:54 PM.
• #### Selective Endothelial Hyperactivation of Oncogenic KRAS Induces Brain Arteriovenous Malformations in Mice
Objective Brain arteriovenous malformations (bAVMs) are a leading cause of hemorrhagic stroke and neurological deficits in children and young adults, however, no pharmacological intervention is available to treat these patients. Although more than 95% of bAVMs are sporadic without family history, the pathogenesis of sporadic bAVMs is largely unknown, which may account for the lack of therapeutic options. KRAS mutations are frequently observed in cancer, and a recent unprecedented finding of these mutations in human sporadic bAVMs offers a new direction in the bAVM research. Using a novel adeno‐associated virus targeting brain endothelium (AAV‐BR1), the current study tested if endothelial KRASG12V mutation induces sporadic bAVMs in mice. Methods Five‐week‐old mice were systemically injected with either AAV‐BR1‐GFP or ‐KRASG12V. At 8 weeks after the AAV injection, bAVM formation and characteristics were addressed by histological and molecular analyses. The effect of MEK/ERK inhibition on KRASG12V‐induced bAVMs was determined by treatment of trametinib, a US Food and Drug Administration (FDA)‐approved MEK/ERK inhibitor. Results The viral‐mediated KRASG12V overexpression induced bAVMs, which were composed of a tangled nidus mirroring the distinctive morphology of human bAVMs. The bAVMs were accompanied by focal angiogenesis, intracerebral hemorrhages, altered vascular constituents, neuroinflammation, and impaired sensory/cognitive/motor functions. Finally, we confirmed that bAVM growth was inhibited by trametinib treatment. Interpretation Our innovative approach using AAV‐BR1 confirms that KRAS mutations promote bAVM development via the MEK/ERK pathway, and provides a novel preclinical mouse model of bAVMs which will be useful to develop a therapeutic strategy for patients with bAVM. ANN NEUROL 2021;89:926–941
in Annals of Neurology on April 16, 2021 07:54 PM.
• #### Corrigendum for “Vision Therapy: Ocular Motor Training in Mild Traumatic Brain Injury”
Annals of Neurology, Volume 89, Issue 5, Page 1055-1055, May 2021.
in Annals of Neurology on April 16, 2021 07:54 PM.
• #### The Changing Face of Cerebral Palsy
Annals of Neurology, Volume 89, Issue 5, Page 858-859, May 2021.
in Annals of Neurology on April 16, 2021 07:54 PM.
• #### Postictal Death Is Associated with Tonic Phase Apnea in a Mouse Model of Sudden Unexpected Death in Epilepsy
Objective Sudden unexpected death in epilepsy (SUDEP) is an unpredictable and devastating comorbidity of epilepsy that is believed to be due to cardiorespiratory failure immediately after generalized convulsive seizures. Methods We performed cardiorespiratory monitoring of seizure‐induced death in mice carrying either a p.Arg1872Trp or p.Asn1768Asp mutation in a single Scn8a allele—mutations identified from patients who died from SUDEP—and of seizure‐induced death in pentylenetetrazole‐treated wild‐type mice. Results The primary cause of seizure‐induced death for all mice was apnea, as (1) apnea began during a seizure and continued for tens of minutes until terminal asystole, and (2) death was prevented by mechanical ventilation. Fatal seizures always included a tonic phase that was coincident with apnea. This tonic phase apnea was not sufficient to produce death, as it also occurred during many nonfatal seizures; however, all seizures that were fatal had tonic phase apnea. We also made the novel observation that continuous tonic diaphragm contraction occurred during tonic phase apnea, which likely contributes to apnea by preventing exhalation, and this was only fatal when breathing did not resume after the tonic phase ended. Finally, recorded seizures from a patient with developmental epileptic encephalopathy with a previously undocumented SCN8A likely pathogenic variant (p.Leu257Val) revealed similarities to those of the mice, namely, an extended tonic phase that was accompanied by apnea. Interpretation We conclude that apnea coincident with the tonic phase of a seizure, and subsequent failure to resume breathing, are the determining events that cause seizure‐induced death in Scn8a mutant mice. ANN NEUROL 2021;89:1023–1035
in Annals of Neurology on April 16, 2021 07:54 PM.
• #### Early Predictors of 9‐Year Disability in Pediatric Multiple Sclerosis
Objective The purpose of this study was to assess early predictors of 9‐year disability in pediatric patients with multiple sclerosis. Methods Clinical and magnetic resonance imaging (MRI) assessments of 123 pediatric patients with multiple sclerosis were obtained at disease onset and after 1 and 2 years. A 9‐year clinical follow‐up was also performed. Cox proportional hazard and multivariable regression models were used to assess independent predictors of time to first relapse and 9‐year outcomes. Results Time to first relapse was predicted by optic nerve lesions (hazard ratio [HR] = 2.10, p = 0.02) and high‐efficacy treatment exposure (HR = 0.31, p = 0.005). Predictors of annualized relapse rate were: at baseline, presence of cerebellar (β = −0.15, p < 0.001), cervical cord lesions (β = 0.16, p = 0.003), and high‐efficacy treatment exposure (β = −0.14, p = 0.01); considering also 1‐year variables, number of relapses (β = 0.14, p = 0.002), and the previous baseline predictors; considering 2‐year variables, time to first relapse (2‐year: β = −0.12, p = 0.01) entered, whereas high‐efficacy treatment exposure exited the model. Predictors of 9‐year disability worsening were: at baseline, presence of optic nerve lesions (odds ratio [OR] = 6.45, p = 0.01); considering 1‐year and 2‐year variables, Expanded Disability Status Scale (EDSS) changes (1‐year: OR = 26.05, p < 0.001; 2‐year: OR = 16.38, p = 0.02), and ≥ 2 new T2‐lesions in 2 years (2‐year: OR = 4.91, p = 0.02). Predictors of higher 9‐year EDSS score were: at baseline, EDSS score (β = 0.58, p < 0.001), presence of brainstem lesions (β = 0.31, p = 0.04), and number of cervical cord lesions (β = 0.22, p = 0.05); considering 1‐year and 2‐year variables, EDSS changes (1‐year: β = 0.79, p < 0.001; 2‐year: β = 0.55, p < 0.001), and ≥ 2 new T2‐lesions (1‐year: β = 0.28, p = 0.03; 2‐year: β = 0.35, p = 0.01). Interpretation A complete baseline MRI assessment and an accurate clinical and MRI monitoring during the first 2 years of disease contribute to predict 9‐year prognosis in pediatric patients with multiple sclerosis. ANN NEUROL 2021;89:1011–1022
in Annals of Neurology on April 16, 2021 07:54 PM.
• #### Leucine Zipper 4 Autoantibody: A Novel Germ Cell Tumor and Paraneoplastic Biomarker
Objective This study was undertaken to describe a novel biomarker of germ cell tumor and associated paraneoplastic neurological syndrome (PNS). Methods Archival sera from patients with germ cell tumor–associated PNS were evaluated. We identified a common autoantigen in a human testicular cancer cell line (TCam‐2) by Western blot and mass spectrometry. Its identity was confirmed by recombinant‐protein Western blot, enzyme‐linked immunosorbent assay (ELISA), and cell‐based assay. Autoantibody specificity was confirmed by analyzing assorted control sera/cerebrospinal fluid. Results Leucine zipper 4 (LUZP4)–immunoglobulin G (IgG) was detected in 28 patients' sera, 26 of whom (93%) were men. The median age at neurological symptom onset was 45 years (range = 28–84). Median titer (ELISA) was 1:300 (1:50 to >1:6,400, normal value < 1:50). Coexistent kelchlike protein 11–IgG was identified in 18 cases (64%). The most common presenting phenotype was rhombencephalitis (17/28, 61%). Other presentations included limbic encephalitis (n = 5, 18%), seizures and/or encephalitis (n = 2, 7%), and motor neuronopathy/polyradiculopathy (n = 4, 14%). The most common malignancy among cancer‐evaluated PNS patients was seminoma (21/27, 78%). Nine of the 21 seminomas detected by whole‐body fluorodeoxyglucose positron emission tomography scan (43%) were extratesticular. Both female patients had ovarian teratoma. Regressed testicular germ cell tumors were found in 4 patients. Exposure of T‐cell–dendritic‐cell cocultures from chronic immunosuppression‐naïve LUZP4‐IgG–seropositive patients to recombinant LUZP4 protein evoked a marked increase in CD69 expression on both CD4+ and CD8+ T cells when compared to vehicle‐exposed and healthy control cultures. Interpretation LUZP4‐IgG represents a novel serological biomarker of PNS and has high predictive value for germ cell tumors. The demonstrated antigen‐specific T‐cell responses support a CD8+ T‐cell–mediated cytotoxic paraneoplastic and antitumor potential. ANN NEUROL 2021;89:1001–1010
in Annals of Neurology on April 16, 2021 07:54 PM.
• #### Onset of Preclinical Alzheimer Disease in Monozygotic Twins
Objective The present work was undertaken to study the genetic contribution to the start of Alzheimer's disease (AD) with amyloid and tau biomarkers in cognitively intact older identical twins. Methods We studied in 96 monozygotic twin‐pairs relationships between amyloid‐beta (Aβ) aggregation as measured by the Aβ1–42/1–40 ratio in cerebrospinal fluid (CSF; n = 126) and positron emission tomography (PET, n = 194), and CSF markers for Aβ production (beta‐secretase 1, Aβ1–40, and Aβ1–38) and CSF tau. Associations among markers were tested with generalized estimating equations including a random effect for twin status, adjusted for age, gender, and apolipoprotein E ε4 genotype. We used twin analyses to determine relative contributions of genetic and/or environmental factors to AD pathophysiological processes. Results Twenty‐seven individuals (14%) had an abnormal amyloid PET, and 14 twin‐pairs (15%) showed discordant amyloid PET scans. Within twin‐pairs, Aβ production markers and total‐tau (t‐tau) levels strongly correlated (r range = 0.73–0.86, all p < 0.0001), and Aβ aggregation markers and 181‐phosphorylated‐tau (p‐tau) levels correlated moderately strongly (r range = 0.50–0.64, all p < 0.0001). Cross‐twin cross‐trait analysis showed that Aβ1–38 in one twin correlated with Aβ1–42/1–40 ratios, and t‐tau and p‐tau levels in their cotwins (r range = −0.28 to 0.58, all p < .007). Within‐pair differences in Aβ production markers related to differences in tau levels (r range = 0.49–0.61, all p < 0.0001). Twin discordance analyses suggest that Aβ production and tau levels show coordinated increases in very early AD. Interpretation Our results suggest a substantial genetic/shared environmental background contributes to both Aβ and tau increases, suggesting that modulation of environmental risk factors may aid in delaying the onset of AD pathophysiological processes. ANN NEUROL 2021;89:987–1000
in Annals of Neurology on April 16, 2021 07:54 PM.
• #### Depression and Nigral Neuron Density in Lewy Body Spectrum Diseases
Parkinson's disease and other Lewy body spectrum diseases (LBDs) are associated with a specific risk for clinical depression. In the present clinicopathological study with 73 patients with LBD, we observed that the substantia nigra pars compacta dopamine neuron density was markedly lower in patients who had comorbid depression antemortem than in nondepressed patients (1.52 vs 2.32 n/mm2, p = 0.004). There were no differences in cognition, motor disease severity, antiparkinsonian medications, or disease duration between groups. The results implicate the substantia nigra as an important psychomotor modulatory area of mood in patients with Lewy body disorders. ANN NEUROL 2021;89:1046–1050
in Annals of Neurology on April 16, 2021 07:54 PM.
• #### Diagnostic Utility of Gold Coast Criteria in Amyotrophic Lateral Sclerosis
Objective The diagnosis of amyotrophic lateral sclerosis (ALS) remains problematic, with current diagnostic criteria (revised El Escorial [rEEC] and Awaji) being complex and prone to error. Consequently, the diagnostic utility of the recently proposed Gold Coast criteria was determined in ALS. Methods We retrospectively reviewed 506 patients (302 males, 204 females) to compare the diagnostic accuracy of the Gold Coast criteria to that of the Awaji and rEEC criteria (defined by the proportion of patients categorized as definite, probable, or possible ALS) in accordance with standards of reporting of diagnostic accuracy criteria. Results The sensitivity of Gold Coast criteria (92%, 95% confidence interval [CI] = 88.7–94.6%) was comparable to that of Awaji (90.3%, 95% CI = 86.69–93.2%) and rEEC (88.6, 95% CI = 84.8–91.7%) criteria. Additionally, the Gold Coast criteria sensitivity was maintained across different subgroups, defined by site of onset, disease duration, and functional disability. In atypical ALS phenotypes, the Gold Coast criteria exhibited greater sensitivity and specificity. Interpretation The present study established the diagnostic utility of the Gold Coast criteria in ALS, with benefits evident in bulbar and limb onset disease patients, as well as atypical phenotypes. The Gold Coast criteria should be considered in clinical practice and therapeutic trials. ANN NEUROL 2021;89:979–986
in Annals of Neurology on April 16, 2021 07:54 PM.
• #### Assessing Dysferlinopathy Patients Over Three Years With a New Motor Scale
Objective Dysferlinopathy is a muscular dystrophy with a highly variable clinical presentation and currently unpredictable progression. This variability and unpredictability presents difficulties for prognostication and clinical trial design. The Jain Clinical Outcomes Study of Dysferlinopathy aims to establish the validity of the North Star Assessment for Limb Girdle Type Muscular Dystrophies (NSAD) scale and identify factors that influence the rate of disease progression using NSAD. Methods We collected a longitudinal series of functional assessments from 187 patients with dysferlinopathy over 3 years. Rasch analysis was used to develop the NSAD, a motor performance scale suitable for ambulant and nonambulant patients. Generalized estimating equations were used to evaluate the impact of patient factors on outcome trajectories. Results The NSAD detected significant change in clinical progression over 1 year. The steepest functional decline occurred during the first 10 years after symptom onset, with more rapid decline noted in patients who developed symptoms at a younger age (p = 0.04). The most rapidly deteriorating group over the study was patients 3 to 8 years post symptom onset at baseline. Interpretation The NSAD is the first validated limb girdle specific scale of motor performance, suitable for use in clinical practice and clinical trials. Longitudinal analysis showed it may be possible to identify patient factors associated with greater functional decline both across the disease course and in the short‐term for clinical trial preparation. Through further work and validation in this cohort, we anticipate that a disease model incorporating functional performance will allow for more accurate prognosis for patients with dysferlinopathy. ANN NEUROL 2021;89:967–978
in Annals of Neurology on April 16, 2021 07:54 PM.
• #### Apolipoprotein E4 Reduction with Antisense Oligonucleotides Decreases Neurodegeneration in a Tauopathy Model
Objective Apolipoprotein E (ApoE) genotype is the strongest genetic risk factor for late‐onset Alzheimer's disease, with the ε4 allele increasing risk in a dose‐dependent fashion. In addition to ApoE4 playing a crucial role in amyloid‐β deposition, recent evidence suggests that it also plays an important role in tau pathology and tau‐mediated neurodegeneration. It is not known, however, whether therapeutic reduction of ApoE4 would exert protective effects on tau‐mediated neurodegeneration. Methods Herein, we used antisense oligonucleotides (ASOs) against human APOE to reduce ApoE4 levels in the P301S/ApoE4 mouse model of tauopathy. We treated P301S/ApoE4 mice with ApoE or control ASOs via intracerebroventricular injection at 6 and 7.5 months of age and performed brain pathological assessments at 9 months of age. Results Our results indicate that treatment with ApoE ASOs reduced ApoE4 protein levels by ~50%, significantly protected against tau pathology and associated neurodegeneration, decreased neuroinflammation, and preserved synaptic density. These data were also corroborated by a significant reduction in levels of neurofilament light chain (NfL) protein in plasma of ASO‐treated mice. Interpretation We conclude that reducing ApoE4 levels should be explored further as a therapeutic approach for APOE4 carriers with tauopathy including Alzheimer's disease. ANN NEUROL 2021;89:952–966
in Annals of Neurology on April 16, 2021 07:54 PM.
• #### Inflammatory Cytokine Patterns Associated with Neurological Diseases in Coronavirus Disease 2019
Patients with coronavirus disease 2019 (COVID‐19) can present with distinct neurological manifestations. This study shows that inflammatory neurological diseases were associated with increased levels of interleukin (IL)‐2, IL‐4, IL‐6, IL‐10, IL‐12, chemokine (C‐X‐C motif) ligand 8 (CXCL8), and CXCL10 in the cerebrospinal fluid. Conversely, encephalopathy was associated with high serum levels of IL‐6, CXCL8, and active tumor growth factor β1. Inflammatory syndromes of the central nervous system in COVID‐19 can appear early, as a parainfectious process without significant systemic involvement, or without direct evidence of severe acute respiratory syndrome coronavirus 2 neuroinvasion. At the same time, encephalopathy is mainly influenced by peripheral events, including inflammatory cytokines. ANN NEUROL 2021;89:1041–1045
in Annals of Neurology on April 16, 2021 07:54 PM.
• #### Adults with Cerebral Palsy Require Ongoing Neurologic Care: A Systematic Review
Cerebral palsy (CP) neurologic care and research efforts typically focus on children. However, most people with CP are adults. Adults with CP are at increased risk of new neurologic conditions, such as stroke and myelopathy, that require ongoing neurologic surveillance to distinguish them from baseline motor impairments. Neurologic factors could also contribute to the motor function decline, chronic pain, and chronic fatigue that are commonly experienced by adults with CP. Based on a systematic literature review, we suggest (1) guidelines for neurologic surveillance and neurologist referral and (2) clinical research questions regarding the evolving neurologic risks for adults with CP. ANN NEUROL 2021;89:860–871
in Annals of Neurology on April 16, 2021 07:54 PM.
• #### Age‐Related Cognitive Changes as a Function of CAG Repeat in Child and Adolescent Carriers of Mutant Huntingtin
Limited data exists regarding the disease course of Huntington's Disease (HD) in children and young adults. Here, we evaluate the trajectory of various cognitive skill development as a function of cytosine‐adenine‐guanine (CAG) repeat length in children and adolescents that carry the mutation that causes HD. We discovered that the development of verbal skills seems to plateau earlier as CAG repeat length increases. These findings increase our understanding of the relationship between neurodegeneration and neurodevelopment and may have far‐reaching implications for future gene‐therapy treatment strategies. ANN NEUROL 2021;89:1036–1040
in Annals of Neurology on April 16, 2021 07:54 PM.
• #### Heritability Enrichment Implicates Microglia in Parkinson's Disease Pathogenesis
Objective Understanding how different parts of the immune system contribute to pathogenesis in Parkinson's disease is a burning challenge with important therapeutic implications. We studied enrichment of common variant heritability for Parkinson's disease stratified by immune and brain cell types. Methods We used summary statistics from the most recent meta‐analysis of genomewide association studies in Parkinson's disease and partitioned heritability using linkage disequilibrium score regression, stratified for specific cell types, as defined by open chromatin regions. We also validated enrichment results using a polygenic risk score approach and intersected disease‐associated variants with epigenetic data and expression quantitative loci to nominate and explore a putative microglial locus. Results We found significant enrichment of Parkinson's disease risk heritability in open chromatin regions of microglia and monocytes. Genomic annotations overlapped substantially between these 2 cell types, and only the enrichment signal for microglia remained significant in a joint model. We present evidence suggesting P2RY12, a key microglial gene and target for the antithrombotic agent clopidogrel, as the likely driver of a significant Parkinson's disease association signal on chromosome 3. Interpretation Our results provide further support for the importance of immune mechanisms in Parkinson's disease pathogenesis, highlight microglial dysregulation as a contributing etiological factor, and nominate a targetable microglial gene candidate as a pathogenic player. Immune processes can be modulated by therapy, with potentially important clinical implications for future treatment in Parkinson's disease. ANN NEUROL 2021;89:942–951
in Annals of Neurology on April 16, 2021 07:54 PM.
• #### Impact of Global Health Electives on Neurology Trainees
We surveyed neurologists who completed a global health experience as residents or fellows to assess the impact of the experience. A total of 100% (n = 72) would recommend the experience to others. Most reported improved clinical (86%) and examination (82%) skills. All gained an understanding of different health care systems, and 83% reported deeper commitment to underserved populations. A total of 41 participants (57%) reported more judicious use of resources upon return to the United States. Global health electives had a positive impact on neurology trainees. More attention to the host country perspective and predeparture training may help inform program structure and participant expectations in the future. ANN NEUROL 2021;89:851–855
in Annals of Neurology on April 16, 2021 07:54 PM.
• #### Biceps Activity Synchronous with Inspiration After Phrenic Nerve Transfer
Annals of Neurology, Volume 89, Issue 5, Page 1053-1054, May 2021.
in Annals of Neurology on April 16, 2021 07:54 PM.
• #### Functional Imaging of Bow Hunter's Syndrome
Annals of Neurology, Volume 89, Issue 5, Page 1051-1052, May 2021.
in Annals of Neurology on April 16, 2021 07:54 PM.
• #### The metastable brain associated with autistic-like traits of typically developing individuals
by Takumi Sase, Keiichi Kitajo
Metastability in the brain is thought to be a mechanism involved in dynamic organization of cognitive and behavioral functions across multiple spatiotemporal scales. However, it is not clear how such organization is realized in underlying neural oscillations in a high-dimensional state space. It was shown that macroscopic oscillations often form phase-phase coupling (PPC) and phase-amplitude coupling (PAC) which result in synchronization and amplitude modulation, respectively, even without external stimuli. These oscillations can also make spontaneous transitions across synchronous states at rest. Using resting-state electroencephalographic signals and the autism-spectrum quotient scores acquired from healthy humans, we show experimental evidence that the PAC combined with PPC allows amplitude modulation to be transient, and that the metastable dynamics with this transient modulation is associated with autistic-like traits. In individuals with a longer attention span, such dynamics tended to show fewer transitions between states by forming delta-alpha PAC. We identified these states as two-dimensional metastable states that could share consistent patterns across individuals. Our findings suggest that the human brain dynamically organizes inter-individual differences in a hierarchy of macroscopic oscillations with multiple timescales by utilizing metastability.
in PLoS Computational Biology on April 16, 2021 02:00 PM.
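The metastability study above quantifies delta-alpha phase-amplitude coupling (PAC). As a generic illustration of one common PAC measure (a Tort-style, Kullback-Leibler-based modulation index computed from Hilbert-derived phase and amplitude), here is a minimal Python sketch; the synthetic signal, band edges, and sampling rate are placeholders, and this is not the authors' pipeline.

```python
# Generic phase-amplitude coupling (PAC) sketch: delta phase vs. alpha amplitude,
# summarized by a KL-divergence-based modulation index.
# The toy signal and filter settings below are illustrative placeholders.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

rng = np.random.default_rng(0)
fs = 250.0
t = np.arange(0, 60, 1 / fs)
# toy signal: 10 Hz alpha whose amplitude is modulated by 2 Hz delta phase, plus noise
signal = (1 + 0.6 * np.cos(2 * np.pi * 2 * t)) * np.cos(2 * np.pi * 10 * t) \
         + 0.5 * rng.standard_normal(t.size)

def bandpass(x, lo, hi, fs, order=4):
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

phase = np.angle(hilbert(bandpass(signal, 1, 4, fs)))   # delta phase
amp = np.abs(hilbert(bandpass(signal, 8, 13, fs)))      # alpha envelope

# bin alpha amplitude by delta phase, then compute the normalized KL modulation index
n_bins = 18
bins = np.digitize(phase, np.linspace(-np.pi, np.pi, n_bins + 1)) - 1
mean_amp = np.array([amp[bins == k].mean() for k in range(n_bins)])
p = mean_amp / mean_amp.sum()
mi = (np.log(n_bins) + np.sum(p * np.log(p))) / np.log(n_bins)
print(f"modulation index: {mi:.3f}")   # larger values indicate stronger delta-alpha PAC
```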
• #### Genome-wide analysis of lncRNA stability in human
by Kaiwen Shi, Tao Liu, Hanjiang Fu, Wuju Li, Xiaofei Zheng
Transcript stability is associated with many biological processes, and the factors affecting mRNA stability have been extensively studied. However, little is known about the features related to human long noncoding RNA (lncRNA) stability. By inhibiting transcription and collecting samples at 10 time points, genome-wide RNA-seq studies were performed in human lung adenocarcinoma cells (A549) and RNA half-life datasets were constructed. The following observations were obtained. First, the half-life distributions of both lncRNAs and messenger RNAs (mRNAs) with one exon (lnc-human1 and m-human1) were significantly different from those of both lncRNAs and mRNAs with more than one exon (lnc-human2 and m-human2). Furthermore, some factors such as full-length transcript secondary structures played a contrary role in lnc-human1 and m-human2. Second, through the half-life comparisons of nucleus- and cytoplasm-specific and common lncRNAs and mRNAs, lncRNAs (mRNAs) in the nucleus were found to be less stable than those in the cytoplasm, which was derived from transcripts themselves rather than cellular location. Third, kmers-based protein−RNA or RNA−RNA interactions promoted lncRNA stability from lnc-human1 and decreased mRNA stability from m-human2 with high probability. Finally, through applying deep learning−based regression, a non-linear relationship was found to exist between the half-lives of lncRNAs (mRNAs) and related factors. The present study established lncRNA and mRNA half-life regulation networks in the A549 cell line and shed new light on the degradation behaviors of both lncRNAs and mRNAs.
in PLoS Computational Biology on April 16, 2021 02:00 PM.
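Half-life estimates of the kind described above are typically obtained by fitting first-order decay to a transcript's abundance time course after transcription inhibition. A minimal Python sketch of that log-linear fit follows; the time points and expression values are made up for illustration.

```python
# Minimal half-life estimate from a transcript's abundance time course after
# transcriptional shutoff, assuming first-order decay: N(t) = N0 * exp(-k * t).
# Time points and normalized expression values are illustrative placeholders.
import numpy as np

time_h = np.array([0, 1, 2, 4, 6, 8, 10, 12])                         # hours after inhibition
expr   = np.array([1.00, 0.83, 0.70, 0.49, 0.35, 0.24, 0.17, 0.12])   # relative abundance

# a linear fit of log-expression vs. time gives the decay rate k as the negative slope
slope, intercept = np.polyfit(time_h, np.log(expr), 1)
k = -slope
half_life = np.log(2) / k
print(f"decay rate k = {k:.3f} / h, half-life = {half_life:.2f} h")
```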
• #### Identification of long regulatory elements in the genome of <i>Plasmodium falciparum</i> and other eukaryotes
by Christophe Menichelli, Vincent Guitard, Rafael M. Martins, Sophie Lèbre, Jose-Juan Lopez-Rubio, Charles-Henri Lecellier, Laurent Bréhélin
Long regulatory elements (LREs), such as CpG islands, polydA:dT tracts or AU-rich elements, are thought to play key roles in gene regulation but, as opposed to conventional binding sites of transcription factors, few methods have been proposed to formally and automatically characterize them. We present here a computational approach named DExTER (Domain Exploration To Explain gene Regulation) dedicated to the identification of candidate LREs (cLREs) and apply it to the analysis of the genomes of P. falciparum and other eukaryotes. Our analyses show that all tested genomes contain several cLREs that are somewhat conserved along evolution, and that gene expression can be predicted with surprising accuracy on the basis of these long regions only. Regulation by cLREs exhibits very different behaviours depending on species and conditions. In P. falciparum and other Apicomplexan organisms as well as in Dictyostelium discoideum, the process appears highly dynamic, with different cLREs involved at different phases of the life cycle. For multicellular organisms, the same cLREs are involved in all tissues, but a dynamic behavior is observed along embryonic development stages. In P. falciparum, whose genome is known to be strongly depleted of transcription factors, cLREs are predictive of expression with an accuracy above 70%, and our analyses show that they are associated with both transcriptional and post-transcriptional regulation signals. Moreover, we assessed the biological relevance of one LRE discovered by DExTER in P. falciparum using an in vivo reporter assay. The source code (python) of DExTER is available at https://gite.lirmm.fr/menichelli/DExTER.
in PLoS Computational Biology on April 16, 2021 02:00 PM.
• #### Accurate cancer phenotype prediction with AKLIMATE, a stacked kernel learner integrating multimodal genomic data and pathway knowledge
by Vladislav Uzunangelov, Christopher K. Wong, Joshua M. Stuart
Advancements in sequencing have led to the proliferation of multi-omic profiles of human cells under different conditions and perturbations. In addition, many databases have amassed information about pathways and gene “signatures”—patterns of gene expression associated with specific cellular and phenotypic contexts. An important current challenge in systems biology is to leverage such knowledge about gene coordination to maximize the predictive power and generalization of models applied to high-throughput datasets. However, few such integrative approaches exist that also provide interpretable results quantifying the importance of individual genes and pathways to model accuracy. We introduce AKLIMATE, a first kernel-based stacked learner that seamlessly incorporates multi-omics feature data with prior information in the form of pathways for either regression or classification tasks. AKLIMATE uses a novel multiple-kernel learning framework where individual kernels capture the prediction propensities recorded in random forests, each built from a specific pathway gene set that integrates all omics data for its member genes. AKLIMATE has comparable or improved performance relative to state-of-the-art methods on diverse phenotype learning tasks, including predicting microsatellite instability in endometrial and colorectal cancer, survival in breast cancer, and cell line response to gene knockdowns. We show how AKLIMATE is able to connect feature data across data platforms through their common pathways to identify examples of several known and novel contributors of cancer and synthetic lethality.
in PLoS Computational Biology on April 16, 2021 02:00 PM.
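AKLIMATE combines pathway-specific kernels within a stacked, multiple-kernel framework. As a loose illustration of the general multiple-kernel idea only (not AKLIMATE itself, which derives its kernels from pathway-specific random forests and learns their weights), here is a Python sketch that builds one kernel per hypothetical pathway gene set and feeds their average to a precomputed-kernel SVM.

```python
# Loose illustration of pathway-based multiple-kernel learning:
# one kernel per pathway gene set, combined (here simply averaged) and used with a
# precomputed-kernel SVM. Gene sets, features, and labels are random placeholders.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_genes = 120, 500
X = rng.standard_normal((n_samples, n_genes))     # placeholder multi-omic feature matrix
y = rng.integers(0, 2, size=n_samples)            # placeholder phenotype labels

pathways = {                                      # hypothetical gene-index sets
    "pathway_A": np.arange(0, 40),
    "pathway_B": np.arange(40, 120),
    "pathway_C": np.arange(120, 200),
}

# one RBF kernel per pathway, restricted to that pathway's genes, then averaged
kernels = [rbf_kernel(X[:, idx]) for idx in pathways.values()]
K = np.mean(kernels, axis=0)

clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy on placeholder data:", clf.score(K, y))
```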
• #### Supercoiled DNA and non-equilibrium formation of protein complexes: A quantitative model of the nucleoprotein ParB<i>S</i> partition complex
by Jean-Charles Walter, Thibaut Lepage, Jérôme Dorignac, Frédéric Geniet, Andrea Parmeggiani, John Palmeri, Jean-Yves Bouet, Ivan Junier
ParABS, the most widespread bacterial DNA segregation system, is composed of a centromeric sequence, parS, and two proteins, the ParA ATPase and the ParB DNA binding proteins. Hundreds of ParB proteins assemble dynamically to form nucleoprotein parS-anchored complexes that serve as substrates for ParA molecules to catalyze positioning and segregation events. The exact nature of this ParBS complex has remained elusive, which we address here by revisiting the Stochastic Binding model (SBM) introduced to explain the non-specific binding profile of ParB in the vicinity of parS. In the SBM, DNA loops stochastically bring loci inside a sharp cluster of ParB. However, previous SBM versions did not include the negative supercoiling of bacterial DNA, leading to the use of unphysically small DNA persistence lengths to explain the ParB binding profiles. In addition, recent super-resolution microscopy experiments have revealed a ParB cluster that is significantly smaller than previous estimations and suggest that it results from a liquid-liquid like phase separation. Here, by simulating the folding of long (≥ 30 kb) supercoiled DNA molecules calibrated with realistic DNA parameters and by considering different possibilities for the physics of the ParB cluster assembly, we show that the SBM can quantitatively explain the ChIP-seq ParB binding profiles without any fitting parameter, aside from the supercoiling density of DNA, which, remarkably, is in accord with independent measurements. We also predict that ParB assembly results from a non-equilibrium, stationary balance between an influx of produced proteins and an outflux of excess proteins, i.e., ParB clusters behave like liquid-like protein condensates with unconventional “leaky” boundaries.
in PLoS Computational Biology on April 16, 2021 02:00 PM.
• #### A new model for simultaneous dimensionality reduction and time-varying functional connectivity estimation
by Diego Vidaurre
An important question in neuroscience is whether or not we can interpret spontaneous variations in the pattern of correlation between brain areas, which we refer to as functional connectivity or FC, as an index of dynamic neuronal communication in fMRI. That is, can we measure time-varying FC reliably? And, if so, can FC reflect information transfer between brain regions at relatively fast-time scales? Answering these questions in practice requires dealing with the statistical challenge of having high-dimensional data and a comparatively lower number of time points or volumes. A common strategy is to use PCA to reduce the dimensionality of the data, and then apply some model, such as the hidden Markov model (HMM) or a mixture model of Gaussian distributions, to find a set of distinct FC patterns or states. The distinct spatial properties of these FC states together with the time-resolved switching between them offer a flexible description of time-varying FC. In this work, I show that in this context PCA can suffer from systematic biases and loss of sensitivity for the purposes of finding time-varying FC. To get around these issues, I propose a novel variety of the HMM, named HMM-PCA, where the states are themselves PCA decompositions. Since PCA is based on the data covariance, the state-specific PCA decompositions reflect distinct patterns of FC. I show, theoretically and empirically, that fusing dimensionality reduction and time-varying FC estimation in one single step can avoid these problems and outperform alternative approaches, facilitating the quantification of transient communication in the brain.
in PLoS Computational Biology on April 16, 2021 02:00 PM.
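The abstract above contrasts HMM-PCA with the common two-step strategy of PCA followed by a hidden Markov model over the reduced time series. For orientation, here is a Python sketch of that conventional two-step pipeline (not the paper's HMM-PCA model) using scikit-learn and hmmlearn; the data are random placeholders.

```python
# Conventional two-step pipeline that HMM-PCA is contrasted with in the abstract:
# reduce the fMRI time series with PCA, then fit a Gaussian HMM whose state-specific
# covariances stand in for distinct functional-connectivity (FC) patterns.
# Data dimensions and values are random placeholders, not real fMRI.
import numpy as np
from sklearn.decomposition import PCA
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
n_timepoints, n_regions = 1200, 100
X = rng.standard_normal((n_timepoints, n_regions))   # stand-in for region-by-time fMRI data

# Step 1: dimensionality reduction (the step the paper argues can bias FC estimates)
X_low = PCA(n_components=10).fit_transform(X)

# Step 2: HMM with full covariances; each state's covariance is read as an FC pattern
hmm = GaussianHMM(n_components=4, covariance_type="full", n_iter=50, random_state=0)
hmm.fit(X_low)
states = hmm.predict(X_low)                          # time-resolved state sequence
print("state occupancies:", np.bincount(states, minlength=4) / n_timepoints)
```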
• #### Modeling the grid cell activity on non-horizontal surfaces based on oscillatory interference modulated by gravity
Publication date: Available online 16 April 2021
Source: Neural Networks
Author(s): Yihong Wang, Xuying Xu, Rubin Wang
in Neural Networks on April 16, 2021 01:00 PM.
• #### Dynamic interplay between GABAergic networks and developing neurons in the adult hippocampus
Publication date: August 2021
Source: Current Opinion in Neurobiology, Volume 69
Author(s): Mariela F. Trinchero, Damiana Giacomini, Alejandro F. Schinder
in Current Opinion in Neurobiology on April 16, 2021 01:00 PM.
• #### The synaptic life of microtubules
Publication date: August 2021
Source: Current Opinion in Neurobiology, Volume 69
Author(s): Clarissa Waites, Xiaoyi Qu, Francesca Bartolini
in Current Opinion in Neurobiology on April 16, 2021 01:00 PM.
• #### Efficient sliding locomotion of three-link bodies
Author(s): Silas Alben
We study the efficiency of sliding locomotion for three-link bodies with prescribed joint angle motions. The bodies move with no inertia, under dry (Coulomb) friction that is anisotropic (different in the directions normal and tangent to the links) and directional (different in the forward and backw...
[Phys. Rev. E 103, 042414] Published Fri Apr 16, 2021
in Physical Review E: Biological physics on April 16, 2021 10:00 AM.
• #### Superficial Siderosis: A Clinical Review
in Annals of Neurology on April 16, 2021 02:58 AM.
• #### Simulations Approaching Data: Cortical Slow Waves in Inferred Models of the Whole Hemisphere of Mouse. (arXiv:2104.07445v1 [q-bio.NC])
Recent enhancements in neuroscience, like the development of new and powerful recording techniques of the brain activity combined with the increasing anatomical knowledge provided by atlases and the growing understanding of neuromodulation principles, allow studying the brain at a whole new level, paving the way to the creation of extremely detailed effective network models directly from observed data. Leveraging the advantages of this integrated approach, we propose a method to infer models capable of reproducing the complex spatio-temporal dynamics of the slow waves observed in the experimental recordings of the cortical hemisphere of a mouse under anesthesia. To reliably establish a good match between data and simulations, we implemented a versatile ensemble of analysis tools, applicable to both experimental and simulated data and capable of identifying and quantifying the spatio-temporal propagation of waves across the cortex. In order to reproduce the observed slow wave dynamics, we introduced an inference procedure composed of two steps: the inner and the outer loop. In the inner loop, the parameters of a mean-field model are optimized by likelihood maximization, exploiting the anatomical knowledge to define connectivity priors. The outer loop explores "external" parameters, seeking an optimal match between the simulation outcome and the data, relying on observables (speed, directions, and frequency of the waves) apt for the characterization of cortical slow waves; the outer loop includes a periodic neuro-modulation for better reproduction of the experimental recordings. We show that our model is capable of reproducing most of the features of the non-stationary and non-linear dynamics displayed by the biological network. Also, the proposed method allows us to infer which modifications of parameters are relevant when the brain state changes, e.g. according to anesthesia levels.
in arXiv: Quantitative Biology: Neurons and Cognition on April 16, 2021 01:30 AM.
• #### The Role of Context in Detecting Previously Fact-Checked Claims. (arXiv:2104.07423v1 [cs.CL])
Recent years have seen the proliferation of disinformation and misinformation online, thanks to the freedom of expression on the Internet and to the rise of social media. Two solutions were proposed to address the problem: (i) manual fact-checking, which is accurate and credible, but slow and non-scalable, and (ii) automatic fact-checking, which is fast and scalable, but lacks explainability and credibility. With the accumulation of enough manually fact-checked claims, a middle-ground approach has emerged: checking whether a given claim has previously been fact-checked. This can be made automatically, and thus fast, while also offering credibility and explainability, thanks to the human fact-checking and explanations in the associated fact-checking article. This is a relatively new and understudied research direction, and here we focus on claims made in a political debate, where context really matters. Thus, we study the impact of modeling the context of the claim: both on the source side, i.e., in the debate, as well as on the target side, i.e., in the fact-checking explanation document. We do this by modeling the local context, the global context, as well as by means of co-reference resolution, and reasoning over the target text using Transformer-XH. The experimental results show that each of these represents a valuable information source, but that modeling the source-side context is more important, and can yield 10+ points of absolute improvement.
in arXiv: Computer Science: Neural and Evolutionary Computing on April 16, 2021 01:30 AM.
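The task described above, detecting whether a claim has previously been fact-checked, is at its simplest a retrieval problem. As a context-free toy baseline only (the paper's contribution is precisely in modeling debate and article context with Transformer-XH), here is a TF-IDF similarity sketch in Python with made-up claims.

```python
# Toy retrieval baseline for "has this claim been fact-checked before?":
# rank previously fact-checked claims by TF-IDF cosine similarity to the input claim.
# The claims below are invented placeholders; real systems model far richer context.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

fact_checked = [
    "The unemployment rate fell to its lowest level in fifty years.",
    "The new policy doubled the national debt within one term.",
    "The country imported more steel than it exported last year.",
]
query = "Unemployment is at a 50-year low."

vec = TfidfVectorizer().fit(fact_checked + [query])
sims = cosine_similarity(vec.transform([query]), vec.transform(fact_checked))[0]
best = sims.argmax()
print(f"best match (score {sims[best]:.2f}): {fact_checked[best]}")
```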
• #### On the Assessment of Benchmark Suites for Algorithm Comparison. (arXiv:2104.07381v1 [cs.NE])
Benchmark suites, i.e. a collection of benchmark functions, are widely used in the comparison of black-box optimization algorithms. Over the years, research has identified many desired qualities for benchmark suites, such as diverse topology, different difficulties, scalability, representativeness of real-world problems among others. However, while the topology characteristics have been subjected to previous studies, there is no study that has statistically evaluated the difficulty level of benchmark functions, how well they discriminate optimization algorithms and how suitable is a benchmark suite for algorithm comparison. In this paper, we propose the use of an item response theory (IRT) model, the Bayesian two-parameter logistic model for multiple attempts, to statistically evaluate these aspects with respect to the empirical success rate of algorithms. With this model, we can assess the difficulty level of each benchmark, how well they discriminate different algorithms, the ability score of an algorithm, and how much information the benchmark suite adds in the estimation of the ability scores. We demonstrate the use of this model in two well-known benchmark suites, the Black-Box Optimization Benchmark (BBOB) for continuous optimization and the Pseudo Boolean Optimization (PBO) for discrete optimization. We found that most benchmark functions of BBOB suite have high difficulty levels (compared to the optimization algorithms) and low discrimination. For the PBO, most functions have good discrimination parameters but are often considered too easy. We discuss potential uses of IRT in benchmarking, including its use to improve the design of benchmark suites, to measure multiple aspects of the algorithms, and to design adaptive suites.
in arXiv: Computer Science: Neural and Evolutionary Computing on April 16, 2021 01:30 AM.
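The benchmarking paper above fits a Bayesian two-parameter logistic (2PL) item response model. To make the roles of difficulty and discrimination concrete, here is a plain (non-Bayesian) 2PL sketch in Python; the parameter values are illustrative, and this is not the paper's model for multiple attempts.

```python
# Two-parameter logistic (2PL) IRT model: probability that an algorithm with
# ability theta solves a benchmark with difficulty b and discrimination a.
# Values below are illustrative placeholders only.
import numpy as np

def p_success(theta, a, b):
    """P(success) = 1 / (1 + exp(-a * (theta - b)))"""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

thetas = np.array([-1.0, 0.0, 1.0, 2.0])     # hypothetical algorithm ability scores
print("easy, discriminating benchmark (b=-1, a=2):  ", p_success(thetas, 2.0, -1.0).round(3))
print("hard, weakly discriminating one (b=2, a=0.5):", p_success(thetas, 0.5, 2.0).round(3))
```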
• #### A Novel Neuron Model of Visual Processor. (arXiv:2104.07257v1 [q-bio.NC])
Simulating and imitating the neuronal network of humans or mammals is a popular topic that has been explored for many years in the fields of pattern recognition and computer vision. Inspired by neuronal conduction characteristics in the primary visual cortex of cats, pulse-coupled neural networks (PCNNs) can exhibit synchronous oscillation behavior, which can process digital images without training. However, according to the study of single cells in the cat primary visual cortex, when a neuron is stimulated by an external periodic signal, the interspike-interval (ISI) distributions are multimodal. This phenomenon cannot be explained by any existing PCNN model. By analyzing the working mechanism of the PCNN, we present a novel neuron model of the primary visual cortex consisting of a continuous-coupled neural network (CCNN). Our model inherited the threshold exponential decay and synchronous pulse oscillation property of the original PCNN model, and it can exhibit chaotic behavior consistent with the testing results of cat primary visual cortex neurons. Therefore, our CCNN model is closer to real visual neural networks. For image segmentation tasks, the algorithm based on the CCNN model performs better than state-of-the-art visual cortex neural network models. The strength of our approach is that it helps neurophysiologists further understand how the primary visual cortex works and can be used to quantitatively predict the temporal-spatial behavior of real neural networks. CCNN may also inspire engineers to create brain-inspired deep learning networks for artificial intelligence purposes.
in arXiv: Computer Science: Neural and Evolutionary Computing on April 16, 2021 01:30 AM.
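The CCNN above builds on the classical pulse-coupled neural network (PCNN). For context, here is a compact Python sketch of a standard discrete PCNN update (static feeding input, linking field, exponentially decaying threshold, binary pulses) applied to a placeholder image; this illustrates the baseline model the abstract refers to, not the authors' CCNN, and the constants are typical illustrative choices.

```python
# Classical pulse-coupled neural network (PCNN) iteration on a grayscale image:
# feeding input F = image, linking field L from neighboring pulses, internal
# activity U = F * (1 + beta * L), binary pulse Y = (U > theta), and a threshold
# theta that decays exponentially and jumps after each pulse. Constants are illustrative.
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(0)
img = rng.random((64, 64))                # placeholder grayscale image in [0, 1)

kernel = np.array([[0.5, 1.0, 0.5],
                   [1.0, 0.0, 1.0],
                   [0.5, 1.0, 0.5]])      # linking weights to the 8 neighbors

beta, alpha_theta, v_theta = 0.2, 0.2, 20.0
Y = np.zeros_like(img)
theta = np.ones_like(img)
fire_time = np.full(img.shape, -1)        # first firing epoch per pixel (-1 = not yet)

for n in range(20):
    L = convolve(Y, kernel, mode="constant")            # linking input from last pulses
    U = img * (1.0 + beta * L)                           # internal activity
    Y = (U > theta).astype(float)                        # synchronous binary pulses
    theta = theta * np.exp(-alpha_theta) + v_theta * Y   # decay + refractory jump
    fire_time[(fire_time < 0) & (Y > 0)] = n             # record first firing epoch

print("pixels fired:", int((fire_time >= 0).sum()), "of", img.size)
```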
• #### Decoding of the Walking States and Step Rates from Cortical Electrocorticogram Signals. (arXiv:2104.07062v1 [q-bio.NC])
Brain-computer interfaces (BCIs) have shown promising results in restoring motor function to individuals with spinal cord injury. These systems have traditionally focused on the restoration of upper extremity function; however, the lower extremities have received relatively little attention. Early feasibility studies used noninvasive electroencephalogram (EEG)-based BCIs to restore walking function to people with paraplegia. However, the limited spatiotemporal resolution of EEG signals restricted the application of these BCIs to elementary gait tasks, such as the initiation and termination of walking. To restore more complex gait functions, BCIs must accurately decode additional degrees of freedom from brain signals. In this study, we used subdurally recorded electrocorticogram (ECoG) signals from able-bodied subjects to design a decoder capable of predicting the walking state and step rate information. We recorded ECoG signals from the motor cortices of two individuals as they walked on a treadmill at different speeds. Our offline analysis demonstrated that the state information could be decoded from >16 minutes of ECoG data with an unprecedented accuracy of 99.8%. Additionally, using a Bayesian filter approach, we achieved an average correlation coefficient between the decoded and true step rates of 0.934. When combined, these decoders may yield decoding accuracies sufficient to safely operate present-day walking prostheses.
in arXiv: Quantitative Biology: Neurons and Cognition on April 16, 2021 01:30 AM.
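The decoding study above smooths step-rate estimates with a Bayesian filter. As a minimal illustration of that general idea (not the paper's decoder), here is a one-dimensional Kalman filter in Python applied to simulated noisy step-rate estimates; the noise parameters and data are placeholders.

```python
# One-dimensional Kalman filter as a minimal example of Bayesian filtering of
# noisy per-window step-rate estimates (a stand-in for raw decoder output).
# Process/measurement noise values and the simulated data are placeholders.
import numpy as np

rng = np.random.default_rng(0)
true_rate = np.concatenate([np.full(50, 1.6), np.full(50, 2.2)])    # steps/s: slow, then fast walking
measured = true_rate + 0.3 * rng.standard_normal(true_rate.size)    # noisy decoded estimates

q, r = 1e-3, 0.3 ** 2          # process and measurement noise variances
x, p = measured[0], 1.0        # initial state estimate and uncertainty
filtered = []
for z in measured:
    p = p + q                  # predict: random-walk state model
    k = p / (p + r)            # Kalman gain
    x = x + k * (z - x)        # update with the new measurement
    p = (1 - k) * p
    filtered.append(x)

corr = np.corrcoef(filtered, true_rate)[0, 1]
print(f"correlation of filtered estimate with true rate: {corr:.3f}")
```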
• #### Author Correction: Rebuilding marine life
Nature, Published online: 16 April 2021; doi:10.1038/s41586-021-03271-2
Author Correction: Rebuilding marine life
in Nature on April 16, 2021 12:00 AM.
• #### Hypothalamic Rax+ tanycytes contribute to tissue repair and tumorigenesis upon oncogene activation in mice
Nature Communications, Published online: 16 April 2021; doi:10.1038/s41467-021-22640-z
Tanycytes contribute to the regulation of multiple hypothalamic functions. Here the authors investigate the regenerative and tumorigenic potential of adult Rax+ tanycytes in the median eminence in the context of the stem cell niche in mice.
in Nature Communications on April 16, 2021 12:00 AM.
• #### DUSP16 promotes cancer chemoresistance through regulation of mitochondria-mediated cell death
Nature Communications, Published online: 16 April 2021; doi:10.1038/s41467-021-22638-7
Chemoresistance is one of the main challenges for cancer therapy success. Here, the authors show that dual-specificity phosphatase 16 (DUSP16) expression is associated with chemoresistance in several types of cancer through impairing mitochondria-associated apoptosis.
in Nature Communications on April 16, 2021 12:00 AM.
• #### Classical MHC expression by DP thymocytes impairs the selection of non-classical MHC restricted innate-like T cells
Nature Communications, Published online: 16 April 2021; doi:10.1038/s41467-021-22589-z
Conventional T cell subsets are selected in the thymus by peptide-bearing MHC expressed by cortical epithelial cells; in contrast, cortical thymocytes express non-peptide-bearing MHC molecules including CD1d and MR1 and select iNKT and MAIT cell populations, respectively. Here, the authors generate a novel inducible MHC class-I transactivator murine system and suggest the absence of peptide-MHC on thymocytes is involved in the selection of non-peptide specific lymphocytes.
in Nature Communications on April 16, 2021 12:00 AM.
• #### A committed fourfold increase in ocean oxygen loss
Nature Communications, Published online: 16 April 2021; doi:10.1038/s41467-021-22584-4
Ocean warming and changing circulation as a result of climate change are driving down oxygen levels and threatening ecosystems. Here the author shows that though immediate cessation of anthropogenic CO2 emissions would halt upper ocean oxygen loss, it would continue in the deep ocean for hundreds of years.
in Nature Communications on April 16, 2021 12:00 AM.
• #### Detecting protein and DNA/RNA structures in cryo-EM maps of intermediate resolution using deep learning
Nature Communications, Published online: 16 April 2021; doi:10.1038/s41467-021-22577-3
It is challenging to extract structural information from EM density maps at intermediate or low resolutions. Here, the authors present Emap2sec+, a program for detecting nucleotides and protein secondary structures in EM density maps at 5 to 10 Å resolution.
in Nature Communications on April 16, 2021 12:00 AM.
• #### Nanoscale magnonic Fabry-Pérot resonator for low-loss spin-wave manipulation
Nature Communications, Published online: 16 April 2021; doi:10.1038/s41467-021-22520-6
Compared to electromagnetic waves, the wavelength of spin waves is significantly shorter at gigahertz frequencies, enabling the miniaturisation of wave-based devices. Here, the authors present a magnonic Fabry-Pérot resonator allowing for nanoscale and reconfigurable manipulation of spin waves.
in Nature Communications on April 16, 2021 12:00 AM.
• #### Defective viral genomes as therapeutic interfering particles against flavivirus infection in mammalian and mosquito hosts
Nature Communications, Published online: 16 April 2021; doi:10.1038/s41467-021-22341-7
Defective viral genomes (DVGs) can interfere with virus replication and provide a potential approach to control infection. Here, Rezelj et al. use a combined experimental evolution and computational approach to identify DVG sequences that optimally interfere with Zika virus infection and show antiviral activity in mice and mosquitoes.
in Nature Communications on April 16, 2021 12:00 AM.
• #### High-yield, wafer-scale fabrication of ultralow-loss, dispersion-engineered silicon nitride photonic circuits
Nature Communications, Published online: 16 April 2021; doi:10.1038/s41467-021-21973-z
For widespread technological application of nonlinear photonic integrated circuits, ultralow optical losses and high fabrication throughput are required. Here, the authors present a CMOS fabrication technique that realizes integrated photonic microresonators at the wafer level with mean quality factors exceeding 30 million and 1 dB/m optical losses.
in Nature Communications on April 16, 2021 12:00 AM.
• #### Daily briefing: 2.5 billion Tyrannosaurus rex roamed Earth
Nature, Published online: 16 April 2021; doi:10.1038/d41586-021-01041-8
Researchers estimate the T. rex population over the 2 million or so years they existed. Plus, the first monkey–human embryos reignite the chimaera debate, and how to curb the spread of COVID-19 vaccine disinformation.
in Nature on April 16, 2021 12:00 AM.
• #### Coronapod: could COVID vaccines cause blood clots? Here's what the science says
Nature, Published online: 16 April 2021; doi:10.1038/d41586-021-01036-5
Scientists are investigating rare blood clots to ask if they could be linked to the Johnson & Johnson and Oxford-AstraZeneca coronavirus vaccines
in Nature on April 16, 2021 12:00 AM.
• #### NIH reverses Trump-era restrictions on fetal-tissue research
Nature, Published online: 16 April 2021; doi:10.1038/d41586-021-01035-6
The US National Institutes of Health will remove limits on government scientists and cancel a controversial grant-reviewing ethics panel.
in Nature on April 16, 2021 12:00 AM.
• #### Will the United States make its most dramatic climate pledge yet?
Nature, Published online: 16 April 2021; doi:10.1038/d41586-021-01000-3
President Joe Biden is preparing to announce the country’s commitment to slashing emissions, but political obstacles remain.
in Nature on April 16, 2021 12:00 AM.
• #### COVID vaccines and blood clots: five key questions
Nature, Published online: 16 April 2021; doi:10.1038/d41586-021-00998-w
As safety concerns delay the use of two COVID-19 vaccines, Nature looks at the questions that scientists want answered.
in Nature on April 16, 2021 12:00 AM.
• #### The race to curb the spread of COVID vaccine disinformation
Nature, Published online: 16 April 2021; doi:10.1038/d41586-021-00997-x
Researchers are applying strategies honed during the 2020 US presidential election to track anti-vax propaganda.
in Nature on April 16, 2021 12:00 AM.
• #### Ice on the Alps’s highest peak details a pollutant’s rise
Nature, Published online: 16 April 2021; doi:10.1038/d41586-021-00995-z
A glacier on Mont Blanc provides a decades-long record of the use of bromine, which corrodes the ozone layer.
in Nature on April 16, 2021 12:00 AM.
• #### Wiggly signal hints of an aurora on a planet far from the Solar System
Nature, Published online: 16 April 2021; doi:10.1038/d41586-021-00994-0
A vast radio observatory on Earth detects signals similar to those generated by the aurora on Jupiter.
in Nature on April 16, 2021 12:00 AM.
• #### Flowers adapt to welcome the birds — but not the bees
Nature, Published online: 16 April 2021; doi:10.1038/d41586-021-00963-7
Once in the Americas, foxgloves swiftly evolved under pressure by pollinating hummingbirds.
in Nature on April 16, 2021 12:00 AM.
• #### Comment on the paper “Negative anti-SARS-CoV-2 S antibody response following Pfizer SARS-CoV-2 vaccination in a patient on ocrelizumab”: the likely explanation for this phenomenon based on our observations
in Journal of Neurology on April 16, 2021 12:00 AM.
### Abstract
The visuomotor processes involved in grasping a 2-D target are known to be fundamentally different than those involved in grasping a 3-D object, and this has led to concerns regarding the generalizability of 2-D grasping research. This study directly compared participants’ fixation positions and digit placement during interaction with either physical square objects or 2-D virtual versions of these objects. Participants were instructed to either simply grasp the stimulus or grasp and slide it to another location. Participants’ digit placement and fixation positions did not significantly differ as a function of stimulus type when grasping in the center of the display. However, gaze and grasp positions shifted toward the near side of non-central virtual stimuli, while consistently remaining close to the horizontal midline of the physical stimulus. Participants placed their digits at less stable locations when grasping the virtual stimulus in comparison to the physical stimulus on the right side of the display, but this difference disappeared when grasping in the center and on the left. Similar outward shifts in digit placement and lowered fixations were observed when sliding both stimulus types, suggesting participants incorporated similar adjustments in grasp selection in anticipation of manipulation in both Physical and Virtual stimulus conditions. These results suggest that while fixation position and grasp point selection differed between stimulus type as a function of stimulus position, certain eye-hand coordinated behaviours were maintained when grasping both physical and virtual stimuli.
in Experimental Brain Research on April 16, 2021 12:00 AM.
• #### Comparison of Scalp ERP to Faces in Macaques and Humans
Face recognition is an essential activity of social living, common to many primate species. Underlying processes in the brain have been investigated using various techniques and compared between species. Functional imaging studies have shown face-selective cortical regions and their degree of correspondence across species. However, the temporal dynamics of face processing, particularly processing speed, are likely different between them. Across sensory modalities activation of primary sensory cortices in macaque monkeys occurs at about 3/5 the latency of corresponding activation in humans, though this human simian difference may diminish or disappear in higher cortical regions. We recorded scalp event-related potentials (ERPs) to presentation of faces in macaques and estimated the peak latency of ERP components. Comparisons of latencies between macaques and humans indicated that the 3:5 ratio is preserved in higher cognitive regions of face processing between those species.
in Frontiers in Systems Neuroscience on April 16, 2021 12:00 AM.
• #### Restoration of Two-Photon Ca2+ Imaging Data Through Model Blind Spatiotemporal Filtering
Two-photon Ca2+ imaging is a leading technique for recording neuronal activities in vivo with cellular or subcellular resolution. However, during experiments, the images often suffer from corruption due to complex noises. Therefore, the analysis of Ca2+ imaging data requires preprocessing steps, such as denoising, to extract biologically relevant information. We present an approach that facilitates imaging data restoration through image denoising performed by a neural network combining spatiotemporal filtering and model blind learning. Tests with synthetic and real two-photon Ca2+ imaging datasets demonstrate that the proposed approach enables efficient restoration of imaging data. In addition, quantitative evaluation of denoising quality shows that the proposed approach outperforms current state-of-the-art methods. Therefore, our method provides an invaluable tool for denoising two-photon Ca2+ imaging data by model blind spatiotemporal processing.
in Frontiers in Neuroscience: Brain Imaging Methods on April 16, 2021 12:00 AM.
• #### Automated Quantitative Analysis of ex vivo Blood-Brain Barrier Permeability Using Intellesis Machine-Learning
Background
An increase in blood brain barrier permeability commonly precedes neuro-inflammation and cognitive impairment in models of dementia. Common methods to estimate capillary permeability have potential confounders, or require laborious and subjective semi-manual analysis.
New method
Here we used snap frozen mouse and rat brain sections that were double-immunofluorescent labeled for immunoglobulin G (IgG; plasma protein) and laminin-α4 (capillary basement membrane). A Machine Learning Image Analysis program (Zeiss ZEN Intellisis) was trained to recognize and segment laminin-α4 to unequivocally identify blood vessels in large sets of images. IgG signal above a threshold intensity was segmented and quantitated only in extravascular regions. The residual parenchymal IgG fluorescence is indicative of blood-to-brain extravasation of IgG and was accurately quantitated.
Results
Automated machine-learning and threshold-based segmentation of only parenchymal IgG extravasation accentuates otherwise indistinct capillary permeability, particularly frequent in minor BBB leakage.
Comparison with existing methods
Large datasets can be processed and analyzed quickly and robustly to provide an overview of vascular permeability throughout the brain. All human bias or ambiguity involved in classifying and measuring leakage is removed.
Conclusion
Here we describe a fast and precise method of visualizing and quantitating BBB permeability in mouse and rat brain tissue, while avoiding the confounding influence of unphysiological conditions such as perfusion and eliminating any human related bias from analysis.
in Frontiers in Neuroscience: Brain Imaging Methods on April 16, 2021 12:00 AM.
• #### Effects of Perturbation Velocity, Direction, Background Muscle Activation, and Task Instruction on Long-Latency Responses Measured From Forearm Muscles
The central nervous system uses feedback processes that occur at multiple time scales to control interactions with the environment. The long-latency response (LLR) is the fastest process that directly involves cortical areas, with a motoneuron response measurable 50 ms following an imposed limb displacement. Several behavioral factors concerning perturbation mechanics and the active role of muscles prior or during the perturbation can modulate the long-latency response amplitude (LLRa) in the upper limbs, but the interactions among many of these factors had not been systematically studied before. We conducted a behavioral study on thirteen healthy individuals to determine the effect and interaction of four behavioral factors – background muscle torque, perturbation direction, perturbation velocity, and task instruction – on the LLRa evoked from the flexor carpi radialis (FCR) and extensor carpi ulnaris (ECU) muscles after velocity-controlled wrist displacements. The effects of the four factors were quantified using both a 0D statistical analysis on the average perturbation-evoked EMG signal in the period corresponding to an LLR, and using a timeseries analysis of EMG signals. All factors significantly modulated LLRa, and their combination nonlinearly contributed to modulating the LLRa. Specifically, all the three-way interaction terms that could be computed without including the interaction between instruction and velocity significantly modulated the LLR. Analysis of the three-way interaction terms of the 0D model indicated that for the ECU muscle, the LLRa evoked when subjects are asked to maintain their muscle activation in response to the perturbations was greater than the one observed when subjects yielded to the perturbations (p < 0.001), but this effect was not measured for muscles undergoing shortening or in absence of background muscle activation. Moreover, higher perturbation velocity increased the LLRa evoked from the stretched muscle in presence of a background torque (p < 0.001), but no effects of velocity were measured in absence of background torque. Also, our analysis identified significant modulations of LLRa in muscles shortened by the perturbation, including an interaction between torque and velocity, and an effect of both torque and velocity. The time-series analysis indicated the significance of additional transient effects in the LLR region for muscles undergoing shortening.
in Frontiers in Human Neuroscience on April 16, 2021 12:00 AM.
• #### Generalizing Longitudinal Age Effects on Brain Structure – A Two-Study Comparison Approach
Cross-sectional studies indicate that normal aging is accompanied by decreases in brain structure. Longitudinal studies, however, are relatively rare and inconsistent regarding their outcomes. Particularly the heterogeneity of methods, sample characteristics and the high inter-individual variability in older adults prevent the deduction of general trends. Therefore, the current study aimed to compare longitudinal age-related changes in brain structure (measured through cortical thickness) in two large independent samples of healthy older adults (n = 161 each); the Longitudinal Healthy Aging Brain (LHAB) database project at the University of Zurich, Switzerland, and 1000BRAINS at the Research Center Juelich, Germany. Annual percentage changes in the two samples revealed stable to slight decreases in cortical thickness over time. After correction for major covariates, i.e., baseline age, sex, education, and image quality, sample differences were only marginally present. Results suggest that general trends across time might be generalizable over independent samples, assuming the same methodology is used, and similar sample characteristics are present.
in Frontiers in Human Neuroscience on April 16, 2021 12:00 AM.
• #### Distributed Functional Connectome of White Matter in Patients With Functional Dyspepsia
Purpose: We aimed to characterize the distributed functional connectome of white matter in patients with functional dyspepsia (FD).
Methods: 20 patients with FD and 24 age- and gender-matched healthy controls were included in the study. The functional connectome of white matter and graph-theoretical analysis were applied to these participants. A two-sample t-test was used to detect abnormal graph properties in FD, and Pearson correlation was used to relate these properties to clinical and neuropsychological information.
Results: Patients with FD and healthy controls showed small-world properties in functional connectome of white matter. Compared with healthy controls, the FD group showed decreased global properties (Cp, S, Eglobal, and Elocal). Four pairs of fiber bundles that are connected to the frontal lobe, insula, and thalamus were affected in the FD group. Duration and Pittsburgh Sleep Quality Index positively correlated with the betweenness centrality of white matter regions of interest.
Conclusion: Patients with FD showed a non-optimized functional organization of the white matter brain network. The frontal lobe, insula, and thalamus were key regions of brain information exchange in FD. These findings provide novel imaging evidence for the mechanism of FD.
in Frontiers in Human Neuroscience on April 16, 2021 12:00 AM.
• #### Dark Rearing Promotes the Recovery of Visual Cortical Responses but Not the Morphology of Geniculocortical Axons in Amblyopic Cat
Monocular deprivation (MD) of vision during early postnatal life induces amblyopia, and most neurons in the primary visual cortex lose their responses to the closed eye. Anatomically, the somata of neurons in the closed-eye recipient layer of the lateral geniculate nucleus (LGN) shrink and their axons projecting to the visual cortex retract. Although it has been difficult to restore visual acuity after maturation, recent studies in rodents and cats showed that a period of exposure to complete darkness could promote recovery from amblyopia induced by prior MD. However, in cats, which have an organization of central visual pathways similar to humans, the effect of dark rearing only improves monocular vision and does not restore binocular depth perception. To determine whether dark rearing can completely restore the visual pathway, we examined its effect on the three major concomitants of MD in individual visual neurons, eye preference of visual cortical neurons and soma size and axon morphology of LGN neurons. Dark rearing improved the recovery of visual cortical responses to the closed eye compared with the recovery under binocular conditions. However, geniculocortical axons serving the closed eye remained retracted after dark rearing, whereas reopening the closed eye restored the soma size of LGN neurons. These results indicate that dark rearing incompletely restores the visual pathway, and thus exerts a limited restorative effect on visual function.
in Frontiers in Neural Circuits on April 16, 2021 12:00 AM.
• #### Non-Cell-Autonomous Regulation of Optic Nerve Regeneration by Amacrine Cells
Visual information is conveyed from the eye to the brain through the axons of retinal ganglion cells (RGCs) that course through the optic nerve and synapse onto neurons in multiple subcortical visual relay areas. RGCs cannot regenerate their axons once they are damaged, similar to most mature neurons in the central nervous system (CNS), and soon undergo cell death. These phenomena of neurodegeneration and regenerative failure are widely viewed as being determined by cell-intrinsic mechanisms within RGCs or to be influenced by the extracellular environment, including glial or inflammatory cells. However, a new concept is emerging that the death or survival of RGCs and their ability to regenerate axons are also influenced by the complex circuitry of the retina and that the activation of a multicellular signaling cascade involving changes in inhibitory interneurons – the amacrine cells (AC) – contributes to the fate of RGCs. Here, we review our current understanding of the role that interneurons play in cell survival and axon regeneration after optic nerve injury.
in Frontiers in Cellular Neuroscience on April 16, 2021 12:00 AM.
• #### NF-κB-Induced Upregulation of miR-146a-5p Promoted Hippocampal Neuronal Oxidative Stress and Pyroptosis via TIGAR in a Model of Alzheimer’s Disease
in Frontiers in Cellular Neuroscience on April 16, 2021 12:00 AM.
• #### 3D Reconstruction of the Morpho-Functional Topography of the Human Vagal Trigone
The Vagal Trigone, often referred to as Ala Cinerea, is a triangular-shaped area of the floor of the fourth ventricle that is strictly involved in the network of cardiochronotropic, baroceptive, respiratory, and gastrointestinal control systems of the medulla oblongata. While it is frequently identified as the superficial landmark for the underlying Dorsal Motor Nucleus of the Vagus, this correspondence is not univocal in anatomical literature and is often oversimplified in neuroanatomy textbooks and neurosurgical atlases. As the structure represents an important landmark for neurosurgical procedures involving the floor of the fourth ventricle, accurate morphological characterization is required to avoid unwanted side effects (e.g., bradycardia, hypertension) during neurophysiological monitoring and cranial nerve nuclei stimulation in intraoperative settings. The aim of this study was to address the anatomo-topographical relationships of the Vagal Trigone with the underlying nuclei. For this purpose, we have conducted an anatomo-microscopical examination of serial sections derived from 54 human brainstems followed by 3D reconstruction and rendering of the specimens. Our findings indicate that the Vagal Trigone corresponds only partially with the Dorsal Motor Nucleus of the Vagus, while most of its axial profile is occupied by the dorsal regions of the Solitary Tract Nucleus. Furthermore, based on the literature and our findings we speculate that the neuroblasts of the Dorsal Motor Nucleus of the Vagus undergo neurobiotaxic migration induced by the neuroblasts of the dorsolaterally located solitary tract nucleus, giving rise to the Ala Cinerea, a topographically defined area for parasympathetic visceral control.
in Frontiers in Neuroanatomy on April 16, 2021 12:00 AM.
• #### Multiple Roles in Neuroprotection for the Exercise Derived Myokine Irisin
Exercise has multiple beneficial effects on health including decreasing the risk of neurodegenerative diseases. Such effects are thought to be mediated (at least in part) by myokines, a collection of cytokines and other small proteins released from skeletal muscles. As an endocrine organ, skeletal muscle synthesizes and secretes a wide range of myokines which contribute to different functions in different organs, including the brain. One such myokine is the recently discovered protein Irisin, which is secreted into circulation from skeletal muscle during exercise from its membrane bound precursor Fibronectin type III domain-containing protein 5 (FNDC5). Irisin contributes to metabolic processes such as glucose homeostasis and browning of white adipose tissue. Irisin also crosses the blood brain barrier and initiates a neuroprotective genetic program in the hippocampus that culminates with increased expression of brain derived neurotrophic factor (BDNF). Furthermore, exercise and FNDC5/Irisin have been shown to have several neuroprotective effects against injuries in ischemia and neurodegenerative disease models, including Alzheimer’s disease. In addition, Irisin has anxiolytic and antidepressant effects. In this review we present and summarize recent findings on the multiple effects of Irisin on neural function, including signaling pathways and mechanisms involved. We also discuss how exercise can positively influence brain function and mental health via the “skeletal muscle-brain axis.” While there are still many unanswered questions, we put forward the idea that Irisin is a potentially essential mediator of the skeletal muscle-brain crosstalk.
in Frontiers in Ageing Neuroscience on April 16, 2021 12:00 AM.
• #### Role of mTOR-Regulated Autophagy in Synaptic Plasticity Related Proteins Downregulation and the Reference Memory Deficits Induced by Anesthesia/Surgery in Aged Mice
Postoperative cognitive dysfunction increases mortality and morbidity in perioperative patients and has become a major concern for patients and caregivers. Previous studies demonstrated that synaptic plasticity is closely related to cognitive function, and that anesthesia and surgery inhibit synaptic function. In the central nervous system, autophagy is vital to synaptic plasticity, homeostasis of synaptic proteins, synapse elimination, spine pruning, and proper axon guidance, and, when dysregulated, is associated with disorders of behavior and memory function. The mammalian target of rapamycin (mTOR) negatively regulates the process of autophagy. This study aimed to explore whether rapamycin can ameliorate anesthesia/surgery-induced cognitive deficits by inhibiting mTOR, activating autophagy, and raising the levels of synaptic plasticity-related proteins in the hippocampus. Aged C57BL/6J mice were used to establish POCD models with exploratory laparotomy under isoflurane anesthesia. The Morris Water Maze (MWM) was used to measure reference memory after anesthesia and surgery. The levels of mTOR phosphorylation (p-mTOR), Beclin-1 and LC3-II were examined on postoperative days 1, 3 and 7 by western blotting. The levels of synaptophysin (SYN) and postsynaptic density protein 95 (PSD-95) in the hippocampus were also examined by western blotting. Here we showed that anesthesia/surgery impaired reference memory, induced the activation of mTOR, and decreased the expression of autophagy-related proteins such as Beclin-1 and LC3-II. A corresponding decline in the expression of neuronal/synaptic plasticity-related proteins such as SYN and PSD-95 was also observed. Pretreating mice with rapamycin inhibited the activation of mTOR, restored autophagy function, and increased the expression of SYN and PSD-95. Furthermore, anesthesia/surgery-induced learning and memory deficits were also reversed by rapamycin pretreatment. In conclusion, anesthesia/surgery induced mTOR hyperactivation and autophagy impairments, and then reduced the levels of SYN and PSD-95 in the hippocampus. An mTOR inhibitor, rapamycin, ameliorated anesthesia/surgery-related cognitive impairments by inhibiting mTOR activity, inducing activation of autophagy, and enhancing SYN and PSD-95 expression.
in Frontiers in Ageing Neuroscience on April 16, 2021 12:00 AM.
• #### Development and Validation of a Nomogram Based on Motoric Cognitive Risk Syndrome for Cognitive Impairment
Objective
To develop and validate a prediction nomogram based on motoric cognitive risk syndrome for cognitive impairment in healthy older adults.
Methods
Using two longitudinal cohorts of participants (aged ≥ 60 years) with 4-year follow-up, we developed (n = 1,177) and validated (n = 2,076) a prediction nomogram. LASSO (least absolute shrinkage and selection operator) regression model and multivariable Cox regression analysis were used for variable selection and for developing the prediction model, respectively. The performance of the nomogram was assessed with respect to its calibration, discrimination, and clinical usefulness.
Results
The individualized prediction nomogram was assessed based on the following: motoric cognitive risk syndrome, education, gender, baseline cognition, and age. The model showed good discrimination [Harrell’s concordance index (C-index) of 0.814; 95% confidence interval, 0.782–0.835] and good calibration. Comparable results were also seen in the validation cohort, which includes good discrimination (C-index, 0.772; 95% confidence interval, 0.776–0.818) and good calibration. Decision curve analysis demonstrated that the prediction nomogram was clinically useful.
Conclusion
This prediction nomogram provides a practical tool with all necessary predictors, which are accessible to practitioners. It can be used to estimate the risk of cognitive impairment in healthy older adults.
in Frontiers in Ageing Neuroscience on April 16, 2021 12:00 AM.
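The variable-selection and model-development steps described in the abstract above (an L1-penalised LASSO screen of candidate predictors followed by a multivariable Cox model whose coefficients feed the nomogram) follow a standard survival-modelling workflow. Below is a minimal sketch of that workflow using the Python lifelines library; the toy data frame, column names, and penalty strength are hypothetical illustrations, not the authors' data or code.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical cohort: follow-up time, event indicator (cognitive impairment),
# and the candidate predictors named in the abstract.
df = pd.DataFrame({
    "time_years":    [4.0, 2.1, 4.0, 3.2, 4.0, 1.5, 4.0, 2.8],
    "impaired":      [0,   1,   0,   1,   0,   1,   0,   1],
    "mcr_syndrome":  [0,   1,   0,   1,   0,   1,   1,   0],  # motoric cognitive risk
    "education_yrs": [16,  8,   12,  9,   14,  6,   11,  10],
    "female":        [1,   0,   1,   1,   0,   0,   1,   0],
    "baseline_mmse": [29,  25,  30,  26,  28,  24,  27,  26],
    "age":           [63,  75,  61,  78,  66,  80,  70,  72],
})

# L1-penalised (LASSO-like) Cox regression: coefficients of uninformative
# predictors shrink towards zero, mimicking the variable-selection step.
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph.fit(df, duration_col="time_years", event_col="impaired")

cph.print_summary()                               # retained predictors become nomogram axes
print("Harrell's C-index:", cph.concordance_index_)
```

The nomogram itself is typically drawn afterwards from the fitted coefficients (for example with R packages such as rms); the sketch only illustrates the LASSO-plus-Cox pipeline and the C-index reported in the abstract.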
• #### Why flies look to the skies
Fruit flies rely on an intricate neural pathway to process polarized light signals in order to inform their internal compass about the position of the Sun.
in eLife on April 16, 2021 12:00 AM.
• #### Live-cell single-molecule tracking highlights requirements for stable Smc5/6 chromatin association in vivo
The essential Smc5/6 complex is required in response to replication stress and is best known for ensuring the fidelity of homologous recombination. Using single-molecule tracking in live fission yeast to investigate Smc5/6 chromatin association, we show that Smc5/6 is chromatin associated in unchallenged cells and this depends on the non-SMC protein Nse6. We define a minimum of two Nse6-dependent sub-pathways, one of which requires the BRCT-domain protein Brc1. Using defined mutants in genes encoding the core Smc5/6 complex subunits we show that the Nse3 double-stranded DNA binding activity and the arginine fingers of the two Smc5/6 ATPase binding sites are critical for chromatin association. Interestingly, disrupting the ssDNA binding activity at the hinge region does not prevent chromatin association but leads to elevated levels of gross chromosomal rearrangements during replication restart. This is consistent with a downstream function for ssDNA binding in regulating homologous recombination.
in eLife on April 16, 2021 12:00 AM.
• #### Synaptic learning rules for sequence learning
Remembering the temporal order of a sequence of events is a task easily performed by humans in everyday life, but the underlying neuronal mechanisms are unclear. This problem is particularly intriguing as human behavior often proceeds on a time scale of seconds, which is in stark contrast to the much faster millisecond time-scale of neuronal processing in our brains. One long-held hypothesis in sequence learning suggests that a particular temporal fine-structure of neuronal activity - termed 'phase precession' - enables the compression of slow behavioral sequences down to the fast time scale of the induction of synaptic plasticity. Using mathematical analysis and computer simulations, we find that - for short enough synaptic learning windows - phase precession can improve temporal-order learning tremendously and that the asymmetric part of the synaptic learning window is essential for temporal-order learning. To test these predictions, we suggest experiments that selectively alter phase precession or the learning window and evaluate memory of temporal order.
in eLife on April 16, 2021 12:00 AM.
• #### Heterogeneity in transmissibility and shedding SARS-CoV-2 via droplets and aerosols
Background: Which virological factors mediate overdispersion in the transmissibility of emerging viruses remains a longstanding question in infectious disease epidemiology.
in eLife on April 16, 2021 12:00 AM.
• #### Rapid recycling of glutamate transporters on the astroglial surface
Glutamate uptake by astroglial transporters confines excitatory transmission to the synaptic cleft. The efficiency of this mechanism depends on the transporter dynamics in the astrocyte membrane, which remains poorly understood. Here, we visualise the main glial glutamate transporter GLT1 by generating its pH-sensitive fluorescent analogue, GLT1-SEP. FRAP-based imaging shows that 70-75% of GLT1-SEP dwell on the surface of rat brain astroglia, recycling with a lifetime of ~22 s. Genetic deletion of the C-terminus accelerates GLT1-SEP membrane turnover while disrupting its surface pattern, as revealed by single-molecule localisation microscopy. Excitatory activity boosts surface mobility of GLT1-SEP, involving its C-terminus, metabotropic glutamate receptors, intracellular Ca2+ and calcineurin-phosphatase activity, but not the broad-range kinase activity. The results suggest that membrane turnover, rather than lateral diffusion, is the main 'redeployment' route for the immobile fraction (20-30%) of surface-expressed GLT1. This finding reveals an important mechanism helping to control extrasynaptic escape of glutamate.
in eLife on April 16, 2021 12:00 AM.
• #### Improving oligo-conjugated antibody signal in multimodal single-cell analysis
Simultaneous measurement of surface proteins and gene expression within single cells using oligo-conjugated antibodies offers high-resolution snapshots of complex cell populations. Signal from oligo-conjugated antibodies is quantified by high-throughput sequencing and is highly scalable and sensitive. We investigated the response of oligo-conjugated antibodies towards four variables: concentration, staining volume, cell number at staining, and tissue. We find that staining with recommended antibody concentrations causes unnecessarily high background and amount of antibody used can be drastically reduced without loss of biological information. Reducing staining volume only affects antibodies targeting abundant epitopes used at low concentrations and is counteracted by reducing cell numbers. Adjusting concentrations increases signal, lowers background, and reduces costs. Background signal can account for a major fraction of total sequencing and is primarily derived from antibodies used at high concentrations. This study provides new insight into titration response and background of oligo-conjugated antibodies and offers concrete guidelines to improve such panels.
in eLife on April 16, 2021 12:00 AM.
• #### The Nesprin-1/-2 ortholog ANC-1 regulates organelle positioning in C. elegans independently from its KASH or actin-binding domains
KASH proteins in the outer nuclear membrane comprise the cytoplasmic half of LINC complexes that connect nuclei to the cytoskeleton. Caenorhabditis elegans ANC-1, an ortholog of Nesprin-1/2, contains actin-binding and KASH domains at opposite ends of a long spectrin-like region. Deletion of either the KASH or calponin homology (CH) domains does not completely disrupt nuclear positioning, suggesting neither KASH nor CH domains are essential. Deletions in the spectrin-like region of ANC-1 led to significant defects, but only recapitulated the null phenotype in combination with mutations in the trans-membrane span. In anc-1 mutants, the ER, mitochondria, and lipid droplets were unanchored, moving throughout the cytoplasm. The data presented here support a cytoplasmic integrity model where ANC-1 localizes to the ER membrane and extends into the cytoplasm to position nuclei, ER, mitochondria, and likely other organelles in place.
in eLife on April 16, 2021 12:00 AM.
• #### A convolutional neural network for the prediction and forward design of ribozyme-based gene-control elements
Ribozyme switches are a class of RNA-encoded genetic switch that support conditional regulation of gene expression across diverse organisms. An improved elucidation of the relationships between sequence, structure, and activity can improve our capacity for de novo rational design of ribozyme switches. Here, we generated data on the activity of hundreds of thousands of ribozyme sequences. Using automated structural analysis and machine learning, we leveraged these large datasets to develop predictive models that estimate the in vivo gene-regulatory activity of a ribozyme sequence. These models supported the de novo design of ribozyme libraries with low mean basal gene-regulatory activities and new ribozyme switches that exhibit changes in gene-regulatory activity in the presence of a target ligand, producing functional switches for four out of five aptamers. Our work examines how biases in the model and the dataset that affect prediction accuracy can arise and demonstrates that machine learning can be applied to RNA sequences to predict gene-regulatory activity, providing the basis for design tools for functional RNAs.
in eLife on April 16, 2021 12:00 AM.
• #### Susceptibility to chronic immobilization stress‐induced depressive-like behaviour in middle‐aged female mice and accompanying changes in dopamine D1 and GABAA receptors in related brain regions
Middle-aged females, especially perimenopausal females, are vulnerable to depression, but the potential mechanism remains unclear. Dopaminergic and GABAergic system dysfunction is involved in the pathophysiolo...
in Behavioural and Brain Functions on April 16, 2021 12:00 AM.
• #### Probabilistic value learning in medial temporal lobe amnesia
Abstract A prevailing view in cognitive neuroscience suggests that different forms of learning are mediated by dissociable memory systems, with a mesolimbic (i.e., midbrain and basal ganglia) system supporting incremental trial‐and‐error reinforcement learning and a hippocampal‐based system supporting episodic memory. Yet, growing evidence suggests that the hippocampus may also be important for trial‐and‐error learning, particularly value or reward‐based learning. In the present report, we use a lesion‐based neuropsychological approach to clarify hippocampal contributions to such learning. Six amnesic patients with medial temporal lobe damage and a group of healthy controls were administered a simple value‐based learning task involving probabilistic trial‐and‐error acquisition of stimulus–response‐outcome (reward or none) contingencies modeled after Li et al. (Proceedings of the National Academy of Sciences , 2011, 108 (1), 55–60). As predicted, patients were significantly impaired on the task, demonstrating reduced learning of the contingencies. Our results provide further supportive evidence that the hippocampus' role in cognition extends beyond episodic memory tasks and call for further refinement of theoretical models of hippocampal functioning.
in Hippocampus on April 15, 2021 04:42 PM.
• #### Dorsal and ventral mossy cells differ in their axonal projections throughout the dentate gyrus of the mouse hippocampus
Abstract Glutamatergic hilar mossy cells (MCs) have axons that terminate both near and far from their cell body but stay within the DG, making synapses primarily in the molecular layer. The long‐range axons are considered the primary projection, and extend throughout the DG ipsilateral to the soma, and project to the contralateral DG. The specificity of MC axons for the inner molecular layer (IML) has been considered to be a key characteristic of the DG. In the present study, we made the surprising finding that dorsal MC axons are an exception to this rule. We used two mouse lines that allow for Cre‐dependent viral labeling of MCs and their axons: dopamine receptor D2 (Drd2‐Cre) and calcitonin receptor‐like receptor (Crlr‐Cre). A single viral injection into the dorsal DG to label dorsal MCs resulted in labeling of MC axons in both the IML and middle molecular layer (MML). Interestingly, this broad termination of dorsal MC axons occurred throughout the septotemporal DG. In contrast, long‐range axons of ventral MCs terminated in the IML, consistent with the literature. Taken together, these results suggest that dorsal and ventral MCs differ significantly in their axonal projections. Since MC projections in the ML are thought to terminate primarily on GCs, the results suggest a dorsal–ventral difference in MC activation of GCs. The surprising difference in dorsal and ventral MC projections should therefore be considered when evaluating dorsal–ventral differences in DG function.
in Hippocampus on April 15, 2021 04:42 PM.
• #### Sodium salicylate enhances neural excitation via reducing GABAergic transmission in the dentate gyrus area of rat hippocampus in vivo
Abstract Sodium salicylate, one of the non‐steroidal anti‐inflammatory drugs, is widely prescribed in the clinic, but a high dose of usage can cause hyperactivity in the central nervous system, including the hippocampus. At present, the neural mechanism underlying the induced hyperactivity is not fully understood, in particular, in the hippocampus under an in vivo condition. In this study, we found that systemic administration of sodium salicylate increased the field excitatory postsynaptic potential slope and the population spike amplitude in a dose‐dependent manner in the hippocampal dentate gyrus area of rats with in vivo field potential extracellular recordings, which indicates that sodium salicylate enhances basal synaptic transmission and neural excitation. In the presence of picrotoxin, a GABA‐A receptor antagonist, sodium salicylate failed to increase the initial slope of the field excitatory postsynaptic potential and the amplitude of the population spike in vivo. To further explore how sodium salicylate enhances the neural excitation, we made whole‐cell patch‐clamp recordings from hippocampal slices. We found that perfusion of the slice with sodium salicylate decreased electrically evoked GABA receptor‐mediated currents, increased paired‐pulse ratio, and lowered frequency and amplitude of miniature inhibitory postsynaptic currents. Together, these results demonstrate that sodium salicylate enhances the neural excitation through suppressing GABAergic synaptic transmission in presynaptic and postsynaptic mechanisms in the hippocampal dentate gyrus area. Our findings may help understand the side effects caused by sodium salicylate in the central nervous system.
in Hippocampus on April 15, 2021 04:42 PM.
• #### Hippocampal beta oscillations predict mouse object‐location associative memory performance
Abstract Memorizing the locations of environmental cues is crucial for survival and depends on the hippocampus. We recorded local field potentials (LFPs) from the hippocampus of freely moving mice during an object location task. The power of beta‐band (23–30 Hz) oscillations increased immediately before approaching objects in a memory‐encoding phase. The exploration‐induced beta oscillations gradually decreased during the memory‐encoding session. Mice that exhibited stronger beta oscillation power exhibited better performance in the subsequent memory‐retrieval test. These results suggest that beta oscillations in the hippocampal CA1 region are involved in the memory encoding of object‐location associations.
in Hippocampus on April 15, 2021 04:42 PM.
• #### Expression of reelin with age in the human hippocampal formation
Abstract Reelin plays a key role in neuronal migration and lamination in the cortex and hippocampus. Animal studies have shown that reelin expression decreases with age. The aim of this study was to evaluate the expression of reelin in all layers of the human hippocampal formation across three age groups. We used immunohistochemistry in formalin fixed and paraffin embedded hippocampal tissue from infants (1–10 months; n = 9), children (4–10 years; n = 4), and adults (45–60 years; n = 6) to stain for reelin. Expression was quantified (measured as the number of positive reelin cells/mm2) in the granule cell layer of the dentate gyrus (DG), the molecular layer of the dentate gyrus (ML), the hippocampal fissure (HF), stratum lacunosum moleculare (SLM), CA4/Hilus and the stratum pyramidale layer of CA3, CA2, and CA1. Expression of reelin was highest in the HF irrespective of age, followed by the SLM and ML. Minimal to no expression was seen in the stratum pyramidale layer of CA1‐3. With age, reelin expression decreased and was statistically significant from infancy to childhood in the HF (p = .02). This study confirms that reelin expression decreases with age in the human hippocampus, and shows for the first time that the major decrease occurs between infancy and early childhood.
in Hippocampus on April 15, 2021 04:42 PM.
• #### Cross‐regional phase amplitude coupling supports the encoding of episodic memories
Abstract Phase amplitude coupling (PAC) between theta and gamma oscillations represents a key neurophysiological mechanism that promotes the temporal organization of oscillatory activity. For this reason, PAC has been implicated in item/context integration for episodic processes, including coordinating activity across multiple cortical regions. While data in humans has focused principally on PAC within a single brain region, data in rodents has revealed evidence that the phase of the hippocampal theta oscillation modulates gamma oscillations in the cortex (and vice versa). This pattern, termed cross‐regional PAC (xPAC), has not previously been observed in human subjects engaged in mnemonic processing. We use a unique dataset with intracranial electrodes inserted simultaneously into the hippocampus and seven cortical regions across 40 human subjects to (1) test for the presence of significant cross‐regional PAC (xPAC), (2) to establish that the magnitude of xPAC predicts memory encoding success, (3) to describe specific frequencies within the broad 2–9 Hz theta range that govern hippocampal‐cortical interactions in xPAC, and (4) compare anterior versus posterior hippocampal xPAC patterns. We find that strong functional xPAC occurs principally between the hippocampus and other mesial temporal structures, namely entorhinal and parahippocampal cortices, and that xPAC is overall stronger for posterior hippocampal connections. We also show that our results are not confounded by alternative factors such as inter‐regional phase synchrony, local PAC occurring within cortical regions, or artifactual theta oscillatory waveforms.
in Hippocampus on April 15, 2021 04:42 PM.
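Cross-regional PAC of the kind described above is commonly quantified by taking the low-frequency phase from one region and the high-frequency amplitude envelope from another and combining them into a modulation index. Below is a minimal sketch of one common variant (the mean-vector-length index, computed via Hilbert transforms); the band edges, sampling rate, and synthetic signals are assumptions for illustration only, not the authors' exact pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def cross_regional_pac(lfp_phase_region, lfp_amp_region, fs,
                       phase_band=(2.0, 9.0), amp_band=(30.0, 80.0)):
    """Mean-vector-length PAC: low-frequency (theta) phase from one region,
    e.g. hippocampus, modulating high-frequency (gamma) amplitude in another,
    e.g. a cortical site. Returns a non-negative coupling strength."""
    phase = np.angle(hilbert(bandpass(lfp_phase_region, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(lfp_amp_region, *amp_band, fs)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

# Toy usage: two synthetic 10-s LFPs sampled at 1 kHz, with cortical gamma
# amplitude deliberately modulated by the hippocampal 6 Hz rhythm.
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
hippocampus = np.sin(2 * np.pi * 6 * t) + 0.1 * np.random.randn(t.size)
cortex = (1 + np.sin(2 * np.pi * 6 * t)) * np.sin(2 * np.pi * 50 * t)
print(cross_regional_pac(hippocampus, cortex, fs))
```

In practice the raw index is usually compared against surrogate (time-shifted) data to control for spurious coupling; that step is omitted here for brevity.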
• #### Influence of regional white matter hyperintensity volume and apolipoprotein E ε4 status on hippocampal volume in healthy older adults
Abstract While total white matter hyperintensity (WMH) volume on magnetic resonance imaging (MRI) has been associated with hippocampal atrophy, less is known about how the regional distribution of WMH volume may differentially affect the hippocampus in healthy aging. Additionally, apolipoprotein E (APOE) ε4 carriers may be at an increased risk for greater WMH volumes and hippocampal atrophy in aging. The present study sought to investigate whether regional WMH volume mediates the relationship between age and hippocampal volume and if this association is moderated by APOE ε4 status in a group of 190 cognitively healthy adults (APOE ε4 status [carrier/non‐carrier] = 59/131), ages 50–89. Analyses revealed that temporal lobe WMH volume significantly mediated the relationship between age and average bilateral hippocampal volume, and this effect was moderated by APOE ε4 status (−0.020 (SE = 0.009), 95% CI, [−0.039, −0.003]). APOE ε4 carriers, but not non‐carriers, showed negative indirect effects of age on hippocampal volume through temporal lobe WMH volume (APOE ε4 carriers: −0.016 (SE = 0.007), 95% CI, [−0.030, −0.003]; APOE ε4 non‐carriers: .005 (SE = 0.006), 95% CI, [−0.006, 0.017]). These findings remained significant after additionally adjusting for sex, years of education, hypertension status and duration, cholesterol status, diabetes status, Body Mass Index, history of smoking, and the Wechsler Adult Intelligence Scale‐IV Full Scale IQ. There were no significant moderated mediation effects for frontal, parietal, and occipital lobe WMH volumes, with or without covariates. Our findings indicate that in cognitively healthy older adults, elevated WMH volume regionally localized to the temporal lobes in APOE ε4 carriers is associated with reduced hippocampal volume, suggesting greater vulnerability to brain aging and the risk for Alzheimer's disease.
in Hippocampus on April 15, 2021 04:42 PM.
• #### Issue Information ‐ TOC
Hippocampus, Volume 31, Issue 5, Page C4-C4, May 2021.
in Hippocampus on April 15, 2021 04:42 PM.
• #### Issue Information ‐ Editorial Board
Hippocampus, Volume 31, Issue 5, Page 459-459, May 2021.
in Hippocampus on April 15, 2021 04:42 PM.
• #### Linking statistical shape models and simulated function in the healthy adult human heart
by Cristobal Rodero, Marina Strocchi, Maciej Marciniak, Stefano Longobardi, John Whitaker, Mark D. O’Neill, Karli Gillette, Christoph Augustin, Gernot Plank, Edward J. Vigmond, Pablo Lamata, Steven A. Niederer
Cardiac anatomy plays a crucial role in determining cardiac function. However, there is a poor understanding of how specific and localised anatomical changes affect different cardiac functional outputs. In this work, we test the hypothesis that in a statistical shape model (SSM), the modes that are most relevant for describing anatomy are also most important for determining the output of cardiac electromechanics simulations. We made patient-specific four-chamber heart meshes (n = 20) from cardiac CT images in asymptomatic subjects and created a SSM from 19 cases. Nine modes captured 90% of the anatomical variation in the SSM. Functional simulation outputs correlated best with modes 2, 3 and 9 on average (R = 0.49 ± 0.17, 0.37 ± 0.23 and 0.34 ± 0.17 respectively). We performed a global sensitivity analysis to identify the different modes responsible for different simulated electrical and mechanical measures of cardiac function. Modes 2 and 9 were the most important for determining simulated left ventricular mechanics and pressure-derived phenotypes. Mode 2 explained 28.56 ± 16.48% and 25.5 ± 20.85%, and mode 9 explained 12.1 ± 8.74% and 13.54 ± 16.91% of the variances of mechanics and pressure-derived phenotypes, respectively. Electrophysiological biomarkers were explained by the interaction of 3 ± 1 modes. In the healthy adult human heart, shape modes that explain large portions of anatomical variance do not explain equivalent levels of electromechanical functional variation. As a result, in cardiac models, representing patient anatomy using a limited number of modes of anatomical variation can cause a loss in accuracy of simulated electromechanical function.
in PLoS Computational Biology on April 15, 2021 02:00 PM.
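The "shape modes" discussed above are, in the usual construction, principal components of aligned, point-correspondent meshes. The sketch below shows that generic PCA-based SSM recipe in Python; the random arrays stand in for real anatomies, and the mesh size, mode index, and 90% threshold are illustrative assumptions rather than the authors' pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Stand-in data: 19 rigidly aligned meshes in point-to-point correspondence,
# each with 5,000 3-D vertices (random numbers here; real heart meshes in the paper).
n_subjects, n_vertices = 19, 5000
meshes = rng.normal(size=(n_subjects, n_vertices, 3))

X = meshes.reshape(n_subjects, -1)     # one flattened shape vector per subject
ssm = PCA()                            # principal components = shape modes
ssm.fit(X)

cum_var = np.cumsum(ssm.explained_variance_ratio_)
n_modes = int(np.searchsorted(cum_var, 0.90)) + 1
print(f"{n_modes} modes capture 90% of the anatomical variance")

# Synthesize a new anatomy: mean shape displaced by +2 SD along "mode 2"
mode_idx = 1                           # zero-based index of mode 2
new_shape = (ssm.mean_
             + 2.0 * np.sqrt(ssm.explained_variance_[mode_idx])
             * ssm.components_[mode_idx])
new_mesh = new_shape.reshape(n_vertices, 3)
```

Sampling along individual modes in this way is what allows anatomy to be varied systematically before each electromechanics simulation in a sensitivity analysis.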
• #### Accurate contact-based modelling of repeat proteins predicts the structure of new repeats protein families
by Claudio Bassot, Arne Elofsson
Repeat proteins are abundant in eukaryotic proteomes. They are involved in many eukaryote-specific functions, including signalling. For many of these proteins, the structure is not known, as they are difficult to crystallise. Today, using direct coupling analysis and deep learning, it is often possible to predict a protein’s structure. However, the unique sequence features present in repeat proteins have made it challenging to use direct coupling analysis for predicting contacts. Here, we show that deep learning-based methods (trRosetta, DeepMetaPsicov (DMP) and PconsC4) overcome this problem and can predict intra- and inter-unit contacts in repeat proteins. In a benchmark dataset of 815 repeat proteins, about 90% can be correctly modelled. Further, among 48 PFAM families lacking a protein structure, we produce models of forty-one families with high estimated accuracy.
in PLoS Computational Biology on April 15, 2021 02:00 PM.
• #### Transmission delays and frequency detuning can regulate information flow between brain regions
by Aref Pariz, Ingo Fischer, Alireza Valizadeh, Claudio Mirasso
Brain networks exhibit very variable and dynamical functional connectivity and flexible configurations of information exchange despite their overall fixed structure. Brain oscillations are hypothesized to underlie time-dependent functional connectivity by periodically changing the excitability of neural populations. In this paper, we investigate the role of the connection delay and the detuning between the natural frequencies of neural populations in the transmission of signals. Based on numerical simulations and analytical arguments, we show that the amount of information transfer between two oscillating neural populations could be determined by their connection delay and the mismatch in their oscillation frequencies. Our results highlight the role of the collective phase response curve of the oscillating neural populations for the efficacy of signal transmission and the quality of the information transfer in brain networks.
in PLoS Computational Biology on April 15, 2021 02:00 PM.
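The core idea above — that a connection delay and a mismatch in natural frequencies jointly determine whether two oscillating populations can lock and exchange signals — can be caricatured with two delay-coupled phase oscillators. The sketch below is a deliberately reduced toy model, not the neural population model used in the paper; all parameter values (frequencies, coupling, delays) are assumptions for illustration.

```python
import numpy as np

def phase_drift(f1=40.0, f2=42.0, k=2.0, delay_ms=3.0, dt=1e-4, t_max=2.0):
    """Two delay-coupled phase oscillators as a caricature of two oscillating
    neural populations with a 2 Hz detuning. Returns the residual drift rate
    of their phase difference (rad/s) over the second half of the simulation;
    near-zero drift indicates phase locking, a prerequisite for reliable
    signal exchange between the populations."""
    n = int(t_max / dt)
    d = int(delay_ms * 1e-3 / dt)
    w = 2.0 * np.pi * np.array([f1, f2])          # natural (detuned) frequencies
    theta = np.zeros((n, 2))
    for i in range(1, n):
        j = max(i - d, 0)                          # index of the delayed phases
        coupling = np.array([
            np.sin(theta[j, 1] - theta[i - 1, 0]),
            np.sin(theta[j, 0] - theta[i - 1, 1]),
        ])
        theta[i] = theta[i - 1] + dt * (w + 2.0 * np.pi * k * coupling)
    diff = theta[:, 0] - theta[:, 1]
    return (diff[-1] - diff[n // 2]) / (t_max / 2.0)

# Scan how the connection delay changes the ability to lock at a fixed detuning
for delay in (0.0, 3.0, 6.0, 9.0):  # milliseconds
    print(f"delay {delay:4.1f} ms -> drift {phase_drift(delay_ms=delay):+.2f} rad/s")
```

Varying `delay_ms` and the detuning `f2 - f1` in this toy model reproduces the qualitative point of the abstract: the same coupling can either support or prevent locking depending on how delay and detuning combine.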
• #### Speech biomarkers in rapid eye movement sleep behaviour disorder and Parkinson's disease
Objective
This multi‐language study employed simple speech recording and high‐end pattern analysis to provide sensitive and reliable non‐invasive biomarkers of prodromal vs. manifest alpha‐synucleinopathy in patients with idiopathic rapid eye movement sleep behaviour disorder (iRBD) and early‐stage Parkinson's disease (PD).
Methods
We performed a multicentre study across the Czech, English, German, French and Italian languages at seven centres in Europe and North America. A total of 448 participants (337 males) including 150 with iRBD (mean duration of iRBD across language groups 0.5–3.4 years), 149 with PD (mean duration of disease across language groups 1.7–2.5 years), and 149 healthy controls were recorded; 350 of the participants completed the 12‐month follow‐up. We developed a fully‐automated acoustic quantitative assessment approach for the seven distinctive patterns of hypokinetic dysarthria.
Results
No language differences that impacted clinical parkinsonian phenotypes were found. Compared to the controls, we found significant abnormalities of an overall acoustic speech severity measure via composite dysarthria index for both iRBD (p = 0.002) and PD (p < 0.001). By contrast, only PD (p < 0.001) was perceptually distinct in a blinded subjective analysis. We found significant group differences between PD and controls for monopitch (p < 0.001), prolonged pauses (p < 0.001) and imprecise consonants (p = 0.03); only monopitch was able to differentiate iRBD patients from controls (p = 0.004). At the 12‐month follow‐up, a slight progression of overall acoustic speech impairment was noted for the iRBD (p = 0.04) and PD (p = 0.03) groups.
Interpretation
Automated speech analysis may provide a useful additional biomarker of parkinsonism for the assessment of disease progression and therapeutic interventions.
in Annals of Neurology on April 15, 2021 11:10 AM.
• #### Spatial extent of a single lipid's influence on bilayer mechanics
Author(s): Kayla C. Sapp, Andrew H. Beaven, and Alexander J. Sodt
To what spatial extent does a single lipid affect the mechanical properties of the membrane that surrounds it? The lipid composition of a membrane determines its mechanical properties. The shapes available to the membrane depend on its compositional material properties, and therefore, the lipid envi...
[Phys. Rev. E 103, 042413] Published Thu Apr 15, 2021
in Physical Review E: Biological physics on April 15, 2021 10:00 AM.
• #### Entropy estimation within in vitro neural-astrocyte networks as a measure of development instability
Author(s): Jacopo Teneggi, Xin Chen, Alan Balu, Connor Barrett, Giulia Grisolia, Umberto Lucia, and Rhonda Dzakpasu
The brain demands a significant fraction of the energy budget in an organism; in humans, it accounts for 2% of the body mass, but utilizes 20% of the total energy metabolized. This is due to the large load required for information processing; spiking demands from neurons are high but are a key compo...
[Phys. Rev. E 103, 042412] Published Thu Apr 15, 2021
in Physical Review E: Biological physics on April 15, 2021 10:00 AM.
• #### Peristalsis by pulses of activity
Author(s): N. Gorbushin and L. Truskinovsky
Peristalsis by actively generated waves of muscle contraction is one of the most fundamental ways of producing motion in living systems. We show that peristalsis can be modeled by a train of rectangular-shaped solitary waves of localized activity propagating through otherwise passive matter. Our ana...
[Phys. Rev. E 103, 042411] Published Thu Apr 15, 2021
in Physical Review E: Biological physics on April 15, 2021 10:00 AM.
• #### Controlling for activity‐dependent genes and behavioral states is critical for determining brain relationships within and across species
in Journal of Comparative Neurology on April 15, 2021 07:30 AM.
• #### Cerebrovascular Neuroprotection after Acute Concussion in Adolescents
in Annals of Neurology on April 15, 2021 03:24 AM.
• #### Rescue of maternal immune activation-induced behavioral abnormalities in adult mouse offspring by pathogen-activated maternal Treg cells
Nature Neuroscience, Published online: 15 April 2021; doi:10.1038/s41593-021-00837-1
Xu et al. developed and characterized a new animal model of maternal immune activation based on a parasite mimetic. They show that immune and behavioral abnormalities in adult offspring are reversed by adoptive transfer of regulatory T cells.
in Nature Neuroscience on April 15, 2021 12:00 AM.
• #### Remapping and realignment in the human hippocampal formation predict context-dependent spatial behavior
Nature Neuroscience, Published online: 15 April 2021; doi:10.1038/s41593-021-00835-3
Julian and Doeller show that trial-by-trial modulation of map-like representations in the human hippocampal–entorhinal system predicts contextual memory retrieval during virtual reality navigation independent of visual experience.
in Nature Neuroscience on April 15, 2021 12:00 AM.
• #### Phasor S-FLIM: a new paradigm for fast and robust spectral fluorescence lifetime imaging
Nature Methods, Published online: 15 April 2021; doi:10.1038/s41592-021-01108-4
Phasor S-FLIM combines novel electronics for multichannel fluorescence lifetime acquisition and a phasor-based unmixing algorithm for real-time analysis of reliable spectral lifetime imaging data, enabling new biological observations.
in Nature Methods on April 15, 2021 12:00 AM.
• #### PCprophet: a framework for protein complex prediction and differential analysis using proteomic data
Nature Methods, Published online: 15 April 2021; doi:10.1038/s41592-021-01107-5
PCprophet combines complex-level scoring and machine learning to predict novel protein complexes from protein cofractionation mass spectrometry data and to perform differential analysis across experimental conditions.
in Nature Methods on April 15, 2021 12:00 AM.
• #### Two views on the cognitive brain
Nature Reviews Neuroscience, Published online: 15 April 2021; doi:10.1038/s41583-021-00448-6
Neuroscience can explain cognition by considering single neurons and their connections (a ‘Sherringtonian’ view) or by considering neural spaces constructed by populations of neurons (a ‘Hopfieldian’ view). In this Perspective, Barack and Krakauer argue that the Hopfieldian view has the conceptual resources to explain cognition more fully than the Sherringtonian view.
in Nature Reviews on April 15, 2021 12:00 AM.
• #### Two-fold symmetric superconductivity in few-layer NbSe2
Nature Physics, Published online: 15 April 2021; doi:10.1038/s41567-021-01219-x
A two-fold rotational symmetry is observed in the superconducting state of NbSe2. This is strikingly different from the three-fold symmetry of the lattice, and suggests that a mixed conventional and unconventional order parameter exists in this material.
in Nature Physics on April 15, 2021 12:00 AM.
• #### One-dimensional Kronig–Penney superlattices at the LaAlO3/SrTiO3 interface
Nature Physics, Published online: 15 April 2021; doi:10.1038/s41567-021-01217-z
The two-dimensional electron gas at an oxide interface is patterned to form a channel with a periodic potential imposed on top. This replicates the textbook Kronig–Penney model and leads to fractionalization of electron bands in the channel.
in Nature Physics on April 15, 2021 12:00 AM.
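For readers unfamiliar with the reference model, the textbook one-dimensional Kronig–Penney band condition can be evaluated numerically in a few lines. The sketch below uses the delta-function-barrier limit with an arbitrary dimensionless barrier strength P; it only illustrates the miniband/gap structure such a superlattice replicates and is not a model of the LaAlO3/SrTiO3 device itself.

```python
import numpy as np

def kronig_penney_bands(P=3.0, a=1.0, q_max=20.0, n_points=200_000):
    """Allowed energy bands of the textbook 1-D Kronig-Penney model with
    delta-function barriers of dimensionless strength P and period a.
    An energy E (entering through q = sqrt(2 m E) / hbar) is allowed when
    |cos(q a) + P sin(q a) / (q a)| <= 1, i.e. when a real Bloch wavevector
    exists. Returns (q_start, q_end) pairs for each band, in units of 1/a."""
    q = np.linspace(1e-6, q_max, n_points)
    f = np.cos(q * a) + P * np.sin(q * a) / (q * a)
    allowed = np.abs(f) <= 1.0
    edges = np.flatnonzero(np.diff(allowed.astype(int))) + 1
    if allowed[0]:
        edges = np.insert(edges, 0, 0)
    if allowed[-1]:
        edges = np.append(edges, n_points - 1)
    return q[edges].reshape(-1, 2)

for lo, hi in kronig_penney_bands():
    print(f"allowed band: q*a in [{lo:6.3f}, {hi:6.3f}]")
```

Increasing P narrows the allowed bands and widens the gaps, which is the fractionalization of electron bands that the patterned periodic potential imposes on the channel.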
• #### On-chip sampling of optical fields with attosecond resolution
Nature Photonics, Published online: 15 April 2021; doi:10.1038/s41566-021-00792-0
An on-chip, sub-optical-cycle sampling technique for measuring arbitrary electric fields of few-femtojoule near-infrared optical pulses in ambient conditions is demonstrated, offering an improvement of roughly six orders of magnitude in energy sensitivity compared with previous work in the near-infrared.
in Nature Photomics on April 15, 2021 12:00 AM.
• #### Publisher Correction: A single-dose mRNA vaccine provides a long-term protection for hACE2 transgenic mice from SARS-CoV-2
Nature Communications, Published online: 15 April 2021; doi:10.1038/s41467-021-22820-x
Publisher Correction: A single-dose mRNA vaccine provides a long-term protection for hACE2 transgenic mice from SARS-CoV-2
in Nature Communications on April 15, 2021 12:00 AM.
• #### Author Correction: A model for the fragmentation kinetics of crumpled thin sheets
Nature Communications, Published online: 15 April 2021; doi:10.1038/s41467-021-22791-z
Author Correction: A model for the fragmentation kinetics of crumpled thin sheets
in Nature Communications on April 15, 2021 12:00 AM.
• #### Selenoprotein W ensures physiological bone remodeling by preventing hyperactivity of osteoclasts
Nature Communications, Published online: 15 April 2021; doi:10.1038/s41467-021-22565-7
Selenoproteins containing selenium have a variety of physiological functions including redox homeostasis and thyroid hormone metabolism. Here, the authors show that RANKL-dependent repression of selenoprotein W regulates cell fusion during osteoclast differentiation and bone remodelling in mice.
in Nature Communications on April 15, 2021 12:00 AM.
• #### Microglial neuropilin-1 promotes oligodendrocyte expansion during development and remyelination by trans-activating platelet-derived growth factor receptor
Nature Communications, Published online: 15 April 2021; doi:10.1038/s41467-021-22532-2
Oligodendrocyte precursor cell (OPC) proliferation and differentiation is greater in white matter than gray matter. Here, the authors show that regulation of OPC proliferation in white matter involves trans-activation of PDGFRα on OPCs via Nrp1 expressed by adjacent microglia.
in Nature Communications on April 15, 2021 12:00 AM.
• #### Daily briefing: Video guide to the science of coronavirus variants
Nature, Published online: 15 April 2021; doi:10.1038/d41586-021-01017-8
SARS-CoV-2 variants and the future of the pandemic, the tough problem of space junk and India’s COVID-vaccine crunch is putting global supplies at risk.
in Nature on April 15, 2021 12:00 AM.
• #### First monkey–human embryos reignite debate over hybrid animals
Nature, Published online: 15 April 2021; doi:10.1038/d41586-021-01001-2
The chimaeras lived up to 19 days — but some scientists question the need for such research.
in Nature on April 15, 2021 12:00 AM.
• #### India’s COVID-vaccine woes — by the numbers
Nature, Published online: 15 April 2021; doi:10.1038/d41586-021-00996-y
How an explosion of coronavirus cases in India is putting global vaccine supplies at risk.
in Nature on April 15, 2021 12:00 AM.
• #### The Cerebellum
in The Cerebellum on April 15, 2021 12:00 AM.
### Abstract
Emerging evidence suggests that the cerebellum may contribute to a variety of cognitive capacities, including social cognition. Nonverbal learning disability (NVLD) is characterized by visual-spatial and social impairment. Recent functional neuroimaging studies have shown that children with NVLD have altered cerebellar resting-state functional connectivity, which is associated with various symptom domains. However, little is known about cerebellar white matter microstructure in NVLD and whether it contributes to social deficits. Twenty-seven children (12 with NVLD, 15 typically developing (TD)) contributed useable diffusion tensor imaging data. Tract-based spatial statistics (TBSS) were used to quantify fractional anisotropy (FA) in the cerebellar peduncles. Parents completed the Child Behavior Checklist, providing a measure of social difficulty. Children with NVLD had greater fractional anisotropy in the left and right inferior cerebellar peduncle. Furthermore, right inferior cerebellar peduncle FA was associated with social impairment as measured by the Child Behavior Checklist Social Problems subscale. Finally, the association between NVLD diagnosis and greater social impairment was mediated by right inferior cerebellar peduncle FA. These findings provide additional evidence that the cerebellum contributes both to social cognition and to the pathophysiology of NVLD.
in The Cerebellum on April 15, 2021 12:00 AM.
### Background
Hashimoto’s encephalopathy with serum anti-NH2-terminal of α-enolase (NAE) antibodies occasionally displays clinical symptoms such as cerebellar ataxia and parkinsonism. We studied the frequency of anti-NAE antibodies in patients with Parkinson-plus syndrome.
### Methods
We examined the positive rates of anti-NAE antibodies in 47 patients with multiple system atrophy (MSA), 29 patients with Parkinson’s disease (PD), eight patients with corticobasal syndrome (CBS), and 18 patients with progressive supranuclear palsy (PSP) using conventional immunoblot analysis.
### Results
Positive anti-NAE antibody rates of 31.9%, 10.3%, 50.0%, and 11.1% were reported in the MSA, PD, CBS, and PSP patients, respectively. The duration from onset to a wheelchair-bound state in seropositive MSA patients tended to be shorter than that in seronegative MSA patients.
### Conclusions
Anti-NAE antibodies are detected in some patients clinically diagnosed with MSA and CBS. Although its pathophysiological significance remains uncertain, serum anti-NAE antibodies might represent a prognostic marker in the clinical course of MSA.
in Journal of Neurology on April 15, 2021 12:00 AM.
• #### Motility Profile of Captive-Bred Marmosets Revealed by a Long-Term In-Cage Monitoring System
A quantitative evaluation of motility is crucial for studies employing experimental animals. Here, we describe the development of an in-cage motility monitoring method for new world monkeys using off-the-shelf components, and demonstrate its capability for long-term operation (e.g., a year). Based on this novel system, we characterized the motility of the common marmoset over different time scales (seconds, hours, days, and weeks). Monitoring of seven young animals belonging to two different age groups (sub-adult and young-adult) over a 231-day period revealed: (1) strictly diurnal activity (97.3% of movement during daytime), (2) short-cycle (∼20 s) transition in activity, and (3) bimodal diurnal activity including a “siesta” break. Additionally, while the mean duration of short-cycle activity, net daily activity, and diurnal activity changed over the course of development, 24-h periodicity remained constant. Finally, the method allowed for detection of progressive motility deterioration in a transgenic marmoset. Motility measurement offers a convenient way to characterize developmental and pathological changes in animals, as well as an economical and labor-free means for long-term evaluation in a wide range of basic and translational studies.
in Frontiers in Systems Neuroscience on April 15, 2021 12:00 AM.
• #### The Forward Model: A Unifying Theory for the Role of the Cerebellum in Motor Control and Sense of Agency
For more than two decades, there has been converging evidence for an essential role of the cerebellum in non-motor functions. The cerebellum is not only important in learning and sensorimotor processes; growing evidence also implicates it in conditional learning and reward, which allow us to build expectations about behavioral outcomes. More recent work has demonstrated that the cerebellum is also required for the sense of agency, a cognitive process that allows recognizing an action as our own, suggesting that the cerebellum might serve as an interface between sensorimotor function and cognition. A unifying model that would explain the role of the cerebellum across these processes has not been fully established. Nonetheless, an important legacy comes from the field of motor control: the forward model theory. This theory stipulates that movements are controlled through constant interactions between the organism and its environment via feedforward and feedback loops. Feedforward loops predict what is going to happen, while feedback loops confront the prediction with what happened so that we can react accordingly. From an anatomical point of view, the cerebellum is at an ideal location at the interface between the motor and sensory systems, as it is connected to cerebral, striatal, and spinal entities via parallel loops, so that it can link sensory and motor systems with cognitive processes. Recent findings showing that the cerebellum participates in building the sense of agency as a predictive and comparator system will be reviewed together with past work on motor control within the context of the forward model theory.
in Frontiers in Systems Neuroscience on April 15, 2021 12:00 AM.
• #### Naturalistic Spike Trains Drive State-Dependent Homeostatic Plasticity in Superficial Layers of Visual Cortex
The history of neural activity determines the synaptic plasticity mechanisms employed in the brain. Previous studies report a rapid reduction in the strength of excitatory synapses onto layer 2/3 (L2/3) pyramidal neurons of the primary visual cortex (V1) following two days of dark exposure and subsequent re-exposure to light. The abrupt increase in visually driven activity is predicted to drive homeostatic plasticity, however, the parameters of neural activity that trigger these changes are unknown. To determine this, we first recorded spike trains in vivo from V1 layer 4 (L4) of dark exposed (DE) mice of both sexes that were re-exposed to light through homogeneous or patterned visual stimulation. We found that delivering the spike patterns recorded in vivo to L4 of V1 slices was sufficient to reduce the amplitude of miniature excitatory postsynaptic currents (mEPSCs) of V1 L2/3 neurons in DE mice, but not in slices obtained from normal reared (NR) controls. Unexpectedly, the same stimulation pattern produced an up-regulation of mEPSC amplitudes in V1 L2/3 neurons from mice that received 2 h of light re-exposure (LE). A Poisson spike train exhibiting the same average frequency as the patterns recorded in vivo was equally effective at depressing mEPSC amplitudes in L2/3 neurons in V1 slices prepared from DE mice. Collectively, our results suggest that the history of visual experience modifies the responses of V1 neurons to stimulation and that rapid homeostatic depression of excitatory synapses can be driven by non-patterned input activity.
in Frontiers in Synaptic Neuroscience on April 15, 2021 12:00 AM.
• #### Myosin V Regulates Spatial Localization of Different Forms of Neurotransmitter Release in Central Synapses
Synaptic active zone (AZ) contains multiple specialized release sites for vesicle fusion. The utilization of release sites is regulated to determine spatiotemporal organization of the two main forms of synchronous release, uni-vesicular (UVR) and multi-vesicular (MVR). We previously found that the vesicle-associated molecular motor myosin V regulates temporal utilization of release sites by controlling vesicle anchoring at release sites in an activity-dependent manner. Here we show that acute inhibition of myosin V shifts preferential location of vesicle docking away from AZ center toward periphery, and results in a corresponding spatial shift in utilization of release sites during UVR. Similarly, inhibition of myosin V also reduces preferential utilization of central release sites during MVR, leading to more spatially distributed and temporally uniform MVR that occurs farther away from the AZ center. Using a modeling approach, we provide a conceptual framework that unites spatial and temporal functions of myosin V in vesicle release by controlling the gradient of release site release probability across the AZ, which in turn determines the spatiotemporal organization of both UVR and MVR. Thus myosin V regulates both temporal and spatial utilization of release sites during two main forms of synchronous release.
in Frontiers in Synaptic Neuroscience on April 15, 2021 12:00 AM.
• #### Visual Outcomes in Experimental Rodent Models of Blast-Mediated Traumatic Brain Injury
Blast-mediated traumatic brain injuries (bTBI) cause long-lasting physical, cognitive, and psychological disorders, including persistent visual impairment. No known therapies are currently utilized in humans to lessen the lingering and often serious symptoms. With TBI mortality decreasing due to advancements in medical and protective technologies, there is growing interest in understanding the pathology of visual dysfunction after bTBI. However, this is complicated by numerous variables, e.g., injury location, severity, and head and body shielding. This review summarizes the visual outcomes observed by various, current experimental rodent models of bTBI, and identifies data showing that bTBI activates inflammatory and apoptotic signaling leading to visual dysfunction. Pharmacologic treatments blocking inflammation and cell death pathways reported to alleviate visual deficits in post-bTBI animal models are discussed. Notably, techniques for assessing bTBI outcomes across exposure paradigms differed widely, so we urge future studies to compare multiple models of blast injury, to allow data to be directly compared.
in Frontiers in Molecular Neuroscience on April 15, 2021 12:00 AM.
• #### A Metabolic Landscape for Maintaining Retina Integrity and Function
Neurons have high metabolic demands that are almost exclusively met by glucose supplied from the bloodstream. Glucose is utilized in complex metabolic interactions between neurons and glia cells, described by the astrocyte-neuron lactate shuttle (ANLS) hypothesis. The neural retina faces similar energy demands to the rest of the brain, with additional high anabolic needs to support continuous renewal of photoreceptor outer segments. This demand is met by a fascinating variation of the ANLS in which photoreceptors are the central part of a metabolic landscape, using glucose and supplying surrounding cells with metabolic intermediates. In this review, we summarize recent evidence on how neurons, in particular photoreceptors, meet their energy and biosynthetic requirements by forming a metabolic landscape of interdependent cells.
in Frontiers in Molecular Neuroscience on April 15, 2021 12:00 AM.
• #### Advances in Proteomics Allow Insights Into Neuronal Proteomes
Protein–protein interaction networks and signaling complexes are essential for normal brain function and are often dysregulated in neurological disorders. Nevertheless, unraveling neuron- and synapse-specific protein interaction networks has remained a technical challenge. New techniques, however, have allowed for high-resolution and high-throughput analyses, enabling quantification and characterization of various neuronal protein populations. Over the last decade, mass spectrometry (MS) has surfaced as the primary method for analyzing multiple protein samples in tandem, allowing for the precise quantification of proteomic data. Moreover, the development of sophisticated protein-labeling techniques has given MS a high temporal and spatial resolution, facilitating the analysis of various neuronal substructures, cell types, and subcellular compartments. Recent studies have leveraged these novel techniques to reveal the proteomic underpinnings of well-characterized neuronal processes, such as axon guidance, long-term potentiation, and homeostatic plasticity. Translational MS studies have facilitated a better understanding of complex neurological disorders, such as Alzheimer’s disease (AD), Schizophrenia (SCZ), and Autism Spectrum Disorder (ASD). Proteomic investigation of these diseases has not only given researchers new insight into disease mechanisms but has also been used to validate disease models and identify new targets for research.
in Frontiers in Molecular Neuroscience on April 15, 2021 12:00 AM.
• #### Depletion of TrkB Receptors From Adult Serotonergic Neurons Increases Brain Serotonin Levels, Enhances Energy Metabolism and Impairs Learning and Memory
Neurotrophin brain-derived neurotrophic factor (BDNF) and neurotransmitter serotonin (5-HT) regulate each other and have been implicated in several neuronal mechanisms, including neuroplasticity. We have investigated the effects of BDNF on serotonergic neurons by deleting the BDNF receptor TrkB from serotonergic neurons in the adult brain. The transgenic mice show increased 5-HT and Tph2 levels with an abnormal behavioral phenotype. In spite of increased food intake, the transgenic mice are significantly leaner than their wildtype littermates, which may be due to increased metabolic activity. Consistent with increased 5-HT, the proliferation of hippocampal progenitors is significantly increased; however, long-term survival of newborn cells is unchanged. Our data indicate that BDNF-TrkB signaling regulates the functional phenotype of 5-HT neurons with long-term behavioral consequences.
in Frontiers in Molecular Neuroscience on April 15, 2021 12:00 AM.
• #### Management and Quality Control of Large Neuroimaging Datasets: Developments From the Barcelonaβeta Brain Research Center
Recent decades have witnessed an increasing number of large to very large imaging studies, prominently in the field of neurodegenerative diseases. The datasets collected during these studies form essential resources for the research aiming at new biomarkers. Collecting, hosting, managing, processing, or reviewing those datasets is typically achieved through a local neuroinformatics infrastructure. In particular for organizations with their own imaging equipment, setting up such a system is still a hard task, and relying on cloud-based solutions, albeit promising, is not always possible. This paper proposes a practical model guided by core principles including user involvement, lightweight footprint, modularity, reusability, and facilitated data sharing. This model is based on the experience from an 8-year-old research center managing cohort research programs on Alzheimer’s disease. Such a model gave rise to an ecosystem of tools aiming at improved quality control through seamless automatic processes combined with a variety of code libraries, command line tools, graphical user interfaces, and instant messaging applets. The present ecosystem was shaped around XNAT and is composed of independently reusable modules that are freely available on GitLab/GitHub. This paradigm is scalable to the general community of researchers working with large neuroimaging datasets.
in Frontiers in Neuroscience: Brain Imaging Methods on April 15, 2021 12:00 AM.
• #### Contextual Cueing Accelerated and Enhanced by Monetary Reward: Evidence From Event-Related Brain Potentials
The vital role of reward in guiding visual attention has been supported by previous literature. Here, we examined the motivational impact of monetary reward feedback stimuli on visual attention selection using an event-related potential (ERP) component called stimulus-preceding negativity (SPN) and a standard contextual cueing (CC) paradigm. It has been proposed that SPN reflects affective and motivational processing. We focused on whether incidentally learned context knowledge could be affected by reward. Both behavioral and brain data demonstrated that contexts followed by reward feedback not only gave rise to faster implicit learning but also produced a larger CC effect.
in Frontiers in Human Neuroscience on April 15, 2021 12:00 AM.
• #### Hemodynamic Signal Changes During Motor Imagery Task Performance Are Associated With the Degree of Motor Task Learning
Purpose
This study aimed to investigate whether oxygenated hemoglobin (oxy-Hb) generated during a motor imagery (MI) task is associated with the motor learning level of the task.
Methods
We included 16 right-handed healthy participants who were trained to perform a ball rotation (BR) task. Hemodynamic brain activity was measured using near-infrared spectroscopy to monitor changes in oxy-Hb concentration during the BR MI task. The experimental protocol used a block design, and measurements were performed three times before and after the initial training of the BR task as well as after the final training. The BR count during training was also measured. Furthermore, subjective vividness of MI was evaluated three times after NIRS measurement using the Visual Analog Scale (VAS).
Results
The results showed that the number of BRs increased significantly with training (P < 0.001). VAS scores also improved with training (P < 0.001). Furthermore, oxy-Hb concentration and the region of interest (ROI) showed a main effect (P = 0.001). An interaction was confirmed (P < 0.001), and it was ascertained that the change in oxy-Hb concentrations due to training was different for each ROI. The most significant predictor of subjective MI vividness was supplementary motor area (SMA) oxy-Hb concentration (coefficient = 0.365).
Discussion
Hemodynamic brain activity during MI tasks may be correlated with task motor learning levels, since significant changes in oxy-Hb concentrations were observed following initial and final training in the SMA. In particular, hemodynamic brain activity in the SMA was suggested to reflect the MI vividness of participants.
in Frontiers in Human Neuroscience on April 15, 2021 12:00 AM.
• #### Striatal Dopamine Transporter Function Is Facilitated by Converging Biology of α-Synuclein and Cholesterol
Striatal dopamine transporters (DAT) powerfully regulate dopamine signaling, and can contribute risk to degeneration in Parkinson’s disease (PD). DATs can interact with the neuronal protein α-synuclein, which is associated with the etiology and molecular pathology of idiopathic and familial PD. Here, we tested whether DAT function in governing dopamine (DA) uptake and release is modified in a human-α-synuclein-overexpressing (SNCA-OVX) transgenic mouse model of early PD. Using fast-scan cyclic voltammetry (FCV) in ex vivo acute striatal slices to detect DA release, and biochemical assays, we show that several aspects of DAT function are promoted in SNCA-OVX mice. Compared to background control α-synuclein-null mice (Snca-null), the SNCA-OVX mice have elevated DA uptake rates, and more pronounced effects of DAT inhibitors on evoked extracellular DA concentrations ([DA]o) and on short-term plasticity (STP) in DA release, indicating DATs play a greater role in limiting DA release and in driving STP. We found that DAT membrane levels and radioligand binding sites correlated with α-synuclein level. Furthermore, DAT function in Snca-null and SNCA-OVX mice could also be promoted by applying cholesterol, and using ToF-SIMS we found genotype differences in striatal lipids, with lower striatal cholesterol in SNCA-OVX mice. An inhibitor of cholesterol efflux transporter ABCA1 or a cholesterol chelator in SNCA-OVX mice reduced the effects of DAT inhibitors on evoked [DA]o. Together, these data indicate that human α-synuclein in a mouse model of PD promotes striatal DAT function, in a manner supported by extracellular cholesterol, suggesting converging biology of α-synuclein and cholesterol that regulates DAT function and could impact DA function and PD pathophysiology.
in Frontiers in Cellular Neuroscience on April 15, 2021 12:00 AM.
• #### Nasal Delivery of D-Penicillamine Hydrogel Upregulates a Disintegrin and Metalloprotease 10 Expression via Melatonin Receptor 1 in Alzheimer’s Disease Models
Alzheimer’s disease (AD) is a type of neurodegenerative disease that is associated with the accumulation of amyloid plaques. Increasing non-amyloidogenic processing and/or manipulating amyloid precursor protein signaling could reduce AD amyloid pathology and cognitive impairment. D-penicillamine (D-Pen) is a water-soluble metal chelator and can reduce the aggregation of amyloid-β (Aβ) with metals in vitro. However, the potential mechanism of D-Pen for treating neurodegenerative disorders remains unexplored. Here, a novel chitosan-based hydrogel to carry D-Pen was designed, and the D-Pen-CS/β-glycerophosphate hydrogel was characterized by scanning electron microscopy and HPLC. Behavioral tests assessed learning and memory in APP/PS1 mice treated with nasally delivered D-Pen hydrogel. In vivo and in vitro findings showed that nasally delivered D-Pen-CS/β-GP hydrogel properly chelated metal ions and reduced Aβ deposition. Furthermore, D-Pen mainly regulated A disintegrin and metalloprotease 10 (ADAM10) expression via melatonin receptor 1 (MTNR1α) and the downstream PKA/ERK/CREB pathway. The present data demonstrated that D-Pen significantly improved the cognitive ability of APP/PS1 mice and reduced Aβ generation through activating ADAM10 and accelerating non-amyloidogenic processing. Hence, these findings indicate the potential of D-Pen as a promising agent for treating AD.
in Frontiers in Ageing Neuroscience on April 15, 2021 12:00 AM.
• #### Neuromodulation Effect of Very Low Intensity Transcranial Ultrasound Stimulation on Multiple Nuclei in Rat Brain
Objective
Low-intensity transcranial ultrasound stimulation (TUS) is a non-invasive neuromodulation technique with high spatial resolution and feasible penetration depth. To date, the mechanisms by which TUS modulates neural oscillations are not fully understood. This study designed a very low acoustic intensity (AI) TUS system that produces ultrasound pulses of considerably reduced intensity (ISPTA < 0.5 W/cm²) compared to previous methods, and used it to measure regional neural oscillation patterns under different TUS parameters.
Methods
We recorded the local field potential (LFP) of five brain nuclei under TUS with three groups of stimulation parameters. Spectrum estimation, time-frequency analysis (TFA), and relative power analysis methods were applied to investigate neural oscillation patterns under different stimulation parameters.
Results
Under PRF 500 Hz and 1 kHz TUS, high-amplitude LFP activity with an auto-rhythmic pattern appeared in selected nuclei when ISPTA exceeded 12 mW/cm². With TFA, high-frequency energy (slow gamma and high gamma) was significantly increased during the auto-rhythmic patterns. We observed an initial plateau in nuclei response when ISPTA reached 16.4 mW/cm² for PRF 500 Hz and 20.8 mW/cm² for PRF 1 kHz. The number of responding nuclei started decreasing while ISPTA continued increasing. Under 1.5 kHz TUS, no auto-rhythmic patterns were observed, but slow-frequency power was increased during TUS. TUS inhibited most of the frequency band and generated obvious slow waves (theta and delta bands) when stimulated at PRF = 1.5 kHz, ISPTA = 8.8 mW/cm².
Conclusion
These results demonstrate that very low intensity transcranial ultrasound stimulation (VLTUS) exerts significant neuromodulatory effects under specific parameters in rat models and may be a valid tool to study neuronal physiology.
in Frontiers in Ageing Neuroscience on April 15, 2021 12:00 AM.
• #### The Benefits of High-Intensity Interval Training on Cognition and Blood Pressure in Older Adults With Hypertension and Subjective Cognitive Decline: Results From the Heart & Mind Study
Background: The impact of exercise on cognition in older adults with hypertension and subjective cognitive decline (SCD) is unclear.
Objectives: We determined the influence of high-intensity interval training (HIIT) combined with mind-motor training on cognition and systolic blood pressure (BP) in older adults with hypertension and SCD.
Methods: We randomized 128 community-dwelling older adults [age mean (SD): 71.1 (6.7), 47.7% females] with history of hypertension and SCD to either HIIT or a moderate-intensity continuous training (MCT) group. Both groups received 15 min of mind-motor training followed by 45 min of either HIIT or MCT. Participants exercised in total 60 min/day, 3 days/week for 6 months. We assessed changes in global cognitive functioning (GCF), Trail-Making Test (TMT), systolic and diastolic BP, and cardiorespiratory fitness.
Results: Participants in both groups improved diastolic BP [F(1, 87.32) = 4.392, p = 0.039], with greatest effect within the HIIT group [estimated mean change (95% CI): −2.64 mmHg, (−4.79 to −0.48), p = 0.017], but no between-group differences were noted (p = 0.17). Both groups also improved cardiorespiratory fitness [F(1, 69) = 34.795, p < 0.001], and TMT A [F(1, 81.51) = 26.871, p < 0.001] and B [F(1, 79.49) = 23.107, p < 0.001]. There were, however, no within- or between-group differences in GCF and systolic BP at follow-up.
Conclusion: Despite improvements in cardiorespiratory fitness, exercise of high- or moderate-intensity, combined with mind-motor training, did not improve GCF or systolic BP in individuals with hypertension and SCD.
Clinical Trial Registration:ClinicalTrials.gov (NCT03545958).
in Frontiers in Ageing Neuroscience on April 15, 2021 12:00 AM.
• #### Lower Plasma Total Testosterone Levels Were Associated With Steeper Decline in Brain Glucose Metabolism in Non-demented Older Men
Objective
There is growing evidence that testosterone may be implicated in the pathogenesis of Alzheimer’s disease (AD). We aimed to examine the relationship between plasma total testosterone levels and change in brain glucose metabolism over time among non-demented older people.
Methods
The association of plasma total testosterone levels with change in brain glucose metabolism among non-demented older people was investigated cross-sectionally and longitudinally. Given the significant difference in plasma total testosterone levels between the sexes, we performed our analysis in a sex-stratified manner. At baseline, 228 non-demented older people were included: 152 males and 76 females.
Results
In the cross-sectional analysis, no significant relationship between plasma total testosterone levels and brain glucose metabolism was found in males or females. In the longitudinal analysis, we found a significant association of plasma total testosterone levels with change in brain glucose metabolism over time in males, but not in females. More specifically, in males, higher levels of total testosterone in plasma at baseline were associated with slower decline in brain glucose metabolism.
Conclusion
We found that higher levels of total testosterone in plasma at baseline were associated with slower decline in brain glucose metabolism in males without dementia, indicating that testosterone may have beneficial effects on brain function.
in Frontiers in Ageing Neuroscience on April 15, 2021 12:00 AM.
• #### Molecular mechanisms of axo-axonic innervation
Publication date: August 2021
Source: Current Opinion in Neurobiology, Volume 69
Author(s): Fabrice Ango, Nicholas Biron Gallo, Linda Van Aelst
in Current Opinion in Neurobiology on April 14, 2021 06:00 PM.
• #### CX3CR1 mutation alters synaptic and astrocytic protein expression, topographic gradients, and response latencies in the auditory brainstem
Mice lacking the microglial receptor CX3CR1 display several phenotypes in the auditory brainstem. They have normal pruning in the medial nucleus of the trapezoid body (MNTB) and increased expression of mature astrocyte markers. They have normal auditory brainstem response (ABR) thresholds, but the ABR latencies are decreased. They show increased expression of inhibitory presynaptic terminal markers, possibly reflecting impaired inhibitory pruning. Finally, CX3CR1−/− mice lack the tonotopic gradients seen for cell body size and calyx of Held size in the MNTB.

Abstract: The precise and specialized circuitry in the auditory brainstem develops through adaptations of cellular and molecular signaling. We previously showed that elimination of microglia during development impairs synaptic pruning that leads to maturation of the calyx of Held, a large encapsulating synapse that terminates on neurons of the medial nucleus of the trapezoid body (MNTB). Microglia depletion also led to a decrease in glial fibrillary acidic protein (GFAP), a marker for mature astrocytes. Here, we investigated the role of signaling through the fractalkine receptor (CX3CR1), which is expressed by microglia and mediates communication with neurons. CX3CR1−/− and wild‐type mice were studied before and after hearing onset and at 9 weeks of age. Levels of GFAP were significantly increased in the MNTB in mutants at 9 weeks. Pruning was unaffected at the calyx of Held, but we found an increase in expression of a glycinergic synaptic marker in mutant mice at P14, suggesting an effect on maturation of inhibitory inputs. We observed disrupted tonotopic gradients of neuron and calyx size in MNTB in mutant mice. Auditory brainstem response (ABR) recordings revealed that CX3CR1−/− mice had normal thresholds and amplitudes but decreased latencies and interpeak latencies, particularly for the highest frequencies. These results demonstrate that disruption of fractalkine signaling has a significant effect on auditory brainstem development. Our findings highlight the importance of neuron–microglia–astrocyte communication in pruning of inhibitory synapses and establishment of tonotopic gradients early in postnatal development.
in Journal of Comparative Neurology on April 14, 2021 05:36 PM.
• #### VolPy: Automated and scalable analysis pipelines for voltage imaging datasets
by Changjia Cai, Johannes Friedrich, Amrita Singh, M. Hossein Eybposh, Eftychios A. Pnevmatikakis, Kaspar Podgorski, Andrea Giovannucci
Voltage imaging enables monitoring neural activity at sub-millisecond and sub-cellular scale, unlocking the study of subthreshold activity, synchrony, and network dynamics with unprecedented spatio-temporal resolution. However, high data rates (>800MB/s) and low signal-to-noise ratios create bottlenecks for analyzing such datasets. Here we present VolPy, an automated and scalable pipeline to pre-process voltage imaging datasets. VolPy features motion correction, memory mapping, automated segmentation, denoising and spike extraction, all built on a highly parallelizable, modular, and extensible framework optimized for memory and speed. To aid automated segmentation, we introduce a corpus of 24 manually annotated datasets from different preparations, brain areas and voltage indicators. We benchmark VolPy against ground truth segmentation, simulations and electrophysiology recordings, and we compare its performance with existing algorithms in detecting spikes. Our results indicate that VolPy’s performance in spike extraction and scalability are state-of-the-art.
in PLoS Computational Biology on April 14, 2021 02:00 PM.
• #### Differential contributions of synaptic and intrinsic inhibitory currents to speech segmentation via flexible phase-locking in neural oscillators
by Benjamin R. Pittman-Polletta, Yangyang Wang, David A. Stanley, Charles E. Schroeder, Miles A. Whittington, Nancy J. Kopell
Current hypotheses suggest that speech segmentation—the initial division and grouping of the speech stream into candidate phrases, syllables, and phonemes for further linguistic processing—is executed by a hierarchy of oscillators in auditory cortex. Theta (∼3-12 Hz) rhythms play a key role by phase-locking to recurring acoustic features marking syllable boundaries. Reliable synchronization to quasi-rhythmic inputs, whose variable frequency can dip below cortical theta frequencies (down to ∼1 Hz), requires “flexible” theta oscillators whose underlying neuronal mechanisms remain unknown. Using biophysical computational models, we found that the flexibility of phase-locking in neural oscillators depended on the types of hyperpolarizing currents that paced them. Simulated cortical theta oscillators flexibly phase-locked to slow inputs when these inputs caused both (i) spiking and (ii) the subsequent buildup of outward current sufficient to delay further spiking until the next input. The greatest flexibility in phase-locking arose from a synergistic interaction between intrinsic currents that was not replicated by synaptic currents at similar timescales. Flexibility in phase-locking enabled improved entrainment to speech input, optimal at mid-vocalic channels, which in turn supported syllabic-timescale segmentation through identification of vocalic nuclei. Our results suggest that synaptic and intrinsic inhibition contribute to frequency-restricted and -flexible phase-locking in neural oscillators, respectively. Their differential deployment may enable neural oscillators to play diverse roles, from reliable internal clocking to adaptive segmentation of quasi-regular sensory inputs like speech.
in PLoS Computational Biology on April 14, 2021 02:00 PM.
• #### Nonequilibrium thermodynamics of the RNA-RNA interaction underlying a genetic transposition program
Author(s): Lucas Goiriz and Guillermo Rodrigo
Thermodynamic descriptions are powerful tools to formally study complex gene expression programs evolved in living cells on the basis of macromolecular interactions. While transcriptional regulations are often modeled in the equilibrium, other interactions that occur in the cell follow a more comple...
[Phys. Rev. E 103, 042410] Published Wed Apr 14, 2021
in Physical Review E: Biological physics on April 14, 2021 10:00 AM.
• #### Thermodynamic and kinetic properties of a single base pair in A-DNA and B-DNA
Author(s): Taigang Liu, Ting Yu, Shuhao Zhang, Yujie Wang, and Wenbing Zhang
Double stranded DNA can adopt different forms, the so-called A-, B-, and Z-DNA, which play different biological roles. In this work, the thermodynamic and the kinetic parameters for the base-pair closing and opening in A-DNA and B-DNA were calculated by all-atom molecular dynamics simulations at dif...
[Phys. Rev. E 103, 042409] Published Wed Apr 14, 2021
in Physical Review E: Biological physics on April 14, 2021 10:00 AM.
• #### Abnormal Intrinsic Functional Interactions Within Pain Network in Cervical Discogenic Pain
Cervical discogenic pain (CDP) is mainly induced by cervical disc degeneration. However, how CDP modulates the functional interactions within the pain network remains unclear. In the current study, we examined the altered resting-state functional connectivities of the pain network in 40 CDP patients and 40 age- and gender-matched healthy controls. We first defined the pain network using seeds in the posterior insula (PI). Then, whole brain and seed-to-target functional connectivity analyses were performed to identify the differences in functional connectivity between CDP and healthy controls. Finally, correlation analyses were applied to reveal the associations between functional connectivities and clinical measures. Whole-brain functional connectivity analyses of PI identified increased functional connectivity between PI and thalamus (THA) and decreased functional connectivity between PI and middle cingulate cortex (MCC) in CDP patients. Functional connectivity analyses within the pain network further revealed increased functional connectivities between bilateral PI and bilateral THA, and decreased functional connectivities between left PI and MCC, and between left postcentral gyrus (PoCG) and MCC in CDP patients. Moreover, we found that the functional connectivities between right PI and left THA, and between left PoCG and MCC, were negatively and positively correlated with the visual analog scale, respectively. Our findings provide direct evidence of how CDP modulates the pain network, which may facilitate understanding of the neural basis of CDP.
in Frontiers in Neuroscience: Brain Imaging Methods on April 14, 2021 12:00 AM.
• #### Effect of Low Intensity Transcranial Ultrasound Stimulation on Neuromodulation in Animals and Humans: An Updated Systematic Review
Background: Although low-intensity transcranial ultrasound stimulation (LI-TUS) has received more recognition for its neuromodulation potential, there remains a crucial knowledge gap regarding the neuromodulatory effects of LI-TUS and its potential for translation as a therapeutic tool in humans.
Objective: In this review, we summarized the findings reported by recently published studies regarding the effect of LI-TUS on neuromodulation in both animals and humans. We also aim to identify challenges and opportunities for the translation process.
Methods: A literature search of PubMed, Medline, EMBASE, and Web of Science was performed from January 2019 to June 2020 with the following keywords and Boolean operators: [transcranial ultrasound OR transcranial focused ultrasound OR ultrasound stimulation] AND [neuromodulation]. The methodological quality of the animal studies was assessed by the SYRCLE's risk of bias tool, and the quality of human studies was evaluated by the PEDro score and the NIH quality assessment tool.
Results: After applying the inclusion and exclusion criteria, a total of 26 manuscripts (24 animal studies and two human studies) out of 508 reports were included in this systematic review. Although both inhibitory (10 studies) and excitatory (16 studies) effects of LI-TUS were observed in animal studies, only inhibitory effects have been reported in primates (five studies) and human subjects (two studies). The ultrasonic parameters used in animal and human studies are different. The SYRCLE quality score ranged from 25 to 43%, with a majority of the low scores related to performance and detection bias. The two human studies received high PEDro scores (9/10).
Conclusion: LI-TUS appears to be capable of targeting both superficial and deep cerebral structures to modulate cognitive or motor behavior in both animals and humans. Further human studies are needed to more precisely define the effective modulation parameters and thereby translate this brain modulatory tool into the clinic.
in Frontiers in Neuroscience: Neural Technology on April 14, 2021 12:00 AM.
• #### Spatial-Temporal Functional Mapping Combined With Cortico-Cortical Evoked Potentials in Predicting Cortical Stimulation Results
Functional human brain mapping is commonly performed during invasive monitoring with intracranial electroencephalographic (iEEG) electrodes prior to resective surgery for drug-resistant epilepsy. The current gold standard, electrocortical stimulation mapping (ESM), is time consuming, sometimes elicits pain, and often induces afterdischarges or seizures. Moreover, there is a risk of overestimating eloquent areas due to propagation of the effects of stimulation to a broader network of language cortex. Passive iEEG spatial-temporal functional mapping (STFM) has recently emerged as a potential alternative to ESM. However, investigators have observed less correspondence between STFM and ESM maps of language than between their maps of motor function. We hypothesized that incongruities between ESM and STFM of language function may arise due to propagation of the effects of ESM to cortical areas having strong effective connectivity with the site of stimulation. We evaluated five patients who underwent invasive monitoring for seizure localization, whose language areas were identified using ESM. All patients performed a battery of language tasks during passive iEEG recordings. To estimate the effective connectivity of stimulation sites with a broader network of task-activated cortical sites, we measured cortico-cortical evoked potentials (CCEPs) elicited across all recording sites by single-pulse electrical stimulation at sites where ESM was performed at other times. Combining high gamma power with CCEP results, we trained a logistic regression model to predict ESM results at individual electrode pairs. The average accuracy of the classifier using both STFM and CCEP results combined was 87.7%, significantly higher than that using STFM alone (71.8%), indicating that the correspondence between STFM and ESM results is greater when effective connectivity between ESM stimulation sites and task-activated sites is taken into consideration. These findings, though based on a small number of subjects to date, provide preliminary support for the hypothesis that incongruities between ESM and STFM may arise in part from propagation of stimulation effects to a broader network of cortical language sites activated by language tasks, and suggest that more studies, with larger numbers of patients, are needed to understand the utility of both mapping techniques in clinical practice.
in Frontiers in Human Neuroscience on April 14, 2021 12:00 AM.
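The classifier described in the item above combines passive high-gamma (STFM) responses with CCEP-based effective connectivity to predict ESM outcomes at electrode pairs. As a rough, hypothetical illustration of that kind of model (placeholder features and simulated labels, not the authors' data or code), a cross-validated logistic regression could be set up like this:

```python
# Hypothetical sketch: predict ESM-positive electrode pairs from a high-gamma
# (STFM) feature alone versus high-gamma plus a CCEP amplitude feature.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_pairs = 200
high_gamma = rng.normal(size=n_pairs)            # STFM feature (placeholder)
ccep_amp = rng.normal(size=n_pairs)              # CCEP feature (placeholder)
esm_positive = (0.8 * high_gamma + 0.8 * ccep_amp
                + rng.normal(size=n_pairs)) > 0  # simulated ESM labels

X_stfm = high_gamma.reshape(-1, 1)               # STFM feature alone
X_both = np.column_stack([high_gamma, ccep_amp]) # STFM + CCEP features

clf = LogisticRegression()
acc_stfm = cross_val_score(clf, X_stfm, esm_positive, cv=5).mean()
acc_both = cross_val_score(clf, X_both, esm_positive, cv=5).mean()
print(f"STFM only: {acc_stfm:.3f}   STFM + CCEP: {acc_both:.3f}")
```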
• #### A P300 Brain-Computer Interface Paradigm Based on Electric and Vibration Simple Command Tactile Stimulation
This paper proposes a novel tactile-stimulus P300 paradigm for brain-computer interfaces (BCI), potentially targeting people with limited learning ability or difficulty maintaining attention. A new paradigm using only two types of stimuli was designed, in which different targets are distinguished by frequency and spatial information. The classification algorithm was developed by introducing filters for frequency-band selection and conducting optimization with common spatial patterns (CSP) on the tactile-evoked EEG signals. It features a combination of spatial and frequency information, with the spatial information distinguishing the sites of stimuli and the frequency information identifying target stimuli and disturbances. We investigated both electrical and vibration stimuli, with only one target site stimulated in each block. The results demonstrated an average accuracy of 94.88% for electrical stimuli and 95.21% for vibration stimuli.
in Frontiers in Human Neuroscience on April 14, 2021 12:00 AM.
• #### Magnetoencephalography Responses to Unpredictable and Predictable Rare Somatosensory Stimuli in Healthy Adult Humans
Mismatch brain responses to unpredicted rare stimuli are suggested to be a neural indicator of prediction error, but this has rarely been studied in the somatosensory modality. Here, we investigated how the brain responds to unpredictable and predictable rare events. Magnetoencephalography responses were measured in adults frequently presented with somatosensory stimuli (FRE) that were occasionally replaced by two consecutively presented rare stimuli [unpredictable rare stimulus (UR) and predictable rare stimulus (PR); p = 0.1 for each]. The FRE and PR were electrical stimulations administered to either the little finger or the forefinger in a counterbalanced manner between the two conditions. The UR was a simultaneous electrical stimulation to both the forefinger and the little finger (for a smaller subgroup, the UR and FRE were counterbalanced for the stimulus properties). The grand-averaged responses were characterized by two main components: one at 30–100 ms (M55) and the other at 130–230 ms (M150) latency. Source-level analysis was conducted for the primary somatosensory cortex (SI) and the secondary somatosensory cortex (SII). The M55 responses were larger for the UR and PR than for the FRE in both the SI and the SII areas and were larger for the UR than for the PR. For M150, both investigated areas showed increased activity for the UR and the PR compared to the FRE. Interestingly, although the UR was larger in stimulus energy (stimulation of two fingers at the same time) and had a larger prediction error potential than the PR, the M150 responses to these two rare stimuli did not differ in source strength in either the SI or the SII area. The results suggest that M55, but not M150, can possibly be associated with prediction error signals. These findings highlight the need for disentangling prediction error and rareness-related effects in future studies investigating prediction error signals.
in Frontiers in Human Neuroscience on April 14, 2021 12:00 AM.
• #### Working Memory Training Effects on White Matter Integrity in Young and Older Adults
Objectives
Working memory is essential for daily life skills like reading comprehension, reasoning, and problem-solving. Healthy aging of the brain is accompanied by working memory decline that can affect older people’s independence in everyday life. Interventions in the form of cognitive training are a promising tool for delaying age-related working memory decline, yet the underlying structural plasticity of white matter has hardly been studied.
Methods
We conducted a longitudinal diffusion tensor imaging study to investigate the effects of intensive four-week adaptive working memory training on white matter integrity quantified by global and tract-wise mean diffusivity. We compared diffusivity measures of fiber tracts that are associated with working memory in 32 young and 20 older participants who were randomly assigned to a working memory training group or an active control group.
Results
The behavioral analysis showed an increase in working memory performance after the four-week adaptive working memory training. The neuroanatomical analysis revealed a decrease in mean diffusivity in the working memory training group after the training intervention in the right inferior longitudinal fasciculus for the older adults. There was also a decrease in mean diffusivity in the working memory training group in the right superior longitudinal fasciculus for the older and young participants after the intervention.
Conclusion
This study shows that older people can benefit from working memory training: their working memory performance improves, and this improvement is also reflected in improved white matter integrity in the superior longitudinal fasciculus and the inferior longitudinal fasciculus, the former being a key component of the frontoparietal network known to be essential for working memory.
in Frontiers in Human Neuroscience on April 14, 2021 12:00 AM.
• #### A Computational Framework for Controlling the Self-Restorative Brain Based on the Free Energy and Degeneracy Principles
The brain is a non-linear dynamical system with a self-restoration process, which protects itself from external damage but is often a bottleneck for clinical treatment. To treat the brain to induce the desired functionality, formulation of a self-restoration process is necessary for optimal brain control. This study proposes a computational model for the brain's self-restoration process following the free-energy and degeneracy principles. Based on this model, a computational framework for brain control is established. We posited that the pre-treatment brain circuit has long been configured in response to the environmental (the other neural populations') demands on the circuit. Since the demands persist even after treatment, the treated circuit's response to the demand may gradually approximate the pre-treatment functionality. In this framework, an energy landscape of regional activities, estimated from resting-state endogenous activities by a pairwise maximum entropy model, is used to represent the pre-treatment functionality. The approximation of the pre-treatment functionality occurs via reconfiguration of interactions among neural populations within the treated circuit. To establish the current framework's construct validity, we conducted various simulations. The simulations suggested that brain control should include the self-restoration process, without which the treatment was not optimal. We also presented simulations for optimizing repetitive treatments and optimal timing of the treatment. These results suggest a plausibility of the current framework in controlling the non-linear dynamical brain with a self-restoration process.
in Frontiers in Computational Neuroscience on April 14, 2021 12:00 AM.
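The energy landscape mentioned in the item above is commonly built from a pairwise maximum entropy (Ising-like) model fit to binarized regional activity, in which a binary pattern σ has energy E(σ) = −Σ_i h_i σ_i − Σ_{i<j} J_ij σ_i σ_j. The sketch below only evaluates that energy for a given pattern; the biases h and couplings J are arbitrary placeholders rather than values fit to data.

```python
# Energy of a binary activity pattern under a pairwise maximum entropy model.
# h (regional biases) and J (pairwise couplings) are placeholders; in practice
# they are fitted to binarized resting-state activity.
import numpy as np

def pairwise_maxent_energy(sigma, h, J):
    """E(sigma) = -h.sigma - 0.5 * sigma.J.sigma (J symmetric, zero diagonal)."""
    return -h @ sigma - 0.5 * sigma @ J @ sigma

n_regions = 5
rng = np.random.default_rng(2)
h = rng.normal(0, 0.5, n_regions)
J = rng.normal(0, 0.1, (n_regions, n_regions))
J = (J + J.T) / 2            # symmetric couplings
np.fill_diagonal(J, 0.0)     # no self-coupling

sigma = np.array([1, 0, 1, 1, 0])   # one binarized activity pattern
print(f"E(sigma) = {pairwise_maxent_energy(sigma, h, J):.3f}")
```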
• #### Whole-Brain Mapping the Direct Inputs of Dorsal and Ventral CA1 Projection Neurons
The CA1, an important subregion of the hippocampus, is anatomically and functionally heterogeneous in the dorsal and ventral hippocampus. Here, to dissect the distinctions between the dorsal (dCA1) and ventral CA1 (vCA1) in anatomical connections, we systematically analyzed the direct inputs to dCA1 and vCA1 projection neurons (PNs) with the rabies virus-mediated retrograde trans-monosynaptic tracing system in Thy1-Cre mice. Our mapping results revealed that the input proportions and distributions of dCA1 and vCA1 PNs varied significantly. Inside the hippocampal region, dCA1 and vCA1 PNs shared the same upstream brain regions, but with distinctive distribution patterns along the rostrocaudal axis. The intrahippocampal inputs to the dCA1 and vCA1 exhibited opposite trends, decreasing and increasing gradually along the dorsoventral axis, respectively. For extrahippocampal inputs, dCA1 and vCA1 shared some monosynaptic projections from certain regions such as pallidum, striatum, hypothalamus, and thalamus. However, vCA1, not dCA1, received innervations from the subregions of olfactory areas and amygdala nuclei. Characterization of the direct input networks of dCA1 and vCA1 PNs may provide a structural basis to understand the differential functions of dCA1 and vCA1.
in Frontiers in Neural Circuits on April 14, 2021 12:00 AM.
• #### RNA-Seq Dataset From Isolated Leukocytes Following Spontaneous Intracerebral Hemorrhage in Zebrafish Larvae
in Frontiers in Cellular Neuroscience on April 14, 2021 12:00 AM.
• #### Single-Cell Multiomic Approaches Reveal Diverse Labeling of the Nervous System by Common Cre-Drivers
Neural crest development involves a series of dynamic, carefully coordinated events that result in human disease when not properly orchestrated. Cranial neural crest cells acquire unique multipotent developmental potential upon specification to generate a broad variety of cell types. Studies of early mammalian neural crest and nervous system development often use the Cre-loxP system to lineage trace and mark cells for further investigation. Here, we carefully profile the activity of two common neural crest Cre-drivers at the end of neurulation in mice. RNA sequencing of labeled cells at E9.5 reveals that Wnt1-Cre2 marks cells with neuronal characteristics consistent with neuroepithelial expression, whereas Sox10-Cre predominantly labels the migratory neural crest. We used single-cell mRNA and single-cell ATAC sequencing to profile the expression of Wnt1 and Sox10 and identify transcription factors that may regulate the expression of Wnt1-Cre2 in the neuroepithelium and Sox10-Cre in the migratory neural crest. Our data identify cellular heterogeneity during cranial neural crest development and identify specific populations labeled by two Cre-drivers in the developing nervous system.
in Frontiers in Cellular Neuroscience on April 14, 2021 12:00 AM.
• #### The Oral-Gut-Brain AXIS: The Influence of Microbes in Alzheimer’s Disease
in Frontiers in Cellular Neuroscience on April 14, 2021 12:00 AM.
• #### Stem Cell-Based Therapy for Experimental Ischemic Stroke: A Preclinical Systematic Review
Stem cell transplantation offers promise in the treatment of ischemic stroke. Here we utilized systematic review, meta-analysis, and meta-regression to study the biological effect of stem cell treatments in animal models of ischemic stroke. A total of 98 eligible publications were included by searching PubMed, EMBASE, and Web of Science from inception to August 1, 2020. About 141 comparisons, involving 5,200 animals, examined the effect of stem cell transplantation on neurological function and infarct volume as primary outcome measures in animal models of stroke. Stem cell-based therapy can improve both neurological function (effect size, −3.37; 95% confidence interval, −3.83 to −2.90) and infarct volume (effect size, −11.37; 95% confidence interval, −12.89 to −9.85) compared with controls. These results suggest that stem cell therapy could improve neurological function deficits and infarct volume, exerting a potential neuroprotective effect for experimental ischemic stroke, but further clinical studies are still needed.
in Frontiers in Cellular Neuroscience on April 14, 2021 12:00 AM.
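The pooled effect sizes with confidence intervals reported in the item above are the kind of quantity produced by inverse-variance meta-analytic pooling. A minimal fixed-effect version of that computation is sketched below; the per-study effects and standard errors are made-up placeholders, not the review's data (the review itself may well have used a random-effects model).

```python
# Inverse-variance (fixed-effect) pooling of standardized mean differences.
# Per-study effect sizes and standard errors are illustrative placeholders.
import numpy as np

effects = np.array([-2.8, -3.5, -4.1, -3.0])   # per-study effect sizes
se = np.array([0.40, 0.55, 0.60, 0.45])        # per-study standard errors

weights = 1.0 / se**2                          # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

print(f"pooled effect = {pooled:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```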
• #### Myelin Repair: From Animal Models to Humans
It is widely thought that brain repair does not occur, but myelin regeneration provides clear evidence to the contrary. Spontaneous remyelination may occur after injury or in multiple sclerosis (MS). However, the efficiency of remyelination varies considerably between MS patients and between the lesions of each patient. Myelin repair is essential for optimal functional recovery, so a profound understanding of the cells and mechanisms involved in this process is required for the development of new therapeutic strategies. In this review, we describe how animal models and modern cell tracing and imaging methods have helped to identify the cell types involved in myelin regeneration. In addition to the oligodendrocyte progenitor cells identified in the 1990s as the principal source of remyelinating cells in the central nervous system (CNS), other cell populations, including subventricular zone-derived neural progenitors, Schwann cells, and even spared mature oligodendrocytes, have more recently emerged as potential contributors to CNS remyelination. We will also highlight the conditions known to limit endogenous repair, such as aging, chronic inflammation, and the production of extracellular matrix proteins, and the role of astrocytes and microglia in these processes. Finally, we will present the discrepancies between observations in humans and in rodents, discussing the relationship of findings in experimental models to myelin repair in humans. These considerations are particularly important from a therapeutic standpoint.
in Frontiers in Cellular Neuroscience on April 14, 2021 12:00 AM.
• #### Exercise Training-Increased FBXO32 and FOXO1 in a Gender-Dependent Manner in Mild Cognitively Impaired African Americans: GEMS-1 Study
The ubiquitin proteasome system (UPS) and FOXO transcription factors play a pivotal role in cellular clearance and in minimizing the accumulation of Aβ in neurodegeneration (ND). In African Americans (AAs) with mild cognitive impairment (MCI), the role of components of the UPS and FOXOs, and whether they are amenable to exercise effects, is unknown. We hypothesized that exercise can enhance cellular clearance systems during aging and ND by increasing expression of FBXO32 and FOXO1. To test this hypothesis, we used TaqMan gene expression analysis in peripheral blood (PB) to investigate components of the UPS and FOXOs and provide mechanistic insight at baseline, during exercise, and in both genders. At baseline, levels of FBXO32 were higher in women than in men. In our attempt to discern gender-specific exercise-related changes, we observed that levels of FBXO32 increased in men but not in women. Similarly, levels of FOXO1 increased in men only. These data suggest that a graded dose of FBXO32 and FOXO1 may be beneficial when PB cells carrying FBXO32 and FOXO1 are summoned into the brain in response to Alzheimer’s disease (AD) perturbation (docking station PB cells). Our observation is consistent with emerging studies showing that exercise allows the trafficking of blood factors. Given the significance of FBXO32 and FOXO1 to ND and associated muscle integrity, our findings may explain, at least in part, the benefits of exercise on memory and the associated gait and balance perturbations acknowledged to herald the emergence of MCI.
in Frontiers in Ageing Neuroscience on April 14, 2021 12:00 AM.
• #### Investigating Neuroimaging Correlates of Early Frailty in Patients With Behavioral Variant Frontotemporal Dementia: A MRI and FDG-PET Study
Frailty is a dynamic clinical condition characterized by the reduction of interconnections among different psychobiological domains, which leads to a homeostatic vulnerability. The association between physical frailty and cognitive dysfunctions is a possible predictor of poor prognosis in patients with neurodegenerative disorders. However, this construct has not been fully analyzed by a multidimensional neuropsychogeriatric assessment matched with multimodal neuroimaging methods in patients with behavioral variant frontotemporal dementia (bvFTD). We have investigated cognitive dysfunctions and frailty status, assessed by both a neuropsychological evaluation and the Multidimensional Prognostic Index (MPI), in a sample of 18 bvFTD patients and compared to matched healthy controls. Gray matter (GM) volume (as assessed by voxel-based morphometry) and metabolism (on 18fluorodeoxyglucose positron emission tomography) were first separately compared between groups, then voxelwise compared and correlated to each other within patients. Linear regression of the MPI was performed on those voxels presenting a significant correlation between altered GM volume and metabolism. The neuropsychological assessment reflected the diagnoses and the functional–anatomical alterations documented by neuroimaging analyses. In particular, the majority of patients presented significant executive dysfunction and mood changes in terms of apathy, depression, and anxiety. In the overall MPI score, the patients fell in the lower range (indicating an early frailty status). On imaging, they exhibited a bilateral decrease of GM density and hypometabolism involving the frontal pole, the anterior opercular region, and the anterior cingulate cortex. Greater atrophy than hypometabolism was observed in the bilateral orbitofrontal cortex, the triangular part of the inferior frontal gyrus, and the ventral striatum, whereas the contrary was detected in the bilateral dorsal anterior cingulate cortex and pre-supplementary motor area. MPI scores significantly correlated only with the co-occurrence of a decrease of GM density and hypometabolism in the right anterior insular cortex, but not with the separate pathological phenomena. Our results show a correlation between a specific pattern of co-occurring GM atrophy and hypometabolism with early frailty in bvFTD patients. These aspects, combined with executive dysfunction and mood changes, may lead to an increased risk of poor prognosis, highlighting a potentially critical and precocious role of the insula in the pathogenesis of frailty.
in Frontiers in Ageing Neuroscience on April 14, 2021 12:00 AM.
• #### Nomogram to Predict Cognitive Dysfunction After a Minor Ischemic Stroke in Hospitalized-Population
An easy-to-use scoring system to predict the risk of cognitive impairment after minor ischemic stroke has not been available. We aimed to develop and externally validate a nomogram for predicting the probability of post-stroke cognitive impairment (PSCI) in a hospitalized population with minor stroke. Moreover, the association of trimethylamine N-oxide (TMAO) with PSCI was also investigated. We prospectively assembled a development cohort from data collected in our stroke center from June 2017 to February 2018, as well as an external validation cohort between June 2018 and February 2019. The main outcome was cognitive impairment, defined as a Montreal Cognitive Assessment (MoCA) score of <22 points 6–12 months after minor stroke onset. The nomogram model was generated from multivariate logistic models. Plasma TMAO levels were assessed at admission using liquid chromatography tandem mass spectrometry. A total of 228 participants completed follow-up and were used to generate the nomogram. After multivariate logistic regression, seven variables remained independent predictors of PSCI and composed the nomogram: age, female sex, Fazekas score, educational level, number of intracranial atherosclerotic stenoses (ICAS), HbA1c, and cortical infarction. The area under the receiver-operating characteristic curve (AUC-ROC) of the model was 0.829, the C index was good (0.810), and the AUC-ROC of the model applied to the validation cohort was 0.812. Plasma TMAO levels were higher in patients with cognitive impairment than in those without cognitive dysfunction (median 4.56 vs. 3.22 μmol/L; p ≤ 0.001). In conclusion, this scoring system is the first nomogram developed and validated in a stroke center cohort for individualized prediction of cognitive impairment after minor stroke. A higher plasma TMAO level at admission may serve as a marker of PSCI.
in Frontiers in Ageing Neuroscience on April 14, 2021 12:00 AM.
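The nomogram described in the preceding abstract is built from a multivariable logistic regression whose discrimination is summarized by the AUC-ROC. The sketch below illustrates that generic workflow in Python with scikit-learn; the predictor names mirror those listed in the abstract, but the synthetic data, coefficients, and split are illustrative assumptions, not the study's dataset or fitted model.

```python
# Minimal sketch: multivariable logistic regression + AUC-ROC, the generic
# workflow behind a clinical nomogram such as the one described above.
# Synthetic data; predictor names are placeholders echoing the abstract.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 228  # size of the development cohort reported in the abstract
X = pd.DataFrame({
    "age": rng.normal(65, 8, n),
    "female": rng.integers(0, 2, n),
    "fazekas": rng.integers(0, 4, n),
    "education_years": rng.normal(10, 3, n),
    "n_icas": rng.integers(0, 5, n),
    "hba1c": rng.normal(6.0, 1.0, n),
    "cortical_infarct": rng.integers(0, 2, n),
})
# Synthetic outcome loosely driven by age and Fazekas score (illustration only)
logit = 0.05 * (X["age"] - 65) + 0.6 * X["fazekas"] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)

# Discrimination in the development and held-out splits
for name, Xs, ys in [("development", X_dev, y_dev), ("validation", X_val, y_val)]:
    auc = roc_auc_score(ys, model.predict_proba(Xs)[:, 1])
    print(f"{name} AUC-ROC: {auc:.3f}")
```

In practice, a nomogram is drawn from the fitted coefficients (each predictor's points are proportional to its coefficient times its value), which is why logistic-regression-based models translate directly into a paper scoring chart.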
• #### Iron Deposition Characteristics of Deep Gray Matter in Elderly Individuals in the Community Revealed by Quantitative Susceptibility Mapping and Multiple Factor Analysis
Purpose
The objective of this study was to determine which factors influence brain iron concentrations in deep gray matter in elderly individuals and how these factors influence regional brain iron concentrations.
Methods
A total of 105 elderly individuals were enrolled in this study. All participants underwent detailed magnetic resonance imaging (MRI) examinations from October 2018 to August 2019. Among them, 44 individuals had undergone a previous MRI examination from July 2010 to August 2011. Quantitative susceptibility mapping (QSM) was utilized as an indirect quantitative marker of brain iron, and the susceptibility values of deep gray matter structures were obtained. Univariate analysis and multiple linear regression analysis were used to investigate 11 possible determinants for cerebral iron deposition.
Results
Our results showed no sex- or hemisphere-related differences in susceptibility values in any of the regions studied. Aging was significantly correlated with an increase in susceptibility values in almost all analyzed brain regions (except for the thalamus) when we compared the susceptibility values at the two time points. In a cross-sectional analysis, the relationship between gray matter nucleus susceptibility values and age was assessed using Pearson’s linear regression. Aging was significantly correlated with the susceptibility values of the globus pallidus (GP), putamen (Put), and caudate nucleus (CN), with the Put having the strongest correlations. In multiple linear regression models, associations with increased susceptibility values were found in the CN, Put, red nucleus, and dentate nucleus for individuals with a history of type 2 diabetes mellitus (T2DM). However, the patients with hypertension showed significantly reduced susceptibility values in the red nucleus and dentate nucleus. Our data suggested that smokers had increased susceptibility values in the thalamus. No significant associations were found for individuals with a history of hypercholesterolemia or Apolipoprotein E4 carrier status.
Conclusion
Our data revealed that aging, T2DM, and smoking could increase iron deposition in some deep gray matter structures. However, hypertension had the opposite effects in the red nuclei and dentate nuclei. Brain iron metabolism could be influenced by many factors in different modes. In future studies, we should strictly control for confounding factors.
in Frontiers in Ageing Neuroscience on April 14, 2021 12:00 AM.
• #### Modulation of stimulated dopamine release in rat nucleus accumbens shell by GABA in vitro: Effect of sub‐chronic phencyclidine pretreatment
Using fast‐scan cyclic voltammetry in rat brain slices in vitro, we showed an attenuation of electrically‐stimulated dopamine release by agonists at both GABA‐A (muscimol) and GABA‐B (baclofen) receptors. The attenuation by baclofen was abolished in slices from animals pretreated with phencyclidine, modelling schizophrenia. Abstract Dopamine signaling in nucleus accumbens (NAc) is modulated by γ‐aminobutyric acid (GABA), acting through GABA‐A and GABA‐B receptors: dysregulation of GABAergic control of dopamine function may be important in behavioral deficits in schizophrenia. We investigated the effect of GABA‐A (muscimol) and GABA‐B (baclofen) receptor agonists on electrically stimulated dopamine release. Furthermore, we explored whether drug‐induced changes were disrupted by pretreatment with phencyclidine, which provides a well‐validated model of schizophrenia. Using brain slices from female rats, fast‐scan cyclic voltammetry was used to measure electrically stimulated dopamine release in NAc shell. Both muscimol and baclofen caused concentration‐dependent attenuation of evoked dopamine release: neither effect was changed by dihydro‐β‐erythroidine, a nicotinic acetylcholine receptor antagonist, or the α‐amino‐3‐hydroxy‐5‐methyl‐4‐isoxazolepropionic acid (AMPA)‐type glutamate receptor antagonist, 6‐cyano‐7‐nitroquinoxaline‐2,3‐dione (CNQX), precluding indirect mechanisms involving these transmitter systems in the GABAergic actions. In slices taken from rats pretreated with phencyclidine, the attenuation of evoked dopamine release by baclofen was abolished, but the attenuation by muscimol was unaffected. Since phencyclidine pretreatment was followed by a drug-free washout period of at least a week, the drug was not present during recording. Therefore, disruption of GABA‐B modulation of dopamine is due to long‐term functional changes resulting from the treatment, rather than transient changes due to the drug's presence at test. This enduring dysregulation of GABA‐B modulation of accumbal dopamine release provides a plausible mechanism through which GABA dysfunction influences accumbal dopamine, leading to behavioral changes seen in schizophrenia, and may provide a route for novel therapeutic strategies to treat the condition.
in Journal of Neuroscience Research on April 13, 2021 04:24 PM.
• #### Other ways of communicating the pandemic - memes and stickers against COVID-19: a systematic review [version 1; peer review: awaiting peer review]
doi:10.12688/f1000research.51541.1
in F1000Research on April 13, 2021 03:15 PM.
• #### Neurocognitive processing efficiency for discriminating human non-alarm rather than alarm scream calls
by Sascha Frühholz, Joris Dietziker, Matthias Staib, Wiebke Trost
Across many species, scream calls signal the affective significance of events to other agents. Scream calls were often thought to be of generic alarming and fearful nature, to signal potential threats, with instantaneous, involuntary, and accurate recognition by perceivers. However, scream calls are more diverse in their affective signaling nature than being limited to fearfully alarming a threat, and thus the broader sociobiological relevance of various scream types is unclear. Here we used 4 different psychoacoustic, perceptual decision-making, and neuroimaging experiments in humans to demonstrate the existence of at least 6 psychoacoustically distinctive types of scream calls of both alarming and non-alarming nature, rather than there being only screams caused by fear or aggression. Second, based on perceptual and processing sensitivity measures for decision-making during scream recognition, we found that alarm screams (with some exceptions) were overall discriminated the worst, were responded to the slowest, and were associated with a lower perceptual sensitivity for their recognition compared with non-alarm screams. Third, the neural processing of alarm compared with non-alarm screams during an implicit processing task elicited only minimal neural signal and connectivity in perceivers, contrary to the frequent assumption of a threat processing bias of the primate neural system. These findings show that scream calls are more diverse in their signaling and communicative nature in humans than previously assumed, and, in contrast to a commonly observed threat processing bias in perceptual discriminations and neural processes, we found that especially non-alarm screams, and positive screams in particular, seem to have higher efficiency in speeded discriminations and the implicit neural processing of various scream types in humans.
in PLoS Biology on April 13, 2021 02:00 PM.
• #### Unravelling the Neural Basis of Spatial Delusions After Stroke
Objective Knowing explicitly where we are is an interpretation of our spatial representations. Reduplicative paramnesia is a disrupting syndrome in which patients present a firm belief of spatial mislocation. Here, we studied the largest sample of patients with delusional misidentifications of space (ie, reduplicative paramnesia) after stroke to shed light on their neurobiology. Methods In a prospective, cumulative, case‐control study, we screened 400 patients with acute right‐hemispheric stroke. We included 64 cases and 233 controls. First, lesions were delimited and normalized. Then, we computed structural and functional disconnection maps using methods of lesion‐track and network‐mapping. The maps were compared, controlling for confounders. Second, we built a multivariate logistic model, including clinical, behavioral, and neuroimaging data. Finally, we performed a nested cross‐validation of the model with a support‐vector machine analysis. Results The most frequent misidentification subtype was confabulatory mislocation (56%), followed by place reduplication (19%), and chimeric assimilation (13%). Our results indicate that structural disconnection is the strongest predictor of the syndrome and included 2 distinct streams, connecting right fronto‐thalamic and right occipitotemporal structures. In the multivariate model, the independent predictors of reduplicative paramnesia were the structural disconnection map, lesion sparing of right dorsal fronto‐parietal regions, age, and anosognosia. Good discrimination accuracy was demonstrated (area under the curve = 0.80 [0.75–0.85]). Interpretation Our results localize the anatomic circuits that may have a role in the abnormal spatial‐emotional binding and in the defective updating of spatial representations underlying reduplicative paramnesia. This novel data may contribute to better understand the pathophysiology of delusional syndromes after stroke. ANN NEUROL 2021
in Annals of Neurology on April 13, 2021 12:59 PM.
• #### Rapidly Progressing Honeycomb‐Like Germinoma in the Ventricular System
Annals of Neurology, EarlyView.
in Annals of Neurology on April 13, 2021 12:57 PM.
• #### A pan-genome method to determine core regions of the Bacillus subtilis and Escherichia coli genomes [version 1; peer review: awaiting peer review]
doi:10.12688/f1000research.51873.1
in F1000Research on April 13, 2021 06:56 AM.
• #### Effects of the combination therapy of electric field stimulation and polyethylene glycol in the ex vivo spinal cord of female rats after compression
In the present study, the effects of electrical field stimulation (EFS) and polyethylene glycol (PEG) were determined in the compression model of ex vivo spinal cords by using a double sucrose gap recording chamber (DSGRC). The effects of the combination therapy are better than those of the individual treatments of PEG and EFS. The method of establishing an ex vivo spinal cord compression model in the DSGRC can be useful for fundamental and clinical applications in spinal cord injury research in the future. Abstract The application of electric field stimulation (EFS) can reduce the cation influx after spinal cord injury. However, regenerated cation influx and reestablished injury potential are observed after EFS. Polyethylene glycol (PEG) is popular as an effective cell membrane fusion agent. This study aims to determine the effects of the combination therapy of EFS and PEG in the ex vivo spinal cord after compression. The ex vivo spinal cords of female rats with compression injury were incubated in a double sucrose gap recording chamber (DSGRC) and randomly divided into the following four groups: (1) compression group: compression only, (2) EFS group: EFS for 15 min, (3) PEG group: PEG treatment for 4 min, and (4) EFS + PEG group: EFS for 15 min and PEG treatment for 4 min. Hematoxylin–eosin staining was performed to measure the necrotic area of the spinal cords. The gap potential was detected, and the area under the curve of the gap potential was calculated. The intracellular cation concentration, membrane permeability, and compound action potential were measured and quantified. Results revealed no significant difference in the necrotic areas among different groups, and the compression model of the ex vivo spinal cord in the DSGRC had high consistency and stability. The combination therapy could attenuate cation inflow, promote cell membrane restoration, and promote the functional recovery of spinal cord conduction after compression in ex vivo spinal cords.
in Journal of Neuroscience Research on April 13, 2021 05:20 AM.
• #### Sex differences in synaptic plasticity underlying learning
Abstract Although sex differences in learning behaviors are well documented, sexual dimorphism in the synaptic processes of encoding is only recently appreciated. Studies in male rodents have built upon the discovery of long‐term potentiation (LTP), and acceptance of this activity‐dependent increase in synaptic strength as a mechanism of encoding, to identify synaptic receptors and signaling activities that coordinate the activity‐dependent remodeling of the subsynaptic actin cytoskeleton that is critical for enduring potentiation and memory. These molecular substrates together with other features of LTP, as characterized in males, have provided an explanation for a range of memory phenomena including multiple stages of consolidation, the efficacy of spaced training, and the location of engrams at the level of individual synapses. In the present report, we summarize these findings and describe more recent results from our laboratories showing that in females the same actin regulatory mechanisms are required for hippocampal LTP and memory but, in females only, the engagement of both modulatory receptors such as TrkB and synaptic signaling intermediaries including Src and ERK1/2 requires neuron‐derived estrogen and signaling through membrane‐associated estrogen receptor α (ERα). Moreover, in association with the additional ERα involvement, females exhibit a higher threshold for hippocampal LTP and spatial learning. We propose that the distinct LTP threshold in females contributes to as yet unappreciated sex differences in information processing and features of learning and memory.
in Journal of Neuroscience Research on April 13, 2021 05:14 AM.
• #### Validation of Olfactory Network Based on Brain Structural Connectivity and Its Association With Olfactory Test Scores
Olfactory perception is a complicated process involving multiple cortical and subcortical regions, of which the underlying brain dynamics are still not adequately mapped. Even in the definition of the olfactory primary cortex, there is a large degree of variation in parcellation templates used for investigating olfaction in neuroimaging studies. This complicates comparison between human olfactory neuroimaging studies. The present study aims to validate an olfactory parcellation template derived from both functional and anatomical data that applies structural connectivity (SC) to ensure robust connectivity to key secondary olfactory regions. Furthermore, exploratory analyses investigate if different olfactory parameters are associated with differences in the strength of connectivity of this structural olfactory fingerprint. By combining diffusion data with an anatomical atlas and advanced probabilistic tractography, we found that the olfactory parcellation had a robust SC network to key secondary olfactory regions. Furthermore, the study indicates that higher ratings of olfactory significance were associated with increased intra- and inter-hemispheric SC of the primary olfactory cortex. Taken together, these results suggest that the patterns of SC between the primary olfactory cortex and key secondary olfactory regions has potential to be used for investigating the nature of olfactory significance, hence strengthening the theory that individual differences in olfactory behaviour are encoded in the structural network fingerprint of the olfactory cortex.
in Frontiers in Systems Neuroscience on April 13, 2021 12:00 AM.
• #### Ocular Health of Octodon degus as a Clinical Marker for Age-Related and Age-Independent Neurodegeneration
The aging process and age-related diseases such as Alzheimer’s disease (AD), are very heterogeneous and multifactorial, making it challenging to diagnose the disease based solely on genetic, behavioral tests, or clinical history. It is yet to be explained what ophthalmological tests relate specifically to aging and AD. To this end, we have selected the common degu (Octodon degus) as a model for aging which develops AD-like signs to conduct ophthalmological screening methods that could be clinical markers of aging and AD. We investigated ocular health using ophthalmoscopy, fundus photography, intraocular pressure (IOP), and pupillary light reflex (PLR). The results showed significant presence of cataracts in adult degus and IOP was also found to increase significantly with advancing age. Age had a significant effect on the maximum pupil constriction but other pupil parameters changed in an age-independent manner (PIPR retention index, resting pupil size, constriction velocity, redilation plateau). We concluded that degus have underlying factors at play that regulate PLR and may be connected to sympathetic, parasympathetic, and melanopsin retinal ganglion cell (ipRGC) deterioration. This study provides the basis for the use of ocular tests as screening methods for the aging process and monitoring of neurodegeneration in non-invasive ways.
in Frontiers in Integrative Neuroscience on April 13, 2021 12:00 AM.
• #### Optimized Parameters for Transducing the Locus Coeruleus Using Canine Adenovirus Type 2 (CAV2) Vector in Rats for Chemogenetic Modulation Research
Introduction
The locus coeruleus noradrenergic (LC-NA) system is studied for its role in various neurological and psychiatric disorders such as epilepsy and Major Depression Dissorder. Chemogenetics is a powerful technique for specific manipulation of the LC to investigate its functioning. Local injection of AAV2/7 viral vectors has limitations with regards to efficiency and specificity of the transduction, potentially due to low tropism of AAV2/7 for LC neurons. In this study we used a canine adenovirus type 2 (CAV2) vector with different volumes and viral particle numbers to achieve high and selective expression of hM3Dq, an excitatory Designer Receptor Exclusively Activated by Designer Drugs (DREADD), for chemogenetic modulation of LC neurons.
Methods
Adult male Sprague-Dawley rats were injected in the LC with different absolute numbers of CAV2-PRSx8-hM3Dq-mCherry physical particles (0.1E9, 1E9, 5E9, 10E9, or 20E9 pp) using different volumes (LowV = 3 × 300 nl, MediumV = 3 × 600 nl, HighV = 3 × 1200 nl). Two weeks post-injection, double-labeling immunohistochemistry for dopamine β hydroxylase (DBH) and mCherry was performed to determine hM3Dq expression and its specificity for LC neurons. The size of the transduced LC was compared to the contralateral LC to identify signs of toxicity.
Results
Administration of Medium volume (3 × 600 nl) and 1E9 particles resulted in high expression levels with 87.3 ± 9.8% of LC neurons expressing hM3Dq, but low specificity with 36.2 ± 17.3% of hM3Dq expression in non-LC neurons. The most diluted conditions (Low Volume_0.1E9 pp and Medium Volume_0.1E9 pp) presented similarly high transduction of LC neurons (70.9 ± 12.7% and 77.2 ± 9.8%) with less nonspecific expression (5.5 ± 3.5% and 4.0 ± 1.9%, respectively). Signs of toxicity were observed in all undiluted conditions as evidenced by a decreased size of the transduced LC.
Conclusion
This study identified optimal conditions (Low and Medium Volume with 0.1E9 particles of CAV2-PRSx8-hM3Dq-mCherry) for safe and specific transduction of LC neurons with excitatory DREADDs to study the role of the LC-NA system in health and disease.
in Frontiers in Neuroscience: Neural Technology on April 13, 2021 12:00 AM.
• #### Multiparametric Analysis of Cerebral Development in Preterm Infants Using Magnetic Resonance Imaging
Objectives
The severity of neurocognitive impairment increases with prematurity. However, its mechanisms remain poorly understood. Our aim was firstly to identify multiparametric magnetic resonance imaging (MRI) markers that differ according to the degree of prematurity, and secondly to evaluate the impact of clinical complications on these markers.
Materials and Methods
We prospectively enrolled preterm infants who were divided into two groups according to their degree of prematurity: extremely preterm (<28 weeks’ gestational age) and very preterm (28–32 weeks’ gestational age). They underwent a multiparametric brain MRI scan at term-equivalent age including morphological, diffusion tensor and arterial spin labeling (ASL) perfusion sequences. We quantified overall and regional volumes, diffusion parameters, and cerebral blood flow (CBF). We then compared the parameters for the two groups. We also assessed the effects of clinical data and potential MRI morphological abnormalities on those parameters.
Results
Thirty-four preterm infants were included. Extremely preterm infants (n = 13) had significantly higher frontal relative volumes (p = 0.04), frontal GM relative volumes (p = 0.03), and regional CBF than very preterm infants, but they had lower brainstem and insular relative volumes (p = 0.008 and 0.04, respectively). Preterm infants with WM lesions on MRI had significantly lower overall GM CBF (13.3 ± 2 ml/100 g/min versus 17.7 ± 2.5 ml/100 g/min, p = 0.03).
Conclusion
Magnetic resonance imaging brain scans performed at term-equivalent age in preterm infants provide quantitative imaging parameters that differ with respect to the degree of prematurity, related to brain maturation.
in Frontiers in Neuroscience: Brain Imaging Methods on April 13, 2021 12:00 AM.
• #### A Real-Time Stability Control Method Through sEMG Interface for Lower Extremity Rehabilitation Exoskeletons
Herein, we propose a real-time stable control gait switching method for the exoskeleton rehabilitation robot. Exoskeleton rehabilitation robots have been extensively developed during the past decade and are able to offer valuable motor ability to paraplegics. However, achieving stable states of the human-exoskeleton system while conserving wearer strength remains challenging. The constant switching of gaits during walking may affect the center of gravity, resulting in imbalance of the human–exoskeleton system. In this study, it was determined that forming an equilateral triangle with two crutch-supporting points and a supporting leg has a positive impact on walking stability and ergonomic interaction. First, gait planning and stability analysis based on a human kinematics model and the zero-moment-point method for the lower limb exoskeleton are demonstrated. Second, a neural interface based on surface electromyography (sEMG), which realizes intention recognition and muscle fatigue estimation, is constructed. Third, the stability of the human–exoskeleton system and the ergonomic effects are tested through different gaits with planned and unplanned gait switching strategies on the SIAT lower limb rehabilitation exoskeleton. Intention recognition based on a long short-term memory (LSTM) model can achieve an accuracy of nearly 99%. The experimental results verified the feasibility and efficiency of the proposed gait switching method for enhancing stability and ergonomic effects of the lower limb rehabilitation exoskeleton.
in Frontiers in Neuroscience: Neural Technology on April 13, 2021 12:00 AM.
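The intention-recognition module in the abstract above is an LSTM classifier trained on sEMG signals. Below is a minimal, hedged Python/Keras sketch of such a classifier; the window length, channel count, and number of gait intentions are assumptions chosen for illustration, not the SIAT exoskeleton's actual configuration or training data.

```python
# Minimal sketch of an LSTM classifier over windowed sEMG signals, in the
# spirit of the intention-recognition module described above. Window length,
# channel count, and the number of gait intentions are illustrative only.
import numpy as np
import tensorflow as tf

n_windows, window_len, n_channels, n_classes = 512, 200, 8, 4
rng = np.random.default_rng(0)
X = rng.normal(size=(n_windows, window_len, n_channels)).astype("float32")
y = rng.integers(0, n_classes, n_windows)  # stand-in gait-intention labels

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window_len, n_channels)),
    tf.keras.layers.LSTM(64),                      # temporal feature extraction
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy] on the toy data
```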
• #### Brain Density Clustering Analysis: A New Approach to Brain Functional Dynamics
Background
A number of studies in recent years have explored whole-brain dynamic connectivity using pairwise approaches. There has been less focus on trying to analyze brain dynamics in higher dimensions over time.
Methods
We introduce a new approach that analyzes time series trajectories to identify high traffic nodes in a high dimensional space. First, functional magnetic resonance imaging (fMRI) data are decomposed using spatial ICA to a set of maps and their associated time series. Next, density is calculated for each time point and high-density points are clustered to identify a small set of high traffic nodes. We validated our method using simulations and then implemented it on a real data set.
Results
We present a novel approach that captures dynamics within a high dimensional space and also does not use any windowing in contrast to many existing approaches. The approach enables one to characterize and study the time series in a potentially high dimensional space, rather than looking at each component pair separately. Our results show that schizophrenia patients have a lower dynamism compared to healthy controls. In addition, we find patients spend more time in nodes associated with the default mode network and less time in components strongly correlated with auditory and sensorimotor regions. Interestingly, we also found that subjects oscillate between state pairs that show opposite spatial maps, suggesting an oscillatory pattern.
Conclusion
Our proposed method provides a novel approach to analyze the data in its native high dimensional space and can possibly provide new information that is undetectable using other methods.
in Frontiers in Neuroscience: Brain Imaging Methods on April 13, 2021 12:00 AM.
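A rough sketch of the pipeline outlined in the Methods above: ICA component time courses are treated as points in a high-dimensional space, a density value is computed at each time point, and the highest-density points are clustered into a small set of "high traffic" nodes. The k-nearest-neighbour density estimate, the density threshold, and the number of nodes below are assumptions for illustration; the paper's exact choices may differ.

```python
# Minimal sketch of density-based clustering of ICA component time courses:
# each fMRI time point is a vector of component amplitudes, density is
# estimated per time point, and the densest points are clustered into nodes.
# The kNN density estimate and cluster count are illustrative choices.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_timepoints, n_components = 400, 50      # stand-in ICA time courses (T x C)
ts = rng.normal(size=(n_timepoints, n_components))

# Density at each time point: inverse of the mean distance to its k neighbours
k = 10
nn = NearestNeighbors(n_neighbors=k + 1).fit(ts)
dist, _ = nn.kneighbors(ts)               # first column is the point itself
density = 1.0 / dist[:, 1:].mean(axis=1)

# Keep the top 20% highest-density time points and cluster them into nodes
keep = density >= np.quantile(density, 0.8)
nodes = KMeans(n_clusters=5, n_init=10, random_state=0).fit(ts[keep])
print("node centroids shape:", nodes.cluster_centers_.shape)  # (5, 50)
```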
• #### Regulation of Glutamate, GABA and Dopamine Transporter Uptake, Surface Mobility and Expression
Neurotransmitter transporters limit spillover between synapses and maintain the extracellular neurotransmitter concentration at low yet physiologically meaningful levels. They also exert a key role in providing precursors for neurotransmitter biosynthesis. In many cases, neurons and astrocytes contain a large intracellular pool of transporters that can be redistributed and stabilized in the plasma membrane following activation of different signaling pathways. This means that the uptake capacity of the brain neuropil for different neurotransmitters can be dynamically regulated over the course of minutes, as an indirect consequence of changes in neuronal activity, blood flow, cell-to-cell interactions, etc. Here we discuss recent advances in the mechanisms that control the cell membrane trafficking and biophysical properties of transporters for the excitatory, inhibitory and modulatory neurotransmitters glutamate, GABA, and dopamine.
in Frontiers in Cellular Neuroscience on April 13, 2021 12:00 AM.
• #### Predicting Dementia With Prefrontal Electroencephalography and Event-Related Potential
Objective: To examine whether prefrontal electroencephalography (EEG) can be used for screening dementia.
Methods: We estimated the global cognitive decline using the results of the Mini-Mental State Examination (MMSE), measurements of brain activity from resting-state EEG, responses elicited by auditory stimulation [sensory event-related potential (ERP)], and selective attention tasks (selective-attention ERP) from 122 elderly participants (dementia, 35; control, 87). We investigated the association between MMSE and each EEG/ERP variable by using Pearson’s correlation coefficient and performing univariate linear regression analysis. Kernel density estimation was used to examine the distribution of each EEG/ERP variable in the dementia and non-dementia groups. Both univariate and multiple logistic regression analyses with the estimated odds ratios were conducted to assess the associations between the EEG/ERP variables and dementia prevalence. To develop the predictive models, five-fold cross-validation was applied to multiple classification algorithms.
Results: Most prefrontal EEG/ERP variables, previously known to be associated with cognitive decline, show correlations with the MMSE score (strongest correlation |r| = 0.68). Although variables such as the frontal asymmetry of the resting-state EEG are not well correlated with the MMSE score, they indicate risk factors for dementia. The selective-attention ERP and resting-state EEG variables outperform the MMSE scores in dementia prediction (areas under the receiver operating characteristic curve of 0.891, 0.824, and 0.803, respectively). In addition, combining EEG/ERP variables and MMSE scores improves the model predictive performance, whereas adding demographic risk factors does not improve the prediction accuracy.
Conclusion: Prefrontal EEG markers outperform MMSE scores in predicting dementia, and additional prediction accuracy is expected when combining them with MMSE scores.
Significance: Prefrontal EEG is effective for screening dementia when used independently or in combination with MMSE.
in Frontiers in Ageing Neuroscience on April 13, 2021 12:00 AM.
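The model-development step described above (five-fold cross-validation over multiple classification algorithms on EEG/ERP features) follows a standard pattern; a generic Python/scikit-learn sketch is given below. The synthetic feature matrix and the three candidate classifiers are assumptions for illustration, not the study's actual feature set or algorithms.

```python
# Minimal sketch: five-fold cross-validated AUC for several candidate
# classifiers on tabular EEG/ERP features, mirroring the model-selection
# step described above. Features and classifiers are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# Stand-in for prefrontal EEG/ERP features (122 participants in the abstract)
X, y = make_classification(n_samples=122, n_features=20, weights=[0.7, 0.3],
                           random_state=0)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM (RBF)": SVC(probability=True, random_state=0),
}
for name, clf in models.items():
    aucs = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: mean AUC = {aucs.mean():.3f} ± {aucs.std():.3f}")
```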
• #### Choriocapillaris Changes Are Correlated With Disease Duration and MoCA Score in Early-Onset Dementia
Purpose: Imaging of the choroid may detect the microvascular changes associated with early-onset dementia (EOD) and may represent an indicator for detection of the disease. We aimed to analyze the in vivo choriocapillaris (CC) flow density in EOD patients using optical coherence tomography angiography (OCTA) and evaluate the association with its clinical measures.
Methods: This cross-sectional study used the OCTA to image and analyze the choriocapillaris (CC) of 25 EOD patients and 20 healthy controls. Choriocapillaris flow density in the 3 mm area and 6 mm area was measured by an inbuilt algorithm in the OCT tool. Brain volume using magnetic resonance imaging and cognitive assessment was done and recorded.
Results: Significantly reduced capillary flow density of the choriocapillaris was seen in EOD patients when compared to healthy controls in the 3.0 mm (P = 0.001) and 6.0 mm (P < 0.001) area respectively. Montreal Cognitive Assessment (MoCA) scores in EOD patients positively correlated with choriocapillaris flow density in the 3 mm area (Rho = 0.466, P = 0.021). Disease duration of EOD patients also negatively correlated with choriocapillaris density in the 3 mm area (Rho = −0.497, P = 0.008).
Discussion: Our report suggests that choriocapillaris damage may be a potential indicator of early-onset dementia. Microvascular impairment may be involved in the early phase of dementia without aging playing a role in its impairment.
Clinical Trial Registration: www.ClinicalTrials.gov, ChiCTR2000041386.
in Frontiers in Ageing Neuroscience on April 13, 2021 12:00 AM.
• #### Mechanism of White Matter Injury and Promising Therapeutic Strategies of MSCs After Intracerebral Hemorrhage
Intracerebral hemorrhage (ICH) is the most fatal subtype of stroke with high disability and high mortality rates, and there is no effective treatment. The predilection site of ICH is in the area of the basal ganglia and internal capsule (IC), where exist abundant white matter (WM) fiber tracts, such as the corticospinal tract (CST) in the IC. Proximal or distal white matter injury (WMI) caused by intracerebral parenchymal hemorrhage is closely associated with poor prognosis after ICH, especially motor and sensory dysfunction. The pathophysiological mechanisms involved in WMI are quite complex and still far from clear. In recent years, the neuroprotection and repairment capacity of mesenchymal stem cells (MSCs) has been widely investigated after ICH. MSCs exert many unique biological effects, including self-recovery by producing growth factors and cytokines, regenerative repair, immunomodulation, and neuroprotection against oxidative stress, providing a promising cellular therapeutic approach for the treatment of WMI. Taken together, our goal is to discuss the characteristics of WMI following ICH, including the mechanism and potential promising therapeutic targets of MSCs, aiming at providing new clues for future therapeutic strategies.
in Frontiers in Ageing Neuroscience on April 13, 2021 12:00 AM.
• #### Reduced Retinal Microvascular Perfusion in Patients With Stroke Detected by Optical Coherence Tomography Angiography
Currently there is a shortage of biomarkers for stroke, one of the leading causes of death and disability in aging populations. Retinal vessels offer a unique and accessible “window” to study the microvasculature in vivo. However, the relationship between the retinal microvasculature and stroke is not entirely clear. To investigate the retinal microvascular characteristics in stroke, we recruited patients with stroke and age-matched control subjects from a tertiary hospital in China. The macular vessel density (VD) in the superficial capillary plexus (SCP) and deep capillary plexus (DCP), foveal avascular zone (FAZ) metrics, and optical coherence tomography angiography (OCTA)-measured optic disc VD were recorded for analysis. A total of 189 patients with stroke and 195 control subjects were included. After adjusting for sex, visual acuity, systolic and diastolic blood pressure, a history of smoking, and levels of glycated hemoglobin (HbA1c), cholesterol, and high-density lipoprotein (HDL), the macular VD of SCP and DCP in all sectors was decreased in patients with stroke. In the stroke group, the VD around the FAZ and the VD of the optic disk were lower. Logistic regression found the parafovea-superior-hemi VD of DCP > 54.53% [odds ratio (OR): 0.169] as a protective factor against stroke. Using the integration of all OCTA parameters and traditional risk factors, the area under the receiver operating characteristic curve (AUC) for distinguishing patients with stroke was 0.962, with a sensitivity of 0.944 and a specificity of 0.871. Our study demonstrates that the retinal VD is decreased in patients with stroke independently of the traditional risk factors of stroke, which may shed light on the monitoring of stroke using the retinal microvascular parameters.
in Frontiers in Ageing Neuroscience on April 13, 2021 12:00 AM.
• #### Phenotyping Neuropsychiatric Symptoms Profiles of Alzheimer’s Disease Using Cluster Analysis on EEG Power
Background: There has been an increasing interest in studying electroencephalogram (EEG) as a biomarker of Alzheimer’s disease but the association between EEG signals and patients’ neuropsychiatric symptoms remains unclear. We studied EEG signals of patients with Alzheimer’s disease to explore the associations between patients’ neuropsychiatric symptoms and clusters of patients based on their EEG powers.
Methods: A total of 69 patients with mild Alzheimer’s disease (the Clinical Dementia Rating = 1) were enrolled and their EEG signals from 19 channels/electrodes were recorded in three sessions for each patient. The EEG power was calculated by Fourier transform for the four frequency bands (beta: 13–40 Hz, alpha: 8–13 Hz, theta: 4–8 Hz, and delta: <4 Hz). We performed K-means cluster analysis to classify the 69 patients into two distinct groups by the log-transformed EEG powers (4 frequency bands × 19 channels) for the three EEG sessions. In each session, both clusters were compared with each other to assess the differences in their behavioral/psychological symptoms in terms of the Neuropsychiatric Inventory (NPI) score.
Results: While EEG band powers were highly consistent across all three sessions before clustering, EEG band powers were different between the two clusters in each session, especially for the delta waves. The delta band powers differed significantly between the two clusters in most channels across the three sessions. Patients’ demographics and cognitive function were not different between both clusters. However, their behavioral/psychological symptoms were different between the two clusters classified based on EEG powers. A higher NPI score was associated with the clustering of higher EEG powers.
Conclusion: The present study suggests that EEG power correlates to behavioral and psychological symptoms among patients with mild Alzheimer’s disease. The clustering approach of EEG signals may provide a novel and cost-effective method to differentiate the severity of neuropsychiatric symptoms and/or predict the prognosis for Alzheimer’s patients.
in Frontiers in Ageing Neuroscience on April 13, 2021 12:00 AM.
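The clustering step described in the Methods above (K-means with k = 2 on log-transformed EEG powers, 4 bands × 19 channels per patient) can be sketched as follows. The band-power values and NPI scores here are synthetic placeholders, and the Mann-Whitney U test is one reasonable choice for comparing clusters rather than the paper's exact statistic.

```python
# Minimal sketch: K-means (k = 2) on log-transformed EEG band powers
# (4 bands x 19 channels = 76 features per patient), then comparison of
# NPI scores between the two clusters. Data are synthetic placeholders and
# the Mann-Whitney U test is one reasonable (assumed) comparison method.
import numpy as np
from sklearn.cluster import KMeans
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
n_patients, n_bands, n_channels = 69, 4, 19
powers = rng.lognormal(mean=0.0, sigma=0.5, size=(n_patients, n_bands * n_channels))
features = np.log(powers)                 # log-transformed band powers

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

# Synthetic NPI scores, made higher in cluster 1 purely for illustration
npi = rng.poisson(lam=8, size=n_patients) + 4 * labels
stat, p = mannwhitneyu(npi[labels == 0], npi[labels == 1])
print(f"cluster sizes: {np.bincount(labels)}, Mann-Whitney p = {p:.3f}")
```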
• #### Chemogenetic manipulation of astrocytic signaling in the basolateral amygdala reduces binge‐like alcohol consumption in male mice
Binge ethanol activates astrocytes resulting in an increase in glial fibrillary acidic protein (GFAP) immunoreactivity (red) in the basolateral amygdala. When astrocytic receptor signaling was activated using GFAP‐Gq‐GPCR designer receptors exclusively activated by designer drugs, it decreased ethanol consumption and ameliorated alcohol‐induced depression of glutamate in the amygdala. Abstract Binge drinking is a common occurrence in the United States, but a high concentration of alcohol in the blood has been shown to have reinforcing and reciprocal effects on the neuroimmune system in both dependent and non‐dependent scenarios. The first part of this study examined alcohol's effects on the astrocytic response in the central amygdala and basolateral amygdala (BLA) in a non‐dependent model. C57BL/6J mice were given access to either ethanol, water, or sucrose during a “drinking in the dark” paradigm, and astrocyte number and astrogliosis were measured using immunohistochemistry. Results indicate that non‐dependent consumption increased glial fibrillary acidic protein (GFAP) density but not the number of GFAP+ cells, suggesting that non‐dependent ethanol is sufficient to elicit astrocyte activation. The second part of this study examined how astrocytes impacted behaviors and the neurochemistry related to alcohol using the chemogenetic tool, DREADDs (designer receptors exclusively activated by designer drugs). Transgenic GFAP‐hM3Dq mice were administered clozapine N‐oxide both peripherally, affecting the entire central nervous system (CNS), or directly into the BLA. In both instances, GFAP‐Gq‐signaling activation significantly reduced ethanol consumption and correlating blood ethanol concentrations. However, GFAP‐Gq‐DREADD activation throughout the CNS had more broad effects resulting in decreased locomotor activity and sucrose consumption. More targeted GFAP‐Gq‐signaling activation in the BLA only impacted ethanol consumption. Finally, a glutamate assay revealed that after GFAP‐Gq‐signaling activation glutamate concentrations in the amygdala were partially normalized to control levels. Altogether, these studies support the theory that astrocytes represent a viable target for alcohol use disorder therapies.
in Journal of Neuroscience Research on April 12, 2021 08:49 PM.
• #### Knowledge and compliance with Covid-19 infection prevention and control measures among health workers in regional referral hospitals in northern Uganda: a cross-sectional online survey [version 2; peer review: 1 approved with reservations]
doi:10.12688/f1000research.51333.2
in F1000Research on April 12, 2021 04:07 PM.
• #### Image analysis method for heterogeneity and porosity characterization of biomimetic hydrogels [version 2; peer review: 1 approved with reservations]
doi:10.12688/f1000research.27372.2
in F1000Research on April 12, 2021 03:08 PM.
• #### Structural differences in the hippocampus and amygdala of behaviorally inhibited macaque monkeys
Abstract Behavioral inhibition is a temperamental disposition to react warily when confronted by unfamiliar people, objects, or events. Behaviorally inhibited children are at greater risk of developing anxiety disorders later in life. Previous studies reported that individuals with a history of childhood behavioral inhibition exhibit abnormal activity in the hippocampus and amygdala. However, few studies have investigated the structural differences that may underlie these functional abnormalities. In this exploratory study, we evaluated rhesus monkeys exhibiting a phenotype consistent with human behavioral inhibition. We performed quantitative neuroanatomical analyses that cannot be performed in humans including estimates of the volume and neuron number of distinct hippocampal regions and amygdala nuclei in behaviorally inhibited and control rhesus monkeys. Behaviorally inhibited monkeys had larger volumes of the rostral third of the hippocampal field CA3, smaller volumes of the rostral third of CA2, and smaller volumes of the accessory basal nucleus of the amygdala. Furthermore, behaviorally inhibited monkeys had fewer neurons in the rostral third of CA2. These structural differences may contribute to the functional abnormalities in the hippocampus and amygdala of behaviorally inhibited individuals. These structural findings in monkeys are consistent with a reduced modulation of amygdala activity via prefrontal cortex projections to the accessory basal nucleus. Given the putative roles of the amygdala in affective processing, CA3 in associative learning and CA2 in social memory, increased amygdala and CA3 activity, and diminished CA2 structure and function, may be associated with increased social anxiety and the heritability of behavioral inhibition. The findings from this exploratory study compel follow‐up investigations with larger sample sizes and additional analyses to provide greater insight and more definitive answers regarding the neurobiological bases of behavioral inhibition.
in Hippocampus on April 12, 2021 02:59 PM.
• #### Transition in successors’ behavior and mindset while managing long-lived small and medium-sized manufacturing enterprises: a qualitative study [version 1; peer review: awaiting peer review]
doi:10.12688/f1000research.52226.1
in F1000Research on April 12, 2021 02:48 PM.
• #### Use of the bar chart/S-curve and computerized precedence diagram method on scheduling and controlling building construction projects by contractors: a cross-sectional study [version 1; peer review: awaiting peer review]
doi:10.12688/f1000research.51646.1
in F1000Research on April 12, 2021 02:11 PM.
• #### Sperm chemotaxis in marine species is optimal at physiological flow rates according to theory of filament surfing
by Steffen Lange, Benjamin M. Friedrich
Sperm of marine invertebrates have to find eggs cells in the ocean. Turbulent flows mix sperm and egg cells up to the millimeter scale; below this, active swimming and chemotaxis become important. Previous work addressed either turbulent mixing or chemotaxis in still water. Here, we present a general theory of sperm chemotaxis inside the smallest eddies of turbulent flow, where signaling molecules released by egg cells are spread into thin concentration filaments. Sperm cells ‘surf’ along these filaments towards the egg. External flows make filaments longer, but also thinner. These opposing effects set an optimal flow strength. The optimum predicted by our theory matches flow measurements in shallow coastal waters. Our theory quantitatively agrees with two previous fertilization experiments in Taylor-Couette chambers and provides a mechanistic understanding of these early experiments. ‘Surfing along concentration filaments’ could be a paradigm for navigation in complex environments in the presence of turbulent flow.
in PLoS Computational Biology on April 12, 2021 02:00 PM.
• #### Mistakes can stabilise the dynamics of rock-paper-scissors games
by Maria Kleshnina, Sabrina S. Streipert, Jerzy A. Filar, Krishnendu Chatterjee
A game of rock-paper-scissors is an interesting example of an interaction where none of the pure strategies strictly dominates all others, leading to a cyclic pattern. In this work, we consider an unstable version of rock-paper-scissors dynamics and allow individuals to make behavioural mistakes during the strategy execution. We show that such an assumption can break a cyclic relationship leading to a stable equilibrium emerging with only one strategy surviving. We consider two cases: completely random mistakes when individuals have no bias towards any strategy and a general form of mistakes. Then, we determine conditions for a strategy to dominate all other strategies. However, given that individuals who adopt a dominating strategy are still prone to behavioural mistakes in the observed behaviour, we may still observe extinct strategies. That is, behavioural mistakes in strategy execution stabilise evolutionary dynamics leading to an evolutionary stable and, potentially, mixed co-existence equilibrium.
in PLoS Computational Biology on April 12, 2021 02:00 PM.
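As a toy illustration of the idea in the abstract above (execution mistakes reshaping rock-paper-scissors dynamics), the sketch below runs discrete-time replicator dynamics with an execution-error matrix: a player intending strategy i actually plays j with probability Q[i, j], so the effective payoff matrix becomes Q A Qᵀ. The payoff values, error rates, and update scheme are arbitrary choices for demonstration and are not the authors' exact model or results.

```python
# Toy replicator dynamics for rock-paper-scissors with execution mistakes.
# Payoffs, error probabilities, and the Euler update are illustrative only.
import numpy as np

# "Unstable" RPS variant: losing costs more than winning gains
A = np.array([[ 0.0, -2.0,  1.0],   # rock vs (rock, paper, scissors)
              [ 1.0,  0.0, -2.0],   # paper
              [-2.0,  1.0,  0.0]])  # scissors
Q = np.array([[0.90, 0.05, 0.05],   # biased execution-error probabilities
              [0.20, 0.70, 0.10],
              [0.10, 0.10, 0.80]])

A_eff = Q @ A @ Q.T                  # expected payoff of intending i vs intending j
x = np.array([0.5, 0.3, 0.2])        # initial frequencies of intended strategies
dt = 0.01
for _ in range(20000):
    fitness = A_eff @ x
    x = np.clip(x + dt * x * (fitness - x @ fitness), 0.0, None)  # replicator step
    x /= x.sum()
print("long-run intended-strategy frequencies:", np.round(x, 3))
```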
• #### Peroxiredoxin alleviates the fitness costs of imidacloprid resistance in an insect pest of rice
by Rui Pang, Ke Xing, Longyu Yuan, Zhikun Liang, Meng Chen, Xiangzhao Yue, Yi Dong, Yan Ling, Xionglei He, Xianchun Li, Wenqing Zhang
Chemical insecticides have been heavily employed as the most effective measure for control of agricultural and medical pests, but evolution of resistance by pests threatens the sustainability of this approach. Resistance-conferring mutations sometimes impose fitness costs, which may drive subsequent evolution of compensatory modifier mutations alleviating the costs of resistance. However, how modifier mutations evolve and function to overcome the fitness cost of resistance still remains unknown. Here we show that overexpression of P450s not only confers imidacloprid resistance in the brown planthopper, Nilaparvata lugens, the most voracious pest of rice, but also leads to elevated production of reactive oxygen species (ROS) through metabolism of imidacloprid and host plant compounds. The inevitable production of ROS incurs a fitness cost to the pest, which drives the increase or fixation of the compensatory modifier allele T65549 within the promoter region of N. lugens peroxiredoxin (NlPrx) in the pest populations. T65549 allele in turn upregulates the expression of NlPrx and thus increases resistant individuals’ ability to clear the cost-incurring ROS of any source. The frequent involvement of P450s in insecticide resistance and their capacity to produce ROS while metabolizing their substrates suggest that peroxiredoxin or other ROS-scavenging genes may be among the common modifier genes for alleviating the fitness cost of insecticide resistance.
in PLoS Biology on April 12, 2021 02:00 PM.
• #### Molecular basis of F-actin regulation and sarcomere assembly via myotilin
by Julius Kostan, Miha Pavšič, Vid Puž, Thomas C. Schwarz, Friedel Drepper, Sibylle Molt, Melissa Ann Graewert, Claudia Schreiner, Sara Sajko, Peter F. M. van der Ven, Adekunle Onipe, Dmitri I. Svergun, Bettina Warscheid, Robert Konrat, Dieter O. Fürst, Brigita Lenarčič, Kristina Djinović-Carugo
Sarcomeres, the basic contractile units of striated muscle cells, contain arrays of thin (actin) and thick (myosin) filaments that slide past each other during contraction. The Ig-like domain-containing protein myotilin provides structural integrity to Z-discs—the boundaries between adjacent sarcomeres. Myotilin binds to Z-disc components, including F-actin and α-actinin-2, but the molecular mechanism of binding and implications of these interactions on Z-disc integrity are still elusive. To illuminate them, we used a combination of small-angle X-ray scattering, cross-linking mass spectrometry, and biochemical and molecular biophysics approaches. We discovered that myotilin displays conformational ensembles in solution. We generated a structural model of the F-actin:myotilin complex that revealed how myotilin interacts with and stabilizes F-actin via its Ig-like domains and flanking regions. Mutant myotilin designed with impaired F-actin binding showed increased dynamics in cells. Structural analyses and competition assays uncovered that myotilin displaces tropomyosin from F-actin. Our findings suggest a novel role of myotilin as a co-organizer of Z-disc assembly and advance our mechanistic understanding of myotilin’s structural role in Z-discs.
in PLoS Biology on April 12, 2021 02:00 PM.
• #### A new analysis approach for single nephron GFR in intravital microscopy of mice [version 2; peer review: 1 approved with reservations, 1 not approved]
doi:10.12688/f1000research.26888.2
in F1000Research on April 12, 2021 01:43 PM.
• #### Cigarette consumption in adult dual users of cigarettes and e-cigarettes: a review of the evidence, including new results from the PATH study [version 2; peer review: 1 approved with reservations]
doi:10.12688/f1000research.24589.2
in F1000Research on April 12, 2021 01:26 PM.
• #### Characterization of the Insular Role in Cardiac Function through Intracranial Electrical Stimulation of the Human Insula
Objective The link between brain function and cardiovascular dynamics is an important issue yet to be elucidated completely. The insula is a neocortical brain area that is thought to have a cardiac chronotropic regulatory function, but its role in cardiac contractility is unknown. We aimed to analyze the variability in heart rate and cardiac contractility after functional activation of different insular regions through direct electrical stimulation (E‐stim) in humans. Methods This was an observational, prospective study, including patients admitted for stereo‐electroencephalographic recording because of refractory epilepsy, in whom the insular cortex was implanted. Patients with anatomical or electrophysiological insular abnormalities and those in whom E‐stim produced subjective symptoms were excluded. Variations in heart rate (HR), stroke volume (SV), and cardiac output (CO) were analyzed during insular E‐stim and compared with control E‐stim of non‐eloquent brain regions and sham stimulations. Results Ten patients were included, 5 implanted in the right insula (52 E‐stim) and 5 in the left (37 E‐stim). Demographic and clinical characteristics of both groups were similar. E‐stim of both right and left insulas induced a significant decrease of the CO and HR, and an increase of the SV. E‐stim of control electrodes and sham stimulations were not associated with variations in cardiac function. Blood pressure and respiratory rate remained unaltered. Interpretation Our results suggest a direct chronotropic and inotropic cardiac depressor function of the right and left insulas. The evidence of an insular direct cardiac regulatory function might open a path in the prevention or treatment of heart failure, arrhythmias, and sudden unexpected death in epilepsy. ANN NEUROL 2021
in Annals of Neurology on April 12, 2021 12:54 PM.
• #### Temporal development of research publications on SARS-CoV-2 and COVID-19 [version 1; peer review: awaiting peer review]
doi:10.12688/f1000research.42122.1
in F1000Research on April 12, 2021 12:41 PM.
• #### Health related quality of life in COVID-19 survivors discharged from acute hospitals: results of a short-form 36-item survey [version 1; peer review: awaiting peer review]
doi:10.12688/f1000research.50781.1
in F1000Research on April 12, 2021 12:37 PM.
• #### PUblications Metadata Augmentation (PUMA) pipeline [version 2; peer review: 2 approved with reservations]
doi:10.12688/f1000research.25484.2
in F1000Research on April 12, 2021 12:27 PM.
• #### Length-scale-dependent elasticity in DNA from coarse-grained and all-atom models
Author(s): Enrico Skoruppa, Aderik Voorspoels, Jocelyne Vreede, and Enrico Carlon
We investigate the influence of nonlocal couplings on the torsional and bending elasticities of DNA. Such couplings have been observed in the past by several simulation studies. Here, we use a description of DNA conformations based on the variables tilt, roll, and twist. Our analysis of both coarse-...
[Phys. Rev. E 103, 042408] Published Mon Apr 12, 2021
in Physical Review E: Biological physics on April 12, 2021 10:00 AM.
• #### Robust cortical criticality and diverse dynamics resulting from functional specification
Author(s): Lei Gu and Ruqian Wu
Despite the recognition of the layered structure and evident criticality in the cortex, how the specification of input, output, and computational layers affects the self-organized criticality has not been much explored. By constructing heterogeneous structures with a well-accepted model of leaky neu...
[Phys. Rev. E 103, 042407] Published Mon Apr 12, 2021
in Physical Review E: Biological physics on April 12, 2021 10:00 AM.
• #### Exploring the critical points of teaching STEM subjects in the time of COVID 19: the experience of the course "Microscopy Techniques for Forensic Biology" [version 2; peer review: 1 approved with reservations]
doi:10.12688/f1000research.28455.2
in F1000Research on April 12, 2021 08:20 AM.
• #### Synaptic inputs to broad thorny ganglion cells in macaque retina
in Journal of Comparative Neurology on April 12, 2021 08:10 AM.
• #### Changes in concentrations of NMDA receptor subunit GluN2B, Arc and syntaxin‐1 in dorsal hippocampus Schaffer collateral synapses in a rat learned helplessness model of depression
Long‐term stress is linked to depression in humans. We used a rat learned helplessness model of depression to study the effects on synaptic proteins important for plasticity. With quantitative immunogold EM we show an increase in NMDA receptor subunit GluN2B in the PSD of excitatory synapses in CA1 area of hippocampus. Abstract Major depressive disorder involves changes in synaptic structure and function, but the molecular underpinnings of these changes are still not established. In an initial pilot experiment, whole‐brain synaptosome screening with quantitative western blotting was performed to identify synaptic proteins that may show concentration changes in a congenital rat learned helplessness model of depression. We found that the N‐methyl‐D‐aspartate receptor (NMDAR) subunits GluN2A/GluN2B, activity‐regulated cytoskeleton‐associated protein (Arc) and syntaxin‐1 showed significant concentration differences between congenitally learned helpless (LH) and non‐learned helpless (NLH) rats. Having identified these three proteins, we then performed more elaborate quantitative immunogold electron microscopic analyses of the proteins in a specific synapse type in the dorsal hippocampus: the Schaffer collateral synapse in the CA1 region. We expanded the setup to include also un‐stressed wild‐type (WT) rats. The concentrations of the proteins in the LH and NLH groups were compared to WT animals. In this specific synapse, we found that the concentration of NMDARs was increased in postsynaptic spines in both LH and NLH rats. The concentration of Arc was significantly increased in postsynaptic densities in LH animals as well as in presynaptic cytoplasm of NLH rats. The concentration of syntaxin‐1 was significantly increased in both presynaptic terminals and postsynaptic spines in LH animals, while pre‐ and postsynaptic syntaxin‐1 concentrations were significantly decreased in NLH animals. These protein changes suggest pathways by which synaptic plasticity may be increased in dorsal hippocampal Schaffer collateral synapses during depression, corresponding to decreased synaptic stability. This article is protected by copyright. All rights reserved.
in Journal of Comparative Neurology on April 12, 2021 07:53 AM.
• #### The association of socio-demographic and environmental factors on childhood diarrhea in Cambodia [version 3; peer review: 1 approved with reservations]
doi:10.12688/f1000research.23246.3
in F1000Research on April 12, 2021 07:52 AM.
• #### Brain proteome-wide association study implicates novel proteins in depression pathogenesis
Nature Neuroscience, Published online: 12 April 2021; doi:10.1038/s41593-021-00832-6
Wingo et al. integrate depression GWAS results with human brain proteomes to perform proteome-wide association studies followed by Mendelian randomization. They identify 25 proteins as potential causal mediators of depression, of which 20 are new.
in Nature Neuroscience on April 12, 2021 12:00 AM.
• #### Flexible modulation of sequence generation in the entorhinal–hippocampal system
Nature Neuroscience, Published online: 12 April 2021; doi:10.1038/s41593-021-00831-7
McNamee et al. develop a theory of entorhinal–hippocampal processing. Distributed entorhinal input drives hippocampal activity between distinct statistical and dynamical regimes of activity, thereby unifying several empirical observations.
in Nature Neuroscience on April 12, 2021 12:00 AM.
• #### Molecular mechanisms of brain water transport
Nature Reviews Neuroscience, Published online: 12 April 2021; doi:10.1038/s41583-021-00454-8
The impairment of brain fluid homeostasis is a feature of various conditions, highlighting the need to better understand brain water transport for drug development. Here, Nanna MacAulay reviews the molecular mechanisms underlying transmembrane water movement in neurons and glia and across brain barriers, emphasizing the part played by water cotransporters in this process.
in Nature Reviews on April 12, 2021 12:00 AM.
• #### Unusual high-field metal in a Kondo insulator
Nature Physics, Published online: 12 April 2021; doi:10.1038/s41567-021-01216-0
Transport and thermodynamic measurements on strongly correlated Kondo metal YbB12 reveal the coexistence of charged and charge-neutral fermions in the material and the crucial role played by the latter in the quantum oscillations of resistivity.
in Nature Physics on April 12, 2021 12:00 AM.
• #### Embryonic tissues as active foams
Nature Physics, Published online: 12 April 2021; doi:10.1038/s41567-021-01215-1
A computational framework draws analogy with foams to offer a comprehensive picture of how cell behaviours influence fluidization in embryonic tissues, highlighting the role of tension fluctuations in regulating tissue rigidity.
in Nature Physics on April 12, 2021 12:00 AM.
• #### Measurement of the proton spin structure at long distances
Nature Physics, Published online: 12 April 2021; doi:10.1038/s41567-021-01198-z
Measurements of the proton’s spin structure in experiments scattering a polarized electron beam off polarized protons in regions of low momentum transfer squared test predictions from chiral effective field theory of the strong interaction.
in Nature Physics on April 12, 2021 12:00 AM.
• #### Large-scale neuromorphic optoelectronic computing with a reconfigurable diffractive processing unit
Nature Photonics, Published online: 12 April 2021; doi:10.1038/s41566-021-00796-w
Linear diffractive structures are by themselves passive systems but researchers here exploit the non-linearity of a photodetector to realize a reconfigurable diffractive ‘processing’ unit. High-speed image and video recognition is demonstrated.
in Nature Photonics on April 12, 2021 12:00 AM.
• #### Identification of Two Novel Peptides That Inhibit α-Synuclein Toxicity and Aggregation
Aggregation of α-synuclein (αSyn) into proteinaceous deposits is a pathological hallmark of a range of neurodegenerative diseases including Parkinson’s disease (PD). Numerous lines of evidence indicate that the accumulation of toxic oligomeric and prefibrillar αSyn species may underpin the cellular toxicity and spread of pathology between cells. Therefore, aggregation of αSyn is considered a priority target for drug development, as aggregation inhibitors are expected to reduce αSyn toxicity and serve as therapeutic agents. Here, we used the budding yeast S. cerevisiae as a platform for the identification of short peptides that inhibit αSyn aggregation and toxicity. A library consisting of approximately one million peptide variants was utilized in two high-throughput screening approaches for isolation of library representatives that reduce αSyn-associated toxicity and aggregation. Seven peptides were isolated that were able to suppress specifically αSyn toxicity and aggregation in living cells. Expression of the peptides in yeast reduced the accumulation of αSyn-induced reactive oxygen species and increased cell viability. Next, the peptides were chemically synthesized and probed for their ability to modulate αSyn aggregation in vitro. Two synthetic peptides, K84s and K102s, of 25 and 19 amino acids, respectively, significantly inhibited αSyn oligomerization and aggregation at sub-stoichiometric molar ratios. Importantly, K84s reduced αSyn aggregation in human cells. These peptides represent promising αSyn aggregation antagonists for the development of future therapeutic interventions.
in Frontiers in Molecular Neuroscience on April 12, 2021 12:00 AM.
• #### Editorial: From Oxidative Stress to Cognitive Decline - Towards Novel Therapeutic Approaches
in Frontiers in Molecular Neuroscience on April 12, 2021 12:00 AM.
• #### Artemin Is Upregulated by TrkB Agonist and Protects the Immature Retina Against Hypoxic-Ischemic Injury by Suppressing Neuroinflammation and Astrogliosis
Hypoxic-ischemia (HI) is a major cause of acquired visual impairment in children from developed countries. Previous studies have shown that systemic administration of 7,8-dihydroxyflavone (DHF), a selective tropomyosin receptor kinase B (TrkB) agonist, provides long-term neuroprotection against HI injury in an immature retina. However, the target genes and the mechanisms of the neuroprotective effects of TrkB signaling are not known. In the present study, we induced an HI retinal injury through unilateral common carotid artery ligation followed by 8% oxygen for 2 h in P7 rat pups. DHF was administered intraperitoneally 2 h before and 18 h after the HI injury. A polymerase chain reaction (PCR) array was used to identify the target genes upregulated after the DHF treatment, which was then confirmed with quantitative real-time reverse transcriptase PCR and a western blot. Effects of the downstream mediator of DHF were assessed using an intravitreal injection of neutralizing antibody 4 h after DHF administration (24 h after HI). Meanwhile, the target protein was injected into the vitreous 24 h after HI to validate its protective effect when exogenously supplemented. We found that systemic DHF treatment after HI significantly increased the expression of the artemin (ARTN) gene and protein at P8 and P10, respectively. The neuroprotective effects of DHF were inhibited after the ARTN protein blockade, with an increase in neuroinflammation and astrogliosis. ARTN treatment showed long-term protection against HI injury at both the histopathological and functional levels. The neuroprotective effects of ARTN were related to a decrease in microglial activation at P17 and attenuation of astrogliosis at P29. ARTN enhances phosphorylation of RET, ERK, and JNK, but not AKT or p38 in the immature retina. Altogether, these results suggest that the neuroprotective effect of a TrkB agonist is partially exerted through a mechanism that involves ARTN because the protective effect is ameliorated by ARTN sequestration. ARTN treatment after HI injury protects the immature retina by attenuating late neuroinflammation and astrogliosis in the immature retina relating to the ARTN/RET/JNK/ERK signaling pathway. ARTN may be a strategy by which to provide long-term protection in the immature retina against HI injury.
in Frontiers in Molecular Neuroscience on April 12, 2021 12:00 AM.
• #### Daytime Restricted Feeding Affects Day–Night Variations in Mouse Cerebellar Proteome
The cerebellum harbors a circadian clock that can be shifted by scheduled mealtime and participates in behavioral anticipation of food access. Large-scale two-dimensional difference gel electrophoresis (2D-DIGE) combined with mass spectrometry was used to identify day–night variations in the cerebellar proteome of mice fed either during daytime or nighttime. Experimental conditions led to modified expression of 89 cerebellar proteins contained in 63 protein spots. Five and 33 spots were changed respectively by time-of-day or feeding conditions. Strikingly, several proteins of the heat-shock protein family (i.e., Hsp90aa1, 90ab1, 90b1, and Hspa2, 4, 5, 8, 9) were down-regulated in the cerebellum of daytime food-restricted mice. This was also the case for brain fatty acid protein (Fabp7) and enzymes involved in oxidative phosphorylation (Ndufs1) or folate metabolism (Aldh1l1). In contrast, aldolase C (Aldoc or zebrin II) and pyruvate carboxylase (Pc), two enzymes involved in carbohydrate metabolism, and vesicle-fusing ATPase (Nsf) were up-regulated during daytime restricted feeding, possibly reflecting increased neuronal activity. Significant feeding × time-of-day interactions were found for changes in the intensity of 20 spots. Guanine nucleotide-binding protein G(o) subunit alpha (Gnao1) was more expressed in the cerebellum before food access. Neuronal calcium-sensor proteins [i.e., parvalbumin (Pvalb) and visinin-like protein 1 (Vsnl1)] were inversely regulated in daytime food-restricted mice, compared to control mice fed at night. Furthermore, expression of three enzymes modulating the circadian clockwork, namely heterogeneous nuclear ribonucleoprotein K (Hnrnpk), serine/threonine-protein phosphatases 1 (Ppp1cc and Ppp1cb subunits) and 5 (Ppp5), was differentially altered by daytime restricted feeding. Besides cerebellar proteins affected only by feeding conditions or daily cues, specific changes in protein abundance before food access may be related to behavioral anticipation of food access and/or feeding-induced shift of the cerebellar clockwork.
in Frontiers in Molecular Neuroscience on April 12, 2021 12:00 AM.
• #### Altered Relationship Between Parvalbumin and Perineuronal Nets in an Autism Model
Altered function or presence of inhibitory neurons is documented in autism spectrum disorders (ASD), but the mechanism underlying this alteration is poorly understood. One major subtype of inhibitory neurons altered is the parvalbumin (PV)-containing neurons, with reduced density and intensity in ASD patients and model mice. A subpopulation of PV+ neurons expresses perineuronal nets (PNN). To better understand whether the relationship between PV and PNN is altered in ASD, we measured quantitatively the intensities of PV and PNN in single PV+ neurons in the prelimbic prefrontal cortex (PrL-PFC) of a valproic acid (VPA) model of ASD at different ages. We found a decreased PV intensity but increased PNN intensity in VPA mice. The relationship between PV and PNN intensities is altered in VPA mice, likely due to an “abnormal” subpopulation of neurons with an altered PV-PNN relationship. Furthermore, reducing PNN level using in vivo injection of chondroitinase ABC corrects the PV expression in adult VPA mice. We suggest that the interaction between PV and PNN is disrupted in PV+ neurons in VPA mice, which may contribute to the pathology in ASD.
in Frontiers in Molecular Neuroscience on April 12, 2021 12:00 AM.
• #### The Neuroprotective Beta Amyloid Hexapeptide Core Reverses Deficits in Synaptic Plasticity in the 5xFAD APP/PS1 Mouse Model
Alzheimer’s disease (AD) is the most common cause of dementia in the aging population. Evidence implicates elevated soluble oligomeric Aβ as one of the primary triggers during the prodromic phase leading to AD, effected largely via hyperphosphorylation of the microtubule-associated protein tau. At low, physiological levels (pM-nM), however, oligomeric Aβ has been found to regulate synaptic plasticity as a neuromodulator. Through mutational analysis, we found a core hexapeptide sequence within the N-terminal domain of Aβ (N-Aβcore) accounting for its physiological activity, and subsequently found that the N-Aβcore peptide is neuroprotective. Here, we characterized the neuroprotective potential of the N-Aβcore against dysfunction of synaptic plasticity assessed in ex vivo hippocampal slices from 5xFAD APP/PS1 mice, specifically hippocampal long-term potentiation (LTP) and long-term depression (LTD). The N-Aβcore was shown to reverse impairment in synaptic plasticity in hippocampal slices from 5xFAD APP/PS1 model mice, both for LTP and LTD. The reversal by the N-Aβcore correlated with alleviation of downregulation of hippocampal AMPA-type glutamate receptors in preparations from 5xFAD mice. The action of the N-Aβcore depended upon a critical di-histidine sequence and involved the phosphoinositide-3 (PI3) kinase pathway via mTOR (mammalian target of rapamycin). Together, the present findings indicate that the non-toxic N-Aβcore hexapeptide is not only neuroprotective at the cellular level but is able to reverse synaptic dysfunction in AD-like models, specifically alterations in synaptic plasticity.
in Frontiers in Molecular Neuroscience on April 12, 2021 12:00 AM.
• #### Advances in Carbon-Based Microfiber Electrodes for Neural Interfacing
Neural interfacing devices using penetrating microelectrode arrays have emerged as an important tool in both neuroscience research and medical applications. These implantable microelectrode arrays enable communication between man-made devices and the nervous system by detecting and/or evoking neuronal activities. Recent years have seen rapid development of electrodes fabricated using flexible, ultrathin carbon-based microfibers. Compared to electrodes fabricated using rigid materials and larger cross-sections, these microfiber electrodes have been shown to reduce foreign body responses after implantation, with improved signal-to-noise ratio for neural recording and enhanced resolution for neural stimulation. Here, we review recent progress of carbon-based microfiber electrodes in terms of material composition and fabrication technology. The remaining challenges and future directions for development of these arrays will also be discussed. Overall, these microfiber electrodes are expected to improve the longevity and reliability of neural interfacing devices.
in Frontiers in Neuroscience: Neural Technology on April 12, 2021 12:00 AM.
• #### Mitochondrial Dynamics: A Key Role in Neurodegeneration and a Potential Target for Neurodegenerative Disease
In neurodegenerative diseases, neurodegeneration has been related to several mitochondrial dynamics imbalances such as excessive fragmentation of mitochondria, impaired mitophagy, and blocked mitochondria mitochondrial transport in axons. Mitochondria are dynamic organelles, and essential for energy conversion, neuron survival, and cell death. As mitochondrial dynamics have a significant influence on homeostasis, in this review, we mainly discuss the role of mitochondrial dynamics in several neurodegenerative diseases. There is evidence that several mitochondrial dynamics-associated proteins, as well as related pathways, have roles in the pathological process of neurodegenerative diseases with an impact on mitochondrial functions and metabolism. However, specific pathological mechanisms need to be better understood in order to propose new therapeutic strategies targeting mitochondrial dynamics that have shown promise in recent studies.
in Frontiers in Neuroscience: Neurodegeneration on April 12, 2021 12:00 AM.
• #### Boosting Multilabel Semantic Segmentation for Somata and Vessels in Mouse Brain
Deep convolutional neural networks (DCNNs) are widely utilized for the semantic segmentation of dense nerve tissues from light and electron microscopy (EM) image data; the goal of this technique is to achieve efficient and accurate three-dimensional reconstruction of the vasculature and neural networks in the brain. The success of these tasks heavily depends on the amount, and especially the quality, of the human-annotated labels fed into DCNNs. However, it is often difficult to acquire the gold standard of human-annotated labels for dense nerve tissues; human annotations inevitably contain discrepancies or even errors, which substantially impact the performance of DCNNs. Thus, a novel boosting framework consisting of a DCNN for multilabel semantic segmentation with a customized Dice-logarithmic loss function, a fusion module combining the annotated labels and the corresponding predictions from the DCNN, and a boosting algorithm to sequentially update the sample weights during network training iterations was proposed to systematically improve the quality of the annotated labels; this framework eventually resulted in improved segmentation task performance. The microoptical sectioning tomography (MOST) dataset was then employed to assess the effectiveness of the proposed framework. The result indicated that the framework, even trained with a dataset including some poor-quality human-annotated labels, achieved state-of-the-art performance in the segmentation of somata and vessels in the mouse brain. Thus, the proposed technique of artificial intelligence could advance neuroscience research.
in Frontiers in Neuroscience: Brain Imaging Methods on April 12, 2021 12:00 AM.
• #### Baseline Cerebral Ischemic Core Quantified by Different Automatic Software and Its Predictive Value for Clinical Outcome
Purpose
This study aims to investigate the agreement of three software packages in measuring baseline ischemic core volume (ICV) and penumbra volume (PV), and determine their predictive values for unfavorable clinical outcome in patients with endovascular thrombectomy (EVT).
Methods
Patients with acute ischemic stroke who underwent computed tomographic perfusion (CTP) were recruited. Baseline CTP measurements including ICV and PV were calculated by three software packages of IntelliSpace Portal (ISP), Rapid Processing of Perfusion and Diffusion (RAPID), and fast-processing of ischemic stroke (F-STROKE). All patients received EVT, and the modified Rankin scale (mRS) at 90 days after EVT was assessed to determine the clinical outcomes (favorable: mRS = 0–2; unfavorable: mRS = 3–6). The agreement of CTP measurements among three software packages was determined using intraclass correlation coefficient (ICC). The associations between CTP measurements and unfavorable clinical outcome were analyzed using logistic regression. Receiver operating characteristic curves were conducted to calculate the area under the curve (AUC) of CTP measurements in predicting unfavorable clinical outcome.
Results
Of 223 recruited patients (68.2 ± 11.3 years old; 145 males), 17.0% had unfavorable clinical outcome after EVT. Excellent agreement between F-STROKE and RAPID was found in measuring ICV (ICC 0.965; 95% CI 0.956–0.973) and PV (ICC 0.966; 95% CI 0.956–0.973). ICVs measured by three software packages were significantly associated with unfavorable clinical outcome before (odds ratios 1.012–1.018, all P < 0.01) and after (odds ratios 1.003–1.014, all P < 0.05) adjusted for confounding factors (age, gender, TOAST classification, and NIHSS on admission). In predicting unfavorable clinical outcome, ICV measured by F-STROKE showed similar performance to that measured by RAPID (AUC 0.701 vs. 0.717) but higher performance than that measured by ISP (AUC 0.629).
Conclusions
The software of F-STROKE has excellent agreement with the widely used analysis tool of RAPID in measuring ICV and PV. The ischemic core volume measured by both F-STROKE and RAPID is a stronger predictor for unfavorable clinical outcome after EVT compared to ISP.
in Frontiers in Neuroscience: Brain Imaging Methods on April 12, 2021 12:00 AM.
• #### Prevalence and Therapy Rates for Stuttering, Cluttering, and Developmental Disorders of Speech and Language: Evaluation of German Health Insurance Data
Purpose
To evaluate the prevalence and treatment patterns of speech and language disorders in Germany.
Methods
A retrospective analysis of data collected from 32% of the German population, insured by the statutory German health insurance (AOK, Local Health Care Funds). We used The International Statistical Classification of Diseases and Related Health Problems, 10th revision, German Modification (ICD-10 GM) codes for stuttering (F98.5), cluttering (F98.6), and developmental disorders of speech and language (F80) to identify prevalent and newly diagnosed cases each year. Prescription and speech therapy reimbursement data were used to evaluate treatment patterns.
Results
In 2017, 27,977 patients of all ages were diagnosed with stuttering (21,045 males, 75% and 6,932 females, 25%). Stuttering prevalence peaks at age 5 years (boys, 0.89% and girls, 0.40%). Cluttering was diagnosed in 1,800 patients of all ages (1,287 males, 71.5% and 513 females, 28.5%). Developmental disorders of speech and language were identified in 555,774 AOK-insurants (61.2% males and 38.8% females). Treatment data indicate a substantial proportion of newly diagnosed stuttering individuals receive treatment (up to 45% of 6-year-old patients), with slightly fewer than 20 sessions per year, on average. We confirmed a previous study showing increased rates of atopic disorders and neurological and psychiatric comorbidities in individuals with stuttering, cluttering, and developmental disorders of speech and language.
Conclusion
This is the first nationwide study using health insurance data to analyze the prevalence and newly diagnosed cases of a speech and language disorder. Prevalence and gender ratio data were consistent with the international literature. The crude prevalence of developmental disorders of speech and language increased from 2015 to 2018, whereas the crude prevalence for stuttering remained stable. For cluttering, the numbers were too low to draw reliable conclusions. Proportional treatment allocation for stuttering peaked at 6 years of age, which is the school entrance year, and is later than the prevalence peak of stuttering.
in Frontiers in Human Neuroscience on April 12, 2021 12:00 AM.
• #### Alteration in Resting-State EEG Microstates Following 24 Hours of Total Sleep Deprivation in Healthy Young Male Subjects
Purpose: The cognitive effects of total sleep deprivation (TSD) on the brain remain poorly understood. Electroencephalography (EEG) is a very useful tool for detecting spontaneous brain activity in the resting state. Quasi-stable electrical distributions, known as microstates, carry useful information about the dynamics of large-scale brain networks. In this study, microstate analysis was used to study changes in brain activity after 24 h of total sleep deprivation.
Participants and Methods: Twenty-seven healthy volunteers were recruited and underwent EEG scans before and after 24 h of TSD. Microstate analysis was applied, and six microstate classes (A–F) were identified. Topographies and temporal parameters of the microstates were compared between the rested wakefulness (RW) and TSD conditions.
Results: Microstate class A (a right-anterior to left-posterior orientation of the mapped field) showed lower global explained variance (GEV), frequency of occurrence, and time coverage in TSD than RW, whereas microstate class D (a fronto-central extreme location of the mapped field) displayed higher GEV, frequency of occurrence, and time coverage in TSD compared to RW. Moreover, subjective sleepiness was significantly negatively correlated with the microstate parameters of class A and positively correlated with the microstate parameters of class D. Transition analysis revealed that class B exhibited a higher probability of transition than did classes D and F in TSD compared to RW.
Conclusion: The observation suggests alterations of the dynamic brain-state properties of TSD in healthy young male subjects, which may serve as system-level neural underpinnings for cognitive declines in sleep-deprived subjects.
in Frontiers in Human Neuroscience on April 12, 2021 12:00 AM.
• #### Supervised Learning With First-to-Spike Decoding in Multilayer Spiking Neural Networks
Experimental studies support the notion of spike-based neuronal information processing in the brain, with neural circuits exhibiting a wide range of temporally-based coding strategies to rapidly and efficiently represent sensory stimuli. Accordingly, it would be desirable to apply spike-based computation to tackling real-world challenges, and in particular transferring such theory to neuromorphic systems for low-power embedded applications. Motivated by this, we propose a new supervised learning method that can train multilayer spiking neural networks to solve classification problems based on a rapid, first-to-spike decoding strategy. The proposed learning rule supports multiple spikes fired by stochastic hidden neurons, and yet is stable by relying on first-spike responses generated by a deterministic output layer. In addition to this, we also explore several distinct, spike-based encoding strategies in order to form compact representations of presented input data. We demonstrate the classification performance of the learning rule as applied to several benchmark datasets, including MNIST. The learning rule is capable of generalizing from the data, and is successful even when used with constrained network architectures containing few input and hidden layer neurons. Furthermore, we highlight a novel encoding strategy, termed “scanline encoding,” that can transform image data into compact spatiotemporal patterns for subsequent network processing. Designing constrained, but optimized, network structures and performing input dimensionality reduction has strong implications for neuromorphic applications.
in Frontiers in Computational Neuroscience on April 12, 2021 12:00 AM.
• #### Automated in vivo Tracking of Cortical Oligodendrocytes
Oligodendrocytes exert a profound influence on neural circuits by accelerating action potential conduction, altering excitability, and providing metabolic support. As oligodendrogenesis continues in the adult brain and is essential for myelin repair, uncovering the factors that control their dynamics is necessary to understand the consequences of adaptive myelination and develop new strategies to enhance remyelination in diseases such as multiple sclerosis. Unfortunately, few methods exist for analysis of oligodendrocyte dynamics, and even fewer are suitable for in vivo investigation. Here, we describe the development of a fully automated cell tracking pipeline using convolutional neural networks (Oligo-Track) that provides rapid volumetric segmentation and tracking of thousands of cells over weeks in vivo. This system reliably replicated human analysis, outperformed traditional analytic approaches, and extracted injury and repair dynamics at multiple cortical depths, establishing that oligodendrogenesis after cuprizone-mediated demyelination is suppressed in deeper cortical layers. Volumetric data provided by this analysis revealed that oligodendrocyte soma size progressively decreases after their generation, and declines further prior to death, providing a means to predict cell age and eventual cell death from individual time points. This new CNN-based analysis pipeline offers a rapid, robust method to quantitatively analyze oligodendrocyte dynamics in vivo, which will aid in understanding how changes in these myelinating cells influence circuit function and recovery from injury and disease.
in Frontiers in Cellular Neuroscience on April 12, 2021 12:00 AM.
• #### The Cellular Prion Protein—ROCK Connection: Contribution to Neuronal Homeostasis and Neurodegenerative Diseases
Amyloid-based neurodegenerative diseases such as prion, Alzheimer's, and Parkinson's diseases have distinct etiologies and clinical manifestations, but they share common pathological events. These diseases are caused by abnormally folded proteins (pathogenic prions PrPSc in prion diseases, β-amyloids/Aβ and Tau in Alzheimer's disease, α-synuclein in Parkinson's disease) that display β-sheet-enriched structures, propagate and accumulate in the nervous central system, and trigger neuronal death. In prion diseases, PrPSc-induced corruption of the physiological functions exerted by normal cellular prion proteins (PrPC) present at the cell surface of neurons is at the root of neuronal death. For a decade, PrPC emerges as a common cell surface receptor for other amyloids such as Aβ and α-synuclein, which relays, at least in part, their toxicity. In lipid-rafts of the plasma membrane, PrPC exerts a signaling function and controls a set of effectors involved in neuronal homeostasis, among which are the RhoA-associated coiled-coil containing kinases (ROCKs). Here we review (i) how PrPC controls ROCKs, (ii) how PrPC-ROCK coupling contributes to neuronal homeostasis, and (iii) how the deregulation of the PrPC-ROCK connection in amyloid-based neurodegenerative diseases triggers a loss of neuronal polarity, affects neurotransmitter-associated functions, contributes to the endoplasmic reticulum stress cascade, renders diseased neurons highly sensitive to neuroinflammation, and amplifies the production of neurotoxic amyloids.
in Frontiers in Cellular Neuroscience on April 12, 2021 12:00 AM.
• #### More Than Cell Markers: Understanding Heterogeneous Glial Responses to Implantable Neural Devices
in Frontiers in Cellular Neuroscience on April 12, 2021 12:00 AM.
• #### Nucleolin Rescues TDP-43 Toxicity in Yeast and Human Cell Models
TDP-43 is a nuclear protein involved in pivotal processes, extensively studied for its implication in neurodegenerative disorders. TDP-43 cytosolic inclusions are a common neuropathologic hallmark in amyotrophic lateral sclerosis (ALS) and related diseases, and it is now established that TDP-43 misfolding and aggregation play a key role in their etiopathology. TDP-43 neurotoxic mechanisms are not yet clarified, but the identification of proteins able to modulate TDP-43-mediated damage may be promising therapeutic targets for TDP-43 proteinopathies. Here we show by the use of refined yeast models that the nucleolar protein nucleolin (NCL) acts as a potent suppressor of TDP-43 toxicity, restoring cell viability. We provide evidence that NCL co-expression is able to alleviate TDP-43-induced damage also in human cells, further supporting its beneficial effects in a more consistent pathophysiological context. Presented data suggest that NCL could promote TDP-43 nuclear retention, reducing the formation of toxic cytosolic TDP-43 inclusions.
in Frontiers in Cellular Neuroscience on April 12, 2021 12:00 AM.
• #### Colonic Dopaminergic Neurons Changed Reversely With Those in the Midbrain via Gut Microbiota-Mediated Autophagy in a Chronic Parkinson’s Disease Mice Model
The role of the gut-brain axis in the pathogenesis of Parkinson’s disease (PD) has become a research hotspot, but an appropriate animal model to study the gut-brain axis in PD is yet to be confirmed. Our study employed a classical PD mouse model achieved by chronic MPTP (1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine) injection to study concurrent changes of dopaminergic neurons in the midbrain and the colon of mice. Our results showed such a PD model exhibited apparent locomotor deficits but not gastrointestinal dysfunction. Tyrosine hydroxylase expressions and dopamine content reduced greatly in the substantia nigra pars compacta (SNpc) or striatum, but increased in the colon of PD mice. Mechanism investigation indicated autophagy activity and apoptosis were stimulated in the SNpc, but inhibited in the colon of PD mice. Interplay of gut microbiota (GM) and autophagy in response to chronic MPTP injection led to GM dysbiosis and defective autophagy in the mouse colon. Meanwhile, fecal short chain fatty acids (SCFAs), acetate and propionate in particular, declined greatly in PD mice, which could be attributed to the decreased bacteria abundance of phylum Bacteroidetes, but increased abundance of phylum Firmicutes. GM dysbiosis derived fecal SCFAs might be one of the mediators of downregulated autophagy in the colon of PD mice. In conclusion, colonic dopaminergic neurons changed in the opposite direction from those in the midbrain via GM dysbiosis-mediated autophagy inhibition followed by suppressed apoptosis in response to chronic MPTP injection. Such a chronic PD mouse model might not be an ideal model to study the role of the gut-brain axis in PD progression.
in Frontiers in Ageing Neuroscience on April 12, 2021 12:00 AM.
• #### Wearable Devices for Assessing Function in Alzheimer's Disease: A European Public Involvement Activity About the Features and Preferences of Patients and Caregivers
Background: Alzheimer's Disease (AD) impairs the ability to carry out daily activities, reduces independence and quality of life and increases caregiver burden. Our understanding of functional decline has traditionally relied on reports by family and caregivers, which are subjective and vulnerable to recall bias. The Internet of Things (IoT) and wearable sensor technologies promise to provide objective, affordable, and reliable means for monitoring and understanding function. However, human factors for its acceptance are relatively unexplored.
Objective: The Public Involvement (PI) activity presented in this paper aims to capture the preferences, priorities and concerns of people with AD and their caregivers for using monitoring wearables. Their feedback will drive device selection for clinical research, starting with the study of the RADAR-AD project.
Method: The PI activity involved the Patient Advisory Board (PAB) of the RADAR-AD project, comprised of people with dementia across Europe and their caregivers (11 and 10, respectively). A set of four devices that optimally represent various combinations of aspects and features from the variety of currently available wearables (e.g., weight, size, comfort, battery life, screen types, water-resistance, and metrics) was presented and experienced hands-on. Afterwards, sets of cards were used to rate and rank devices and features and freely discuss preferences.
Results: Overall, the PAB was willing to accept and incorporate devices into their daily lives. For the presented devices, the aspects most important to them included comfort, convenience and affordability. For devices in general, the features they prioritized were appearance/style, battery life and water resistance, followed by price, having an emergency button and a screen with metrics. The metrics valuable to them included activity levels and heart rate, followed by respiration rate, sleep quality and distance. Some concerns were the potential complexity, forgetting to charge the device, the potential stigma and data privacy.
Conclusions: The PI activity explored the preferences, priorities and concerns of the PAB, a group of people with dementia and caregivers across Europe, regarding devices for monitoring function and decline, after a hands-on experience and explanation. They highlighted some expected aspects, metrics and features (e.g., comfort and convenience), but also some less expected (e.g., screen with metrics).
in Frontiers in Ageing Neuroscience on April 12, 2021 12:00 AM.
• #### Older Adults Show Reduced Spatial Precision but Preserved Strategy-Use During Spatial Navigation Involving Body-Based Cues
in Frontiers in Ageing Neuroscience on April 12, 2021 12:00 AM.
• #### Quantitative Analysis of Synthetic Magnetic Resonance Imaging in Alzheimer’s Disease
Objectives: The purpose of this study was to evaluate the feasibility and whether synthetic MRI can benefit diagnosis of Alzheimer’s disease (AD).
Materials and Methods: Eighteen patients and eighteen age-matched normal controls (NCs) underwent MR examination. The mini-mental state examination (MMSE) scores were obtained from all patients. The whole brain volumetric characteristics, T1, T2, and proton density (PD) values of different cortical and subcortical regions were obtained. The volumetric characteristics and brain regional relaxation values between AD patients and NCs were compared using independent-samples t-test. The correlations between these quantitative parameters and MMSE score were assessed by the Pearson correlation in AD patients.
Results: Although the larger volume of cerebrospinal fluid (CSF), lower brain parenchymal volume (BPV), and the ratio of brain parenchymal volume to intracranial volume (BPV/ICV) were found in AD patients compared with NCs, there were no significant differences (p > 0.05). T1 values of right insula cortex and T2 values of left hippocampus and right insula cortex were significantly higher in AD patients than in NCs, but T1 values of left caudate showed a reverse trend (p < 0.05). As the MMSE score decreased in AD patients, the BPV and BPV/ICV decreased, while the volume of CSF and T1 values of bilateral insula cortex and bilateral hippocampus as well as T2 values of bilateral hippocampus increased (p < 0.05).
Conclusion: Synthetic MRI not only provides more information to differentiate AD patients from normal controls, but also reflects the disease severity of AD.
in Frontiers in Ageing Neuroscience on April 12, 2021 12:00 AM.
• #### Olfactory Impairment Among Rural-Dwelling Chinese Older Adults: Prevalence and Associations With Demographic, Lifestyle, and Clinical Factors
Objective: Olfactory impairment (OI) refers to decreased (hyposmia) or absent (anosmia) ability to smell. We sought to estimate the prevalence and correlates of OI among rural-dwelling Chinese older adults.
Methods: This population-based cross-sectional analysis included 4,514 participants (age ≥65 years; 56.7% women) from the Multidomain Interventions to Delay Dementia and Disability in Rural China (MIND-China). The 16-item Sniffin' Sticks identification test (SSIT) was used to assess olfactory function. Olfactory impairment was defined as the SSIT score ≤10, hyposmia as SSIT score of 8–10, and anosmia as SSIT score <8. Multivariable logistic regression models were used to examine factors associated with OI.
Results: The overall prevalence was 67.7% for OI, 35.3% for hyposmia, and 32.5% for anosmia. The prevalence increased with age for OI and anosmia, but not for hyposmia. The multivariable-adjusted odds ratio (OR) of OI was 2.10 (95% CI 1.69–2.61) for illiteracy and 1.41 (1.18–1.70) for elementary school (vs. middle school or above), 1.30 (1.01–1.67) for current smoking (vs. never smoking), 0.86 (0.74–0.99) for overweight and 0.73 (0.61–0.87) for obesity (vs. normal weight), 4.21 (2.23–7.94) for dementia, 1.68 (1.23–2.30) for head injury, and 1.44 (1.14–1.83) for sinonasal disease. Illiteracy in combination with either male sex or diabetes was significantly associated with an over two-fold increased OR of OI (p for interactions <0.05).
Conclusion: Olfactory impairment is highly prevalent that affects over two-thirds of rural-dwelling older adults in China. OI is correlated with illiteracy, current smoking, dementia, head injury, and sinonasal disease, but negatively associated with overweight or obesity. Olfactory impairment as a potential clinical marker of neurodegenerative disorders among older adults deserves further investigation.
in Frontiers in Ageing Neuroscience on April 12, 2021 12:00 AM.
• #### Learning to reweight examples in multi-label classification
Publication date: Available online 10 April 2021
Source: Neural Networks
Author(s): Yongjian Zhong, Bo Du, Chang Xu
in Neural Networks on April 11, 2021 06:00 PM.
• #### End-to-end keyword search system based on attention mechanism and energy scorer for low resource languages
Publication date: Available online 10 April 2021
Source: Neural Networks
Author(s): Zeyu Zhao, Wei-Qiang Zhang
in Neural Networks on April 11, 2021 06:00 PM.
• #### Neural representations of kinship
Publication date: June 2021
Source: Current Opinion in Neurobiology, Volume 68
Author(s): Ann M. Clemens, Michael Brecht
in Current Opinion in Neurobiology on April 10, 2021 01:00 PM.
• #### Residual wide-kernel deep convolutional auto-encoder for intelligent rotating machinery fault diagnosis with limited samples
Publication date: Available online 9 April 2021
Source: Neural Networks
Author(s): Daoguang Yang, Hamid Reza Karimi, Kangkang Sun
in Neural Networks on April 10, 2021 01:00 PM.
• #### Multistability of delayed fractional-order competitive neural networks
Publication date: Available online 8 April 2021
Source: Neural Networks
Author(s): Fanghai Zhang, Tingwen Huang, Qiujie Wu, Zhigang Zeng
in Neural Networks on April 09, 2021 06:00 PM.
• #### Multi-periodicity of switched neural networks with time delays and periodic external inputs under stochastic disturbances
Publication date: September 2021
Source: Neural Networks, Volume 141
Author(s): Zhenyuan Guo, Jingxuan Ci, Jun Wang
in Neural Networks on April 09, 2021 06:00 PM.
• #### Approaches to understanding COVID‐19 and its neurological associations
There is an accumulating volume of research into neurological manifestations of COVID‐19. However, inconsistent study designs, inadequate controls, poorly‐validated tests, and differing settings, interventions, and cultural norms weaken study quality, comparability, and thus the understanding of the spectrum, burden and pathophysiology of these complications. Therefore, a global COVID‐19 Neuro Research Coalition, together with the WHO, has reviewed reports of COVID‐19 neurological complications and harmonised clinical measures for future research. This will facilitate well‐designed studies using precise, consistent case definitions of SARS‐CoV2 infection and neurological complications, with standardised forms for pooled data analyses that non‐specialists can use, including in low‐income settings. This article is protected by copyright. All rights reserved.
in Annals of Neurology on April 09, 2021 04:04 PM.
• #### Bibliometric assessment and key messages of sporotrichosis research (1945-2018) [version 2; peer review: 3 approved with reservations]
doi:10.12688/f1000research.24250.2
in F1000Research on April 09, 2021 04:02 PM.
• #### APOE moderates the effect of hippocampal blood flow on memory pattern separation in clinically normal older adults
Abstract Pattern separation, the ability to differentiate new information from previously experienced similar information, is highly sensitive to hippocampal structure and function and declines with age. Functional MRI studies have demonstrated hippocampal hyperactivation in older adults compared to young, with greater task‐related activation associated with worse pattern separation performance. The current study was designed to determine whether pattern separation was sensitive to differences in task‐free hippocampal cerebral blood flow (CBF) in 130 functionally intact older adults. Given prior evidence that apolipoprotein E e4 (APOE e4) status moderates the relationship between CBF and episodic memory, we predicted a stronger negative relationship between hippocampal CBF and pattern separation in APOE e4 carriers. An interaction between APOE group and right hippocampal CBF was present, such that greater right hippocampal CBF was related to better lure discrimination in noncarriers, whereas the effect reversed directionality in e4 carriers. These findings suggest that neurovascular changes in the medial temporal lobe may underlie memory deficits in cognitively normal older adults who are APOE e4 carriers.
in Hippocampus on April 09, 2021 03:44 PM.
• #### Reduced anterior hippocampal and ventromedial prefrontal activity when repeatedly retrieving autobiographical memories
Abstract Research has reported that repeatedly retrieving a novel or imagined event representation reduces activity within brain regions critical for constructing mental scenarios, namely the anterior hippocampus and ventromedial prefrontal cortex (vmPFC). The primary aim of this investigation was to test if this pattern reported for imagined events would be found when repeatedly recollecting autobiographical memories. Twenty‐four participants retrieved 12 pre‐selected autobiographical memories four times while undergoing an fMRI scan. We used a region of interest approach to investigate how the anterior and posterior hippocampus as well as cortical regions critical for memory retrieval—the vmPFC and the posterior cingulate cortex (PCC)—are affected by repeated retrievals. This analysis revealed an effect in the bilateral anterior hippocampi and vmPFC, but not the posterior hippocampus nor the PCC, with activity decreasing in these regions as a function of repeated retrievals. A multivariate analytic approach (Partial Least Squares) was used to assess whole‐brain patterns of neural activity associated with repeated retrievals. This analysis revealed one pattern of neural activity associated with the initial retrieval of a memory (e.g., inferior frontal and temporal lobe regions) and a separate pattern of activity associated with later retrievals that was distributed primarily across the lateral parietal cortices. These findings suggest that the anterior hippocampus and the vmPFC support the episodic construction of an autobiographical memory the first time it is retrieved and that alternate nonconstructive processes support its subsequent retrieval shortly thereafter.
in Hippocampus on April 09, 2021 03:38 PM.
• #### How Can an Na+ Channel Inhibitor Ameliorate Seizures in Lennox–Gastaut Syndrome?
Objective Lennox–Gastaut syndrome (LGS) is an epileptic encephalopathy frequently associated with multiple types of seizures. The classical Na+ channel inhibitors are in general ineffective against the seizures in LGS. Rufinamide is a new Na+ channel inhibitor, but approved for the treatment of LGS. This is not consistent with a choice of antiseizure drugs (ASDs) according to simplistic categorical grouping. Methods The effect of rufinamide on the Na+ channel, cellular discharges, and seizure behaviors was quantitatively characterized in native neurons and mammalian models of epilepsy, and compared with the other Na+ channel inhibitors. Results With a much faster binding rate to the inactivated Na+ channel than phenytoin, rufinamide is distinctively effective if the seizure discharges chiefly involve short bursts interspersed with hyperpolarized interburst intervals, exemplified by spike and wave discharges (SWDs) on electroencephalograms. Consistently, rufinamide, but not phenytoin, suppresses SWD‐associated seizures in pentylenetetrazol or AY‐9944 models, which recapitulate the major electrophysiological and behavioral manifestations in typical and atypical absence seizures, including LGS. Interpretation Na+ channel inhibitors shall have sufficiently fast binding to exert an action during the short bursts and then suppress SWDs, in which cases rufinamide is superior. For the epileptiform discharges where the interburst intervals are not so hyperpolarized, phenytoin could be better because of the higher affinity. Na+ channel inhibitors with different binding kinetics and affinity to the inactivated channels may have different antiseizure scope. A rational choice of ASDs according to in‐depth molecular pharmacology and the attributes of ictal discharges is advisable. ANN NEUROL 2021
in Annals of Neurology on April 09, 2021 02:38 PM.
• #### A novel yeast hybrid modeling framework integrating Boolean and enzyme-constrained networks enables exploration of the interplay between signaling and metabolism
by Linnea Österberg, Iván Domenzain, Julia Münch, Jens Nielsen, Stefan Hohmann, Marija Cvijovic
The interplay between nutrient-induced signaling and metabolism plays an important role in maintaining homeostasis and its malfunction has been implicated in many different human diseases such as obesity, type 2 diabetes, cancer, and neurological disorders. Therefore, unraveling the role of nutrients as signaling molecules and metabolites together with their interconnectivity may provide a deeper understanding of how these conditions occur. Both signaling and metabolism have been extensively studied using various systems biology approaches. However, they are mainly studied individually and in addition, current models lack both the complexity of the dynamics and the effects of the crosstalk in the signaling system. To gain a better understanding of the interconnectivity between nutrient signaling and metabolism in yeast cells, we developed a hybrid model, combining a Boolean module, describing the main pathways of glucose and nitrogen signaling, and an enzyme-constrained model accounting for the central carbon metabolism of Saccharomyces cerevisiae, using a regulatory network as a link. The resulting hybrid model was able to capture a diverse utilization of isoenzymes and to our knowledge outperforms constraint-based models in the prediction of individual enzymes for both respiratory and mixed metabolism. The model showed that during fermentation, enzyme utilization has a major contribution in governing protein allocation, while in low glucose conditions robustness and control are prioritized. In addition, the model was capable of reproducing the regulatory effects that are associated with the Crabtree effect and glucose repression, as well as regulatory effects associated with lifespan increase during caloric restriction. Overall, we show that our hybrid model provides a comprehensive framework for the study of the non-trivial effects of the interplay between signaling and metabolism, suggesting connections between the Snf1 signaling pathways and processes that have been related to chronological lifespan of yeast cells.
in PLoS Computational Biology on April 09, 2021 02:00 PM.
• #### Genome Scale-Differential Flux Analysis reveals deregulation of lung cell metabolism on SARS-CoV-2 infection
by Piyush Nanda, Amit Ghosh
The COVID-19 pandemic is posing an unprecedented threat to the whole world. In this regard, it is absolutely imperative to understand the mechanism of metabolic reprogramming of host human cells by SARS-CoV-2. A better understanding of the metabolic alterations would aid in design of better therapeutics to deal with COVID-19 pandemic. We developed an integrated genome-scale metabolic model of normal human bronchial epithelial cells (NHBE) in
# Time to pavement failure, from “Stochastic Modeling for Pavement Warranty Cost Estimation” (J. of Constr. Engr. and Mgmnt., 2009: 352–359)
Question
Modeling data distributions
The article “Stochastic Modeling for Pavement Warranty Cost Estimation” (J. of Constr. Engr. and Mgmnt., 2009: 352–359) proposes the following model for the distribution of Y = time to pavement failure. Let $$X_{1}$$ be the time to failure due to rutting and $$X_{2}$$ be the time to failure due to transverse cracking; these two rvs are assumed independent. Then $$Y=\min(X_{1},X_{2})$$. The probability of failure due to either one of these distress modes is assumed to be an increasing function of time t. After making certain distributional assumptions, the following form of the cdf for each mode is obtained: $$\Phi\left[\frac{a+bt}{(c+dt+et^{2})^{1/2}}\right]$$ where $$\Phi$$ is the standard normal cdf. Values of the five parameters a, b, c, d, and e are -25.49, 1.15, 4.45, -1.78, and .171 for cracking and -21.27, .0325, .972, -.00028, and .00022 for rutting. Determine the probability of pavement failure within $$t=5$$ years and also $$t=10$$ years.
2021-01-24
Step 1: Cracking
$$\Phi\left(\frac{a+bt}{(c+dt+et^{2})^{1/2}}\right)$$
$$a=-25.49,\quad b=1.15,\quad c=4.45,\quad d=-1.78,\quad e=0.171$$
Determine the corresponding probability using the normal probability table.
For $$t=5$$:
$$\Phi\left(\frac{a+bt}{(c+dt+et^{2})^{1/2}}\right)=\Phi\left(\frac{-25.49+1.15(5)}{\left(4.45+(-1.78)(5)+0.171(5)^{2}\right)^{1/2}}\right)=\Phi\left(\frac{-19.74}{(-0.175)^{1/2}}\right)=\text{undefined}$$
For $$t=10$$:
$$\Phi\left(\frac{-25.49+1.15(10)}{\left(4.45+(-1.78)(10)+0.171(10)^{2}\right)^{1/2}}\right)=\Phi(-7.22)\approx 0$$
Note that the probability is undefined when $$t=5$$, because the expression under the square root is negative, and the square root of a negative number is not a real number.

Step 2: Rutting
$$\Phi\left(\frac{a+bt}{(c+dt+et^{2})^{1/2}}\right)$$
$$a=-21.27,\quad b=0.0325,\quad c=0.972,\quad d=-0.00028,\quad e=0.00022$$
Determine the corresponding probability using the normal probability table; $$\Phi(x)$$ is approximately 0 when $$x$$ is smaller than every z-score in the table.
For $$t=5$$:
$$\Phi\left(\frac{-21.27+0.0325(5)}{\left(0.972+(-0.00028)(5)+0.00022(5)^{2}\right)^{1/2}}\right)=\Phi(-21.36)\approx 0$$
For $$t=10$$:
$$\Phi\left(\frac{-21.27+0.0325(10)}{\left(0.972+(-0.00028)(10)+0.00022(10)^{2}\right)^{1/2}}\right)=\Phi(-21.04)\approx 0$$
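As a quick numerical check (not part of the original posted solution), the short Python sketch below evaluates the per-mode cdf $$\Phi\left(\frac{a+bt}{(c+dt+et^{2})^{1/2}}\right)$$ with SciPy and, under the stated independence assumption, combines the two modes for $$Y=\min(X_{1},X_{2})$$ via $$P(Y\le t)=1-(1-F_{\text{cracking}}(t))(1-F_{\text{rutting}}(t))$$. The function and variable names are illustrative only.

```python
# Numerical check of the pavement-failure probabilities (illustrative sketch).
from math import sqrt
from scipy.stats import norm  # standard normal cdf

def mode_cdf(t, a, b, c, d, e):
    """Return Phi((a + b*t) / sqrt(c + d*t + e*t^2)), or None if the radicand is negative."""
    radicand = c + d * t + e * t ** 2
    if radicand <= 0:
        return None  # the cdf form is undefined here (see the note above for t = 5, cracking)
    return norm.cdf((a + b * t) / sqrt(radicand))

cracking = (-25.49, 1.15, 4.45, -1.78, 0.171)        # a, b, c, d, e as given in the question
rutting = (-21.27, 0.0325, 0.972, -0.00028, 0.00022)

for t in (5, 10):
    F_crack, F_rut = mode_cdf(t, *cracking), mode_cdf(t, *rutting)
    if F_crack is None or F_rut is None:
        print(f"t = {t}: the cdf expression is undefined for at least one mode")
        continue
    # Y = min(X1, X2) with independent modes: P(Y <= t) = 1 - (1 - F1)(1 - F2)
    p_fail = 1 - (1 - F_crack) * (1 - F_rut)
    print(f"t = {t}: F_cracking = {F_crack:.3g}, F_rutting = {F_rut:.3g}, P(Y <= {t}) = {p_fail:.3g}")
```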
### Relevant Questions
The article “Modeling Sediment and Water Column Interactions for Hydrophobic Pollutants” (Water Research, 1984: 1169-1174) suggests the uniform distribution on the interval (7.5, 20) as a model for depth (cm) of the bioturbation layer in sediment in a certain region. a. What are the mean and variance of depth? b. What is the cdf of depth? c. What is the probability that observed depth is at most 10? Between 10 and 15? d. What is the probability that the observed depth is within 1 standard deviation of the mean value? Within 2 standard deviations?
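This related question is listed here without a worked answer; the following minimal sketch (an illustration only, assuming SciPy is available) computes the quantities it asks for from the uniform(7.5, 20) model.

```python
# Sketch for the uniform(7.5, 20) depth model: mean, variance, cdf-based probabilities.
from scipy.stats import uniform

a, b = 7.5, 20.0
depth = uniform(loc=a, scale=b - a)  # SciPy parameterizes U(loc, loc + scale)

mean, var, sd = depth.mean(), depth.var(), depth.std()     # (a+b)/2 and (b-a)^2/12
p_at_most_10 = depth.cdf(10)                               # P(depth <= 10)
p_10_to_15 = depth.cdf(15) - depth.cdf(10)                 # P(10 <= depth <= 15)
p_within_1sd = depth.cdf(mean + sd) - depth.cdf(mean - sd)
p_within_2sd = depth.cdf(mean + 2 * sd) - depth.cdf(mean - 2 * sd)
print(mean, var, p_at_most_10, p_10_to_15, p_within_1sd, p_within_2sd)
```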
An automobile tire manufacturer collected the data in the table relating tire pressure x (in pounds per square inch) and mileage (in thousands of miles). A mathematical model for the data is given by
$$f(x) = -0.554x^{2} + 35.5x - 514.$$
$$\begin{array}{|c|c|} \hline x & Mileage \\ \hline 28 & 45 \\ \hline 30 & 51\\ \hline 32 & 56\\ \hline 34 & 50\\ \hline 36 & 46\\ \hline \end{array}$$
(A) Complete the table below.
$$\begin{array}{|c|c|c|} \hline x & Mileage & f(x) \\ \hline 28 & 45 & \\ \hline 30 & 51 & \\ \hline 32 & 56 & \\ \hline 34 & 50 & \\ \hline 36 & 46 & \\ \hline \end{array}$$
(Round to one decimal place as needed.)
A.
A coordinate system has a horizontal x-axis labeled from 20 to 60 in increments of 2 and a vertical y-axis labeled from 20 to 60 in increments of 2. Data points are plotted at (28,45), (30,51), (32,56), (34,50), and (36,46). A parabola opens downward and passes through the points (28,45.7), (30,52.4), (32,54.7), (34,52.6), and (36,46.0). All points are approximate.
B.
A coordinate system has a horizontal x-axis labeled from 20 to 60 in increments of 2 and a vertical y-axis labeled from 20 to 60 in increments of 2.
Data points are plotted at (43,30), (45,36), (47,41), (49,35), and (51,31). A parabola opens downward and passes through the points (43,30.7), (45,37.4), (47,39.7), (49,37.6), and (51,31). All points are approximate.
C.
A coordinate system has a horizontal x-axis labeled from 20 to 60 in increments of 2 and a vertical y-axis labeled from 20 to 60 in increments of 2. Data points are plotted at (43,45), (45,51), (47,56), (49,50), and (51,46). A parabola opens downward and passes through the points (43,45.7), (45,52.4), (47,54.7), (49,52.6), and (51,46.0). All points are approximate.
D.
A coordinate system has a horizontal x-axis labeled from 20 to 60 in increments of 2 and a vertical y-axis labeled from 20 to 60 in increments of 2. Data points are plotted at (28,30), (30,36), (32,41), (34,35), and (36,31). A parabola opens downward and passes through the points (28,30.7), (30,37.4), (32,39.7), (34,37.6), and (36,31). All points are approximate.
(C) Use the modeling function f(x) to estimate the mileage for a tire pressure of 29 lbs/sq in. and for 35 lbs/sq in.
The mileage for the tire pressure 29 lbs/sq in. is
The mileage for the tire pressure 35 lbs/sq in. is
(Round to two decimal places as needed.)
(D) Write a brief description of the relationship between tire pressure and mileage.
A. As tire pressure increases, mileage decreases to a minimum at a certain tire pressure, then begins to increase.
B. As tire pressure increases, mileage decreases.
C. As tire pressure increases, mileage increases to a maximum at a certain tire pressure, then begins to decrease.
D. As tire pressure increases, mileage increases.
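As an aside (not part of the original exercise text), a small sketch that evaluates the given model at the tabulated pressures and at the two pressures requested in part (C); the helper name f is simply the function defined in the problem statement.

```python
# Evaluate the quadratic mileage model f(x) = -0.554 x^2 + 35.5 x - 514.
def f(x):
    return -0.554 * x ** 2 + 35.5 * x - 514

for pressure in (28, 29, 30, 32, 34, 35, 36):
    print(pressure, round(f(pressure), 2))
# The values at 28, 30, 32, 34, 36 fill the f(x) column in part (A);
# f(29) and f(35) answer part (C).
```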
The article "Modeling Sediment and Water Column Interactions for Hydrophobic Pollutants" (Water Research, $1984: 1169-1174$ ) suggests the uniform distribution on the interval (7.5,20) as a model for depth (cm) of the bioturbation layer in sediment in a certain region.
What are the mean and variance of depth?
b. What is the cdf of depth?
What is the probability that observed depth is at most 10? Between 10 and $15 ?$
What is the probability that the observed depth is within 1 standard deviation of the mean value? Within 2 standard deviations?
A new thermostat has been engineered for the frozen food cases in large supermarkets. Both the old and new thermostats hold temperatures at an average of $$25^{\circ}F$$. However, it is hoped that the new thermostat might be more dependable in the sense that it will hold temperatures closer to $$25^{\circ}F$$. One frozen food case was equipped with the new thermostat, and a random sample of 21 temperature readings gave a sample variance of 5.1. Another similar frozen food case was equipped with the old thermostat, and a random sample of 19 temperature readings gave a sample variance of 12.8. Test the claim that the population variance of the old thermostat temperature readings is larger than that for the new thermostat. Use a $$5\%$$ level of significance. How could your test conclusion relate to the question regarding the dependability of the temperature readings? (Let population 1 refer to data from the old thermostat.)
(a) What is the level of significance?
State the null and alternate hypotheses.
$$H_0:\sigma_{1}^{2}=\sigma_{2}^{2},\ H_1:\sigma_{1}^{2}>\sigma_{2}^{2}$$
$$H_0:\sigma_{1}^{2}=\sigma_{2}^{2},\ H_1:\sigma_{1}^{2}\neq\sigma_{2}^{2}$$
$$H_0:\sigma_{1}^{2}=\sigma_{2}^{2},\ H_1:\sigma_{1}^{2}<\sigma_{2}^{2}$$
$$H_0:\sigma_{1}^{2}>\sigma_{2}^{2},\ H_1:\sigma_{1}^{2}=\sigma_{2}^{2}$$
(b) Find the value of the sample F statistic. (Round your answer to two decimal places.)
What are the degrees of freedom?
$$df_{N} = ?$$
$$df_{D} = ?$$
What assumptions are you making about the original distribution?
The populations follow independent normal distributions. We have random samples from each population.
The populations follow dependent normal distributions. We have random samples from each population.
The populations follow independent normal distributions.
The populations follow independent chi-square distributions. We have random samples from each population.
(c) Find or estimate the P-value of the sample test statistic. (Round your answer to four decimal places.)
(d) Based on your answers in parts (a) to (c), will you reject or fail to reject the null hypothesis?
At the α = 0.05 level, we fail to reject the null hypothesis and conclude the data are not statistically significant.
At the α = 0.05 level, we fail to reject the null hypothesis and conclude the data are statistically significant.
At the α = 0.05 level, we reject the null hypothesis and conclude the data are not statistically significant.
At the α = 0.05 level, we reject the null hypothesis and conclude the data are statistically significant.
(e) Interpret your conclusion in the context of the application.
Reject the null hypothesis, there is sufficient evidence that the population variance is larger in the old thermostat temperature readings.
Fail to reject the null hypothesis, there is sufficient evidence that the population variance is larger in the old thermostat temperature readings.
Fail to reject the null hypothesis, there is insufficient evidence that the population variance is larger in the old thermostat temperature readings.
Reject the null hypothesis, there is insufficient evidence that the population variance is larger in the old thermostat temperature readings.
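A sketch (not part of the exercise) of how the F statistic and its right-tailed P-value could be computed with SciPy, using the sample variances given above.

```python
# Sketch: F test for the thermostat variances.
from scipy.stats import f

s1_sq, n1 = 12.8, 19   # old thermostat (population 1)
s2_sq, n2 = 5.1, 21    # new thermostat (population 2)

F = s1_sq / s2_sq                    # ≈ 2.51
df_num, df_den = n1 - 1, n2 - 1      # 18 and 20
p_value = f.sf(F, df_num, df_den)    # right-tailed P-value ≈ 0.025

print(round(F, 2), df_num, df_den, round(p_value, 4))
```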
A random sample of $$\displaystyle{n}_{{1}}={16}$$ communities in western Kansas gave the following information for people under 25 years of age.
$$\displaystyle{X}_{{1}}:$$ Rate of hay fever per 1000 population for people under 25
$$\begin{array}{|c|c|} \hline 97 & 91 & 121 & 129 & 94 & 123 & 112 &93\\ \hline 125 & 95 & 125 & 117 & 97 & 122 & 127 & 88 \\ \hline \end{array}$$
A random sample of $$\displaystyle{n}_{{2}}={14}$$ regions in western Kansas gave the following information for people over 50 years old.
$$\displaystyle{X}_{{2}}:$$ Rate of hay fever per 1000 population for people over 50
$$\begin{array}{|c|c|} \hline 94 & 109 & 99 & 95 & 113 & 88 & 110\\ \hline 79 & 115 & 100 & 89 & 114 & 85 & 96\\ \hline \end{array}$$
(i) Use a calculator to calculate $$\displaystyle\overline{{x}}_{{1}},{s}_{{1}},\overline{{x}}_{{2}},{\quad\text{and}\quad}{s}_{{2}}.$$ (Round your answers to two decimal places.)
(ii) Assume that the hay fever rate in each age group has an approximately normal distribution. Do the data indicate that the age group over 50 has a lower rate of hay fever? Use $$\displaystyle\alpha={0.05}.$$
(a) What is the level of significance?
State the null and alternate hypotheses.
$$\displaystyle{H}_{{0}}:\mu_{{1}}=\mu_{{2}},{H}_{{1}}:\mu_{{1}}<\mu_{{2}}$$
$$\displaystyle{H}_{{0}}:\mu_{{1}}=\mu_{{2}},{H}_{{1}}:\mu_{{1}}>\mu_{{2}}$$
$$\displaystyle{H}_{{0}}:\mu_{{1}}=\mu_{{2}},{H}_{{1}}:\mu_{{1}}\ne\mu_{{2}}$$
$$\displaystyle{H}_{{0}}:\mu_{{1}}>\mu_{{2}},{H}_{{1}}:\mu_{{1}}=\mu_{{12}}$$
(b) What sampling distribution will you use? What assumptions are you making?
The standard normal. We assume that both population distributions are approximately normal with known standard deviations.
The Student's t. We assume that both population distributions are approximately normal with unknown standard deviations,
The standard normal. We assume that both population distributions are approximately normal with unknown standard deviations,
The Student's t. We assume that both population distributions are approximately normal with known standard deviations,
What is the value of the sample test statistic? (Test the difference $$\displaystyle\mu_{{1}}-\mu_{{2}}$$. Round your answer to three decimal places.)
(c) Find (or estimate) the P-value.
P-value $$\displaystyle>{0.250}$$
$$\displaystyle{0.125}<{P}\text{-value}<{0.250}$$
$$\displaystyle{0.050}<{P}\text{-value}<{0.125}$$
$$\displaystyle{0.025}<{P}\text{-value}<{0.050}$$
$$\displaystyle{0.005}<{P}\text{-value}<{0.025}$$
P-value $$\displaystyle<{0.005}$$
Sketch the sampling distribution and show the area corresponding to the P-value.
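A sketch (not part of the exercise) of the summary statistics and a one-tailed two-sample t test for these data; it uses SciPy's Welch (unpooled) version, which matches the unknown-standard-deviations assumption (the alternative argument needs SciPy 1.6 or later).

```python
# Sketch: summary statistics and Welch's t test for the hay fever rates.
import numpy as np
from scipy import stats

x1 = np.array([97, 91, 121, 129, 94, 123, 112, 93,
               125, 95, 125, 117, 97, 122, 127, 88])    # under 25
x2 = np.array([94, 109, 99, 95, 113, 88, 110,
               79, 115, 100, 89, 114, 85, 96])           # over 50

print(x1.mean(), x1.std(ddof=1))   # x̄1 = 109.75, s1 ≈ 15.36
print(x2.mean(), x2.std(ddof=1))   # x̄2 = 99.00,  s2 ≈ 11.66

# Test H0: mu1 = mu2 against H1: mu1 > mu2 (over-50 group has a lower rate).
t_stat, p_value = stats.ttest_ind(x1, x2, equal_var=False, alternative='greater')
print(round(t_stat, 3), round(p_value, 4))   # t ≈ 2.17
```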
The table below shows the number of people for three different race groups who were shot by police that were either armed or unarmed. These values are very close to the exact numbers. They have been changed slightly for each student to get a unique problem.
$$\begin{array}{|l|c|c|c|c|} \hline & \text{Black} & \text{White} & \text{Hispanic} & \text{Total} \\ \hline \text{Suspect was armed} & 543 & 1176 & 378 & 2097 \\ \hline \text{Suspect was unarmed} & 60 & 67 & 38 & 165 \\ \hline \text{Total} & 603 & 1243 & 416 & 2262 \\ \hline \end{array}$$
Give your answer as a decimal to at least three decimal places.
a) What percent are Black?
b) What percent are Unarmed?
c) In order for two variables to be Independent of each other, $$P(A \text{ and } B) = P(A) \cdot P(B)$$.
This just means that the percentage of times that both things happen equals the individual percentages multiplied together (Only if they are Independent of each other).
Therefore, if a person's race is independent of whether they were killed being unarmed then the percentage of black people that are killed while being unarmed should equal the percentage of blacks times the percentage of Unarmed. Let's check this. Multiply your answer to part a (percentage of blacks) by your answer to part b (percentage of unarmed).
Remember, the previous answer is only correct if the variables are Independent.
d) Now let's get the real percent that are Black and Unarmed by using the table?
If answer c is "significantly different" than answer d, then that means that there could be a different percentage of unarmed people being shot based on race. We will check this out later in the course.
Let's compare the percentage of unarmed shot for each race.
e) What percent are White and Unarmed?
f) What percent are Hispanic and Unarmed?
If you compare answers d, e and f it shows the highest percentage of unarmed people being shot is most likely white.
Why is that?
This is because there are more white people in the United States than any other race and therefore there are likely to be more white people in the table. Since there are more white people in the table, there most likely would be more white and unarmed people shot by police than any other race. This pulls the percentage of white and unarmed up. In addition, there most likely would be more white and armed shot by police. All the percentages for white people would be higher, because there are more white people. For example, the table contains very few Hispanic people, and the percentage of people in the table that were Hispanic and unarmed is the lowest percentage.
Think of it this way. If you went to a college that was 90% female and 10% male, then females would most likely have the highest percentage of A grades. They would also most likely have the highest percentage of B, C, D and F grades
The correct way to compare is "conditional probability". Conditional probability is getting the probability of something happening, given we are dealing with just the people in a particular group.
g) What percent of blacks shot and killed by police were unarmed?
h) What percent of whites shot and killed by police were unarmed?
i) What percent of Hispanics shot and killed by police were unarmed?
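A sketch (not part of the exercise) of the arithmetic for parts (a)-(i), using the table counts above.

```python
# Sketch: probabilities from the armed/unarmed table.
black, white, hispanic = 603, 1243, 416
unarmed_black, unarmed_white, unarmed_hispanic = 60, 67, 38
total, total_unarmed = 2262, 165

print(round(black / total, 3))                               # (a) P(Black) ≈ 0.267
print(round(total_unarmed / total, 3))                       # (b) P(Unarmed) ≈ 0.073
print(round((black / total) * (total_unarmed / total), 3))   # (c) product if independent ≈ 0.019
print(round(unarmed_black / total, 3))                       # (d) P(Black and Unarmed) ≈ 0.027
print(round(unarmed_white / total, 3))                       # (e) P(White and Unarmed) ≈ 0.030
print(round(unarmed_hispanic / total, 3))                    # (f) P(Hispanic and Unarmed) ≈ 0.017
print(round(unarmed_black / black, 3))                       # (g) P(Unarmed | Black) ≈ 0.100
print(round(unarmed_white / white, 3))                       # (h) P(Unarmed | White) ≈ 0.054
print(round(unarmed_hispanic / hispanic, 3))                 # (i) P(Unarmed | Hispanic) ≈ 0.091
```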
You can see by the answers to part g and h, that the percentage of blacks that were unarmed and killed by police is approximately twice that of whites that were unarmed and killed by police.
j) Why do you believe this is happening?
Do a search on the internet for reasons why blacks are more likely to be killed by police. Read a few articles on the topic. Write your response using the articles as references. Give the websites used in your response. Your answer should be several sentences long with at least one website listed. This part of this problem will be graded after the due date.
1. Find each of the requested values for a population with a mean of $$\mu = 40$$ and a standard deviation of $$\sigma = 8$$.
A. What is the z-score corresponding to $$X = 52?$$
B. What is the X value corresponding to $$z = - 0.50?$$
C. If all of the scores in the population are transformed into z-scores, what will be the values for the mean and standard deviation for the complete set of z-scores?
D. What is the z-score corresponding to a sample mean of $$M=42$$ for a sample of $$n = 4$$ scores?
E. What is the z-score corresponding to a sample mean of $$M= 42$$ for a sample of $$n = 6$$ scores?
2. True or false:
a. All normal distributions are symmetrical
b. All normal distributions have a mean of 1.0
c. All normal distributions have a standard deviation of 1.0
d. The total area under the curve of all normal distributions is equal to 1
3. Interpret the location, direction, and distance (near or far) of the following z-scores:
a. -2.00 b. 1.25 c. 3.50 d. -0.34
4. You are part of a trivia team and have tracked your team’s performance since you started playing, so you know that your scores are normally distributed with $$\mu = 78$$ and $$\sigma = 12$$. Recently, a new person joined the team, and you think the scores have gotten better. Use hypothesis testing to see if the average score has improved based on the following 8 weeks’ worth of score data: $$82, 74, 62, 68, 79, 94, 90, 81, 80$$.
5. You get hired as a server at a local restaurant, and the manager tells you that servers’ tips are $42 on average but vary by about $12 ($$\mu = 42, \sigma = 12$$). You decide to track your tips to see if you make a different amount, but because this is your first job as a server, you don’t know if you will make more or less in tips. After working 16 shifts, you find that your average nightly amount is $44.50 from tips. Test for a difference between this value and the population mean at the $$\alpha = 0.05$$ level of significance.
Would you rather spend more federal taxes on art? Of a random sample of $$n_{1} = 86$$ politically conservative voters, $$r_{1} = 18$$ responded yes. Another random sample of $$n_{2} = 85$$ politically moderate voters showed that $$r_{2} = 21$$ responded yes. Does this information indicate that the population proportion of conservative voters inclined to spend more federal tax money on funding the arts is less than the proportion of moderate voters so inclined? Use $$\alpha = 0.05.$$ (a) State the null and alternate hypotheses. $$H_0:p_{1} = p_{2}, H_{1}:p_{1} > p_2$$
$$H_0:p_{1} = p_{2}, H_{1}:p_{1} < p_2$$
$$H_0:p_{1} = p_{2}, H_{1}:p_{1} \neq p_2$$
$$H_{0}:p_{1} < p_{2}, H_{1}:p_{1} = p_{2}$$
(b) What sampling distribution will you use? What assumptions are you making?
The Student's t. The number of trials is sufficiently large.
The standard normal. The number of trials is sufficiently large.
The standard normal. We assume the population distributions are approximately normal.
The Student's t. We assume the population distributions are approximately normal.
(c) What is the value of the sample test statistic? (Test the difference $$p_{1} - p_{2}$$. Do not use rounded values. Round your final answer to two decimal places.)
(d) Find (or estimate) the P-value. (Round your answer to four decimal places.)
(e) Based on your answers in parts (a) to (c), will you reject or fail to reject the null hypothesis? Are the data statistically significant at level alpha?
At the $$\alpha = 0.05$$ level, we reject the null hypothesis and conclude the data are statistically significant.
At the $$\alpha = 0.05$$ level, we fail to reject the null hypothesis and conclude the data are statistically significant.
At the $$\alpha = 0.05$$ level, we fail to reject the null hypothesis and conclude the data are not statistically significant.
At the $$\alpha = 0.05$$ level, we reject the null hypothesis and conclude the data are not statistically significant.
(f) Interpret your conclusion in the context of the application.
Reject the null hypothesis, there is sufficient evidence that the proportion of conservative voters favoring more tax dollars for the arts is less than the proportion of moderate voters.
Fail to reject the null hypothesis, there is sufficient evidence that the proportion of conservative voters favoring more tax dollars for the arts is less than the proportion of moderate voters.
Fail to reject the null hypothesis, there is insufficient evidence that the proportion of conservative voters favoring more tax dollars for the arts is less than the proportion of moderate voters.
Reject the null hypothesis, there is insufficient evidence that the proportion of conservative voters favoring more tax dollars for the arts is less than the proportion of moderate voters.
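A sketch (not part of the exercise) of the pooled two-proportion z test for these counts.

```python
# Sketch: two-proportion z test for the arts-funding question.
import math
from scipy.stats import norm

r1, n1 = 18, 86   # conservative voters who said yes
r2, n2 = 21, 85   # moderate voters who said yes

p1_hat, p2_hat = r1 / n1, r2 / n2
p_pool = (r1 + r2) / (n1 + n2)

se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1_hat - p2_hat) / se          # ≈ -0.59
p_value = norm.cdf(z)               # left-tailed P-value ≈ 0.28

print(round(z, 2), round(p_value, 4))
```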
The article “Anodic Fenton Treatment of Treflan MTF” describes a two-factor experiment designed to study the sorption of the herbicide trifluralin. The factors are the initial trifluralin concentration and the $$\mathrm{Fe^{2+}:H_2O_2}$$ delivery ratio. There were three replications for each treatment. The results presented in the following table are consistent with the means and standard deviations reported in the article.
$$\begin{array}{|c|c|c|} \hline \text{Initial Concentration (M)} & \text{Delivery Ratio} & \text{Sorption (\%)} \\ \hline 15 & 1:0 & 10.90 \quad 8.47 \quad 12.43 \\ \hline 15 & 1:1 & 3.33 \quad 2.40 \quad 2.67 \\ \hline 15 & 1:5 & 0.79 \quad 0.76 \quad 0.84 \\ \hline 15 & 1:10 & 0.54 \quad 0.69 \quad 0.57 \\ \hline 40 & 1:0 & 6.84 \quad 7.68 \quad 6.79 \\ \hline 40 & 1:1 & 1.72 \quad 1.55 \quad 1.82 \\ \hline 40 & 1:5 & 0.68 \quad 0.83 \quad 0.89 \\ \hline 40 & 1:10 & 0.58 \quad 1.13 \quad 1.28 \\ \hline 100 & 1:0 & 6.61 \quad 6.66 \quad 7.43 \\ \hline 100 & 1:1 & 1.25 \quad 1.46 \quad 1.49 \\ \hline 100 & 1:5 & 1.17 \quad 1.27 \quad 1.16 \\ \hline 100 & 1:10 & 0.93 \quad 0.67 \quad 0.80 \\ \hline \end{array}$$
a) Estimate all main effects and interactions.
b) Construct an ANOVA table. You may give ranges for the P-values.
c) Is the additive model plausible? Provide the value of the test statistic, its null distribution, and the P-value.
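A sketch (not part of the exercise) showing how the two-way ANOVA table in part (b) could be produced with statsmodels from the data above.

```python
# Sketch: two-way ANOVA with interaction for the trifluralin sorption data.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

data = {
    (15, "1:0"): [10.90, 8.47, 12.43], (15, "1:1"): [3.33, 2.40, 2.67],
    (15, "1:5"): [0.79, 0.76, 0.84],   (15, "1:10"): [0.54, 0.69, 0.57],
    (40, "1:0"): [6.84, 7.68, 6.79],   (40, "1:1"): [1.72, 1.55, 1.82],
    (40, "1:5"): [0.68, 0.83, 0.89],   (40, "1:10"): [0.58, 1.13, 1.28],
    (100, "1:0"): [6.61, 6.66, 7.43],  (100, "1:1"): [1.25, 1.46, 1.49],
    (100, "1:5"): [1.17, 1.27, 1.16],  (100, "1:10"): [0.93, 0.67, 0.80],
}
rows = [{"conc": str(conc), "ratio": ratio, "sorption": y}
        for (conc, ratio), values in data.items() for y in values]
df = pd.DataFrame(rows)

# Fit the two-factor model with interaction and build the ANOVA table.
model = ols("sorption ~ C(conc) * C(ratio)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```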
|
{}
|
# A4 has no subgroup of order 6
• December 17th 2011, 12:00 AM
Bernhard
A4 has no subgroup of order 6
In Chapter 11 of Armstrong: Groups and Symmetry, Armstrong gives an argument (see bottom of page 59 and top of page 60 of Armstrong - attached)
The argument runs as follows: (see attached)
$A_4$ does not contain a subgroup of order 6. For suppose that H is a subgroup of $A_4$ which has six elements. If a 3-cycle belongs to H, its inverse must also belong to H, so the number of 3-cycles in H is even. There cannot be six as we need room for the identity element. Suppose there are four, say $\alpha$, $\alpha^{-1}$, $\beta$, $\beta^{-1}$. Then $\epsilon$, $\alpha$, $\alpha^{-1}$, $\beta$, $\beta^{-1}$, $\alpha \beta$, $\alpha \beta^{-1}$ are all distinct and belong to H, contradicting our assumption that |H| = 6.
Finally, if only two 3-cycles lie in H, then H must contain the subgroup { $\epsilon$, (12)(34), (13)(24), (14)(23)}. But 4 is not a factor of 6 so Lagrange's Theorem rules out this case too. We conclude that no subgroup of order 6 exists.
================================================== ===
Firstly, why does Armstrong focus on 3-cycles in his reasoning regarding a subgroup of order 6?
Secondly, I cannot follow the statement:
"Finally, if only two 3-cycles lie in H, then H must contain the subgroup { $\epsilon$, (12)(34), (13)(24), (14)(23)}. "
Why does this if-then statement follow?
Peter
• December 17th 2011, 12:51 AM
Opalg
Re: A4 has no subgroup of order 6
Quote:
Originally Posted by Bernhard
"Finally, if only two 3-cycles lie in H, then H must contain the subgoup { $\color{red}\epsilon$, (12)(34), (13)(24), (14)(23)}. "
Why does this if-then statement follow?
$A_4$ consists of 12 elements, 8 of which are 3-cycles. If a subgroup contains 6 elements, only 2 of which are 3-cycles, then it must contain all the elements that are not 3-cycles.
• December 17th 2011, 12:53 AM
Deveno
Re: A4 has no subgroup of order 6
one reason to focus on 3-cycles, is that for n ≥ 3, the 3-cycles generate An.
there are a number of ways to show that H must contain {e, (1 2)(3 4), (1 3)(2 4), (1 4)(2 3)} if it only contains 2 3-cycles. the most elementary way is to show that if we only have 2 3-cycles, then those 4 elements of A4 are all that is left (because the other 8 elements of A4 are all 3-cycles:
(1 2 3)
(1 3 2)
(1 2 4)
(1 4 2)
(1 3 4)
(1 4 3)
(2 3 4)
(2 4 3).
but even if we had not bothered to count the 3-cycles in A4, the only possible elements in A4 are the identity, 3-cycles, and 2,2-cycles (disjoint).
if we only have 2 3-cycles in H, the remaining elements must be 2,2-cycles. but any pair of 2,2-cycles when multiplied, yields a 3rd. that is:
(a b)(c d)(a c)(b d) =
a-->c-->d
b-->d-->c
c-->a-->b
d-->b-->a = (a d)(b c), so any 2 2,2-cycles generate a group of order 4 (which is necessarily abelian, it is easy to check that (a c)(b d)(a b)(c d) = (a d)(b c) as well).
the case where A4 has just 2 3-cycles needs to be ruled out, because any group of order 6 is either cyclic, or isomorphic to S3. clearly we can rule out the cyclic case, because S4 has no elements of order 6 at all. but S3 has no elements of order 6, either, but has just 2 elements of order 3. if it were not the case that the 2,2-cycles commuted, then we might have a subgroup of A4 isomorphic to S3 (compare this to when we have two transpositions, which do NOT commute, and the product of two transpositions can lead to a 3-cycle).
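Not part of the original thread: for anyone who wants a computational sanity check, a short Python brute-force search confirms that no 6-element subset of A4 is closed under composition.

```python
# Sketch: brute-force check that A4 has no subgroup of order 6.
from itertools import combinations, permutations

def compose(p, q):
    """Composition of permutations given as tuples: (p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

def is_even(p):
    inversions = sum(1 for i in range(4) for j in range(i + 1, 4) if p[i] > p[j])
    return inversions % 2 == 0

A4 = [p for p in permutations(range(4)) if is_even(p)]   # the 12 even permutations
identity = (0, 1, 2, 3)

def is_subgroup(subset):
    # A finite subset containing the identity and closed under composition is a subgroup.
    s = set(subset)
    return identity in s and all(compose(a, b) in s for a in s for b in s)

print(any(is_subgroup(c) for c in combinations(A4, 6)))   # prints False
```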
|
{}
|
# Primitives¶
Reference
Mode: Object Mode and Edit Mode
Panel: Tool Shelf ‣ Create ‣ Add Primitive/Mesh
Hotkey: Shift-A
A common object type used in a 3D scene is a mesh. Blender comes with a number of “primitive” mesh shapes that you can start modeling from. You can also add primitives in Edit Mode at the 3D cursor.
Note
You can make a planar mesh three-dimensional by moving one or more of the vertices out of its plane (applies to Plane, Circle and Grid). A simple circle is actually often used as a starting point to create even the most complex of meshes.
Hint
In order to facilitate the modeling, the best solution is to imagine what primitive type suits better for your model. If you will model a cuboid, the best solution is to start with a primitive cube, and so on.
## Common Options¶
These options can be specified in the Operator panel in the Tool Shelf, which appears when the object is created. Options included in more than one primitive are:
Generate UVs
Generates a default UV-unwrapping of new geometry. This will be defined in the first UVLayer (which will get added if needed). (available for plane, cube, circle, UV-/icosphere, tube and cone).
Radius/Size, Align to View, Location, Rotation
See Common Object Options.
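The same primitives can also be added from a script; the following is a rough sketch using the bpy Python API (operator and parameter names vary slightly between Blender releases, so treat it as illustrative rather than definitive).

```python
# Rough sketch: adding primitives via the Blender Python API.
import bpy

# A plane with generated UVs at the world origin.
bpy.ops.mesh.primitive_plane_add(calc_uvs=True, location=(0.0, 0.0, 0.0))

# A UV sphere with explicit segment and ring counts.
bpy.ops.mesh.primitive_uv_sphere_add(segments=32, ring_count=16)

# A cylinder whose cap fill type is a single n-gon.
bpy.ops.mesh.primitive_cylinder_add(vertices=32, depth=2.0, end_fill_type='NGON')
```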
## Plane¶
The standard plane is a single quad face, which is composed out of four vertices, four edges, and one face. It is like a piece of paper lying on a table; it is not a three-dimensional object because it is flat and has no thickness. Objects that can be created with planes include floors, tabletops, or mirrors.
## Cube¶
A standard cube contains eight vertices, twelve edges, and six faces, and is a three-dimensional object. Objects that can be created out of cubes include dice, boxes, or crates.
## Circle¶
Vertices
The number of vertices that define the circle or polygon.
Fill Type
Set how the circle will be filled.
Triangle Fan
Fill with triangular faces which share a vertex in the middle.
N-gon
Fill with a single n-gon.
Nothing
Do not fill. Creates only the outer ring of vertices.
## UV Sphere¶
A standard UV sphere is made out of quad faces and a triangle fan at the top and bottom. It can be used for texturing.
Segments
Number of vertical segments. Like the Earth’s meridians, going pole to pole.
Rings
Number of horizontal segments. These are like the Earth’s parallels.
Note
Rings are face loops and not edge loops, which would be one less.
## Icosphere¶
An icosphere is a polyhedra sphere made up of triangles. Icospheres are normally used to achieve a more isotropical layout of vertices than a UV sphere.
Subdivisions
How many recursions are used to define the sphere. At level 1 the Icosphere is an icosahedron, a solid with 20 equilateral triangular faces. Any increasing level of subdivision splits each triangular face into four triangles.
Note
Subdividing an icosphere raises the vertex count very quickly even with few iterations (10 subdivisions create 5,242,880 triangles). Adding such a dense mesh is a sure way to cause the program to crash.
## Cylinder¶
Objects that can be created out of cylinders include handles or rods.
Vertices
The number of vertical edges between the circles used to define the cylinder or prism.
Depth
Sets the starting height of the cylinder.
Cap Fill Type
Similar to circle (see above). When set to none, the created object will be a tube. Objects that can be created out of tubes include pipes or drinking glasses (the basic difference between a cylinder and a tube is that the former has closed ends).
## Cone¶
Objects that can be created out of cones include spikes or pointed hats.
Vertices
The number of vertical edges between the circles or tip, used to define the cone or pyramid.
Radius 1
Sets the radius of the circular base of the cone.
Radius 2
Sets the radius of the tip of the cone, which creates a frustum. A value of 0 will produce a standard cone shape.
Depth
Sets the starting height of the cone.
Base Fill Type
Similar to circle (see above).
## Torus¶
A dough-nut-shaped primitive created by rotating a circle around an axis. The overall dimensions can be defined by two methods.
Operator Presets
Torus preset settings for reuse. These presets are stored as scripts in the proper presets directory.
Major Segments
Number of segments for the main ring of the torus. If you think of a torus as a “spin” operation around an axis, this is how many steps in the spin.
Minor segments
Number of segments for the minor ring of the torus. This is the number of vertices of each circular segment.
### Torus Dimensions¶
Change the way the torus is defined.
Major/Minor, Exterior/Interior
Major Radius
Radius from the origin to the center of the cross sections.
Minor Radius
Radius of the torus’s cross section.
Exterior Radius
If viewed along the major axis, this is the radius from the center to the outer edge.
Interior Radius
If viewed along the major axis, this is the radius of the hole in the center.
## Grid¶
A regular quadratic grid which is a subdivided plane. Example objects that can be created out of grids include landscapes and organic surfaces.
X Subdivisions
The number of spans in the X axis.
Y Subdivisions
The number of spans in the Y axis.
## Monkey¶
This is a gift from old NaN to the community and is seen as a programmer’s joke or “Easter Egg”. It creates a monkey’s head once you press the Monkey button. The Monkey’s name is “Suzanne” and is Blender’s mascot. Suzanne is very useful as a standard test mesh, much like the Utah Tea Pot or the Stanford Bunny.
Note
|
{}
|
# Samacheer Kalvi 6th Maths Solutions Term 1 Chapter 4 Geometry Intext Questions
## Tamilnadu Samacheer Kalvi 6th Maths Solutions Term 1 Chapter 4 Geometry Intext Questions
Try These (Textbook Page No. 80, 81)
Question 1.
Name all the line segments.
Solution:
$$\overline{\mathrm{AB}}, \overline{\mathrm{AE}}, \overline{\mathrm{EB}}, \overline{\mathrm{CD}}, \overline{\mathrm{CE}} \text { and } \overline{\mathrm{ED}}$$
Question 2.
If AB = 5 cm, say which of the following measures are correct in fig 4.9.
Solution:
Fig 4.9(i) and fig 4.9(ii) measures are correct.
Try These (Textbook Page No. 85)
Question 1.
1. Name the rays in the given figure.
2. What is the common point of all these rays?
Solution:
1. $$\overrightarrow{\mathrm{TA}}, \overrightarrow{\mathrm{TB}}, \overrightarrow{\mathrm{TC}} \text { and } \overrightarrow{\mathrm{TD}}$$ are the rays given
2. Point T is the common point of all these rays.
Try These (Textbook Page No. 90, 95)
Question 1.
Which direction will you face if you start facing West and take three right turns clockwise?
Solution:
Will be facing South.
Question 2.
Which direction will you face if you start facing North and take two right turns anticlockwise?
Solution:
Will be facing South.
Question 3.
Adjust the hands of the clock for following time, note the angle made between the hour hand and the minute hand and write the type of angle.
Solution:
|
{}
|
# End of Life - Fatigue
## End of Life - Fatigue
(OP)
Hello,
I am doing an end of life study for an offshore crane. The crane was initially designed for 25000 cycles with max load, but it hasn't gone through that in its lifetime. What I'm wondering is: if we were to do a thorough check of the welds and the rest of the structure and we find no cracks in it - can we reset the fatigue life to the original 250000 cycles?
thanks
Gaurav
### RE: End of Life - Fatigue
No. Fatigue is a time dependent failure mechanism based on loading. There is a reason for tracking use cycles on critical equipment. Having no cracks after x cycles of use means no current issues, but you cannot reset the clock. The cycles continue to accumulate, which means you must track your 25,000 cycle limit for safe operation.
### RE: End of Life - Fatigue
I would add that the fatigue life is a function of the number of cycles and the load per cycle, and can be considered independent of time, per se. If the actual , repetitive load were limited to about 25% of the original design load then ,perhaps theoretically, one could raise the fatigue life by a factor of 10, but I am not sure your insurance company would agree.
There were several major catastrophic failures of offshore drill rigs in the north atlantic in the 1960's - 1980's,involving many workers deaths . After analysis, some of the failures were attributed to fatigue failure at the welds, which led to the BS british standards methods of evaluating the fatigue life of welds . Some parts of this method are contained in the EU's PED, for example EN 12952-3 annex B tables B1,2,3. These tables to not account for corrosive effects that are expected in sea structures.
In any case, if you really need to revise the fatigue life, there is likely available a number of fatigue experts who specialize in welded sea structures, as the design of these platform rigs is now a developed art.
"In this bright future, you can't forget your past..." Bob Marley
### RE: End of Life - Fatigue
(OP)
Thanks guys,
I did think that we should not reset the clock, esp if the number of cycles have not been achieved.
However then my question would be "what happens after the 25000 cycle has been reached?" We do a thorough inspection and find there are no cracks or areas of concern. I guess we cannot reset anything, but continue to monitor.
cheers
Gaurav
### RE: End of Life - Fatigue
This is both a philosophical question and a very important engineering application question.
In general, it is considered that "failure" in the context of a fatigue analysis means that you have sufficient load/cycles to initiate a crack. However, that does not mean that the component or equipment has failed.
My recommendation for in-service equipment is to take a fracture-mechanics approach to cyclic life. Postulate a crack size, based on the minimum-detectable crack size for the NDE inspection technique that you will use. Then, using the cyclic stresses from the cyclic loading that you calculated from your fatigue analysis, perform crack growth calculations. Calculate the number of cycles it will take for the crack to reach a critical size (usually based on the crack behaviour changing from ductile to brittle, but there may be other criteria, depending on the equipment/component). Halve that number, and that now becomes your inspection interval.
Every inspection that you perform that doesn't find a crack resets the inspection interval (and hence the life). However, expect Murphy's law to bite you in the a, so always have a repair plan in-place for when you do find a crack!
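Not part of the original post: a minimal numerical sketch of the crack-growth calculation described above, with entirely assumed Paris-law constants, geometry factor, stress range and crack sizes.

```python
# Illustrative sketch only (made-up parameters): cycles to critical crack size via the Paris law,
# da/dN = C * (dK)^m with dK = Y * dS * sqrt(pi * a).
import math

C, m = 3.0e-12, 3.0        # assumed Paris constants (m/cycle, MPa*sqrt(m))
Y = 1.12                   # assumed geometry factor
delta_sigma = 80.0         # assumed cyclic stress range, MPa
a0 = 1.0e-3                # assumed initial (minimum detectable) crack size, m
a_crit = 20.0e-3           # assumed critical crack size, m

a, cycles, da_step = a0, 0.0, 1.0e-5
while a < a_crit:
    dK = Y * delta_sigma * math.sqrt(math.pi * a)
    dadN = C * dK**m                 # crack growth per cycle
    cycles += da_step / dadN         # cycles needed to grow by da_step
    a += da_step

print(f"cycles to critical size ≈ {cycles:,.0f}")
print(f"suggested inspection interval ≈ {cycles / 2:,.0f} cycles")
```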
The problem with fatigue in general, and fatigue curve specifically, is that it is quite random. A graph of test data showing stress vs cycles will look like a shotgun blast at the paper. Most published curves will either be a lower-bound curve or a mean curve with a design margin to bring it down to approximate a lower-bound curve. Nevertheless, there is inevitably test data that falls below the curve, so at best you can think of these fatigue curves as -2σ or -3σ curves, with the expectation that fatigue crack initiation is possible.
Now, for welded structures, fatigue failure is likely to occur in the welds, as welds behave differently than smooth bar steel. Unless your fatigue analysis used something like the Structural Stress Method (Verity Method), you should almost expect pre-existing microcracks in welds to exist and have grown. Depending on the NDE originally performed on the welds, it is possible that the life of the welds is an order of magnitude or more lower than what you would otherwise expect for non-welded construction.
### RE: End of Life - Fatigue
All that I can add to TGS4's good comments is that we also take a statistical approach.
We do a baseline inspection before critical lifting equipment ever goes into service, and then we do a first inspection at 1/2 the estimated life. For some equipment that is 2 years, and with some it is 10 years, we never go beyond 10 years. And from there out we start reducing inspection intervals. Some critical (and old) equipment is on 6 mo inspection now.
= = = = = = = = = = = = = = = = = = = =
P.E. Metallurgy, Plymouth Tube
### RE: End of Life - Fatigue
@Fencer01
The subject about which you are enquiring is generally known as 'fitness for service.' On that note, you may care to study such documents as BS 7910, offering a prime example of assessment methodology.
Steve Jones
Corrosion Management Consultant
All answers are personal opinions only and are in no way connected with any employer.
### RE: End of Life - Fatigue
Parts that ultimately fail under fatigue spend most of their lives accumulating fatigue cycles before cracking initiates. Crack growth per cycle also increases as the crack propagates. You run the risk of a crack initiating and growing to failure if you have reached design life cycles and your inspection interval is too long to ensure actually finding cracks before failure. Your design criteria assumes maximum load to ensure you do not ever risk reaching a point where cracking can begin and failure can occur.
### RE: End of Life - Fatigue
I asked this same question of Dr Lincoln, USAF aircraft Fatigue expert ~1985.
IF All defects remain below the detectable level that was considered the 'threshold' for analysis, then yes, in-theory, the clock can be reset. HOWEVER... the Statistical probability still exists for undetected damage.
The problem is that manufacturing is imperfect, operations and service environments can be unpredictable and cracks are all rogues... not to mention that corrosion and SCC can be wild cards waiting to pop-up. Miss one rogue crack, or corrosion-spot, in thousands of holes/fillets/edges/surfaces/etc and the wing still comes off the jet... most likely killing the crew. This is compounded by the numbers of aircraft built and the numbers of fatigue/strength critical locations.
Worse, yet, a study of NDI specialists found that the very best of them could consistently find cracks ~0.2--0.3" long... and the worst could consistently miss cracks exceeding 1.0". I confirmed this dismal fact in-the-field many times... I was usually better at detecting cracks [and defects like corrosion] than 95% of NDI techs... because I knew what to look for and where-to-look and how to look and had 'fresh eyes'.
NOTE.
A friend lost his nephew in a power-generation plant accident ~10 years ago. A section of steam pipe at an 'S' curve was eroded/pitted internally... undetected by external inspection and an occasional/haphazard US NDI. WHEN the side of the elbow failed, his nephew/crew-mates were walking-by and happened to be in the fragment path. His nephew took a salad-plate size elbow-fragment in the side of his chest that knocked him against a wall. He was dead in a split second. Many of his crew suffered serious hits from smaller fragments and steam injuries.
Regards, Wil Taylor
o Trust - But Verify!
o We believe to be true what we prefer to be true. [Unknown]
o For those who believe, no proof is required; for those who cannot believe, no proof is possible. [variation,Stuart Chase]
o Unfortunately, in science what You 'believe' is irrelevant. ["Orion", Homebuiltairplanes.com forum]
### RE: End of Life - Fatigue
I don't agree that the clock can be reset of defects are below a specified length.
This assumes that you have studied the defect tolerance of the structure and understand the fatigue crack propagation.
There is some confusion in the original post an that is whether or not the crane has been designed for 25000 or 250000 as both number are quoted.
The issue must be how has the fatigue limit been arrived at? What is the anticipated loading spectrum and what is the accumulated damage of it has only been loaded to 90% of its load.
As stated fatigue is stochastic and without much more data I think to rate the crane for Maximum Load Cycles is unhelpful.
The only way to fully understand the condition of the crane is to conduct a study examining loads and cycles. Typically, the average lifted load of a typical offshore crane does not stray significantly from 3.2Te with a variance of 0.75Te.
Although the weight of the load does play a part in the analysis, it is more pertinent to study the stress range. Calculating the difference between the existing residual strain on the structure, the added stress during an actual lift and the spare lifting capacity provides an accurate picture of the fatigue experienced.
Because of the relatively low average load, the number of lift cycles has a greater effect on the fatigue life of structure than the lifted load itself. This also means there are many large capacity cranes operating at a fraction of their design limits. Lower stress ranges provide longer fatigue lives and occasional lifts at maximum capacity will not significantly affect their fatigue life.
Cranes that predominately operate towards the outer regions of their radius range will result in greater stress ranges and therefore reduce their fatigue life compared to those operating at closer ranges.
All of this can be plotted on an S/N curve with fatigue strength and the number of cycles along each axis with the Miner’s Rule then used to calculate the damage through variable stress ranges.
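Not part of the original post: a minimal sketch of a Miner's rule damage summation, with an assumed S-N curve and an assumed load spectrum.

```python
# Minimal sketch (assumed S-N curve and load spectrum): Miner's rule damage summation.
# S-N curve of the form N(S) = A / S^k; damage D = sum(n_i / N_i), failure predicted near D = 1.
A, k = 2.0e12, 3.0                      # assumed S-N curve constants

def cycles_to_failure(stress_range):
    return A / stress_range**k

# Assumed load spectrum: (stress range in MPa, number of lift cycles at that range).
spectrum = [(40.0, 20000), (80.0, 4000), (120.0, 500)]

damage = sum(n / cycles_to_failure(s) for s, n in spectrum)
print(f"accumulated damage D ≈ {damage:.4f}")
print(f"remaining fraction of fatigue life ≈ {1 - damage:.4f}")
```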
A critical part of analysing fatigue is determining an accurate life cycle. In the majority of cases gaining reliable information to determine the crane’s life cycle can be a challenge. Many of today’s cranes have a recording load indicator system with a full history of each lift, detailing the weight and the position the crane was in when this took place. This is ideal for fatigue analysis but without this information it can be difficult.
It is still possible to establish realistic life cycles for any particular offshore platform crane even without a recording load indicator system. Knowing the type of work, when drilling programmes have taken place and even the deck layout makes it possible to establish the likely lifting frequency, average weights involved and where loads were being moved to and from.
It can take some effort and some skill but it may be possible to carry out some analysis and decide when replacement is required.
### RE: End of Life - Fatigue
#### Quote (Fencer01)
The crane was initially designed for 25000 cycles with max load, but it hasn't gone through that in its lifetime. What I'm wondering is: if we were to do a thorough check of the welds and the rest of the structure and we find no cracks in it - can we reset the fatigue life to the original 250000 cycles?
Is this a typo? For ten times the number of load cycles (250,000 vs 25,000) the applied load would need to be significantly reduced.
|
{}
|
# How to Calculate Momentum
Momentum is the measurement of the quantity of an object's motion.[1] You can find momentum if you know the velocity and the mass of the object. It will be easy once you understand the formula.
## Steps
1. Write down the formula ${\displaystyle p=mv}$. In the formula, ${\displaystyle p}$ stands for the momentum, ${\displaystyle m}$ stands for the mass, and ${\displaystyle v}$ stands for the velocity.[2]
2. Find the mass. Mass is the amount of matter in an object.[3] To measure the mass of an object, you can use a balance. In physics, there is a rule: you have to measure things in SI units which all the scientists in the world use. The SI unit for mass is kilogram or kg.[4]
3. Find the velocity. Velocity is the speed and the direction that the object travels.[5] Right now, we will only concentrate on the speed part. (While speed is just a scalar that has just a magnitude, velocity is a vector that has a magnitude and direction. However, momentum is not affected by direction.) You can calculate the velocity of an object by dividing the distance that the object traveled by the time it took to travel the distance. Again, you have to measure velocity in SI unit. The SI unit for velocity is m/s (meters per second).
4. Following the formula ${\displaystyle p=mv}$, plug in the mass and the velocity.
5. Multiply the mass by the velocity. That is the momentum! The measurement for momentum is kg*m/s.
• For example, when the mass of an object is ${\displaystyle 4kg}$ and its velocity is ${\displaystyle 2.5{\frac {m}{s}}}$, then its momentum is equal to ${\displaystyle 4kg\cdot 2.5{\frac {m}{s}}=10{\frac {kg\cdot m}{s}}}$.
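A tiny sketch (not part of the original article) of the same calculation in Python:

```python
# Sketch: momentum for the example above.
def momentum(mass_kg, velocity_m_per_s):
    return mass_kg * velocity_m_per_s   # p = m * v, in kg*m/s

print(momentum(4, 2.5))   # 10.0 kg*m/s
```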
## Community Q&A
• Question
What if I don't have the mass?
Community Answer
Then you need to find the mass, because there's really no way to calculate momentum without it.
• Question
How do I calculate momentum if the velocity is in mi/hr?
Community Answer
Stop measuring using the imperial system. You can multiply by 8/5 to get it to km/hr, then divide by 3600 to get it to km/s, and then multiply by 1000 to get it to m/s, but that would only give you an approximation, as a kilometre is not exactly 5/8 of a mile.
• Question
How do I calculate momentum if I don't know the mass?
Community Answer
In order to do this, you need to know the mass. Or, multiply the density by the volume and you get the mass.
## About This Article
wikiHow is a “wiki,” similar to Wikipedia, which means that many of our articles are co-written by multiple authors. To create this article, 10 people, some anonymous, worked to edit and improve it over time. This article has been viewed 21,807 times.
Co-authors: 10
Updated: October 17, 2020
Views: 21,807
Categories: Classical Mechanics
|
{}
|
# Math Help - Upeer and Lower bouds of big O
1. ## Upeer and Lower bouds of big O
Is my working of this problem ok?
f:N->R
f(n) = $\frac{n^2+2log2n}{n+1}$
prove f(n) = O(n) < - - - thats big O with a "-" in it
$|f(n) | = |\frac{n^2+2log2n}{n+1}|$
= $\frac{n^2+2log2n}{n+1}$ since each term is >=0
<= $\frac{n^2+2n}{n+1}$ since log2n<=n
<= $\frac{n^2+2n^2}{n+1}$ since n>=1
= $\frac{3n^2}{n+1}$
= $\frac{3n}{1}$
=3n
$f(n) = \frac{n^2+2log2n}{n+1}$ since each term >= 0
>= $\frac{n^2}{n+1}$ since log2n is >= 0 for n>=1
= $\frac{n}{1}$
=n
Therfore
f(n) = O(n)
I really hope i am doing this correctly
Thanks for any help and input.
2. More simple and powerful is to divide the numerator and denominator by n, obtaining...
$f(n) = \frac{n^{2} + 2\ \ln 2n}{n+1} = \frac{n + 2\ \frac{\ln n}{n}}{1+\frac{1}{n}}$ (1)
... so that is...
$\lim_{n \rightarrow \infty} \frac{f(n)}{n} = 1 \rightarrow f(n) \sim n _ {n \rightarrow \infty}$ (2)
Kind regards
$\chi$ $\sigma$
3. thanks for the help Chisigma, but what is more simple to you seems a step beyond what I am learning at the moment (I think).
I just tried to answer the problem as close to the example solutions I had studied. We didn't divide the numerator, just worked it out in the style I have above.
I am VERY new to discrete maths, is my attempt at the solution completely incorrect then? or even close?
4. Hi, Can someone look at my original post, (even though I spelled "upper" incorrectly").
Chisigma gave me an answer I don't really understand. She said " a more simple and powerful...." but I don't know if this implies my attempt was ok, but could be better done, or if my attempt was wrong.
But I need to know if my attempt at the solution was way off, or good enough.
Thanks for any help/input
$\frac{3n^2}{n+1}\neq 3n$, but $\frac{3n^2}{n+1} \leq \frac{3n^2}{n-\frac12n}=6n$ for $n>0$
$\frac{n^2}{n+1}\neq n$, but $\frac{n^2}{n+1} \geq \frac{n^2}{n+n}=\frac n2$ for $n>0$
Also you don't need to find a lower bound, unless you're trying to show $f(n)=\Theta(n)$.
Your first argument shows $f(n)=O(n)$ and your second argument shows $f(n)=\Omega(n)$.
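Not part of the original thread: a quick numerical check (reading log2n as log(2n), as in the reply above) that f(n)/n tends to 1, consistent with the bounds discussed here.

```python
# Sketch: f(n)/n approaches 1, so f(n) = Theta(n).
import math

def f(n):
    return (n**2 + 2 * math.log(2 * n)) / (n + 1)

for n in (10, 100, 1000, 10**6):
    print(n, f(n) / n)   # ratios approach 1 and stay between 1/2 and 6, matching the bounds above
```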
|
{}
|
# Using Keras to Predict a Function Following a Normal Distribution
I am currently trying to predict a function that has a shape similar to that of a normal distribution. It is defined as:
(4 γΩ )/((γ+2nγ)^2+(4Δ^2+2Ω^2))
I have tried to use relu, sigmoid, and tanh activation functions.
I have also tried mae, mse and binary_crossentropy loss functions.
I have also tried adam, rmsprop and sgd optimizers, and have also played around with learning rates.
Nothing seems to work. When I perform a regression analysis, I'm told that the error percentage, mse and rmse values are low.
But when I plot the predictions, it just shows a linear function.
Any suggestions?
My architecture looks like:
# Imports restored for completeness; they were not shown in the original post.
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense

df = pd.read_csv('DataGen(N=1,Gamma=0.1,Omega=5).csv')
dataset = df.values
X = dataset[:1800000, 0].reshape(-1, 1)
Y = dataset[:1800000, 2].reshape(-1, 1)
X2 = dataset[1800001:, 0].reshape(-1, 1)
Y2 = dataset[1800001:, 2].reshape(-1, 1)

scaler = MinMaxScaler(feature_range=(0, 1))
X_min = scaler.fit_transform(X)
Y_min = scaler.fit_transform(Y.reshape(-1, 1))
X_test = scaler.fit_transform(X2)
Y_test = scaler.fit_transform(Y2.reshape(-1, 1))

seed = 7
np.random.seed(seed)
X_tr, X_val, Y_tr, Y_val = train_test_split(X, Y, test_size=0.1, random_state=seed)

def NN():
    # First layer as quoted in the answer below; later layers were not shown in the post.
    model = Sequential()
    model.add(Dense(1, input_dim=1, activation='tanh', kernel_initializer='normal'))
    return model
My plotted results look kind of like this:
The blue points are the expected values and the red points are predictions.
• What is your model architecture? Can we see some examples of inputs to your function and its respective output? – JahKnows Sep 8 '18 at 19:18
• Please recheck the question, I redited it now. Thanks – DeepLearner Sep 8 '18 at 20:20
My guess is that the problem is with your first layer: model.add(Dense(1, input_dim=1, activation='tanh',kernel_initializer = 'normal')). With a single hidden unit, a lot of information from the input layer will not be available to subsequent layers.
• If it were me, I'd start with one dense layer with 32 neurons. model = Sequential() model.add(Dense(32, input_dim=4, activation='relu')) model.add(Dense(1, activation=None)) Then, I'd see if more or less neurons makes any difference. Then I would add another layer with 32 neurons, and repeat until I was happy with the result. A network with many layers is typically used with hierarchical relationships; with 4 inputs, I don't think you should require more than one or two layers. – from keras import michael Sep 8 '18 at 21:13
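Not part of the original answer: a runnable sketch of the architecture suggested in the comment above (the compile and fit settings are illustrative assumptions, not from the post).

```python
# Sketch: wider single-hidden-layer regression model, as suggested in the comment.
from keras.models import Sequential
from keras.layers import Dense

def wider_nn(input_dim=1):
    model = Sequential()
    model.add(Dense(32, input_dim=input_dim, activation='relu'))  # wider first layer
    model.add(Dense(1, activation=None))                          # linear output for regression
    model.compile(optimizer='adam', loss='mse')                   # assumed settings
    return model

model = wider_nn()
# model.fit(X_tr, Y_tr, validation_data=(X_val, Y_val), epochs=50, batch_size=1024)
```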
|
{}
|
## Extreme Universe: The most distant object known! April 28, 2009
Posted by jcconwell in Extreme Universe, Gamma Ray Bursts.
Tags:
On April 23, 2009, the Swift satellite detected a record-breaking explosion: a spectacular gamma-ray burst seen 13 billion light years away, with a redshift of z = 8.2, the highest ever measured.
Combined X-ray, UV image from Swift
The cataclysmic explosion of a giant star early in the history of the Universe is the most distant single object ever detected by telescopes.
The colossal blast was picked up first by Nasa’s Swift space observatory which is tuned to see the high-energy gamma-rays emitted from extreme events. Other telescopes then followed up the signal, confirming the source to be more 13 billion light-years away. Scientists say the star’s destruction probably resulted in a black hole.
“This gets us into a realm where we’ve never been before,” said Nial Tanvir, of the University of Leicester, UK. “This is the most remote gamma-ray burst (GRB) ever detected, and also the most distant object ever discovered.”
“We completely smashed the record with this one,” said Edo Berger, a professor at Harvard University and a member of the team that first measured the burst’s origin. “This demonstrates for the first time that massive stars existed in the early Universe.”
GRB 090423 Infrared afterglow as seen by Gemini North
The burst occurred some 13.1 billion years ago or, perhaps a bit more accurately, when the Universe was only 630 million years old, a mere one-twentieth of its current age. Astronomers like to use age rather than distance because when you get this close to the big bang, there are three ways (at least) of referring to distance.
There is a Luminosity distance which ASSUMES a 1/(distance squared) law, which works when the space in between is FLAT.
There is the way that you’ll see it referred to in the press, most of the time, since the light has been traveling for 13.1 billion years, the distance is 13.1 billion light-years. Not wrong, but it assumes no expansion.
Then there is….sound of can opener, opening up can of worms….
the proper distance… the distance you would measure if you could take into account all the extra real-estate the universe has added for 13.1 billion years, the expansion of the universe.
To give a little background in redshift and cosmology, a redshift is an increase in the wavelength of the light. There are three types of redshift. The first is Doppler caused by the motion of the source away from the observer. The second is a gravitational redshift caused by light climbing out from a strong gravitational field, like a black hole or neutron star. The third is what we see here the cosmological redshift, caused by the expansion of the universe.
All three are measured by a number called z. This number is the fractional change in the wavelength of the light, or
z = (λ-λ0)/λ0
Where λ0 is the wavelength emitted from far away and λ is what we see in our telescope. The new gamma ray burst had a z = 8.2, meaning an ultraviolet line of Hydrogen emitted at 121 nm would be shifted all the way out to the infrared at about 1110 nm, since λ = λ0(1 + z) (visible light is between 380 nm and 750 nm).
Now in cosmology, General Relativity gives a relation between the AGE of the object, since the big bang, with t=o as the moment of the big bang and its redshift z. Using a model of the expansion of the universe, redshift can be related to the age of an observed object, the so-called cosmic time–redshift relation. This depends on the shape and density of the universe, if we denote a density ratio as Ω0:
$\Omega_0 = \frac {\rho}{ \rho_{crit}} \ ,$
with ρcrit the critical density dividing a universe that eventually crunches from one that simply expands. This density is about three hydrogen atoms per cubic meter of space. At large redshifts one finds, with H0 as the Hubble constant at the present time:
$t(z) = \frac {2}{3 H_0 {\Omega_0}^{1/2} (1+ z )^{3/2}} \ ,$
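Not part of the original post: a short sketch that reproduces the numbers above under assumed parameters (H0 = 70 km/s/Mpc, Ω0 = 0.3).

```python
# Sketch: observed wavelength and approximate age at z = 8.2.
import math

z = 8.2
lam_emitted_nm = 121.6                    # Lyman-alpha rest wavelength
lam_observed_nm = lam_emitted_nm * (1 + z)
print(lam_observed_nm)                    # ≈ 1119 nm, in the near infrared

# Matter-dominated approximation t(z) = 2 / (3 H0 sqrt(Omega0) (1+z)^{3/2}),
# with assumed H0 = 70 km/s/Mpc and Omega0 = 0.3.
H0 = 70 * 1000 / 3.086e22                 # s^-1
Omega0 = 0.3
t_seconds = 2 / (3 * H0 * math.sqrt(Omega0) * (1 + z)**1.5)
print(t_seconds / 3.156e7 / 1e6)          # ≈ 610 Myr, close to the ~630 Myr quoted above
```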
But finding the distance is a little more complicated.
Picture walking along a sidewalk to your friends house one block away. Now if you had a insane construction crew adding sidewalk as you were walking, by the time you got to your friend’s house and looked back you might see a lot more than one block of sidewalk.
Well the mad construction crew of the universe can add a lot in 13.1 billion years, so that if you look back now to the gamma ray burst you might find it around 40 billion light years away.
|
{}
|
# Physics/Essays/Fedosin/Quantum Gravitational Resonator
Quantum Gravitational Resonator (QGR) – a closed topological object of three-dimensional space; in the general case a cavity of arbitrary form, which has a definite surface and thickness. In contrast to the classical case, there are no gravitational waves and no radiation losses in the QGR, but an "infinite" (undamped), phase-shifted oscillation of electric-like and magnetic-like gravitational fields, due to the quantum properties of the QGR.
## History
Considering that the theory of the gravitational resonator is based on the Maxwell-like gravitational equations and on the quantum electromagnetic resonator (QER), the history of the QGR is closely connected with the history of the QER.
### Electrodynamic resonators
As it happens, physical quantities such as capacitance and inductance receive little attention in modern quantum electrodynamics. Furthermore, they are neglected even in classical electrodynamics, where electric and magnetic fields dominate. The point is that these quantities do not appear explicitly in Maxwell's equations, so the resulting solutions involve fields only. Such coefficients were sometimes obtained from solutions of Maxwell's equations, but only rarely, and they received little consideration. It is also known that the so-called "field approach" in electrodynamics, which considers "point charges", leads to divergent "infinities" when the interaction radius tends to zero. Furthermore, these "infinities" are present in quantum electrodynamics too, where powerful methods have been developed to compensate for them. In contrast to theoretical physics, applied physics makes wide use of reactive parameters such as capacitance and inductance, first in electrical engineering and then in applied radio engineering. Today reactive parameters are widely used in information technologies, which are based on the generation, transmission and radiation of electromagnetic waves of different frequencies.
The present-day situation (without a proper development of the theory of reactive parameters such as inductance, capacitance and the electromagnetic resonator) hinders the development of information technologies and quantum computing. Note that the mechanical harmonic oscillator was considered in quantum mechanics in the early 1930s, when quantum theory was being developed. However, the quantum consideration of the $LC - \$ circuit was started only by Louisell (1973) [1]. Since then there were no practical examples of quantum capacitance and inductance, and so this approach did not receive proper consideration. A theoretically correct introduction of the quantum capacitance, based on the density of states, was first presented by Luryi (1988) [2] for the QHE. However, Luryi did not introduce the quantum inductance, and this approach was not applied to the quantum LC circuit and resonator. A year later, Yakymakha (1989) [3] considered an example of series and parallel quantum $LC - \$ circuits (their characteristic impedances) in an explanation of the QHE (integer and fractional). However, this work did not consider the Schrodinger equation for the quantum LC circuit.
For the first time, both quantum values, capacitance and inductance, were considered by Yakymakha (1994) [4] during spectroscopic investigations of MOSFETs at very low (sound-range) frequencies. The flat quantum capacitances and inductances there had thicknesses of about the Compton wavelength of the electron, and their characteristic impedance was the wave impedance of free space. Three years later, Devoret (1997) [5] presented a complete theory of the quantum LC circuit (applied to the Josephson junction). Possible applications of quantum LC circuits and resonators in quantum computation are considered by Devoret (2004) [6].
### Gravitational resonators
According to McDonald[7], the first to use Maxwell-like equations to describe gravity was Oliver Heaviside[8]. The point is that in a weak gravitational field the standard theory of gravity can be written in the form of Maxwell-like equations[9]. Evidently there were no SI units in the 19th century, so the first mention of the gravitational constants in this context is possibly due to Forward (1961)[10].
In the 1980s Maxwell-like equations were considered in Wald's book on general relativity[11]. In the 1990s Kraus [12] first introduced the gravitational characteristic impedance of free space, which was detailed later by Kiefer [13], and now by Raymond Y. Chiao[14] [15] [16] [17] [18], who is developing ways of detecting gravitational waves experimentally.
## Classical gravitational resonator
In the general case a classical gravitational resonator (CGR) is a cavity in three-dimensional space. Therefore a CGR has infinitely many resonance frequencies, owing to its three dimensions.
In contrast to the classical gravitational LC circuit, both the electric-like and the magnetic-like gravitational fields are distributed in the same volume of the CGR. In the classical case these oscillating electric-like and magnetic-like gravitational fields are like standing waves, which form gravitational waves that can be radiated into the external world.
### Gravitational LC circuit
Gravitational voltage on gravitational inductance is:
$V_{GL} = -L_G\cdot \frac{d I_{GL}}{d t}. \$
Gravitational current through gravitational capacitance is:
$I_{GL} = C_G\cdot \frac{d V_{GL}}{d t}. \$
Differentiating these equations with respect to the time variable, we obtain:
$\frac{d V_{GL}}{d t} = -L_G\frac{d^2I_{GL}}{dt^2} \$
$\frac{d I_{GC}}{d t} = C_G\frac{d^2V_{GC}}{dt^2}. \$
Considering the following relationships for amplitudes of "voltages" and "currents":
$V_{GL} = V_{GC}; I_{GL} = I_{GC} \$
we obtain the following differential equation for gravitational oscillations:
$\frac{d^2 I_G}{dt^2} + \frac{1}{L_GC_G}I_G = 0. \quad \quad \quad \quad \quad (1) \$
Furthermore, considering the following relationship between "voltage" and "charge":
$q_G = C_GV_G \$
and "current" with "magnetic flux":
$\phi_G = L_GI_G \$
the above oscillation equation could be rewritten in the charge form:
$\frac{d^2 q_G}{dt^2} + \frac{1}{L_GC_G}q_G = 0. \quad \quad \quad \quad \quad (2) \$
This equation has the particular solution:
$q_G(t) = A_0 \sin (\omega_{LC}t) \$
where
$\omega_{LC} = \frac{1}{\sqrt{L_GC_G}} \$
is the resonance frequency, and
$\rho_{LC} = \sqrt{\frac{L_G}{C_G}} \$
is the gravitational characteristic impedance.
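The relations above can be checked with a minimal numerical sketch (Python; the values of L_G and C_G below are purely illustrative and are not taken from the text):

```python
import numpy as np

L_G, C_G = 2.0e-3, 5.0e-6                  # illustrative inductance and capacitance
omega_LC = 1.0 / np.sqrt(L_G * C_G)        # resonance frequency
rho_LC = np.sqrt(L_G / C_G)                # characteristic impedance
T = 2 * np.pi / omega_LC                   # oscillation period

# verify that q(t) = A_0*sin(omega_LC*t) satisfies eq. (2): q'' + q/(L_G*C_G) = 0
t = np.linspace(0.0, 2 * T, 4001)
q = np.sin(omega_LC * t)
d2q = np.gradient(np.gradient(q, t), t)
rel_residual = np.max(np.abs(d2q + q / (L_G * C_G))[50:-50]) / omega_LC**2

print(f"omega_LC = {omega_LC:.4e} rad/s, rho_LC = {rho_LC:.4e}")
print(f"relative residual of eq. (2): {rel_residual:.1e}")
```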
Note that in the general case the gravitational charge ($q_G \$) has the same dimensions as the gravitational mass ($m_G \$). For completeness we also present the third differential equation, for the "rotor gravitational mass" (or flux), in the form:
$\frac{d^2 \phi_G}{dt^2} + \frac{1}{L_GC_G}\phi_G = 0, \quad \quad \quad \quad \quad (3) \$
where the following relationship between gravitational current and magnetic-like gravitational flux is considered:
$\phi_G = L_Gi_G. \$
## Quantum general approach
### Quantum gravitational LC circuit oscillator
The inductance momentum quantum operator in the electric-like gravitational mass space can be written in the following form:
$\hat p_{GL} = -i\hbar \frac{d}{dq_G}, \quad \quad \quad \quad \quad \hat p_{GL}^* = i\hbar \frac{d}{dq_G},\quad \quad \quad \quad \quad (4a) \$
where $\hbar \$ is the reduced Planck constant and $\hat p_{GL}^* \$ is the complex-conjugate momentum operator. The capacitance momentum quantum operator in the magnetic-like gravitational mass space can be written in the following form:
$\hat p_{G\phi} = -i\hbar \frac{d}{d\phi_G}, \quad \quad \quad \quad \quad \hat p_{G\phi}^* = i\hbar \frac{d}{d\phi_G},\quad \quad \quad \quad \quad (4b) \$
where $\phi_G \$ is the induced magnetic flux. Considering the fact that there are no free magnetic-like gravitational masses, but that they can be imitated by an electric-like gravitational mass current ($i_G \$):
$\phi_G = L_G\cdot i_G, \$
we can introduce the third momentum quantum operator in the current form:
$\hat p_{Ci} = -\frac{i\hbar}{L_G} \frac{d}{di_G}, \quad \quad \quad \quad \quad \hat p_{Ci}^* = \frac{i\hbar}{L_G} \frac{d}{di_G},\quad \quad \quad \quad \quad (4c) \$
These quantum momentum operators define three Hamiltonian operators:
$\hat H_{GLq} = -\frac{\hbar^2}{2L_G}\cdot \frac{d^2}{dq_G^2} + \frac{L_G\omega_0^2}{2}\hat q_G^2 \quad \quad \quad \quad \quad (5a) \$
$\hat H_{GC\phi} = -\frac{\hbar^2}{2C_G}\cdot \frac{d^2}{d\phi_G^2} + \frac{C_G\omega_0^2}{2}\hat \phi_G^2 \quad \quad \quad \quad \quad (5b) \$
$\hat H_{GCi} = -\frac{\hbar^2\omega_0^2}{2L_G}\cdot \frac{d^2}{di_G^2} + \frac{L_G}{2}\hat i_G^2, \quad \quad \quad \quad \quad (5c) \$
where $\omega_0 = \frac{1}{\sqrt{L_GC_G}} \$ is the resonance frequency. We consider the case without dissipation ($R_G = 0 \$). The only difference between the gravitational charge and current spaces and the traditional 3D coordinate space is that they are one-dimensional (1D). The Schrödinger equation for the gravitational quantum LC circuit can then be written in three forms:
$-\frac{\hbar^2}{2L_G}\frac{d^2 \Psi}{dq_G^2} + \frac{L_G\omega_0^2}{2}q_G^2\Psi = W\Psi \quad \quad \quad \quad \quad (6a) \$
$-\frac{\hbar^2}{2C_G}\frac{d^2 \Psi}{d\phi_G^2} + \frac{C_G\omega_0^2}{2}\phi_G^2\Psi = W\Psi \quad \quad \quad \quad \quad (6b) \$
$-\frac{\hbar^2\omega_0^2}{2L_G}\frac{d^2 \Psi}{di_G^2} + \frac{L_G}{2}i_G^2\Psi = W\Psi. \quad \quad \quad \quad \quad (6c) \$
To solve these equations we introduce the following dimensionless variables:
$\xi_q = \frac{q_G}{q_{G0}}; \quad \quad q_{G0} = \sqrt{\frac{\hbar}{L_G\omega_0}}; \quad \quad \lambda_q = \frac{2W}{\hbar\omega_0} \quad \quad (7a) \$
$\xi_{\phi} = \frac{\phi_G}{\phi_{G0}}; \quad \quad \phi_{G0} = \sqrt{\frac{\hbar}{C_G\omega_0}}; \quad \quad \lambda_{\phi} = \frac{2W}{\hbar\omega_0} \quad \quad (7b) \$
$\xi_i = \frac{i_G}{i_{G0}}; \quad \quad i_{G0} = \sqrt{\frac{\hbar \omega_0}{L_G}}; \quad \quad \lambda_i = \frac{2W}{\hbar\omega_0}. \quad \quad (7c) \$
where $q_{G0} \$ is the scaling "induced electric-like gravitational charge", $\phi_{G0} \$ is the scaling "induced magnetic-like gravitational charge" and $i_{G0} \$ is the scaling "induced electric current". The Schrödinger equation then takes the form of the Chebyshev–Hermite differential equation:
$(\frac{d^2}{d\xi^2} + \lambda - \xi^2)\Psi = 0. \$
The eigenvalues of the Hamiltonian are:
$W_n = \hbar \omega_0(n + 1/2), \quad \quad n = 0,1,2,.. \$
where at $n = 0 \$ we have the zero-point oscillation:
$W_0 = \hbar \omega_0/2. \$
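A short numerical sketch of this spectrum (the resonance frequency omega_0 below is an arbitrary illustrative value, not one derived in the text):

```python
from scipy.constants import hbar

omega_0 = 1.0e12                                   # rad/s, illustrative only
levels = [hbar * omega_0 * (n + 0.5) for n in range(4)]

print(f"zero-point energy W_0   = {levels[0]:.3e} J")
print(f"level spacing W_1 - W_0 = {levels[1] - levels[0]:.3e} J  (= hbar*omega_0)")
```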
In the general case the scaling charges could be rewritten in the form:
$q_{G0} = \sqrt{\frac{\hbar}{L_G\omega_0}} = \frac{m_{\alpha}}{\sqrt{4\pi \alpha }} \$
$\phi_{G0} = \sqrt{\frac{\hbar}{C_G\omega_0}} = \sqrt{\frac{\alpha}{\pi}}\cdot \frac{h}{m_{\alpha}}, \$
where
$m_{\alpha} = e\sqrt{\frac{\epsilon_G}{\epsilon_E}} = \frac{e}{\sqrt{4\pi G\epsilon_E}} = \sqrt{\alpha_E}\cdot m_P, \$
where $m_P = \sqrt{\frac{\hbar c}{G}}\$ is the Planck mass. The mass scale $m_{\alpha} = \sqrt{\alpha_E}\cdot m_P \$ was first proposed by George Johnstone Stoney (1881), before quantum theory was created, and is now called the Stoney scale. The other designations are as follows: $\epsilon_G = \frac{1}{4\pi G} = 1.192708\cdot 10^9 \ kg^2/N\cdot m^2 \$ is the gravitoelectric permittivity (analogous to the electric constant); G is the gravitational constant; c is the speed of light in vacuum; $\mu_G = \frac{4\pi G}{c^2} = 9.328772\cdot 10^{-27} \ N\cdot s^2/kg^2 \$ is the gravitomagnetic permeability (analogous to the magnetic constant); $h \$ is the Planck constant; and $\alpha_E = \frac{e^2}{2\epsilon_E hc}$ is the electric fine structure constant for the electric charge quantum $e \$ (the electron charge).
These three equations (4) form the basis of nonrelativistic quantum gravidynamics, which considers elementary particles from the intrinsic point of view. Note that standard quantum electrodynamics considers elementary particles from the external point of view.
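The constants quoted above can be cross-checked numerically against CODATA values (a sketch; the variable names are ours, not part of the original text):

```python
from math import pi, sqrt
from scipy.constants import G, c, e, epsilon_0, hbar, fine_structure as alpha

eps_G = 1.0 / (4 * pi * G)                   # gravitoelectric permittivity
mu_G = 4 * pi * G / c**2                     # gravitomagnetic permeability
m_P = sqrt(hbar * c / G)                     # Planck mass
m_alpha = e / sqrt(4 * pi * G * epsilon_0)   # Stoney mass

print(f"eps_G   = {eps_G:.6e}")              # ~1.19e9, as quoted above
print(f"mu_G    = {mu_G:.6e}")               # ~9.33e-27, as quoted above
print(f"m_P     = {m_P:.4e} kg")
print(f"m_alpha = {m_alpha:.4e} kg; m_alpha/(sqrt(alpha)*m_P) = {m_alpha/(sqrt(alpha)*m_P):.6f}")
```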
#### Planckion as quantum oscillator
Suppose that the planckion mass is defined by quantum oscillations. Its rest energy is then equal to the zero-point oscillation energy of the quantum gravitational LC circuit:
$0.5\hbar \omega_0 = m_Pc^2. \$
So, the oscillation length will be:
$x_0 = \frac{\lambda_P}{2\pi\sqrt{2}}, \$
where $\lambda_P \$ is the Compton wavelength of the planckion. Further, let us assume that this gravitational mass is uniformly distributed over the sphere:
$S_P = 4\pi x_0^2 = \lambda_P^2/2\pi. \$
Then we can find out the density of states for this mass:
$D_P = \frac{1}{S_P}\frac{1}{m_Pc^2} = \frac{m_P}{2\pi \hbar^2}. \$
Thus, representing the planckion gravitational mass as the zero-point oscillation of a harmonic oscillator leads to its uniform distribution over a sphere of radius $\lambda_P /2\pi \sqrt{2}$.
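A sketch checking that with S_P = lambda_P^2/(2*pi) and W = m_P*c^2 the density of states indeed reduces to m_P/(2*pi*hbar^2):

```python
from math import pi, sqrt
from scipy.constants import G, c, h, hbar

m_P = sqrt(hbar * c / G)             # Planck mass
lam_P = h / (m_P * c)                # Compton wavelength of the planckion
x_0 = lam_P / (2 * pi * sqrt(2))     # oscillation length
S_P = 4 * pi * x_0**2                # sphere area, equals lam_P**2 / (2*pi)
D_P = 1.0 / (S_P * m_P * c**2)       # density of states

print(f"lam_P = {lam_P:.4e} m, S_P = {S_P:.4e} m^2")
print(f"D_P               = {D_P:.4e}")
print(f"m_P/(2*pi*hbar^2) = {m_P / (2 * pi * hbar**2):.4e}")   # should match D_P
```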
#### Graviton as quantum oscillator
As is known, graviton momentum is defined as:
$p_g = \frac{\hbar \omega_g}{c}, \$
where $c \$ is the velocity of light. So the "effective (energy) graviton mass" can be defined as:
$m_g = \frac{\hbar \omega_g}{c^2}. \$
Then the length scaling parameter of the harmonic oscillator is:
$x_0 = \sqrt{\frac{\hbar}{m_g\omega_g}} = \frac{\lambda_g}{2\pi}, \$
where $\lambda_g = \frac{2\pi c}{\omega_g}- \$ is the graviton wavelength.
### Gravitational resonator as quantum LC circuit
Following the Luryi density of states (DOS) approach, we can define the gravitational quantum capacitance as:
$C_{QG} = q_G^2\cdot D_{2D}\cdot S_G, \$
and quantum inductance as:
$L_{QG} = \phi_G^2\cdot D_{2D}\cdot S_G, \$
where $S_G \$ is the resonator surface area, $D_{2D} = \frac{m}{\pi \hbar^2} \$ is the two-dimensional (2D) DOS, $q_G \$ is the electric-like gravitational mass (or flux), and $\phi_G \$ is the magnetic-like gravitational mass (or flux). These fluxes will be defined below.
Energy stored on quantum capacitance:
$W_{CG} = \frac{q_G^2}{2D_{2D}S_G}. \$
Energy stored on quantum inductance:
$W_{LG} = \frac{\phi_G^2}{2D_{2D}S_G} = W_{CG}. \$
Resonator angular frequency:
$\omega_{QGR} = \frac{1}{\sqrt{L_{QG}C_{QG}}} = \frac{1}{q_G\phi_GD_{2D}S_G}. \$
Energy conservation law:
$W_{QGR} = \hbar \omega_{QGR} = \frac{\hbar}{q_G\phi_GD_{2D}S_G} = W_{CG} = W_{LG}. \$
This equation can be rewritten as:
$q_G\phi_G = \hbar, \$
from which it is evident that these "gravity charges" are fluxes rather than real ("metallurgical") charges.
Characteristic gravitational resonator impedance:
$\rho_{QG} = \sqrt{\frac{L_{QG}}{C_{QG}}} = \frac{\phi_G}{q_G} = 2\alpha \frac{\phi_{G0}}{m_{\alpha}}=2\alpha R_{GH}, \$
where $\phi_{G0} = h/m_{\alpha} \$ is the magnetic-like gravitational flux quantum. Considering the above equations, we find the following electric-like and magnetic-like gravitational induced fluxes:
$q_G = \frac{m_{\alpha}}{\sqrt{4\pi \alpha}} \$
$\phi_G = \sqrt{\frac{\alpha}{\pi}}\frac{h}{m_{\alpha}} . \$
Note that these values are not real ("metallurgical") charges, but the maximal fluxes that maintain the energy balance between the resonator oscillation energy and the total energy stored on the capacitance and inductance:
$\hbar \omega_{QGR} = W_{QGL}(t) + W_{QGC}(t). \$
Since the capacitance oscillations are phase-shifted by $\psi = \pi /2 \$ with respect to the inductance oscillations, we get:
$W_{QGL} = \begin{cases} 0, & \mbox{at } t=0;\ \psi=0 \mbox{ and } t=\frac{T_{QR}}{2};\ \psi=\pi \\ W_{QL}, & \mbox{at } t=\frac{T_{QR}}{4};\ \psi=\frac{\pi}{2} \mbox{ and } t=\frac{3T_{QR}}{4};\ \psi=\frac{3\pi}{2} \end{cases} \$
$W_{QGC} = \begin{cases} W_{QC}, & \mbox{at } t=0;\ \psi=0 \mbox{ and } t=\frac{T_{QR}}{2};\ \psi=\pi \\ 0, & \mbox{at } t=\frac{T_{QR}}{4};\ \psi=\frac{\pi}{2} \mbox{ and } t=\frac{3T_{QR}}{4};\ \psi=\frac{3\pi}{2} \end{cases} \$
where $T_{QR} = \frac{2\pi}{\omega_{QR}} \$ is the oscillation period.
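The flux identities of this section can be verified numerically; the sketch below checks q_G*phi_G = hbar and rho_QG = 2*alpha*phi_G0/m_alpha using the expressions given above:

```python
from math import pi, sqrt
from scipy.constants import G, e, epsilon_0, h, hbar, fine_structure as alpha

m_alpha = e / sqrt(4 * pi * G * epsilon_0)   # Stoney mass
q_G = m_alpha / sqrt(4 * pi * alpha)         # electric-like induced flux
phi_G = sqrt(alpha / pi) * h / m_alpha       # magnetic-like induced flux
phi_G0 = h / m_alpha                         # magnetic-like flux quantum

print(f"q_G*phi_G / hbar = {q_G * phi_G / hbar:.6f}")   # -> 1
rho_QG = phi_G / q_G
print(f"rho_QG / (2*alpha*phi_G0/m_alpha) = {rho_QG / (2 * alpha * phi_G0 / m_alpha):.6f}")  # -> 1
```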
## Applications
### Planckion resonator
Planckion radius scaling parameter:
$r_P = \frac{\lambda_P}{2\pi \sqrt{2}}. \$
Planckion surface scaling parameter:
$S_P = 4\pi r_P^2 = \frac{\lambda_P^2}{2\pi}. \$
Planckion angular frequency:
$\omega_P = \frac{m_Pc^2}{\hbar} = \frac{2\pi c}{\lambda_P}, \$
where $c - \$ is the velocity of light. Planckion density of states:
$D_P = \frac{1}{S_Pm_Pc^2} = \frac{m_P}{2\pi \hbar^2}. \$
The standard DOS quantum resonator approach yields the following values for the planckion reactive quantum parameters:
$C_{QRP} = q_{G}^2D_PS_P = \frac{\alpha_G}{\alpha_E}\frac{\epsilon_G}{\lambda_P}\frac{\lambda_P^2}{2\pi} = \frac{\epsilon_G}{\lambda_P}S_P, \$
where $\alpha_E = \alpha_G \$ is considered, and
$L_{QRP} = \phi_{G}^2D_PS_P = 4\alpha_E\beta_G \frac{\mu_G\lambda_P}{2\pi} = \frac{\mu_G}{\lambda_P}S_P, \$
where $4\alpha_E\beta_G = 1 \$ is considered. Thus, the so-called "free planckion" can be considered as a spherical quantum resonator of radius $r_P \$ and thickness $\lambda_P \$.
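A sketch checking that these planckion resonator parameters reproduce 1/sqrt(L_QRP*C_QRP) = 2*pi*c/lambda_P, i.e. the planckion angular frequency given earlier:

```python
from math import pi, sqrt
from scipy.constants import G, c, h, hbar

m_P = sqrt(hbar * c / G)
lam_P = h / (m_P * c)
S_P = lam_P**2 / (2 * pi)
eps_G = 1.0 / (4 * pi * G)
mu_G = 4 * pi * G / c**2

C_QRP = eps_G * S_P / lam_P          # = eps_G*lam_P/(2*pi)
L_QRP = mu_G * S_P / lam_P           # = mu_G*lam_P/(2*pi)

print(f"C_QRP = {C_QRP:.4e}, L_QRP = {L_QRP:.4e}")
print(f"1/sqrt(L*C)  = {1.0 / sqrt(L_QRP * C_QRP):.4e} rad/s")
print(f"2*pi*c/lam_P = {2 * pi * c / lam_P:.4e} rad/s")   # should match
```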
### Graviton resonator
Graviton radius scaling parameter:
$r_g = \frac{\lambda_g}{2\pi}. \$
Graviton surface scaling parameter:
$S_g = 4\pi r_g^2 = \frac{\lambda_g^2}{\pi}. \$
Graviton angular frequency:
$\omega_g = \frac{2\pi c}{\lambda_g}, \$
where $c - \$ is the velocity of light. Graviton density of states:
$D_g = \frac{1}{S_gW_g} = \frac{\pi}{\lambda_ghc}. \$
The standard DOS quantum resonator approach yields the following values for the graviton reactive quantum parameters:
$C_{QRg} = q_{G}^2D_gS_g = \frac{\alpha_G}{\alpha_E}\frac{\epsilon_G}{2\lambda_g}\frac{\lambda_g^2}{\pi} = \frac{\epsilon_G}{2\lambda_g}S_g \$
where $\alpha_E = \alpha_G \$ is considered, and
$L_{QRg} = \phi_{G}^2D_gS_g = 4\alpha_E\beta_G \frac{\mu_G\lambda_g}{2\pi} = \frac{\mu_G}{2\lambda_g}S_g, \$
where $4\alpha_E\beta_G = 1 \$ is considered. Thus, the so-called "free graviton" can be considered as a spherical quantum resonator of radius $r_g \$ and thickness $2\lambda_g \$.
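The same check for the graviton resonator, for an assumed 100 Hz gravitational wave (the frequency is chosen purely for illustration):

```python
from math import pi, sqrt
from scipy.constants import G, c

f_g = 100.0                            # Hz, illustrative only
omega_g = 2 * pi * f_g
lam_g = 2 * pi * c / omega_g           # graviton wavelength
S_g = lam_g**2 / pi
eps_G = 1.0 / (4 * pi * G)
mu_G = 4 * pi * G / c**2

C_QRg = eps_G * S_g / (2 * lam_g)
L_QRg = mu_G * S_g / (2 * lam_g)

print(f"lam_g = {lam_g:.3e} m")
print(f"1/sqrt(L*C) = {1.0 / sqrt(L_QRg * C_QRg):.4e} rad/s, omega_g = {omega_g:.4e} rad/s")
```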
## References
1. Louisell W. H. (1973). “Quantum Statistical Properties of Radiation”. Wiley, New York.
2. Serge Luryi (1988). "Quantum capacitance device". Appl.Phys.Lett. 52(6). Pdf
3. Yakymakha O.L. (1989). High Temperature Quantum Galvanomagnetic Effects in the Two-Dimensional Inversion Layers of MOSFET's (In Russian). Kyiv: Vyscha Shkola. p.91. ISBN 5-11-002309-3. djvu
4. Yakymakha O.L., Kalnibolotskij Y.M. (1994). "Very-low-frequency resonance of MOSFET amplifier parameters". Solid- State Electronics 37(10),1739-1751 Pdf
5. Devoret M.H. (1997). "Quantum Fluctuations". Amsterdam, Netherlands: Elsevier. pp.351-386. Pdf
6. Devoret M.H., Martinis J.M. (2004). "Implementing Qubits with Superconducting Integrated Circuits". Quantum Information Processing, v.3, N1. Pdf
7. K.T. McDonald, Am. J. Phys. 65, 7 (1997) 591-2.
8. O. Heaviside, Electromagnetic Theory (”The Electrician” Printing and Publishing Co., London, 1894) pp. 455-465.
9. W. K. H. Panofsky and M. Phillips, Classical Electricity and Magnetism (Addison-Wesley, Reading, MA, 1955), p. 168, 166.
10. R. L. Forward, Proc. IRE 49, 892 (1961).
11. R. M. Wald, General Relativity (University of Chicago Press, Chicago, 1984).
12. J. D. Kraus, IEEE Antennas and Propagation Magazine 33, 21 (1991).
13. C. Kiefer and C. Weber, Annalen der Physik (Leipzig) 14, 253 (2005).
14. Raymond Y. Chiao. "Conceptual tensions between quantum mechanics and general relativity: Are there experimental consequences, e.g., superconducting transducers between electromagnetic and gravitational radiation?" arXiv:gr-qc/0208024v3 (2002). PDF
15. R.Y. Chiao and W.J. Fitelson. Time and matter in the interaction between gravity and quantum fluids: are there macroscopic quantum transducers between gravitational and electromagnetic waves? In Proceedings of the “Time & Matter Conference” (2002 August 11-17; Venice, Italy), ed. I. Bigi and M. Faessler (Singapore: World Scientific, 2006), p. 85. arXiv: gr-qc/0303089. PDF
16. R.Y. Chiao. Conceptual tensions between quantum mechanics and general relativity: are there experimental consequences? In Science and Ultimate Reality, ed. J.D. Barrow, P.C.W. Davies, and C.L.Harper, Jr. (Cambridge:Cambridge University Press, 2004), p. 254. arXiv:gr-qc/0303100.
17. Raymond Y. Chiao. "New directions for gravitational wave physics via “Millikan oil drops” arXiv:gr-qc/0610146v16 (2009). PDF
18. Stephen Minter, Kirk Wegter-McNelly, and Raymond Chiao. Do Mirrors for Gravitational Waves Exist? arXiv:gr-qc/0903.0661v10 (2009). PDF
## Reference Books
• Johnstone Stoney, Phil. Trans. Roy. Soc. 11, (1881)
• Stratton J.A.(1941). Electromagnetic Theory. New York, London: McGraw-Hill.p.615. djvu
• Detlaf A.A., Yavorsky B.M., Milkovskaya L.B. (1977). Course of Physics. Vol. 2. Electricity and Magnetism (4th edition, in Russian). Moscow: Vysshaya Shkola. "Reference Book on Electricity" djvu
• Goldstein L.D., Zernov N.V. (1971). Electromagnetic Fields (2nd edition, in Russian). Moscow: Sovetskoe Radio. 664 p. djvu
|
{}
|
# E Fermi Vasp
• detailed output of a VASP run, including: • a summary of the input parameters • information about the individual electronic steps: total energy, Kohn-Sham eigenvalues, Fermi-energy. The family initially came to New York, then lived in the. cfg2vasp automates this by generating the 4 files with default options, if you have a valid CFG file. class pylada. As a consequence of the latter, there are only two points in k y = 0 on the Fermi surface at E D, that is, the projected bulk Dirac points, and Fermi arcs can robustly appear on the (100) surface. ; Anderson, P. We emphasize that this is only to be regarded as a tutorial on the GPAW interface to Wannier90. PE AM or 91 VOSKOWN use Vosko, Wilk, Nusair interpolation. Applications are often referred to as modules because they are managed using the Environment Modules package. (four indexes must be specified). 4 ! energy interval OmegaNum = 401 ! omega number Nk1 = 101 ! number k points odd number would be better Nk2 = 101 ! number. Download the scripts: vtstscripts. Source code for pymatgen. ) are filled up in the density of states, of which the energy is often called the Fermi energy (Figure 11. These properties have been explained in terms of a mixed-valence model where the divalent state of the impurity (say In2þ) is unstable towards. Fermi energy 5 f spectral weight variation in uranium alloys. energies = None self. The red lines show the bands or DOS from the GaSe sheet, and the line width is proportional to the weight. for passivation of a surface) MedeA Amorphous Materials Builder: Coarse-grained systems support (use mass from forcefield file) Updated orientation biasing for oriented film construction. The robustness of this symmetry-breaking mechanism is attested by the fact that it already appears in our calculation with periodic dopants. I N T R O T O Q U A N T U M E S P R E S S O A N D R E S E A R C H U P D A T E B Y C A M E R O N F O S S N A N O E L E C T R O N I C T H E O R Y A N D S I M U L A T I O N L A B First-Principle Calculations with Quantum Espresso [email protected] Webster,e Graeme O'Dowd,b Christian Reece,f David Cherns,e David J. It is available at https: At absolute zero, the occupancies of the bands of a system are well-defined step functions; all bands up to the Fermi level are occupied, and all bands above the Fermi level are unoccupied. Energy is rescaled to make Fermi-energy zero. Cuniberti, P. F (around line 695):. Here is a simpler. In OUTCAR file, I got: E-fermi : -1. In addition, the difference of the Fermi and valence band maximum energy in function of electric field was presented in Fig. and Paxton). Fermin,g Tim D. xml 1 2 3 4. contact with (a)Mo2C, (b)Nb2C, (c)Ni, (d) Cu, (e) Pt, and (f) V2CO2, respectively. between GW Fermi level and DFT Fermi level, i. The Fermi surface for the monolayer graphene is shown in Fig. Here we report that RuO2 exhibits a hitherto undetected lattice distortion below approximately 900 K. 7) which is called the dispersion relation (energy or frequency-wavevector relation). VASPKIT is written by Fortran and also call some python scripts, used under Linux environment. Number of occupied Wannier bands E_FERMI = 4. Kurinec Date. Astronomy and Cosmology. Even though this argument was made for the case of adsorption of hydrogen to metal surfaces, it seems to be a reasonable model for adsorption of oxygen, or any oxygen-containing species, as well. Tutorial on Work Function By Dr. High Energy, Nuclear, Particle Physics. Here μ i is the chemical potential of species i. 
VASP-POTCAR - 图文 - 南京廖华 >ISMEAR part. Bx, By, Bz : real-valued, magnetic field value. 1 • From last lecture "Top-down" approach-----update of solid state physics • Not bad for many metals and doped semiconductors • Shows qualitative features that hold true in detailed treatments. (four indexes must be specified). Y -5-Blochl -4-tet -1-fermi 0-gaus 0 MP SIGMA = 0. """ import os import re import numpy as np import ase. To visualize the Fermi surface, first you have to obtain the band energies in the reciprocal space. • Converges faster (w. 18 (2005) R1-R8. Choutapalli, I. Course by: Prof. sum_atoms – Sum projection of the similar atoms together (e. They show interesting properties, e. Ni is bonded to twelve equivalent Ni atoms to form a mixture of corner, edge, and face-sharing NiNi12 cuboctahedra. The Fermi energy was updated, please check that it is located mid-gap values below the HOMO (VB) or above the LUMO (CB) will cause erroneous energies E-fermi : -5. Fermi level pinning energies. Ba2MnTeO6 is (Cubic) Perovskite-derived structured and crystallizes in the cubic Fm-3m space group. 2761 alpha+bet. For example the formation energy at E_fermi=0 in each charge state 0, -1, -2 and -3 is now between 3-4 eV, roughly speaking, as it is also with VASP. Under Convergence you select. Overview of the ABINIT tutorials. The parallel version (i. The Vienna Ab initio Simulation Package (VASP) is a computer program for atomic scale materials modelling, e. They have been chosen to demonstrate the workflow using inexpensive calculations. /static/OUTCAR E-fermi : 4. 61eV,但是在添加0v电压是就出现问题了 EFIELD=0 LDIPOL=T IDIPOL=3 按说没有任何电压,其fermi能级不会移动,但是其fermi能级为-2. E F is the Fermi level, and E corr is a correction term to account for the artificial electrostatic interaction due to periodic boundary conditions. 2108 add alpha+bet to get absolut eigen values. Download Current version for QO method: not available yet. (A1) e, m and X are the electron charge and mass, and the volume of the lattice cell, respectively; E kn are the eigenvalues of the Hamiltonian and f(E kn) is the Fermi distribution function. Vasp class¶. The calculated lattice parameters and formation enthalpy ∆ H at zero pressure are listed in Table 1. This situation causes that the DOS at the E F level take different values, i. {"doc": "JSON file containing a set of calculations for the equation of state of Cu", "calculations": [{"incar": {"doc": "INCAR parameters", "nbands": 9, "prec. temperature is much smaller than the Fermi temperature ( T F = 64 200 K for Au [10]), only electrons at around the Fermi surface are excited and the electron heat capacity is known to be a linear function of the electron temperature, C e Te JTe, where J is the electron heat capacity constant defined, within the free electron gas model, by the free. The parallel version (i. • detailed output of a VASP run, including: • a summary of the input parameters • information about the individual electronic steps: total energy, Kohn-Sham eigenvalues, Fermi-energy. Zakutayev,* T. Monolayer molybdenum disulfide (MoS 2), which is a semiconducting material with direct band gap of ~1. 05 μB as evidenced by polarized neutron diffraction. 7647 alpha+bet : -3. Tip: How to plot many molecular orbitals? calculate the orbitals and save them into, say, mo-*. Could you please teach me how to get exactly the Fermi energy by Vasp? It's very important to analyse DOS near fermi level. Silicon bandstructure and DOS using QuantumATK and Quantum ESPRESSO¶. 
The program calculates the self-consistent Fermi level and defect concentrations given a set of formation energies (at VBM). 1 # use the PAW channel optimization LORBIT=14 # project to V d LOCPROJ = 2 : d : Pr. k-points are ultimately. Calculate Band Structure Using VASP (By Bin Shan, 2003) VASP Version : 4. [OmegaMin, OmegaMax]: real-valued, energy interval for surface state calculation. You can also add extra defects with fixed concentrations, but these must be given a particular charge state. 1 , 2, and 7 present electron energy (in eV). We calculate, as a function of E F, the formation energies of the substitutional ionized donor (e. The plugin can be used to generate VASP input files for a wide range of tasks, including for instance total. Associate professor at the University of Pau & Pays Adour, I am a specialist in theoretical chemistry, molecular modeling and numerical simulations at IPREM institute: Institut des Sciences Analytiques et de Physico-Chimie pour l'Environnement et les Matériaux. Although the Fermi surface for an ideal graphene sheet has an area of zero size, the finite size of the plotted shape is due to the thermal broadening in the calculations. This is different from the case of LaOFeAs, which is a. The VASP choice of the electrostatic reference potential sets the average potential in the simulation cell to zero, not the potential in the electrolyte region. Cuniberti, P. ISMEAR is automatically set to 1 to enforce Fermi distribution. E / 1 C 1 expT. This result indicates that In. Modeling materials using density functional theory. 17、用vasp软件研究表面小分子的吸附解离遇到几个问题。. 2 will be allowed to use it (the license for VASP 4. The energy difference between Fermi energy and vacuum level corresponds to the work function (Φ). I'd like to perform some dimer calculations with eon and VASP. 3 in our calculations. To say that width_fermi is correlated with the average d-band level with respect to the Fermi level (E_avg-E_f) is not a physical principle -- it is by definition. e2 e 0Xm 2 X n;n0 X k Mnn0 ab ½fðE knÞ fðE kn0Þ ðE kn0 E knÞ þ 1 ðx kn0 x knÞþxþiCx þ 1 ðx kn0 x knÞ x iCx (A1) with C ! 0þ. 192, 114-123 (2015) The reprint is available from our Publications. The gap which i have obtained is correct when compared to the other systems. shape = (100,100,100,10), where the first three indices are over the vector vec_k, and the third is the band index 'n'. • stress tensors • forces in the atoms • local charges, magnetic moments • dielectric properties • and a great many things more. The included Vasp. The parallel version (i. We choose the normalization length in the z direction to be the supercell period, in which case we need only include the standing wave states with k z 0 in this computation of the current; thus we henceforth take k (k x,k y) for both electrodes. Computationally speaking, VASP is more time-con-suming than SIESTA, particularly for large amorphous graphene models we discuss later. 5807 XC(G=0): -2. ISMEAR = 0 ; SIGMA = 0. Florida State University Libraries 2016 Transitions Metal Dichalcogenides: Growth, Fermiology Studies, and Few-Layered Transport Properties Daniel Adam Rhodes Follow this and additional works at the FSU Digital Library. Therefore, the last term in the formula denotes the energy compensation in this step. Rudy Schlaf Work function in metals: Figure 1 shows a schematic energy diagram of a metal. Total and integrated DOS is accessible as numpy. 
1 # broadening in eV -4-tet -1-fermi 0-gaus SIGMA should be as large as possible keeping the difference between the free energy and the total energy (i. 'atom_coord': constraint on local coordination-number, i. Webster,e Graeme O'Dowd,b Christian Reece,f David Cherns,e David J. Overview of the ABINIT tutorials. Readbag users suggest that VASP Workshop: Day 1 is worth reading. For the Love of Physics - Walter Lewin - May 16, 2011 - Duration: 1:01:26. Intrans file for the vasp system? - 12705062 1. overwrite – If True, will overwrite pre-existing results. 5 does not posses this restriction): The most severe restriction is that it is not. Provision of both saved specific and immediate build options Engines MedeA VASP. UT theoretical chemistry code forum. 18 (2005) R1–R8. For Cu, when Te is below ~3000 K, µ roughly follows the free-electron-like description,. The gap which i have obtained is correct when compared to the other systems. I'm trying to get a more intuitive/physical grasp of the Fermi level, like I have of electric potential. Rudy Schlaf Work function in metals: Figure 1 shows a schematic energy diagram of a metal. Introducing extra electrons leads to the increase of E fermi, thus decrease of Φ, which, in turn, results in more negative voltage as show in Figure S3. The Fermi surface for the monolayer graphene is shown in Fig. The formation energy as a function of Fermi energy for Ga-vacancy in GaAs (216 atoms in supercell) is now comparable with the results from e. the term entropy T*S). In addition, the non-zero densities around the Fermi levels occur for Ni and Ni 3 Al, implying the thermal. The electron heat capacity dependence on the electron temperature can be expressed as [8]: C eðT eÞ¼ Z 1 1 ðe e FÞ @ fðe ;m T eÞ @T e gðeÞde (1) where g(e) is the electron DOS at the. All Ni-Ni bond lengths are 2. Periodic table Y 20 K at 115 GPa, Ca 25 K at 161 GPa C. Ga4d electrons have been included as [85], where e is the electronic. For example the formation energy at E_fermi=0 in each charge state 0, -1, -2 and -3 is now between 3-4 eV, roughly speaking, as it is also with VASP. Denlinger, J. potential = 0 element. Since holes correspond to empty states in the valence band, the probability of having a hole equals the probability that a particular state is not filled, so. 5807 XC(G=0): -2. cell) for the NM, and only 5. 1640 Now we would like to plot the eigenvalues. UT theoretical chemistry code forum. used if SlabSS_calc= T. Gallium Arsenide (GaAs) is a direct gap material with a maximum valence band and a minimum conduction band and is supposed to coincide in k-space at the Brillouin zone centers. out > less KPOINTS DFT calculation INCAR. High Energy, Nuclear, Particle Physics. In conjunction with Fermi-Dirac statistics the free energy might be interpreted as the free energy of the electrons at some finite temperature , but the physical significance remains unclear in the case of Gaussian smearing. As a result, the O i formation energy in the interstitial site was 4 eV, whereas that in the octahedral site was 5. The online version supports -10 ≤ n ≤ 10. E (HONS) CHEMICAL ENGINEERING THESIS Submitted in Partial Fulfillment of the Requirements for the Degree of Master of Science Chemical Engineering The University of New Mexico Albuquerque, New Mexico December, 2013. 3038 XC(G=0): -10. An e c closer to the Fermi level is indicative of a facile deactivation/low activity and an e c farther from the Fermi level is characteristic of higher activity/ impeded deactivation. 
There are two issues with using real Fermi-Dirac distributions - in order to ameliorate k-point sampling, one needs to use very large values (e. The Rashba energy E R is defined as E R= 2k 0 2/2m. • stress tensors • forces in the atoms VASP Tutorial: A. The structure is three-dimensional. where E form, E, and E bulk are the formation energy and total energies of Al 2 O 3 (O i) and bulk Al 2 O 3, respectively. class BSVasprun (filename, parse_projected_eigen=False, parse_potcar_file=False, occu_tol=1e-08) [source] ¶ Bases: pymatgen. non-valence) electrons of an atom and its nucleus with an effective potential, or pseudopotential, so that the Schrödinger equation contains a modified effective potential term instead of the Coulombic potential term for core electrons normally found in the Schrödinger equation. The transition probability λ is also called the decay probability or decay constant and is related to the mean lifetime τ of the state by λ = 1/τ. This interpretation turned out to be incorrect, the correct identification is that the diagrams in Figs. vasp sets the Fermi level on the highest occupied Kohn-Sham state, therefore, you are right, it may be inaccurate if the k-mesh does not include the k-point at which the VBM lies. The calculated forces are now the derivatives of this free energy (see section 7. Here the standardized POSCAR of MoS 2 :. The energetic levels of the acceptor impurities will occupy the space right on the top of the valence. 4721 alpha+bet :-13. Source code for ase. 00 ! energy for calculate Fermi Arc OmegaNum = 201 ! number of eigenvalues to calculate the Landau levels Nk1 = 100 ! number k points for each line in the kpath_bulk / LATTICE Angstrom 2. Secondary School. The plugin can be used to generate VASP input files for a wide range of tasks, including for instance total. When ! Pauli is large enough, it is possible for the band to split spontaneously, and ferromagnetism appears. INCAR) if the first attempt fails, and decide the number of figures to generate. (E 0 is the lowest energy in an 1-dimensional quantum well). 6 A complete tutorial on VASP is available at http://cms. In metals, conduction bands are partly filled or so that electrons can possiblely to conduction band In semicondutors, is smaller than that of matals jump E g valence band(E) band( E ) or an acceptor level(p doped) near the donner level(n doped) near the bottom of. In this connection, it is very important to understand the Fermi level pinning (FLP) which occurs at metal-semiconductor interfaces. temperature is much smaller than the Fermi temperature ( T F = 64 200 K for Au [10]), only electrons at around the Fermi surface are excited and the electron heat capacity is known to be a linear function of the electron temperature, C e Te JTe, where J is the electron heat capacity constant defined, within the free electron gas model, by the free. 能带 –1 计算总电荷密度as usual。 这是个默认值 –2 计算一个范围的EINT 来确定 –3 用Fermi level 来. a guest Aug 3rd, ! this program written in fortran 90 analyzes the DOSCAR output file of VASP ! depending on user answers, it may compute the total density of states, or projected density of states of any atom labeled as in the POSCAR file. KPOINTS N N N - vasp_cd: is for a systems which has only two periodic dimensions. Due that molecules has no periodicity, VASP is not the best code to calculate molecule electronic properties. The request keywords should be the first one made, so that the reader is made aware of the available keywords. 
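As a minimal illustration of the smearing idea discussed in this section: the sharp zero-temperature step occupation is replaced by the smooth Fermi-Dirac function. This is a plain Python sketch, not VASP code; the width sigma merely stands in for a smearing parameter such as VASP's SIGMA (in eV):

```python
import numpy as np

def fermi_dirac(E, e_fermi, sigma):
    """Smooth occupation of a state at energy E (eV); sigma is the smearing width (eV)."""
    return 1.0 / (1.0 + np.exp((E - e_fermi) / sigma))

E = np.linspace(-0.5, 0.5, 5)                 # energies relative to E_F = 0 eV
step = (E <= 0.0).astype(float)               # zero-temperature step occupation
smooth = fermi_dirac(E, e_fermi=0.0, sigma=0.1)

for Ei, s, f in zip(E, step, smooth):
    print(f"E = {Ei:+.2f} eV   step = {s:.0f}   Fermi-Dirac = {f:.3f}")
```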
It’ll automatically obtain critical parameters such as ISPIN, E-fermi first from VASP output files (e. The upper panel shows the whole spectrum, and DOS around Fermi level is given in the lower. Periodic table Y 20 K at 115 GPa, Ca 25 K at 161 GPa C. of surface state pinning Fermi level is determined by the position of Fermi level in the bulk. 192, 114-123 (2015) The reprint is available from our Publications. Ga4d electrons have been included as [85], where e is the electronic. Install by uncompressing this file, and adding the vtstscripts. 6) This algorithm is the default in VASP. The Fermi method replaces the step function with the Fermi-Dirac function to get a smoothly varying function. It'll automatically obtain critical parameters such as ISPIN, E-fermi first from VASP output files (e. fermi = None self. This situation causes that the DOS at the E F level take different values, i. In the non-trivial case, for any energy inside the gap, we get a edge state, so different $$k_x$$ will give us a edge-state line, which is called Fermi-arc, especially, when we look at the case of Fermi surface with energy $$E=0$$, the Fermi-arc stretch from one Weyl node to another, like the picture shown below:. 5 ! energy interval OmegaNum = 401 ! omega number Nk1 = 101 ! number k points odd number would be better Nk2 = 101 ! number k. 1 was performed with the. 9 Also, in experiments, the electronic doping, e. Close suggestions. de Received 25 June 2010, in final form 2 August 2010 Published 3 September. Vasp(copy=None, species=None, kpoints=None, **kwargs) [source] ¶ Bases: pylada. They have been chosen to demonstrate the workflow using inexpensive calculations. A cuto energy value, E cut, determines the number of PWs (N pw) in the expansion, satisfying, ~2jk + G mj2 2m α-Al2O3 (0001) and GaN (0001) surfaces are investigated through ab initio calculations within the density functional theory. parameters. 1 • From last lecture "Top-down" approach-----update of solid state physics • Not bad for many metals and doped semiconductors • Shows qualitative features that hold true in detailed treatments. New Generate HNF size to get the. I've seen that there is an MPI and a 'over files' interface to VASP. If you ask someone with solid-state physics background, they will probably answer along the lines of Colin McFaul or John Rennie: The fermi level is the same as chemical potential (or maybe one should say "electrochemical potential"), i. 4 eV was used. To download this version click on the foloowing link: https://drive. 213 eV, which is contrary to the DOS calculate with ISMEAR= -5 as shown in fig. (The density of lithium is 0. The energetic levels of the acceptor impurities will occupy the space right on the top of the valence. A quasi Fermi level (also called imref, which is "fermi" spelled backwards) is a term used in quantum mechanics and especially in solid state physics for the Fermi level (chemical potential of electrons) that describes the population of electrons separately in the conduction band and valence band, when their populations are displaced from equilibrium. If the electrochemical potential is higher at point A than point B, then the system can reduce its free energy by having an electron travel from A to B. fermi level) is always constant in equilibrium is because electrochemical potential is the total free energy of the system per electron. For the finite temperature LDA SIGMA determines the width of the smearing in eV. 
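A minimal sketch of pulling the Fermi level out of a VASP OUTCAR file, i.e. the Python equivalent of grepping for the "E-fermi" line quoted in this section (the file path is an assumption; point it at your own run directory):

```python
import re

def read_fermi_from_outcar(path="OUTCAR"):
    """Return the last 'E-fermi' value (in eV) reported in an OUTCAR file."""
    e_fermi = None
    with open(path) as fh:
        for line in fh:
            match = re.search(r"E-fermi\s*:\s*(-?\d+\.\d+)", line)
            if match:
                e_fermi = float(match.group(1))   # keep the last occurrence
    return e_fermi

print("E-fermi =", read_fermi_from_outcar("OUTCAR"), "eV")
```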
E F π2N2(E F) 2 l=0 2(l +1)sin2 δi l+1− i Ni i N(1),i l+1 N (1),i, (2) where Ni l are the per spin angular momenta (l) components of the DOS at E F for atom type i, N (1),i l are the so-called free-scattererDOSforatomtypei,andδi l arescatteringphase shifts calculated at the muffin-tin radius and at E F for atom type i. analysis of DOSCAR output of VASP ab initio package. A new type of topological semimetal is described, which contains so-called type-II Weyl fermions and has very different properties to standard Weyl semimetals, owing to the existence of an open. SUPPORTING INFORMATION Various transport behaviors of zigzag graphene nanoribbons tailoring by strain Jinying Wang, Zhongfan Liu, and Zhirong Liu* 1 College of Chemistry and Molecular Engineering, Peking University, Beijing 100871, China 2 State Key Laboratory for Structural Chemistry of Unstable and Stable Species, and Beijing National Laboratory for Molecular Sciences (BNLMS), Peking. 15 of "VASP the Guide", the order of the spin- and momentum-projected DOS is given incorrectly as to atomic sites and shifts the energies so that the Fermi level is at E=0 (if the OUTCAR file is present). The Fermi method replaces the step function with the Fermi-Dirac function to get a smoothly varying function. If you want to get an accurate DOS for the final configuration, first copy CONTCAR to POSCAR and continue with one static (ISTART=1; NSW=0) calculation. {"doc": "JSON file containing a set of calculations for the equation of state of Cu", "calculations": [{"incar": {"doc": "INCAR parameters", "nbands": 9, "prec. As a result, the O i formation energy in the interstitial site was 4 eV, whereas that in the octahedral site was 5. Rudy Schlaf Work function in metals: Figure 1 shows a schematic energy diagram of a metal. Vila E Fermi. OUTCAR is only used in case Fermi level cannot be read from vasprun. temperature is much smaller than the Fermi temperature ( T F = 64 200 K for Au [10]), only electrons at around the Fermi surface are excited and the electron heat capacity is known to be a linear function of the electron temperature, C e Te JTe, where J is the electron heat capacity constant defined, within the free electron gas model, by the free. # Copyright (C) 2008 CSC - Scientific Computing Ltd. The band velocity was assumed to be the Fermi velocity and hence. Current version for QUAMBO method (last update at 5/23/2007): QUAMBO_version_1. As the entropy is given by a sum over the probabilities of. , to the particulars of the PAW dataset you are using. : Cu on site-1 and Cu on site-5). Bases: pymatgen. It'll automatically obtain critical parameters such as ISPIN, E-fermi first from VASP output files (e. ISMEAR = 0 ; SIGMA = 0. might be removed in the future: Here is a list of features not supported by VASP. -3 MASTER USER VASP Mon Mar 29 10:38:29 MEST 1999. —The DX phenomenology occurs when electrons are introduced through doping, for-mation of a n-p junction, or photoexcitation, leading to a rise of the Fermi level E F [21] in the gap. Intrans file for the vasp system? Ask for details ; Follow Report by Saitejaabc8398 2 weeks ago Log in to add a. In semi-conductors and insulators there is a region of energy just above the Fermi energy which has no bands in it – this is called the band gap. 一般的计算fermi surface情况是,feimi level 与能带相交,才能画出fermi surface。注意多计算几个k点。 具体代码再说。. Spin-up and spin-down states are shown in faint blue and orange isosurfaces, respectively. Thus, g(E)0D =2δ(E−Ec). If True, the Fermi energy will be taken from fermi_energy(). 
The quantities now have a built-in mapping between their quantity identifier and the main quantity they are an alternative to, helps resolving issues of assigning the correct quantities to a node. they are between 0 and 2. Derivation of the Fermi-Dirac distribution function We start from a series of possible energies, labeled E i. This result indicates that In. This can be performed on top of any electronic structure code, as long as the band and projection information is written in the PROCAR format, as done by the VASP. For volume structures. The broadening of the DFT DOS is due to incomplete Brillouin Zone sampling. Read VASP package. μ O is chemical potential of an oxygen atom, we used the half of the total energy of an oxygen molecule. VASP ICME Fall 2012 Laalitha Liyanage [email protected] 8 ! energy interval OmegaMax = 0. 18 (2005) R1-R8. 2 Band structure, Fermi Contents 4. These discrete shells in a solid become almost continuous bands, the so-called energy bands. 6 A complete tutorial on VASP is available at http://cms. VASP, CASTEP …, Accurate for ground -state properties, less reliable for excited states, “Final State Rule” with core-hole. IBRION = -1 NSW = 0 EXCUT = 400 ICHARG = 11. 4 ! energy interval OmegaNum = 401 ! omega number Nk1 = 101 ! number k points odd number would be better Nk2 = 101 ! number. If a negative number n is given, all the supercells with HNF from 2 to -n will be generated. The valence bands are filled with electrons up to the Fermi energy (E F). "Inhomogeneous electron gas". HFSCREEN is used to specify the parameter. Normally when the density of states. Lindhard theory, named after Danish professor Jens Lindhard, is a method of calculating the effects of electric field screening by electrons in a solid. Some metals have narrow bands and a large density of states at the Fermi level; This leads to a large Pauli susceptibility ! Pauli! 2µ 0N(E F)µ B 2. For example the formation energy at E_fermi=0 in each > charge state 0, -1, -2 and -3 is now between 3-4 eV, roughly speaking, > as it is also with VASP. 4 ! energy interval OmegaNum = 401 ! omega number Nk1 = 101 ! number k points odd number would be better Nk2 = 101 ! number. It'll automatically obtain critical parameters such as ISPIN, E-fermi first from VASP output files (e. 1 Here k is wave vector in the first BZ, f nk represents the Fermi-Dirac distribution function, is the normalization volume, and if k+q lies outside the first BZ, it is translated back in by addition of the appropiate reciprocal-lattice vec-tor. Computational Material Science Part II-1: introduction 2d-like Fermi surface in Cd2Re2O7: Jeng et. advertisement. k-points are ultimately. It is based on quantum mechanics (first-order perturbation theory) and the random phase approximation. Provision of both saved specific and immediate build options Engines MedeA VASP. 1 • From last lecture “Top-down” approach-----update of solid state physics • Not bad for many metals and doped semiconductors • Shows qualitative features that hold true in detailed treatments. In abinit, I get the "Fermi energy"=0 in the header of the DOS* files, so, according to the above mentioned post, I don't have to translate my DOS, despite in the output, at the line "Fermi (or HOMO) energy (hartree)" I get a value different from zero; in VASP, I get the Fermi level at exactly the same energy value that abinit reports in the. the average number of atoms of type A surrounding a specific atom. 
The program calculates the self-consistent Fermi level and defect concentrations given a set of formation energies (at VBM). The pseudopotential is an attempt to replace the complicated effects of the motion of the core (i. NERSC 29,714 views. , Supercond. (VASP) 40 with the projected augmented. n(r) = 1 3π2 2m 2 3/2 (E − Vef (r))3/2 , Mariana M. and there is a finite density of states at the Fermi energy E F. An e c closer to the Fermi level is indicative of a facile deactivation/low activity and an e c farther from the Fermi level is characteristic of higher activity/ impeded deactivation. The Fermi level, in this case, will be shifted towards the valence band. 7647 alpha+bet : -3. From the notes the donor level is the Fermi energy (E F +/0) when. Comparison of the number of discreet states in a 3-dimensional cube with the analytic expressions derived above. Teter, Corning and M. The first form can be termed a density of states based entropy. Although, I have used ISMEAR = -5 in the vasp code and used dense k-grid, still my DOS. Documentation for the users of Exabyte. The heterojunction binding energy is used to describe the relative stability of the heterostructure, as defined by Equation (1): Eb = E Gr/AlN (EGr +EAlN) (1) where Eb is the heterojunction binding energy; E Gr/AlN is the total energy of the heterostructure;. VASP^ ¦^ Hûι(zfhoufudaneducn)E ÆÔnXƬ c FÁ Ãþ=øc F F ®÷ê^ úiÞ VASP Ô þ¦^§ áu Ãþ ö"©¥J §S§ ±Jø¦^"ë ¦^ Ãþ L§¥§X Ø Ù § ±ë VASP =©Ãþ½emailÎ Ãþ ö"X Ãþ, I ½U § emailÏ Ãþ ö" Ãþë VASP =©manual!GKresse w±lpë þ " Ãþ Ãþ:uéVASP^ ¦^ §k'VASP^ á nØÄ: gCë ' ©zÚ "¹§VASP^ {ü. Introduction to surface calculations. When ! Pauli is large enough, it is possible for the band to split spontaneously, and ferromagnetism appears. We keep these Fermi-level positions in mind when discussing nitrogen diffusion. output files Elastic mechanics Equation of state Various mechanical properties Independent elastic constants Fermi surface & band unfolding Band gap & edges, effective mass Total/projected density of states Total/projected/3D band structure Planar & macroscopic average Charge & spin density Wave function plot in real space. The code projwfc. About the Fermi energy defined in VASP #3 Post by tlchan » Sat Oct 16, 2010 6:17 pm From a computational point of view, integrating the density of states from the lowest energy level to the Fermi level should give you the total number of electrons in the unit cell. The transition probability λ is also called the decay probability or decay constant and is related to the mean lifetime τ of the state by λ = 1/τ. might be removed in the future: Here is a list of features not supported by VASP. If MPI is used, ONLY THE ROOT PROCESS plots. 000000000 0. That paper should be cited for all references to the F-D integral. NumOccupied SOC = 0 ! soc E_FERMI = -1. 0000 【gamma点? 】 band No. For the finite temperature LDA SIGMA determines the width of the smearing in eV. Here we don't list the standard procedure to construct Wannier functions with Wannier90, please read VASP wiki or user guide of Wannier90. The Bi 6p orbitals are spatially extended and strongly. 43 eV are obtained. constraint on global coordination-number, i. In the photoelectric effect, the work function is the minimum amount of energy (per photon) needed to eject an electron from the surface of a metal. cell) for the NM, and only 5. 4 Color plot of Up: 4 Usage Previous: 4. There are four bands that cross the Fermi level and result in a large two-dimensional-like Fermi surface that is shown in Fig. 
Fermi-smearing 0 Gaussian smearing i. It is a totally different implementation, especially our code can deal with. Figure 4: DOS of C 60, C 240 and crystalline graphene. WMD Group Meeting, February 2016 | Slide 21 Summary (and a caveat) • Wannier90 is easy to use with VASP (when you know how!), and is a great post- processing tool • Particularly good for band structures o The "correct" way to do hybrid band structures o The only way to do GW band structures o Recalculating band structures (e. Creating 4 files from scratch is a bit daunting for beginners. In conjunction with Fermi-Dirac statistics the free energy might be interpreted as the free energy of the electrons at some finite temperature , but the physical significance remains unclear in the case of Gaussian smearing. NERSC 28,951 views. 2108 add alpha+bet to get absolut eigen values. Experimental details All the sample processes and measurements were carried out in an ultrahighvacuum(UHV)chamberatabasepressureof2×10 10 Torr. 001 electrons/Bohr 3) on the top surface of bulk 2H-MoTe 2 in AFM configuration. My first VASP (fcc Si) Posted on 27. I Propagator of a Scalar Field via Path Integrals. The Methfessel Paxton method replaces the step function with a com-plete orthonormal set of functions. As a consequence of the latter, there are only two points in k y = 0 on the Fermi surface at E D, that is, the projected bulk Dirac points, and Fermi arcs can robustly appear on the (100) surface. 9 Also, in experiments, the electronic doping, e. Brillouin Zones Pdf. The program calculates the self-consistent Fermi level and defect concentrations given a set of formation energies (at VBM). 67799997571 6. For example the formation energy at E_fermi=0 in each charge state 0, -1, -2 and -3 is now between 3-4 eV, roughly speaking, as it is also with VASP. 0 ! energy for calculate Fermi Arc OmegaMin = -0. 12 , can be determined from the splitting k 0 and from the effec-tive mass m: R= 2k 0/m. 豆丁网是面向全球的中文社会化阅读分享平台,拥有商业,教育,研究报告,行业资料,学术论文,认证考试,星座,心理学等数亿实用. OUTCAR), then input files (e. that vasp exited before reaching the max ionic steps for a relaxation run """ nsw = self. Density Funtional Theory • Hohenberg-Kohn theorems -Energy of the system is a unique functional of the charge density (ISMEAR=-5) or if cell is too large use Gaussian smearing (ISMEAR=0). 计算partial charge with vasp. The Fermi-Dirac distribution implies that at absolute zero (in the ground state of a system) the largest Fermions (electrons, holes, etc. from_dict(band_structure. E / 1 C 1 expT. If False, will check whether a successful calculation exists. 21 reported an interlayer spacing of 3. For example the formation energy at E_fermi=0 in each charge state 0, -1, -2 and -3 is now between 3-4 eV, roughly speaking, as it is also with VASP. 3 g/cm3 and 103–105. The density of states plays an important role in the kinetic theory of solids. NumOccupied SOC = 0 ! soc E_FERMI = -1. When a positive vertical 10% strain is applied to the right electrode, the transmission gap (the region where the transmission is almost zero) near Fermi energy increases in the device with the AB stacking right electrode, which is consistent with the bandgap augment as shown in Fig. dat' which is 2 (or 3 for spin polarised) columned. NO: 401-03-9023 Electronic Structure Theory: HW# 4 Calculate the electronic. It is a totally different implementation, especially our code can deal with. 
Pieri, "Fermi Liquids and Luttinger Liquids", cond-mat/9807366 An excellent set of lectures from Chia Laguna'97 about many topics, among which: Fermi Liquids, Renormalization, Luttinger Liquids, Heisenberg Model and Bethe Ansatz, Hubbard model, Metal-Insulator Transition, Spin-Charge Separation e. in so that the Fermi level of the energy bands obtained with WannierTools is zero. carrier concentrations) and corresponding fermi levels. Setting up the configuration using QuantumATK; Use the VASP Scripter to set up the calculation; Analyzing the results; Using QuantumATK to work with Nudged Elastic Band calculations in VASP. In this example, we choose VASP to do the first principle calculation and construct Wannier functions. x calculates projections of wavefunctions over atomic orbitals. 2 meV for the 54-, 128-, 432-, 1024-, 1458-, and 2000-atom systems. The Fermi method replaces the step function with the Fermi-Dirac function to get a smoothly varying function. 25 eV below the Fermi level, associated with 22. Environmental sciences. Beyond the Fermi liquid paradigm: Hidden Fermi liquids. Electronic band structure ; Due to the doping process, the Fermi level, normally at the center of the band gap, will be shifted according to the type of semiconductor. VASP Workshop at NERSC: Basics: DFT, plane waves, PAW method, electronic minimization, Part 1 - Duration: 1:35:18. I am calculating bandgap of LaVO4 doped with Eu with spin polarized from DOSCAR in VASP code. 4 running on a parallel machine: • VASP. Thus the probability density that any of the N electrons (i. The PBE-D3 HOVB and LUCB are highlighted in red and blue lines, respectively, whereas the. EF GW−E F DFT; for Geo. Create the input files In this example we want to produce input files similar to those provided in the first minimal example by setting up the required files, running VASP and postprocessing. Once you know, you Newegg!. VASP-E was used to examine a nonredundant subset of the serine and cysteine proteases as well as the barnase-barstar and Rap1a-raf complexes. For in-plane Fermi surface calculation, the k mesh is increased. Ashcroft and Mermin chapter 28. Yup, that's what I get too. Dear users, We would like to inform you that the software VASP is now available on FERMI (and soon on PLX). Beyond the Fermi liquid paradigm: Hidden Fermi liquids. In the following, we apply our approach to analyze the vibrational features of A8Si136 (A = Na,. Introducing extra electrons leads to the increase of E fermi, thus decrease of Φ, which, in turn, results in more negative voltage as show in Figure S3. The parallel version (i. Since holes correspond to empty states in the valence band, the probability of having a hole equals the probability that a particular state is not filled, so. I know that, for just a single piece of metal in equilibrium, you have to have the electric potential the same at all points, because if you didn't, that would mean there's necessarily an electric field between the two points, which would put a force on electrons, and move them until the. that vasp exited before reaching the max ionic steps for a relaxation run """ nsw = self. As we deal with pure semiconductor, the Fermi level in the sense of chemical. Ba2MnTeO6 is (Cubic) Perovskite-derived structured and crystallizes in the cubic Fm-3m space group. OUTCAR is only used in case Fermi level cannot be read from vasprun. The net effect is that the states at the Fermi energy in the small zone are "left hanging" in the large zone, i. Fcc Surface Brillouin Zone. 
OUTCAR), then input files (e. The Gaussian smearing method also leads to reasonable. JournalofMagnetismandMagneticMaterials290–291(2005)874–877 Acriticaldiscussionofcalculatedmodulatedstructures,Fermi surfacenestingandphononsofteninginmagneticshape. We can also see prominent s state near Fermi level and the states above the Fermi level is populated by Ca d states We can see similar DoS for BN NT+Ca (in place of N) and Ca (in BN NT) d states. What value of fermi energy in case. The isobaric-isothermal ensemble (NPT) is realized by adjustment of cell. And this gets at the heart of the issue. Introduction. We choose the normalization length in the z direction to be the supercell period, in which case we need only include the standing wave states with k z 0 in this computation of the current; thus we henceforth take k (k x,k y) for both electrodes. New Lattice-compatible Hermite normal form ( HNF) supercells up to. SciTech Connect. You can also add extra defects with fixed concentrations, but these must be given a particular charge state. 4 eV was used. 21 reported an interlayer spacing of 3. 00 ! energy for calculate Fermi Arc OmegaNum = 201 ! number of eigenvalues to calculate the Landau levels Nk1 = 100 ! number k points for each line in the kpath_bulk / LATTICE Angstrom 2. Fermi level pinning energies. The minimum energy of the electron is the energy at the bottom of the conduction band, E c, so that the density of states for electrons in the conduction band is given by: (2. 'atom_coord': constraint on local coordination-number, i. The DOS should be present in a file called 'totdos. Fermi surface cuts at k z = 0 in (D), (E), and (F) for zero strain and 0. in ndtset 2 # Dataset 1: SCF calculation kptopt1 1 ngkpt1 9 9 9 nshiftk1 1 shiftk1 0 0 0 prtden1 1 toldfe1 1. This is done by dividing the Brillouin zone into irregular tetrahedra that are defined by four neighboring k-points. The Rashba energy E R is defined as E R= 2k 0 2/2m. That paper should be cited for all references to the F-D integral. 6 in not enough). Experimental and theoretical methods 2. In the non-trivial case, for any energy inside the gap, we get a edge state, so different $$k_x$$ will give us a edge-state line, which is called Fermi-arc, especially, when we look at the case of Fermi surface with energy $$E=0$$, the Fermi-arc stretch from one Weyl node to another, like the picture shown below:. The number of states in an energy range of 20 E 0 are plotted as a function of the normalized energy E/E 0. In the current version of the interface the Fermi energy is extracted from the LOCPROJ file. 8 ! energy interval OmegaMax = 0. For in-plane Fermi surface calculation, the k mesh is increased. IALGO = 8 (VASP-releases older than VASP. Intrans file for the vasp system? - 12705062 1. Ba2MnTeO6 is (Cubic) Perovskite-derived structured and crystallizes in the cubic Fm-3m space group. Normally, the structure should be totally similar with POSCAR file, however, sometimes VASP can rotate or translate the cell. Using quantum molecular dynamics simulations, the equation of state and electrical conductivity of warm dense oxygen is calculated in the density and temperature ranges of 2. vasp 计算结束之后,通过命令: grep e-fermi outcar 即可提取出来。 2. (E and F) Calculated QPI simulation by removing the surface states part from (C) and (D) for the (001) surface at E F and E F + 20 mV, respectively. you can take our width and add Sqrt((E_avg-E_f)^2) = |E_avg-E_f|, which is the average position of the d-band with respect to the Fermi level. 
When defects form in semiconductors, the electrons are added from Fermi level to the VBM, or removed from VBM to Fermi level. Clearly, the E 0 value of the AlN thin films increases as the plasma power is. This entropy can take a number of forms. [OmegaMin, OmegaMax]: real-valued, energy interval for surface state calculation. In this connection, it is very important to understand the Fermi level pinning (FLP) which occurs at metal-semiconductor interfaces. 33,34 The formation energy is a function of the Fermi level, while the Fermi level is determined by the concentration of charged defects. 0000000 10. The total energy E q is calculated for a defect cell of charge q, for a perfect cell E H of charge q, and a perfect cell of charge 0. 5 (Rh 2 Ga 9/2) unit (Fig. In metals, conduction bands are partly filled or so that electrons can possiblely to conduction band In semicondutors, is smaller than that of matals jump E g valence band(E) band( E ) or an acceptor level(p doped) near the donner level(n doped) near the bottom of. In semi-conductors and insulators there is a region of energy just above the Fermi energy which has no bands in it – this is called the band gap. 192, 114-123 (2015) The reprint is available from our Publications. In particular, Thomas-Fermi screening is. Titanium Dioxide surface [110], the purple plane indicates the surface. The World’s Most Advanced Data Center GPUs. The values of E 0 and E d can then be calculated from the slope (E 0 E d)-1. DFT codes like VASP are a very powerful tool for scientific research. For the finite temperature LDA SIGMA determines the width of the smearing in eV. P: +1 760 495-4924 - F: +1 760 897-2179 - E: [email protected] Number of occupied Wannier bands E_FERMI = 4. Keywords: DFT, skutterudites, CoSb3, properties Created Date: 1/26/2011 4:10:53 PM. When a positive vertical 10% strain is applied to the right electrode, the transmission gap (the region where the transmission is almost zero) near Fermi energy increases in the device with the AB stacking right electrode, which is consistent with the bandgap augment as shown in Fig. 001 ! infinite small value, like brodening E_arc =-0. If you're interested, type: $man grep and scroll through the description as you would with “less”. A highly optimized version of Vasprun that parses only eigenvalues for bandstructures. Intrans file for the vasp system? - 12705062 the system is open and the energies E of electrons 0. 47As 001 - 4 2 surfaces, the Fermi level resides near the valence band maximum VBM ; however, after In 2O deposition and postdeposition annealings, the Fermi level position is close to the VBM for p-type samples and close to the conduction band minimum for n-type samples. "The smearing in density functional theory codes means that you occupy the states of the Kohn-Sham system according to a smooth function, e. For both materials, the Fermi levels locate in the dip places, indicating the stabilities of ferromagnetic phases of fcc Ni and L1 2 Ni 3 Al. in so that the Fermi level of the energy bands obtained with WannierTools is zero. The first form can be termed a density of states based entropy. 4) vaspの結果. In the current version of the interface the Fermi energy is extracted from the LOCPROJ file. In fact, the free electron predictions for room temperature Cu and Ag of k¼39 and 53nm are in good. E form (H 0) = E form (H +1) and with some algebra. (four indexes must be specified). 
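The passage above describes charged-defect formation energies that depend linearly on the Fermi level, with a correction term E_corr for the spurious electrostatic interaction between periodic images. A common way to write this is E_f(q) = E_defect(q) − E_bulk − Σ_i n_i μ_i + q(E_VBM + ΔE_F) + E_corr, with μ_O referenced, for example, to half the total energy of an O2 molecule as mentioned above. The sketch below simply evaluates that expression; the function and argument names are illustrative, and all inputs are assumed to come from separate VASP total-energy calculations.

```python
def defect_formation_energy(e_defect, e_bulk, added, chem_pots,
                            charge, e_vbm, fermi_shift, e_corr=0.0):
    """Formation energy (eV) of a charged defect as a function of the Fermi level.

    e_defect, e_bulk : total energies of the defective and pristine supercells (eV)
    added            : {species: n}, n > 0 for atoms added, n < 0 for atoms removed
    chem_pots        : {species: mu} chemical potentials (eV)
    charge           : defect charge state q
    e_vbm            : valence-band maximum of the bulk (eV)
    fermi_shift      : Fermi level measured from the VBM (eV)
    e_corr           : finite-size electrostatic correction (eV)
    """
    mu_term = sum(n * chem_pots[species] for species, n in added.items())
    return e_defect - e_bulk - mu_term + charge * (e_vbm + fermi_shift) + e_corr

# Example (hypothetical numbers): an oxygen vacancy in the +2 charge state,
# with mu_O taken as half of the O2 total energy.
# ef = defect_formation_energy(e_defect=-520.1, e_bulk=-528.4,
#                              added={"O": -1}, chem_pots={"O": -4.93},
#                              charge=+2, e_vbm=2.1, fermi_shift=0.5, e_corr=0.3)
```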
if VASP is compiled with the MPI flag) has some further restriction, some of them might be removed in the future: Here is a list of features not supported by VASP. To reduce the FLP at Au–MoS 2 interfaces, we consider sulfur, oxygen, nitrogen, fluorine, and hydrogen atoms that can passivate the surface of Au(111). Tutorial on Work Function By Dr. As a result, the O i formation energy in the interstitial site was 4 eV, whereas that in the octahedral site was 5. Note that the source code of the QUAMBO method is NOT the same as that developed by Lu and Wang at AMES lab (the latter one has not been released yet). E F is the Fermi level, and E corr is a correction term to account for the artificial electrostatic interaction due to periodic boundary conditions. shifted Fermi level (red) and even the Si 3N2 has band gap states. The Fermi method replaces the step function with the Fermi-Dirac function to get a smoothly varying function. the term entropy T*S). If you use VASP, you can take the Fermi level from the OUTCAR file. In particular, Thomas-Fermi screening is. E and E , respectively, and where m is the free electron mass. 0 x16 SLI with fast shipping and top-rated customer service. electronic_structure. Fermi energy 5 f spectral weight variation in uranium alloys. Note that $$\mu_e=E_{gap}/2$$ corresponds to the undoped semiconductor case, where $$E_{gap}$$ is the intrinsic semiconductor band gap. This then. , due to doping. The following are a set of scripts to perform common tasks to help with VASP calculations, and particularly with transition state finding. It'll automatically obtain critical parameters such as ISPIN, E-fermi first from VASP output files (e. Charge density difference 2019/06/03 macroscopic-averaged potential 2019/06/17 Plane-average Potential and Work Function 2019/05/21 第一届VASPKIT. These are the results of calculations. As the entropy is given by a sum over the probabilities of. Here μ i is the chemical potential of species i. 豆丁网是面向全球的中文社会化阅读分享平台,拥有商业,教育,研究报告,行业资料,学术论文,认证考试,星座,心理学等数亿实用. Vasp class¶. 00 ! energy for calculate Fermi Arc OmegaNum = 201 ! number of eigenvalues to calculate the Landau levels Nk1 = 100 ! number k points for each line in the kpath_bulk / LATTICE Angstrom 2. method of Methfessel-Paxton order. 9664 XC (G = 0): -11. To start learning the Vasp code we can use the version vasp. Here the standardized POSCAR of MoS 2 :. the Fermi level will now always be fermi_level in misc and not sometimes outcar-fermi_level if it has been parsed from OUTCAR. Thus, if the transport distribution is dominated by the density of states (DOS), a narrow and sharp feature of DOS around Fermi level will maximize the power factor. Surface Relaxation. 38 (ALGO =N) Kosugi algorithm (special Davidson block iteration scheme) (see section 7. The VASP choice of the electrostatic reference potential sets the average potential in the simulation cell to zero, not the potential in the electrolyte region. Introduction to surface calculations. Computationally speaking, VASP is more time-con-suming than SIESTA, particularly for large amorphous graphene models we discuss later. These properties have been explained in terms of a mixed-valence model where the divalent state of the impurity (say In2þ) is unstable towards. Introduction. Intrans file for the vasp system? - 12705062 1. 3 Projection over atomic states, DOS, projected band structure. carrier concentrations) and corresponding fermi levels. 
The Methfessel Paxton method replaces the step function with a com-plete orthonormal set of functions. 0 0 votes 0 votes. Fermi-occupation tolerance for bandgap calculation is chosen as 0. The calculated lattice parameters and formation enthalpy ∆ H at zero pressure are listed in Table 1. The Fermi contact term is strongly dominated by the all-electron one-center contribution. To reduce the FLP at Au–MoS 2 interfaces, we consider sulfur, oxygen, nitrogen, fluorine, and hydrogen atoms that can passivate the surface of Au(111). The first point corresponds to the number of states. 1 # broadening in eV -4-tet -1-fermi 0-gaus (i. Ba2MnTeO6 is (Cubic) Perovskite-derived structured and crystallizes in the cubic Fm-3m space group. # Copyright (C) 2008 CSC - Scientific Computing Ltd. if VASP is compiled with the MPI flag) has some further restriction, some of them. (B) Magnetization density (±0. At the top of each page a top-toolbox is located. 28 vdW functional. To download this version click on the foloowing link: https://drive. In conjunction with Fermi-Dirac statistics the free energy might be interpreted as the free energy of the electrons at some finite temperature , but the physical significance remains unclear in the case of Gaussian smearing. μ O is chemical potential of an oxygen atom, we used the half of the total energy of an oxygen molecule. 4721 alpha+bet :-13. Bi2Se3 is a 3D strongly topological insulator. 0 x16 SLI with fast shipping and top-rated customer service. Efficient calculations of 3D Fermi surface, DOS, PDOS Mulliken charge and bond order analysis for solids/surfaces/molecules Support NCPP, USPP and PAW Support point group symmetry for Bloch wave functions Support spin-unpolarized, spin-polarized, and spin-orbit coupling Generate real-space representation of localized QOs. Construct Wannier function. The negative sign indicates that you are looking at occupied states of the system (and for unoccupied, you would use a positive number). is the Fermi weight. 6) This algorithm is the default in VASP. For typical semi-conductors, the Thomas-Fermi screening length is about 1. using VASPKIT, a pre- and post-processing program for the VASP code [33]. From the notes the donor level is the Fermi energy (E F +/0) when. Unfortunately, this particular term is quite sensitive to the number and eigenenergy of the all-electron partial waves that make up the one-center basis set, i. 4 running on a parallel machine: • VASP. We see that the change in the GW HOMO level for different treatments of the 𝐪→0 limit was completely due to a corresponding GW Fermi level shift in an equivalent treatment. To download this version click on the foloowing link: https://drive. Fermi Energies, Fermi Temperatures, and Fermi Velocities Numerical data from N. OUTCAR), then input files (e. Lattice Type a 0 V 0 E 0 Bulk Modulus, β. 0 is a substantial new release of the MedeA Environment. JournalofMagnetismandMagneticMaterials290–291(2005)874–877 Acriticaldiscussionofcalculatedmodulatedstructures,Fermi surfacenestingandphononsofteninginmagneticshape. Secondary School. Interface with VASP Note that, as in the case of a DOS calculation, the position of the valence states depends on the Fermi level, which can usually be found at the end of the OUTCAR file. Shao et al. 
atoms and fully optimizing the geometry via VASP.$ grep "e e" OUTCAR (beware there are 2 spaces in-between the e's) and $grep "y w" OUTCAR (again 2 spaces) or try for instance:$ grep "Efermi" OUTCAR Most unix commands come with an online description, a socalled man-page. For versions earlier than 4. Applications are often referred to as modules because they are managed using the Environment Modules package. Under Convergence you select. This package is particularly suitable for understanding atomic e ects into the band structure, Fermi surface, spin texture, etc. , Supercond. Despite this problem, it is possible to obtain an accurate extrapolation for from. 72 (2009) 126501 A Janotti andCGVandeWalle A L Γ A H Γ-8-6-4-2 0 2 4 6 8 10 Energy (eV) K Zn O a c [0001] (a) (b) M Figure 1. 2 The self-consistency of bulk calculation is done with 8 ×8 ×8 k-point mesh. Both the spin-up and spin-down DOS reveal an in-gap state due to the defect. In conjunction with Fermi-Dirac statistics the free energy might be interpreted as the free energy of the electrons at some finite temperature , but the physical significance remains unclear in the case of Gaussian smearing. Here we report that RuO2 exhibits a hitherto undetected lattice distortion below approximately 900 K. The structure is three-dimensional. The Bismuth doping concentration of 11. Updated work function results handling in. out > less KPOINTS DFT calculation INCAR. For Cu, when Te is below ~3000 K, µ roughly follows the free-electron-like description,. E and E , respectively, and where m is the free electron mass. 5 does not posses this restriction): The most severe restriction is that it is not. , to the particulars of the PAW dataset you are using. Subtracting the E f from the E vac results in a work function value of 4. ; Anderson, P. WMD Group Meeting, February 2016 | Slide 21 Summary (and a caveat) • Wannier90 is easy to use with VASP (when you know how!), and is a great post- processing tool • Particularly good for band structures o The “correct” way to do hybrid band structures o The only way to do GW band structures o Recalculating band structures (e. xml 1 2 3 4. VASPKIT: A Pre- and Post-Processing Program for VASP code V. This situation causes that the DOS at the E F level take different values, i. BoltzTraP2 is a modern implementation of the smoothed Fourier interpolation algorithm for electronic bands that formed the base of the original and widely used BoltzTraP code. Workfunction of hcp (0001) surfaces¶ In this notebook, we will show how to calculate the workfunction of selected hcp(0001) surfaces using VASP. Tip: How to plot many molecular orbitals? calculate the orbitals and save them into, say, mo-*. The following are a set of scripts to perform common tasks to help with VASP calculations, and particularly with transition state finding. Interface with VASP Note that, as in the case of a DOS calculation, the position of the valence states depends on the Fermi level, which can usually be found at the end of the OUTCAR file.
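One of the fragments above points to a notebook for computing work functions of hcp(0001) slabs. The quantity itself is Φ = E_vac − E_F, where E_vac is the planar-averaged electrostatic potential in the vacuum region of the slab. The minimal sketch below assumes the 3D local potential (e.g., read from a LOCPOT file) is already in a NumPy array with the surface normal along the third axis; the simple "average around the maximum" estimate of the vacuum level is an assumption, not a prescription.

```python
import numpy as np

def work_function(potential_grid, efermi, vacuum_fraction=0.1):
    """Work function Phi = E_vac - E_F (eV) from a 3D local-potential grid.

    potential_grid  : ndarray of shape (nx, ny, nz), z along the surface normal
    efermi          : Fermi energy of the slab calculation (eV)
    vacuum_fraction : fraction of z-planes, centred on the potential maximum,
                      averaged to estimate the vacuum level
    """
    # Planar (xy) average of the potential along z
    planar_avg = potential_grid.mean(axis=(0, 1))
    nz = planar_avg.size
    # Take the vacuum level as the average around the flat maximum of the profile
    centre = int(np.argmax(planar_avg))
    half = max(1, int(vacuum_fraction * nz) // 2)
    window = np.take(planar_avg, range(centre - half, centre + half + 1), mode="wrap")
    e_vac = window.mean()
    return e_vac - efermi
```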
|
{}
|
# Does this function have a name?
1. Jul 21, 2014
### guysensei1
A function f(x) where f(x)=length of the graph curve/line from 0 to x
Can this function be expressed in algebraic form or some other form?
Does it have a name?
2. Jul 21, 2014
### HallsofIvy
Staff Emeritus
You are talking about a particular way of getting a function, not a specific function so, no, it does not have a name. As one learns in Calculus, the length of the graph of y= f(x), from 0 to x is given by $\int_0^x \sqrt{1+ (f'(t))^2} dt$.
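For most curves this integral has no closed form, but it is easy to evaluate numerically. A small illustrative sketch (not part of the original thread) using SciPy, with the derivative estimated by a central difference:

```python
import numpy as np
from scipy.integrate import quad

def arc_length(f, x, h=1e-6):
    """Length of the graph of f from 0 to x, using a central-difference derivative."""
    integrand = lambda t: np.sqrt(1.0 + ((f(t + h) - f(t - h)) / (2.0 * h)) ** 2)
    length, _ = quad(integrand, 0.0, x)
    return length

# Example: the length of y = t**2 from 0 to 1 is about 1.4789
# print(arc_length(lambda t: t**2, 1.0))
```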
3. Jul 21, 2014
### 1MileCrash
It's without a doubt a specific function, one whose domain is the Cartesian product of the real line and the function space. I don't think that this is a good reason for it not to have a name.
4. Jul 21, 2014
### MrAnchovy
Not in general, but the solutions for a straight line and for a circle should be fairly obvious. Closed-form solutions do exist for a few more complex curves, e.g. the parabola, catenary, and cycloid, but not for most others, even the humble ellipse.
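For instance, the parabola $y = x^2$ is one of the closed-form cases mentioned above; the standard result is
$$s(a) = \int_0^a \sqrt{1 + 4t^2}\,dt = \frac{a\sqrt{1 + 4a^2}}{2} + \frac{1}{4}\operatorname{arcsinh}(2a).$$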
The calculation of the length of a curve between two points is called rectification, or simply calculating arc length.
5. Jul 21, 2014
### guysensei1
What I was looking for is a function that gives itself back when the length-of-curve operation is applied to it.
6. Jul 22, 2014
### MrAnchovy
Try looking for this function. Clearly f(0) = 0. Let y = f(1). The length of the arc between (0, 0) and (1, y) is given by $\sqrt{1 + y^2}$ so we have $y = \sqrt{1 + y^2}$ or $y^2 = 1 + y^2$ which has no solution - the function you are looking for does not exist (over any non-zero domain).
7. Jul 22, 2014
### skiller
Why is that? That surely just gives the length of the straight line from (0, 0) to (1, y). The curve you are looking for is not going to be of that form.
8. Jul 22, 2014
### Staff: Mentor
No, but its length has to be strictly greater than the length of the straight line between the two points.
9. Jul 22, 2014
### MrAnchovy
Oh, I thought one thing and wrote something slightly different, let's try again.
Try looking for this function. Clearly f(0) = 0. Let y = f(1). The shortest arc between (0, 0) and (1, y) is simply the diagonal of length $\sqrt{1 + y^2}$, and so y must be at least as large as that. So we have $y \ge \sqrt{1 + y^2}$ or $y^2 \ge 1 + y^2$ which has no real solution. The function you are looking for does not exist (over any non-zero domain).
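(The same argument works for every $x > 0$, not just $x = 1$: the arc from $(0, 0)$ to $(x, f(x))$ has length $f(x)$ and is at least as long as the chord, so
$$f(x) \ge \sqrt{x^2 + f(x)^2} \;\Rightarrow\; f(x)^2 \ge x^2 + f(x)^2 \;\Rightarrow\; 0 \ge x^2,$$
which is impossible for $x \neq 0$.)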
|
{}
|
# News this Week
Science 19 Oct 2007:
Vol. 318, Issue 5849, pp. 372
1. GLOBAL WARMING
# Nobel Peace Prize Won by Host of Scientists and One Crusader
1. Richard A. Kerr,
2. Eli Kintisch*
1. With reporting by Pallava Bagla.
The announcement came as a shock to Robert Watson. “It would never have crossed my mind that a scientific assessment process would be named in a Nobel Peace Prize,” he says. “If anyone had told me that could happen, I would have said, ‘You have to be smoking something.’” But stone-cold sober the Norwegian Nobel Committee was when it awarded the prize to the United Nations-sponsored Intergovernmental Panel on Climate Change (IPCC)—which Watson chaired from 1997 to 2002—and to Al Gore for their “efforts to build up and disseminate greater knowledge about man-made climate change” because such change may increase “the danger of violent conflicts and wars, within and between states.”
The odd-couple winners are a good match, most scientists believe. On the one hand, there's the organization of thousands of unpaid, nearly anonymous researchers meticulously assessing the state of climate science; on the other, a former politician using that science to underpin his media-savvy campaign to save the world from climate catastrophe. “The combination of IPCC, with its very careful examination of scientific knowledge, and Al Gore's ability to bring the message to politicians and the public” has worked well, says Bert Bolin, the first chair of IPCC. Not that their work is done. There's still the matter of steeling the public's will to meet the costs of countering the threat.
On the IPCC side, the winners are legion. “This is an honor that goes to all the scientists and authors who have contributed to the work of the IPCC,” says Indian engineer and economist Rajendra Kumar Pachauri, current IPCC chair. The award recognizes a vast amount of unpaid hard work on their part, says geoscientist Michael Oppenheimer of Princeton University, who has served IPCC in various capacities since the United Nations established the body in 1988. “There's an incredible amount of time involved,” he says, flying to meetings in every corner of the world, hammering out consensus, responding to thousands of reviews, and extracting government approval word by word for three different working groups for each report (Science, 9 February, p. 754). “There is a price,” says Oppenheimer. “People burn out.”
Working against burnout is “a sense of community responsibility,” says Oppenheimer. “A free society provides the space so you can do science” and create knowledge. In return, he says, climate researchers serve on IPCC to distill that knowledge in a credible way for policymakers. Adds Watson: “They want informed political decisions. If they want their science to be part of informed policymaking, the IPCC is the vehicle.” And then there is self-interest. “I get more out of IPCC than I put in,” says Oppenheimer. “IPCC meetings are very useful.” They force a critical analysis of a scientist's own specialty and provide exposure to the top people in other fields, scientists say.
The other winner of the prize is far more familiar to the public. But Gore has also been well-known to the scientific community for decades. Scientists say few politicians have relied upon or involved more researchers in their policy work than Gore. “My relationship with Al Gore was born in combat,” says climate researcher Stephen Schneider of Stanford University in Palo Alto, California, who recalls a 1981 hearing then-representative Gore held in which Schneider opposed a move by the Reagan Administration to cut climate research. “We were soldiers in the same war … for 25 years.”
Climate researchers have known Gore as the rare policymaker who brings scientists in—and listens. When he visited Lamont-Doherty Earth Observatory in Palisades, New York, as a senator, recalls geochemist Wallace Broecker, “he said, ‘I don't want a tour. I just want to sit around a table with some of your climate people.’” While Gore was writing his 1992 book Earth in the Balance, recalls atmospheric chemist Michael McElroy of Harvard University, the then-senator spent 2 hours on the phone nailing down a “pretty subtle chemical point” about ocean acidification. “He came into these issues with a visceral feel that this was an important issue,” says McElroy, “like the Vietnam War had been when he was a young man.”
Schneider thinks the award to both Gore and IPCC recognizes their dual roles in promoting climate science. “We provide the credibility the Gores and Blairs and Schwarzeneggers need,” he says of the panel. And Gore's treatment of that science? “He did a pretty good job of communicating complex scientific information to a lay audience,” says McElroy of Gore's film An Inconvenient Truth. “If it was a scientist doing it, it would be different. But I don't think there were any glaring errors.” The publicity, Broecker says, accomplished far more than IPCC's scientists could have done on their own: “Gore put it in a way that people listened. We're much further along to meaningful action [to cut emissions] because of him.”
IPCC led the way, Watson says. Its reports forging increasingly strong links between human activity and global warming were instrumental in moving nations toward drafting and signing the Kyoto Protocol for cutting greenhouse gas emissions, he says. But more recently, says Oppenheimer, other forces have come into play: high oil prices and a new energy crisis; events ascribable to global warming, such as the dwindling of Arctic sea ice; and weather events such as Hurricane Katrina that are at least analogs of weather in a greenhouse world.
And then “along comes Al Gore,” says Oppenheimer. The end result has been an explosion of media attention and, in the United States, unprecedented political debate and even emission-cutting legislation. But it's not over, warns political communications researcher Matthew Nisbet of American University in Washington, D.C. IPCC and Gore may have raised awareness broadly and stoked concern among the already environmentally attentive, but by Nisbet's reading of the polls, the broad support for emissions cuts that will hurt is nowhere near there. Activists, he says, need a new message.
2. NOBEL PRIZES
# Chemistry Laureate Pioneered New School of Thought
1. Robert F. Service*
1. With reporting by Gretchen Vogel in Berlin, Germany.
Now that's a birthday present! Instead of receiving the random necktie on his 71st birthday last week, Gerhard Ertl was awarded this year's Nobel Prize in chemistry. Ertl, a physical chemist at the Fritz Haber Institute of the Max Planck Society in Berlin, Germany, won for developing methods that reveal how chemical reactions take place on metals and other surfaces. Those techniques have led to results as diverse as new catalysts that remove poisonous carbon monoxide from car exhaust and an understanding of how stratospheric ice crystals supercharge chlorine's ability to destroy the planet's protective ozone layer.
“This is really well deserved,” says Ralph Nuzzo, a surface chemist at the University of Illinois, Urbana-Champaign. “Ertl is a titan.” John Vickerman, a chemist at the University of Manchester in the U.K., agrees. “The reactions occurring at surfaces are very difficult to probe because there are so few molecules involved, and they frequently occur very rapidly,” he says. “Furthermore, the scientist has to distinguish what is happening in a layer one molecule thick from the rest of the solid. Ertl developed very sophisticated physical tools to identify the chemistry occurring at the surface.” The Royal Swedish Academy of Sciences, which awards the Nobel Prizes, says Ertl was selected not for developing a particular tool, technique, or discovery, as is often the case, but because “he established an experimental school of thought for the entire discipline.”
One early example was in figuring out how iron-based catalysts convert hydrogen and nitrogen into ammonia, a critical industrial process for making fertilizers. This conversion, known as the Haber-Bosch process, combines dinitrogen molecules from the air with dihydrogen molecules. Earlier studies had revealed that the slowest step in the process was one in which nitrogen molecules adsorb onto iron particles in a manner that primes them for combining with hydrogen. Researchers didn't know whether the tightly bonded nitrogen molecules reacted with hydrogen intact or whether they broke apart first. Using spectroscopic techniques and other tools, Ertl revealed the complete seven-step process whereby nitrogen and hydrogen molecules land on an iron surface, break apart, and react to form ammonia.
After receiving the announcement last Wednesday, about 200 of Ertl's colleagues toasted him with champagne and German pretzels on the shaded lawn of the Fritz Haber Institute. After Ertl fielded a few questions from TV reporters, the crowd broke out in a rousing round of “Happy Birthday to You” (in English).
In an earlier phone interview with Science, Ertl was quick to offer credit to fellow researchers. His field, he says, was propelled by the parallel development of many surface characterization techniques. And, he adds, many scientists were adept at applying them—including Gabor Somorjai of the University of California, Berkeley, with whom he shared the 1998 Wolf Prize in Chemistry for their work in surface science. “I was a little bit disappointed he didn't share [the Nobel Prize] with me,” Ertl says. Last week, several chemistry bloggers went further, arguing that Somorjai deserved recognition for his vital role in laying the foundations of surface science.
For his part, Somorjai says simply that he does not understand how award decisions are made. But he notes that in the 1980s, he began steering away from ultrahigh-vacuum surface science to study reactions at solid-liquid interfaces, among other things. By contrast, Somorjai says, “Ertl stayed in there all through his life.”
3. NOBEL PRIZES
# Three Economists Lauded for Theory That Helps the Invisible Hand
Scottish philosopher Adam Smith asserted that when everyone acts out of self-interest, everyone will eventually benefit, as if a benevolent “invisible hand” molds the economy. Economists now know that view is naive: In some situations, rational people will act in ways that leave everybody a loser. But such dreary outcomes can sometimes be avoided, thanks to work that earned three Americans the Nobel Prize in economics.
Leonid Hurwicz of the University of Minnesota, Twin Cities, Eric Maskin of the Institute for Advanced Study in Princeton, New Jersey, and Roger Myerson of the University of Chicago, Illinois, developed “mechanism design theory.” The theory aims to find schemes, or “mechanisms,” that ensure that acting in self-interest will indeed lead to benefits for all. Today, its applications range from how best to auction broadcast rights and other public resources to contract negotiations and elections.
“At first, I thought it was some kind of a joke,” says Hurwicz, of hearing of his award. At 90, Hurwicz is the oldest person to win a Nobel. He says colleagues had told him that he might win, “but not in recent years.” The prize is well-deserved, others say. “I was riding in the car [and discussing the prize] with somebody yesterday, and these were the three names that came up,” says W. Bentley MacLeod, an economist at Columbia University.
Mechanism design theory starts with the recognition that unbridled self-interest doesn't always lead to the greater good. For example, if the people of a town were asked to chip in to build a bridge, each person would benefit by underestimating his or her share and letting others bear the cost. So for lack of funds, the bridge would never get built. That sort of logically unavoidable lose-lose outcome, in which no individual can improve matters by changing strategy alone, is known as a Nash equilibrium.
In the 1960s, Hurwicz pioneered the study of how to avoid such dead ends by fiddling with the rules of an economic or social interaction so that the most beneficial state and the inevitable equilibrium state are one and the same. “It's a little Machiavellian,” says Gabrielle Demange of the Paris School of Economics. “You design a game so that in the end the Nash equilibrium comes out to be what you want.” For example, each person could be required to pay what others think the bridge is worth, thus eliminating the incentive to lie.
Maskin, 57, and Myerson, 56, expanded on Hurwicz's work. In 1977, Maskin developed a criterion for determining just when it's possible to find rules that will guide self-interested participants to the desired end. Starting in the late 1970s, Myerson showed that whenever a mechanism exists, it is also possible to find one that gives participants an incentive to tell the truth, an insight that makes it much easier to devise practical mechanisms.
Relying heavily on game theory, the laureates' work has been largely abstract and formal. “My methodology is to invent simple little worlds in which there is just a bit that we don't understand and can study,” Myerson says. Nevertheless, the theory may play a role in confronting perhaps the most complex and pressing problem facing humanity today, climate change, by helping to set up incentives that encourage consumers and countries to minimize greenhouse gas emissions. “Mechanism design should definitely be pertinent to the problem,” Maskin says. “But first we have to decide exactly what we're trying to accomplish.”
4. EVOLUTION
# Natural Selection, Not Chance, Paints the Desert Landscape
1. Elizabeth Pennisi
Desert snow, a flower that lives in the Mojave Desert, has a colorful history—literally and figuratively. The five-petaled Linanthus parryae comes in purplish-blue and white varieties; it sometimes carpets dusty landscapes in a single color and sometimes in a blue-white mosaic. Sixty years ago, studies of these patterns provided key support for a powerful evolutionary theory. Now, two evolutionary biologists have found that the theory doesn't hold in this species.
At issue is the relative role of randomness in genetic differentiation within a population. Did the chance increase in frequency of a new version of a gene—for example, one that tinted desert snow blue—and the luck of the draw result in the blue blooms flourishing in some places and not others? Such serendipity is called genetic drift, and it contrasts with the idea that fitness in a particular environment—natural selection—not chance, is responsible for the successful spread and distribution of these blue and white flowers.
Researchers began studying Linanthus in the early 1940s, most notably systematist Carl Epling and evolutionary biologists Theodosius Dobzhansky and Sewall Wright. Epling and Dobzhansky, and later Wright, attributed the flowers' distribution to genetic drift: Blue flower seeds happened to land on the far side of a particular ravine, for example, and spread, isolated from the white ones by the forbidding habitat at the bottom of the ravine.
Epling later decided that natural selection was important, but Wright, based on his continued work with this species, concluded that genetic drift was key. He proposed that the larger a population, the more likely new versions of a particular gene would take hold in a subset of that population, setting the stage for some subsets to head in different evolutionary directions. He called this idea the shifting balance theory. That work has been cited more than 1400 times. Nonetheless, evolutionary biologists have been arguing ever since about how right Wright was.
In 1988, Douglas Schemske of Michigan State University in East Lansing and Paulette Bierzychudek of Lewis & Clark College in Portland, Oregon, decided to weigh in on the controversy. “Because none of these studies had directly estimated natural selection, we thought it was necessary to mount a long-term field project to resolve the dispute,” Schemske recalls. That year, they started tracking the distribution and fitness of Linanthus.
They reported in 2001 that natural selection could be intense, playing a larger role in shaping the distribution of flower color than Wright realized. Now, in an early online release of Evolution, Schemske and Bierzychudek have pinpointed strong environmental differences that likely keep blue flowers to one side of the ravine and white flowers to the other. The work “provides a very nice historical perspective on this key system, one that has crept into a lot of textbooks,” notes evolutionary biologist Michael Lynch of Indiana University, Bloomington. “They clearly don't come down on the side of Wright.”
Schemske and Bierzychudek focused on two 500-meter-long swaths along a 25-meter-wide ravine with blue flowers on the west side and white ones on the east. Over 7 years, they counted the blue and white blossoms and noted changes in the distribution of the two colors. They looked at the distribution of allozymes—different versions of a given protein—in flowers on both sides of the ravine. In addition, they planted some white-flower seeds on the west side and blue-flower seeds on the east and vice versa, monitoring seed production in these experimental plots. Because one year was quite wet and another quite dry, the researchers were able to assess the two colored flowers' fitness relative to precipitation. They also analyzed the makeup of the soil and plant communities on both sides of the ravine, finding big differences in both. “It was rigorous fieldwork and careful analysis, work that addresses important questions with exceptional clarity,” says plant population biologist Vincent Eckhart of Grinnell College in Iowa.
The sides were more than 95% blue or white. But the distribution of the allozymes did not parallel that of the flower color. Had genetic drift caused the color pattern, the distribution of at least some allozymes should have been skewed as well, Schemske and Bierzychudek note. In the seed-transplant studies, each color flower typically did best on its own turf, indicating that selection played a role. “Our data strongly suggest that it's no accident that there are only blue survivors on the west side and only white survivors on the east side,” says Bierzychudek.
Furthermore, the soil and community composition of the two sides of the ravine were different—one side had a much higher proportion of creosote bushes, for example—providing strong evidence of environmental differences that could favor one flower color over another.
“The study shows the unimportance of drift in Linanthus,” says evolutionary biologist Masatoshi Nei of Pennsylvania State University in State College. “In this sense, [the] finding shakes the ground of the shifting balance theory.” But he is cautious about making generalizations, given that other studies suggest otherwise: “The relative importance of selection and drift depends on the genes and populations studied.”
5. ARCHAEOLOGY
# Coastal Artifacts Suggest Early Beginnings for Modern Behavior
1. Ann Gibbons
Modern humans first appear in the fossil record of Africa between 160,000 and 195,000 years ago, with skulls and bones that are virtually indistinguishable from ours. But looking like us doesn't necessarily mean that they acted like us. Indeed, researchers have debated intensely about when Homo sapiens began to act sapient by producing complex tools and manipulating symbols.
Now, an international team of researchers says that some key elements of modern behavior were in place by 164,000 years ago, pushing back the appearance of some of these activities by 25,000 to 40,000 years. The team found complex stone bladelets and ground red pigment—advances usually seen as hallmarks of modern behavior—coupled with the shells of mussels, abalone, and other invertebrates in a cave in South Africa. These ancient clambakes are the earliest evidence of humans including marine resources in their diet, according to a report in this week's issue of Nature.
Not everyone agrees that the artifacts add up to a major cognitive shift. But to paleoanthropologists such as Sally McBrearty of the University of Connecticut, Storrs, the package provides “strong evidence” that these people were manipulating symbols. That “supports the gradual rather than sudden or rapid accumulation of more complex behaviors,” adds Alison Brooks of George Washington University in Washington, D.C.
The team found the shells, tools, and pieces of red ochre cemented in the wall of a cave at Pinnacle Point on the Cape of South Africa, on the coast of the Indian Ocean. Using uranium series and optically stimulated luminescence dating, the team dated the sediments to about 164,000 years, during a glacial period that left Africa cool and dry. These humans might have started to eat marine resources as a “famine food” because of a harsh environment, says team leader Curtis Marean of Arizona State University's Institute of Human Origins in Tempe.
Although the team found no human bones, the ancient people did leave behind a trail of stone flakes that the team identifies as bladelets, small points used by more recent humans as advanced projectile points. If so, this would push back the appearance of true bladelets by at least 90,000 years. Other researchers caution, however, that the points may have been made by accident rather than on purpose. The pieces of red ochre were worn down, suggesting that these people were using ochre paste as glue to make complex tools or perhaps even as body paint. Says Marean: “You put that dietary, technological, and cultural package together, and all of a sudden it looks like archaeological sites from 2000 years ago.”
But using “little bits of red ochre” pales in comparison with the advances that appear 50,000 years ago in Europe, when humans began to draw animals, shape beads, and bury their dead in elaborate graves—changes that enhanced reproduction and are linked to dramatic population expansions, says paleoanthropologist Richard Klein of Stanford University in Palo Alto, California. By themselves, the Pinnacle Point artifacts would not confer such a significant reproductive advantage, says Klein.
Marean, however, thinks the behavioral changes were so important that they might have been one of the catalysts for the birth of our species. He is searching even older sediments to pinpoint when these behaviors emerged.
6. ASTRONOMY
# Space Sighting Suggests Stardust Doesn't Have to Come From Stars
1. Govert Schilling*
1. Govert Schilling is an astronomy writer in Amersfoort, the Netherlands.
Microscopic rubies and sapphires arise in black hole winds. Using NASA's Spitzer Space Telescope, astronomers spotted the telltale spectroscopic fingerprints of these unpolished microgems in space near a supermassive black hole. Many other dust species also showed up, including crystalline minerals that make up sand, glass, and marble. Team leader Ciska Markwick-Kemper of the University of Manchester, U.K., says the find may help explain the abundance of dust particles in the very early universe.
“It's a spectacular find,” says astrochemist Rens Waters of the University of Amsterdam in the Netherlands. “If pressures and temperatures in supermassive black hole winds are favorable for dust production, huge quantities of dust could be produced in this way.”
The universe started out with a mixture of hydrogen and helium, the two lightest elements. Heavier elements such as carbon, oxygen, silicon, and magnesium formed by nuclear fusion in the first generation of extremely massive stars. Supernova explosions then dispersed these heavy elements through space, where some of them condensed into dust particles—the building blocks of planets such as Earth. However, many components of dust form only in the calm outflows of dying sunlike stars. So astronomers have been baffled to observe healthy amounts of dust at a time in the universe's history when sunlike stars were still in their infancy.
In 2002, astrophysicist Martin Elvis of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts, suggested that dust could form in the winds of supermassive black holes that sit in the cores of young galaxies, sucking in matter with their enormous gravity. These gluttonous monsters are “messy eaters,” says Sarah Gallagher of the University of California, Los Angeles, spilling and blowing much of their food into space, including heavy elements from supernovae. The balmy temperatures and high densities in these winds could forge dust particles, including crystalline silicates and tiny rubies, from these elements, Elvis theorized.
Now, analysis of light from a supermassive black hole in a galaxy some 8 billion light-years away supports Elvis's idea. In the Spitzer observations, Markwick-Kemper, Gallagher, and their colleagues detected many mineral species previously seen only in the outflows of dying sunlike stars, such as forsterite (Mg2SiO4), periclase (MgO), and corundum (Al2O3), the mineral that constitutes ruby and sapphire. Because many of those minerals are easily destroyed by energetic radiation from stars or by interstellar shock waves, the observations suggest that the dust has been freshly formed in the black hole winds.
The case isn't closed. In their paper in the 20 October issue of The Astrophysical Journal Letters, Markwick-Kemper and her colleagues note that part of the early universe's dust could still have come from supernova ejecta. Says Waters: “The origin of dust is still shrouded in lots of mysteries.”
7. SCIENCE FUNDING
# U.K. Spells Out Boost in Medical Research
1. Daniel Clery
In 10 years as the U.K. government's finance chief, Gordon Brown engineered substantial and steady growth in research funding. Now, as prime minister, Brown is continuing that trend. Last week, the government's Comprehensive Spending Review (CSR)—a statement of spending plans issued every 2 or 3 years—signaled a boost of £300 million (about $600 million), to £1.7 billion, in medical and health research over the next 3 years. “This is nothing less than good news,” says Hilary Leevers, acting head of the Campaign for Science and Engineering in the U.K.
The government had previously announced that it intended to boost the overall level of funding for science and university research from £5.4 billion to £6.3 billion over the same 2008–11 period. CSR reveals how that increase will be divvied up. Around half goes to the U.K.'s seven research councils, which distribute grants to scientists at universities and national labs. They will see their £2.8 billion annual funding boosted on average by 5.4%.
The emphasis on medical and health research continues a process begun earlier. In 2006, Brown appointed David Cooksey, a venture capitalist who has advised the government on medical research, to figure out the best way of combining all the government's medical and health research spending into a single fund. Last December, acting on Cooksey's recommendations, Brown created the Office for Strategic Coordination of Health Research (OSCHR). OSCHR oversees the activities of the Medical Research Council (MRC) and the Department of Health's National Institute for Health Research to promote a new emphasis on “translational” research—taking basic science results and turning them into usable drugs or treatments. CSR—which does not need parliamentary approval—boosts the combined budgets of these two bodies by £300 million.
“There's been a need for an increase for some time, and a need for a better connection between the MRC and the Department of Health,” says Michael Rutter, clinical vice president of the Academy of Medical Sciences, although he expressed concern that the emphasis on translational research “doesn't lead to a reduction in funding for basic science.”
Leevers has similar concerns. The research councils have recently begun requiring information about the economic impact of research on grant applications, a change that some researchers worry would put basic research proposals at a disadvantage. “The government ardently believes in the drive toward innovation,” she says. “But you have to have the bedrock on which to innovate.”
8. SCIENTIFIC FACILITIES
# Location, Location, Location
1. Daniel Clery
When nations vie for massive international scientific facilities, science can take a back seat to politics and even sheer chance. Dealmakers say there's no magic formula for getting things right
On the night of 17/18 October 1977, a Lufthansa airliner sat on the tarmac of Mogadishu airport in Somalia and the world held its breath. Four days earlier, terrorists from the Popular Front for the Liberation of Palestine had hijacked the Boeing 737 en route from Majorca to Frankfurt and demanded $15 million and the release of 11 members of an allied terrorist group, the Red Army Faction (RAF), who were in prison in Germany. Over the following days, the plane landed in Rome, Larnaca, Bahrain, Dubai, and Aden before coming to a stop in Mogadishu, where the hijackers dumped the body of the pilot—whom they had shot—out of the plane. They set a deadline that night for their demands to be met.
At 2 a.m. local time, a team of German special forces, the GSG 9, which had been tailing the plane across the Mediterranean and Middle East, stormed aboard. In the fight that followed, three of the four terrorists were killed and one was captured with bullet wounds. All the passengers were rescued uninjured. Far from the action, the resolution of the hijacking had a surprising side effect: the Joint European Torus (JET), an experimental nuclear fusion reactor being planned by European nations, ended up being built in the United Kingdom rather than in Germany.
On the day the hijacking ended, British prime minister James Callaghan arrived in Bonn for a summit meeting and was met by German chancellor Helmut Schmidt with the words: “Thank you so much for all you have done.” The reason for his gratitude was that Britain's Special Air Service (SAS), the Army's special forces unit, had advised the GSG 9 and provided them with specially designed stun grenades, which the German commandos used to incapacitate the hijackers during the storming of the plane.
Because of this help, Schmidt settled an issue that had recently divided the two countries: where to build JET. Most of the nine members of what was then the European Economic Community (EEC) supported Culham near Oxford, but Germany was holding out for Garching, home of its own fusion research lab. At a cabinet meeting the day after meeting Callaghan, Schmidt backed Culham, and on 25 October, the site was approved by EEC research ministers.
It's not often that acts of terrorism play a part in international research collaborations, but there comes a time in the development of many such projects—usually around the issue of choosing a site—when national pride and cross-border rivalries can take over from technical considerations. In such situations, the scientists who have carefully nurtured a project for years become bit players as international power politics is played out.
When politicians stumble, the process can become so divisive that it threatens the whole project and international relations as well. Such was the case with ITER, a global fusion research project that is the successor to JET. In late 2003, ITER's site-selection process descended into 18 months of mudslinging and frantic shuttle diplomacy. Although an amicable resolution was finally achieved, there were moments when the project's future looked in doubt, and many consider the episode a low-water mark in international scientific collaboration. “I haven't talked with anyone who was happy about the ITER process, even those who won,” says an international official who asked not to be named.
So is there a better way to choose the site for an international facility? Those projects currently on the drawing board—including the next multibillion-dollar particle physics machine, the International Linear Collider (ILC)—don't seem to have agreed on the best method, but with the scars of ITER still raw, they are treading very carefully.
## Physicists with a mission
The model for international collaborations, most agree, is CERN, Europe's particle physics lab. Soon after the Second World War, a group of prominent physicists, including Pierre Auger, Isidor Rabi, Eduardo Amaldi, and Lew Kowarski, bullied, coaxed, and cajoled European governments and the continent's physicists into supporting an international particle physics lab. The aim was both to rebuild European science and to foster international cooperation. In February 1952, 11 nations signed up to the provisional CERN and soon four sites were under consideration: Geneva, Copenhagen, Paris, and Arnhem in the Netherlands.
A site-selection committee began visiting the sites prior to a meeting of the provisional CERN council in October 1952. By this time, Paris had slipped in the rankings because it was considered too big, too expensive, and plagued by labor strikes. Copenhagen was strongly opposed by the French. Geneva made a strong case as an international city: home of the defunct League of Nations, and with good tax and customs terms. Reportedly, on the day the selection committee visited Arnhem, it was pouring with rain. The panel found a town with only two hotels, no university, no international school, and only a few foreign newspapers at the train station newsstand. At the council meeting in October, the delegations lined up behind Geneva.
The next hurdle was Swiss public opinion. Eastern bloc countries had declined to join the project, and communist politicians in Switzerland exploited the resulting Western bias. They claimed that the lab would become part of the U.S. atomic system, controlled by bomb manufacturers. A heated debate in the Geneva state council spilled out into fistfights in the corridors. Voters in the Canton of Geneva, fearing the health effects of radiation and a threat to Swiss neutrality, petitioned for a referendum on CERN, to be held on 29 June 1953. In the run-up, physicists made a hectic round of speeches and rallies—the city was abuzz with scientific debates. On the day, only 7332 voted against the lab—fewer than had signed the original petition—and 16,539 voted in favor. On 1 July 1953, the provisional council voted CERN into existence.
The center soon became a model for other cross-border collaborations: the Institut Laue-Langevin (ILL, a neutron source), the European Molecular Biology Laboratory (EMBL), the European Space Agency (ESA), and the European Southern Observatory (ESO). Relatively few sparks flew in the discussions over siting these organizations. ESA is headquartered in Paris, but has other facilities in all its major funding countries apart from the United Kingdom. ESO has its base in Garching, Germany, but its telescopes are all in Chile. “Everyone is happiest when the [location] issue doesn't come up,” says the international official, such as when the best site is not in one of the funding countries.
But such harmony grew increasingly difficult to maintain as politicians became increasingly interested in scientific facilities for the international prestige they brought and the money they injected into local economies. In the mid-1970s, European researchers identified the need for a large synchrotron radiation source, a provider of intense laserlike x-rays for physicists, materials scientists, and molecular biologists. By the early 1980s, many countries had expressed interest in hosting the machine but it boiled down to horse-trading between the main backers, France and Germany. According to CERN physicist Horst Wenninger, President François Mitterrand and Chancellor Helmut Kohl decided the issue over a breakfast cup of coffee: The site for the European Synchrotron Radiation Facility (ESRF) would be Strasbourg on the French-German border.
But in 1984, researchers and politicians in the French city of Grenoble began agitating for a rethink. According to current ESRF director Bill Stirling, the then ILL director Brian Fender had suggested a vacant site next door to his facility to build on synergies and common services. Grenoble is also home to a number of French national research centers, and prominent scientists lobbied Mitterrand and other politicians. With elections looming, Mitterrand struck a new deal with the Germans. “They were furious in Strasbourg,” Stirling says. But ESRF's troubles weren't over. The geology of the site was not ideal, and it was surrounded by vibration-causing roads and rivers. After errors in construction, the concrete slabs supporting the beam lines had to be relaid. But after its difficult birth, the world's first third-generation synchrotron was a great success.
Since ESRF, the movement to build large pan-European labs has faded. These days, it is more common for governments to beef up an existing national lab with new facilities and recruit international partners to help shoulder the burden. Germany is currently starting construction on two such examples: the XFEL x-ray laser at its DESY particle physics lab near Hamburg and the Facility for Antiproton and Ion Research (FAIR) at the GSI heavy ion research lab at Darmstadt.
## The bigger they come …
In recent years, scientists' ambitions have increasingly taken on a global scale, and as the budgets get bigger, the stakes get higher. The most ambitious project, and the one that really tested the powers of diplomacy, was ITER, an experiment designed to prove fusion is a viable source of power for humankind (Science, 13 October 2006, p. 238).
The ITER project was started in the mid-1980s. After a global design effort, a redesign, the departure of some members, and the arrival of others, the delegations from six partners—China, the European Union (E.U.), Japan, Russia, South Korea, and the United States—gathered in Washington, D.C., in December 2003 to choose between two candidate sites and sign the agreement that would set the construction ball rolling, at a total cost of some $12 billion. “The higher the stakes, the more difficult the decision is,” says Achilleas Mitsos, the E.U.'s former director general of research.
The political atmosphere at the Washington meeting could not have been worse. The E.U.'s proposed site was at Cadarache in southern France, and relations between France and the United States were subzero following France's opposition to the Iraq War, which had begun earlier that year. According to Mitsos, who was the E.U.'s chief negotiator, the United States was determined to get a result in Washington and was unambiguously in favor of Japan's proposed site, Rokkasho. “Clearly, the game was not going to be easy,” Mitsos says.
Despite enormous pressure, the E.U. delegation played the long game and convinced the other partners that further technical studies of the two sites were needed. Those studies still failed to signal a clear winner, although European researchers asserted that Rokkasho's position in northern Japan had too high a risk of earthquakes, whereas the Japanese charged that Cadarache was too far from the coast and it would be impossible to move large components that far by road. Japan upped the stakes by offering to pay not the required 40% host contribution but 50%. The E.U., after much handwringing, followed suit.
The E.U. negotiators realized that in order to win they had to come up with a face-saving formula for the loser. The E.U. opened direct discussions with Japan on a set of extra fusion-science facilities to be built in whichever country did not get the main reactor. Negotiations over this “broader approach to fusion” continued in a theoretical fashion through the second half of 2004 and into 2005—Mitsos says he traveled to Tokyo twice a month while other officials shuttled between other capitals. “Russia and China every day became more pro-Cadarache, and the U.S. and Korea every day became less insistent on Rokkasho,” he says. Finally, in June 2005, Japan agreed to back Cadarache. “The broader approach was the deciding factor. It allowed Japan to not come out as the loser,” Mitsos says.
How will the next megaproject avoid the pitfalls that ITER stumbled on? “We're trying hard not to duplicate ITER,” says Barry Barish, head of the global design effort for the ILC project, but “if there's a process, I don't know what it is.”
The ILC is the next big machine on particle physicists' shopping list. Researchers around the world are currently working on a detailed design for the machine and they've done some testing of “sample sites” in the United States, Europe, and Japan. “We're very early in the process, but probably our biggest lesson from ITER is to avoid the ‘all or nothing’ situation,” Barish says. Although the machine has to be in one place, its high-tech components will be designed, built, and tested at sites across the globe, and it will be managed and governed as a global facility.
Drawing another lesson from ITER, ILC's funders have become actively involved in the planning, even at this early stage. Ian Halliday, former head of the U.K.'s Particle Physics and Astronomy Research Council, helped set up Funding Agencies for the Linear Collider (FALC), which, he says, will allow interested parties to “talk about what everyone wants, identify problems early on, and learn how everyone's funding works.” FALC has already acted to smooth out tensions over issues, such as whether to use superconducting magnets in the accelerator or conventional technology, and who should lead the design effort. “It's a gradual process. We might end up without a shootout, but it's in the lap of the gods,” he says.
Experts in such international negotiations dismiss the idea that there is some magic formula for resolving disputes. “There isn't such a thing,” says Stefan Michalowski, executive secretary of the Organisation for Economic Cooperation and Development's Global Science Forum, a talking shop for senior scientists and science administrators. “Don't try to create general principles,” he says, but at a certain stage in a project's planning, “get everyone to agree on the rules.” He cites the case of the International Neuroinformatics Coordinating Facility (INCF), a small collaboration for which he was asked to head the site selection committee. All 15 member countries agreed on the criteria for selection beforehand. His committee worked through the process and made its recommendation. “Not everyone was happy, but no bones were broken and the losers got over it.”
Although there may not be a magic formula, some sort of oversight authority could play a role. “The only thing that will make a difference is a substantial, European-level central fund for facilities,” says Peter Tindemans, spokesperson for the European Spallation Source, a neutron source that has been on the drawing board for more than a decade and will soon be choosing a site. E.U. officials have been thinking along similar lines. When they proposed plans for the latest tranche of the multiyear Framework research program, it contained funds to pay for as much as 20% of the construction cost of pan-European projects. E.U. officials “could participate to provide a package deal, come up with a plan to link projects, and allow everyone to have a stake,” says Mitsos. He adds that they even drew up a table, laying out details of funding and where each future facility would go so that all countries got a fair division of spoils.
In the budget negotiations last year for the seventh Framework, the funds for infrastructure were slashed and the program can now only help out with the preparatory stages of projects. But Mitsos believes that, in Europe at least, the E.U. will eventually take on the role of dealmaker and guardian of fairness in international projects. “The possibility to draw such a table exists. I'd be surprised if we didn't try again.” As for global facilities, they'll have to continue to make up the rules as they go along.
9. ALZHEIMER'S DISEASE
# Fresh Evidence Points to an Old Suspect: Calcium
1. Jean Marx
Proteins known to contribute to Alzheimer's pathology have been linked to disturbances in calcium ion regulation that could underlie neuronal death in the disease
Imagine that police discover hundreds of dead bodies over the course of a year and the same suspicious-looking man is standing near each one. A strong circumstantial case for murder, of course. But given that the exact cause of death is uncertain in each case and that no one witnessed the suspect with any obvious weapon, prosecutors would still have a hard time convicting him.
That's essentially the circumstance facing Alzheimer's disease researchers. For years, they've thought that the protein β-amyloid causes the neurodegeneration underlying the fatal illness, but they remain unsure about how it kills brain cells. Now, the mystery may be beginning to unravel.
New evidence supports an old, but somewhat neglected, idea: that β-amyloid, perhaps by forming channels in neuronal membranes, slays brain cells by making them unable to regulate their internal concentrations of ions, particularly calcium ions. Such changes can be “ominous,” says Charles Glabe of the University of California, Irvine (UCI). “You just can't go around punching holes in membranes” without endangering the neuron.
But β-amyloid is only part of the emerging picture. Two additional suspects, known as presenilin 1 and presenilin 2 (PS1 and PS2), have also been linked to Alzheimer's pathology because mutations in their genes can cause the disease. Evidence now indicates that these proteins, too, normally help maintain calcium ion concentrations in neurons and that the disease-causing mutations disrupt this function.
If so, this would be a new role for the presenilins, which were previously shown to contribute to Alzheimer's pathology by clipping β-amyloid out of a larger precursor protein called APP. But if a calcium imbalance does in fact cause neuron death in the disease, a new therapeutic strategy may be possible. “You might block calcium flux as a way of preventing neurodegeneration,” says Sam Gandy, an Alzheimer's researcher at the Mount Sinai Medical Center in New York City.
The idea that calcium overload might be the final insult that finishes off brain neurons in Alzheimer's emerged in the mid-1980s, mainly from a hypothesis put forward by Zaven Khachaturian, then director of the Alzheimer's program at the National Institute on Aging (NIA) in Bethesda, Maryland. Khachaturian, who now heads up the Lou Ruvo Brain Institute and Keep Memory Alive in Las Vegas, Nevada, says that he wanted researchers to focus more on finding the underlying mechanisms of neurodegeneration rather than just describing the brain pathology.
At about the same time, however, much of the Alzheimer's field began concentrating on β-amyloid as the likely nerve cell killer, in part because it's found in the abnormal plaques that stud the brains of Alzheimer's patients. Even more convincing evidence came when researchers found that mutations in APP cause an early onset form of the disease.
Then in the early 1990s, Nelson Arispe of the Uniformed Services University of the Health Sciences in Bethesda, Maryland, and his colleagues provided a possible link between β-amyloid and the calcium hypothesis. When they exposed artificial membranes designed to resemble the cell membrane to β-amyloid, the protein formed channels in the membrane. “Those channels were very particular,” Arispe says. “They only permitted the flow of cations [positively charged ions],” such as calcium, into the cell. That fits with numerous observations over the years that exposing nerve cells in culture to β-amyloid causes an increase in their internal calcium ion concentrations.
More recently, Arispe and his Uniformed Services University colleague Olga Simakova provided further support for the idea that calcium disturbances underlie β-amyloid's toxic effects. They found that application of β-amyloid to nerve cells maintained in lab cultures produced an immediate rise in intracellular calcium concentrations followed by the death of the cells. Both effects, they reported in the 9 May 2006 issue of Biochemistry, could be inhibited by a peptide they designed to block β-amyloid calcium channels.
Arispe isn't alone in reporting that β-amyloid seems to form ion channels. In 2005, Jorge Ghiso of New York University in New York City, Ratnesh Lal of the University of California, Santa Barbara, and their colleagues found that β-amyloid, as well as several other proteins that produce similar deposits in various tissues, form channels in artificial membranes.
Yet not everyone is persuaded by the channel evidence. Glabe and his colleagues find that β-amyloid increases the permeability of both artificial and normal cell membranes, but this, he says, doesn't seem to depend on the formation of ion channels. In this case, β-amyloid's effects weren't specific; the protein increased the cross-membrane movements of both negatively and positively charged ions.
Glabe proposes that β-amyloid causes a generalized thinning of neuronal membranes. If that happens, he says, a cell would become leaky and have to work a lot harder to maintain normal internal ion concentrations. This could have a number of harmful effects, including the generation of reactive oxygen species, a normal but nonetheless cell-damaging byproduct of metabolism.
The discrepancies between the two sets of observations remain unresolved. “I always assume we are both right. We're just not doing the same experiments,” Glabe says. For the time being, other Alzheimer's researchers have taken something of a “wait-and-see” attitude about whether β-amyloid forms membrane channels for calcium ions. “No one has proved it with rigor that would allow it to become dogma, but no one has disproved it, either,” says Gandy.
But there is another way in which β-amyloid may increase calcium entry into neurons: by altering the activity of the receptors that respond to stimulatory signals. Earlier this year, a team led by William Klein of Northwestern University in Evanston, Illinois, found that β-amyloid increases the calcium influx that occurs when the neurotransmitter glutamate activates the so-called NMDA receptor. Intriguingly, the researchers also found that memantine, a drug designed to inhibit NMDA receptor activity that has been approved for treating Alzheimer's, blocks this action of β-amyloid, an indication that drugs that restore calcium balance in neurons might indeed be therapeutic options for the disease.
## From the inside
Whereas β-amyloid apparently affects calcium entry through the outer cell membrane, the presenilins exert their effects on an interior membrane. Calcium ions not only enter the cell from outside when a neuron is stimulated, but they are also released into the cytoplasm from internal stores, primarily from a membrane-bound compartment called the endoplasmic reticulum (ER). That's where the presenilins, which are located in the ER membrane, come in. “Presenilin mutations somehow cause a bigger calcium release from the ER when glutamate stimulates a cell,” says Mark Mattson, whose team at the NIA Gerontology Research Center in Baltimore, Maryland, is one of several who made the finding.
This might be because calcium concentrations in the ER are elevated to begin with in cells bearing presenilin mutations. What causes that excessive accumulation has been unclear, but the answer may lie in new work from Ilya Bezprozvanny of the University of Texas Southwestern Medical Center in Dallas, Bart De Strooper of the Flanders Interuniversity Institute for Biotechnology (VIB4) and K. U. Leuven in Leuven, Belgium, and their colleagues.
In experiments done over the past year or two, both on artificial membranes and on cultured nerve cells, they found that the normal presenilins are membrane channels that allow calcium ions to leak passively from the ER into the cytoplasm. However, presenilins carrying Alzheimer's mutations no longer function as calcium leak channels. Presenilin mutations “overload the ER with calcium, and you get excessive release on [nerve cell] stimulation,” Bezprozvanny proposes. To Mattson, this sounds plausible. These results, he says, “seem to provide a molecular explanation for what we saw.”
Other researchers, however, contend that presenilin mutations alter calcium handling in a different way. Frank LaFerla and his colleagues at UCI have looked at how presenilin mutations alter calcium release from the ER through two previously identified ion channels, known as the ryanodine and IP3 channels because they are activated by those chemicals. “When you stimulate either of them, you get a lot more calcium release in [PS] mutant cells than in normal cells,” says LaFerla.
Through studies of mice genetically engineered with PS1 and other genes to develop Alzheimer's-like brain pathology, LaFerla, Grace Stutzmann, then a postdoc in his lab, and their colleagues found changes in the ER's handling of calcium occur in neurons even before the animals' brains developed the plaques and tangles characteristic of Alzheimer's. This finding, reported in the 10 May 2006 issue of the Journal of Neuroscience, indicates that the calcium changes might play a primary role in triggering neurodegeneration.
Some of the increased calcium release from the ER in PS-mutant cells may be due to greater expression of the ryanodine receptor, the LaFerla team has found. In as yet unpublished work, the researchers also observed that the presenilins are needed for the normal operation of the SERCA pumps that move calcium ions back into the ER after a neuron has fired. Not yet known is whether PS mutations affect SERCA pump operation. But if they increase it, the ER could become loaded with excess calcium ions.
Mutations in the APP and presenilin genes together account for less than 10% of all Alzheimer's cases. The other 90%, mostly of the late-onset variety, fall into the so-called sporadic category, meaning that their causes aren't known. There are, however, indications that changes in calcium handling by neurons could be contributing to Alzheimer's susceptibility as we grow older. Some of this evidence comes from Olivier Thibault, Philip Landfield, and their colleagues at the University of Kentucky College of Medicine in Lexington.
In work reported early last year in the Journal of Neuroscience, these researchers looked at several indicators of calcium function in neurons obtained from the brains of rats at ages ranging from 4 to 23 months. Beginning at 12 months, which is middle age for rats, the neurons underwent several changes that should make them hyperexcitable, a response similar to that seen in cells with presenilin mutations. Changes such as these “could conceivably set the stage for Alzheimer's by making neurons more vulnerable to further insults,” Landfield says. Those insults could include the increase in β-amyloid deposits that also occurs with age or membrane damage caused by reactive oxygen species.
Proving that similar calcium changes occur in humans could be difficult as researchers can't perform the same experiments on human brain neurons that Thibault and Landfield performed on rats. Consequently, the acid test of the calcium hypothesis in Alzheimer's disease will likely await possible clinical trials of drugs that inhibit calcium movements into the cytoplasm. That's “the only way to test cause and effect in sporadic Alzheimer's,” Bezprozvanny says. Although researchers are beginning to test inhibitors of calcium release on cells in culture and animal models of Alzheimer's, it's still too early to tell whether they will find agents suitable for trials in humans.
10. FORENSIC SCIENCE
# Dirty Science: Soil Forensics Digs Into New Techniques
1. Krista Zala*
1. Krista Zala is a freelance writer in Los Angeles, California.
Geologists, chemists, and other scientists are developing better ways of matching soil samples to help catch and convict criminals
A woman and her mother are reported missing from a township east of Adelaide in South Australia. The next day, the woman's car is found 160 kilometers away with a dirty, bloody shovel in the trunk. When her son shows up in a nearby town and tries to get assistance for the broken-down car, police arrest him. But the suspect refuses to talk, and with no bodies to provide evidence or even prove someone is dead, the desperate police seek help.
They call in a team of forensic soil scientists to analyze the shovel. The minerals, acidity, and moisture level of the soil on the shovel lead the team to suggest that the police search a gravel quarry in the Adelaide Hills, where days later a fox uncovers a body. The next day, the second body is found near the first. The son confesses to killing his mother and grandmother and is sentenced to 18 years in prison.
Although it could be a television episode of CSI, the case was real—and so were the soil scientists, who now work at the Centre for Australian Forensic Soil Science (CAFSS) in Adelaide, created in 2003 following the team's successful intervention in this 2000 double homicide. CAFSS analyzes soil for investigations from murder to environmental pollution, helps train new forensic scientists, and conducts research on new soil-analysis techniques. It has become well known among Australian detectives. “Ten years ago, police wouldn't have wanted to talk to us,” says Rob Fitzpatrick, the center's director. “Now we can't cope with the number of cases.”
Soil evidence has been used to link criminals to crime scenes for more than a century. But in Australia and elsewhere, the recent automation of techniques and the ability to get information from smaller samples have made soil forensics an increasingly popular tool in criminal investigations. Scientists are now also exploring new ways of applying microscopy to dirt and of analyzing the plant waxes and microbial DNA within it.
Traditionally, soil forensics has been vulnerable to legal attack by defense lawyers because expert witnesses can testify only to whether samples are similar, versus the more absolute nature of a DNA or fingerprint match. Although some protocols are well-established—a soil sample is always sealed and locked, for example, and at least two people must be present while it's being analyzed—the field has yet to settle on the best means to analyze each soil type, explains Lorna Dawson of the Macaulay Institute in Aberdeen, U.K. One project aimed at standardizing old methods and validating new ones is the SoilFit project, led by Dawson and her colleagues. The effort also aims to provide a systematic database of soil fingerprints across the United Kingdom.
Reflecting the growing interest in applying new scientific techniques to soil, forensics researchers in Perth, Australia, last year hosted the first international conference on the topic, drawing several dozen attendees. This month, a second meeting in Edinburgh, U.K., is expected to bring together between 100 and 200 researchers, crime investigators, and forensic experts. “There's a lot of information in soil,” says Dawson.
## Fertile ground
Analyzing soil samples has a distinguished history in literature and real life. Sherlock Holmes uses soil to deduce Dr. Watson's peregrinations based on the dirt of his shoes in the 1890 work The Sign of the Four. A decade later, in the first known instance of soil evidence being used in a criminal investigation, German chemist Georg Popp helped authorities obtain a confession in a murder case near Freiberg, Germany. Popp connected dirt from the trouser cuffs and fingernails of the main suspect to the crime scene.
Matching soils is no small task. Soil is dynamic and part alive: A teaspoonful holds more than a million organisms, and soil microbes are constantly dying out or exploding in number. Water also leaches away compounds and introduces others as it trickles through. And soil is sensitive. Disturbing dirt—even by scooping a sample—changes it: Drying it alters its chemistry, exposing it to wind rounds out sharp edges on grains, and sealing it, such as in an evidence bag, can prompt a flurry of fungal growth. Such delicacy means that soil can only be pronounced in court as similar to or dissimilar from a possible source. Still, combining a few dirt characteristics can offer a compelling case for, say, linking a sample on a shoe to one in the back garden.
For the past few decades, soil scientists have used a variety of tools in criminal investigations. Ground-penetrating radar is able to pinpoint burial sites for individual bodies as well as mass graves. X-ray diffraction can uncover the minerals of the soil, infrared spectrometry determines the chemical pedigree, and analysis of diatoms and pollen provides biological clues to dirt's provenance.
Not all of those techniques can be applied to a given soil source, however. And others often require a greater sample size than the crime scene investigators can produce—hence the push for new, robust ways that require less dirt with which to work. As a visiting research fellow at CAFSS a few years ago, geologist Duncan Pirrie of the University of Exeter, U.K., saw how an automated scanning electron microscope could boost the availability and effectiveness of soil forensics. About 20 minerals occur in most soils, he explains, but what makes each sample identifiably distinct is the relative abundance of each mineral.
The CAFSS microscope, called QEMSCAN, finds both the mineral composition and its relative abundance from just 10 mg of dirt—50 times less than previously required. A similar instrument was originally developed for mining applications by Australian scientists, and the design was then adapted for use in forensic applications. QEMSCAN will analyze in 1 hour what would take a mortal days, and the scope's objective analysis triumphs over simple visual analysis of soils by people.
For a murder case in 2003, Pirrie hauled soil evidence from the United Kingdom to Australia for analysis, then promptly set up a QEMSCAN at his own university. Pirrie, who also conducts research on climate change in cretaceous Antarctica and on the effects of mining on coastal zones, says his lab is the only one in Europe with such a forensic scope. Today, the lab is called on about once a month to analyze traces of soil for murder and assault cases.
Several new soil-analysis techniques remain a topic of lab research rather than court cases—at least for now. Organic substances among a soil's minerals can also offer an opportunity to match samples. One of Dawson's projects funded under the Soil-Fit umbrella looks at profiling soils by the mementos plants leave behind. Plants have a waxy covering to keep them waterproof. The mix of organic compounds—alkanes, acids, sterols, and other alcohols—is unique to each species and persists in the soil, sometimes for thousands of years. Dawson and colleagues are now refining a means of extracting the waxes to identify plants.
Jacqui Horswell, a soil microbiologist at the Institute of Environmental Science and Research in Porirua, New Zealand, is pursuing another means of matching soil samples: DNA. Millions of species of fungi and bacteria form complex communities in dirt, yet most remain unknown to scientists. Fewer than 1% of bacterial species can be cultured in the lab, she explains. But by applying a technique that chops DNA at specific target sequences and analyzes the length of the segments, Horswell can profile most of the bacteria in 200 mg of soil. The method doesn't identify individual species. Instead, without the need to culture any microbes, it produces a DNA signature for the organisms within the soil. Horswell and her research team published their first DNA soil profiles in 2001, and they hope that in another 5 years their database of soil DNA signatures will be large enough to be useful in court.
## From science to law
Indeed, getting a new forensic technique established well enough for courts to recognize it can be a challenge. The SoilFit project, started in 2005 with funding from the U.K.'s Engineering and Physical Sciences Research Council (EPSRC), is one effort to give soil-matching more reliability as evidence. For prosecutors to better survive legal challenges in court, “we need a comprehensive survey of soil types in the United Kingdom” to substantiate the conclusions of an expert witness, says Derek Auchie, director of undergraduate law programs at Aberdeen Business School. To that end, EPSRC gave Dawson's team a £350,000 grant to analyze all feasible combinations of soil types—such as loams, peat, and alluvial soils—and vegetation such as grassland, heather, and forest. To date, they have tested an array of analysis techniques on all 120 combinations and are now comparing each technique's accuracy to work out which ones work better for which soil combinations.
EPSRC funded SoilFit under its Crime Initiative, which seeks to bridge crime-fighting services and academic research to benefit U.K. citizens. The project is “developing a community of researchers active in [fighting] crime,” says Peter Hedges, head of EPSRC's Economy, Environment and Crime Team. Dawson predicts that the Soil-Fit database will be ready for detectives and prosecutors in 2008. Sherlock Holmes would be pleased.
11. ARCHAEOLOGY
# In Search of the World's Most Ancient Mariners
1. Michael Balter
Researchers debate the capabilities of the first human voyagers, who traveled the waters of Southeast Asia at least 45,000 years ago
CAMBRIDGE, U.K.—We humans are terrestrial animals, yet we spend a lot of time gazing wistfully over bodies of water. We flock to the seashore or the lakeside at the slightest sign of mild weather and celebrate the romance of the sea in art and literature. Early seafaring was central to the spread of civilization, and today thousands of vessels ply the world's oceans, searching for fish and hauling billions of tons of cargo.
Despite the importance of seafaring to culture, however, archaeologists are not sure how, when, and why humans first ventured into the oceans. The earliest known boats, hollowed out logs found in the Netherlands and in France, are at most 10,000 years old. And the earliest indirect evidence for sea crossings in Europe—human occupation of Cyprus and the Greek island of Milos—dates to only 12,000 to 13,000 years ago. Yet ancient archaeological sites in present-day Australia, Indonesia, and other Southeast Asian islands suggest sea crossings at least 45,000 years ago, soon after modern humans first left Africa.
At a meeting here last month,* three dozen archaeologists and maritime historians sifted through the evidence for seafaring through the ages. They debated, sometimes sharply, whether the earliest mariners crossed the sea purposely or by accident. “There is a danger in accepting either of these extreme positions,” says William Keegan, an anthropologist at the Florida Museum of Natural History in Gainesville. “But I have no problem believing that people who were exploiting coastal resources had developed the ability to cross the water gaps in question by 50,000 years ago.”
The meeting also heard dire warnings that rising sea levels—which are already at least 50 meters higher than when modern humans first took to the oceans—might put evidence crucial to resolving these questions out of reach. “There are drowned terrestrial landscapes that were occupied by our ancestors,” says archaeologist Jon Erlandson of the University of Oregon in Eugene. “But we know almost nothing about them.”
## Blown about in a bamboo boat?
Although most archaeologists have assumed that seafaring was invented by cognitively advanced modern humans, one earlier hominid seems to have jumped the gun. In 1998, a team led by archaeologist Michael Morwood of the University of New England in Armidale, Australia, dated stone tools on the Indonesian island of Flores to 800,000 years ago, when Homo erectus was known to inhabit the Southeast Asian mainland. The occupation of Flores almost certainly required a sea crossing, and Morwood suggested at the time that the cognitive abilities of H. erectus might be “due for reappraisal” (Science, 13 March 1998, p. 1635.)
Yet the lack of other evidence anywhere near so early suggests to many researchers that this was a fluke that did not require technology. Perhaps a small band of hominids was blown out to sea on floating vegetation, as occasionally happens to other mammals who then found island populations. The possibility that H. erectus evolved in isolation on Flores for thousands of years, eventually becoming the tiny H. floresiensis, a.k.a. the Hobbit, supports the rarity of traveling to or from Flores.
“Flores is the exception that proves the rule in terms of when seafaring really began,” says Atholl Anderson, a prehistorian at the Australian National University (ANU) in Canberra. Erlandson agrees: “Otherwise, H. erectus should have colonized Australia and the surrounding islands.” Yet although the trek to Australia could be accomplished by relatively short hops across a multitude of islands, there is no evidence that H. erectus ever made that journey. Modern humans were the first hominids in Australia, arriving no earlier than 60,000 years ago, and many archaeologists are skeptical of dates earlier than 45,000 years. Even then, it's hard to differentiate true seafaring from a bit of boating gone wrong, says archaeologist Geoff Bailey of the University of York in the U.K. “It remains an open question whether the move into Australia was a purposeful, high-tech exercise in skilled navigation or a low-tech process of almost accidental drift that resulted in the opening up of a maritime universe.”
Both viewpoints were in evidence at the meeting. In her talk, ANU archaeologist Susan O'Connor argued that modern humans did not necessarily require sophisticated seafaring skills to colonize Australia and nearby islands. She proposed that early humans traveled by simple bamboo rafts—probably already used to explore rivers and estuaries—then drifted out to sea and were blown about by the monsoon.
And island hopping was easier in the past. About 45,000 years ago, sea levels were roughly 50 meters lower than they are today. As a result, Australia, New Guinea, and Tasmania formed a single continent known as Sahul, whereas Borneo, Java, and the Malay Peninsula were joined together in a continental shelf called Sunda (see map). Although the earliest dates for modern human occupation of Sahul are controversial, excavations on several islands north of Sahul have produced radiocarbon dates of up to 45,000 years ago—including O'Connor's own excavations at Jerimalai Cave on East Timor, which recently clocked in at 42,000 years. If Sahul was colonized as early as 60,000 years ago, O'Connor contended, then humans' fairly leisurely spread supports a more accidental than purposeful journey.
O'Connor concluded that when the colonizers did venture farther out to sea, traveling 180 kilometers to the islands of Buka by 28,000 years ago and 230 kilometers to Manus by 21,000 years ago, their earlier seafaring experience might have “preadapted” them to later innovations in boating technology, including larger vessels made of wood and the use of sails. Nevertheless, O'Connor and others stressed, there is no direct archaeological evidence for the use of sails that early, indeed none at all before about 7000 years ago in the Near East.
## The short chronology
O'Connor's scenario, which archaeologists call the “long chronology” for the colonization of island Southeast Asia, was challenged at the meeting by archaeologist James O'Connell of the University of Utah in Salt Lake City. In the last few years, O'Connell, together with archaeologist Jim Allen of La Trobe University in Bundoora, Australia, has argued from a detailed analysis of radiocarbon dates for a “short chronology” that puts the occupation of Sahul no earlier than about 50,000 years ago. He pointed out that by 45,000 years ago modern humans had colonized a number of islands between Sunda and Sahul, called the Wallacean Archipelago, which stretched at least 1000 kilometers even when sea levels were at their lowest. Reaching many of these islands required sea crossings of 30 to 70 kilometers, sometimes against the currents. Most animals from Asia never achieved these crossings, implying that humans must have used technology to do it. That 5000 years of colonization, O'Connell said, represented a relatively short “archaeological instant.” Rather than drifting, O'Connell argued, early seafarers must have had “marine-capable watercraft” and keen navigation skills.
To bolster his argument, O'Connell pointed out that remains of open-ocean fish, including tuna and sharks, have been found at numerous island sites dating more than 40,000 years ago, an indication that the colonizers already had boats capable of deep-sea fishing.
O'Connell also cited recent demographic simulations by anthropologist John Moore of the University of Florida in Gainesville and others, suggesting that successful colonizations require a minimum founder group of 5 to 10 women of reproductive age and a similar number of men. “The odds that the members of a small group cast adrift by chance, then tossed up on an isolated shore, could generate a successful population are long indeed,” O'Connell concluded.
The conflicting talks drew varied reactions. “My tendency would be to side with [O'Connell],” says Keegan. “For me the issue is what was socially possible. Humans live in groups, and successful colonists tend to reproduce those groups. They have a better chance of survival if they can maintain contact with their parent community,” for example, by making return sea voyages back home. But Anderson counters that the relatively mild, tropical conditions around Sahul 45,000 years ago and the abundance of species of giant, wide-diameter bamboo, perfect for making rafts, ensured that accidental voyagers would survive at sea. “If people were habitually using bamboo [rafts] to explore coral reefs and lagoons, and if they did so as family groups, then the chance of an accidental passage was always there.” Moreover, Anderson says, even such simple craft were capable of carrying a “viable colonizing group of 5 to 10 people” and could be blown across the sea “within a few days.”
Bailey notes that “island Southeast Asia offers all the right conditions for just such a gradual process,” including warm seas and “lots of very productive marine resources like fish, sea mammals, turtles, and shellfish, which would have encouraged exploration of offshore islands.”
Indeed, Bailey suggests that the special conditions in Southeast Asia might explain why the earliest evidence of seafaring is there rather than in the Mediterranean, where seafaring only shows up about 13,000 years ago—even though modern humans occupied southern Europe beginning at least 40,000 years ago. “The Mediterranean offers a stark contrast,” Bailey says. “When it comes to marine fertility and productivity of offshore resources, it is very nearly at the bottom of the world league, with little tidal movement … and temperature gradients that trap nutrients on the seabed below the zone of photosynthesis.” Erlandson agrees: “One of the take-home messages of the meeting was that the development of seafaring capabilities was not universal, but was contingent on a variety of ecological and cultural conditions.”
The other take-home message, Erlandson says, is that the current rise in sea levels caused by global warming, and the accelerated erosion of coastlines, “is threatening our best source of information about such conditions.” Because ancient boats would have been launched from shores now underwater, the best chance of finding evidence for them lies in exploring coastal sites where the ancient shoreline is near the present one, for example, where the land falls off steeply into the sea. Yet most of these sites, Erlandson says, “are actively eroding and countless others have already been destroyed. Enormous amounts of information will be lost in coming decades unless we find, date, and excavate them.”
• Global Origins and Development of Seafaring, Cambridge, U.K., 9–12 September 2007.
|
{}
|
Margin Charting
A top investment site online said few investors understand margin as well as they should. That includes me.
Drop this code with one paste into an existing algorithm at the end of initialize() to chart any overnight margin where fees can run high.
schedule_function(margin_overnight, date_rules.every_day(), time_rules.market_open())
context.margin_ttl = 0
context.mdays = 0.0    # a count of days for margin average

def margin_overnight(context, data):
    ''' Chart overnight margin, current, total and average per day.
    '''
    if context.mdays:
        margin_one_night = abs(min(0, context.portfolio.cash))  # Margin held overnight
        context.margin_ttl += margin_one_night
        margin_average = context.margin_ttl / context.mdays
        record(Margin = margin_one_night)        # Last night's carried margin
        record(Mrgn_Ttl = context.margin_ttl)    # Total overnight margin
        record(Mrgn_Avg = margin_average)        # Average overnight margin
        record(PnL = context.portfolio.pnl)      # Actual profit
    context.mdays += 1.0
Why does this matter? Realistic evaluation and comparison of algorithms can only be done with knowledge of how much margin they use. If you start with $1 and buy a share of SPY every day, your returns will appear astronomical, but the returns curve in that case is unfortunately an illusion: most of the apparent profit would be stock value eventually owed back to the broker in cash, even without considering the fees. About 600% returns below. And what are the fees?

Margin | Interactive Brokers
https://gdcdyn.interactivebrokers.com/en/index.php?f=marginnew&p=overview1
Overnight Margin Calculations. Stocks have additional margin requirements when held overnight. For overnight margin requirements for stocks, click the Stocks ...

Adjusted Gross Margin - Investopedia
www.investopedia.com/terms/a/adjusted-gross-margin.asp
Adjusted gross margin goes one step further than gross margin because it includes these inventory carrying costs, which greatly affect the bottom line of a product's profitability.

Can anyone add some broker fees to the code?

# https://www.quantopian.com/posts/margin
def initialize(context):
    schedule_function(trade, date_rules.every_day(), time_rules.market_open(minutes=1))
    schedule_function(margin_overnight, date_rules.every_day(), time_rules.market_open())
    context.margin_ttl = 0
    context.mdays = 0.0    # a count of days for margin average

def margin_overnight(context, data):
    ''' Chart overnight margin, current, total and average per day.
        Intraday margin is ignored.
    '''
    if context.mdays:
        margin_one_night = abs(min(0, context.portfolio.cash))  # Margin held overnight
        context.margin_ttl += margin_one_night
        margin_average = context.margin_ttl / context.mdays
        record(Margin = margin_one_night)        # Last night's carried margin
        record(Mrgn_Ttl = context.margin_ttl)    # Total overnight margin
        record(Mrgn_Avg = margin_average)        # Average overnight margin
    context.mdays += 1.0

def trade(context, data):
    order(sid(8554), 1)
    record(PnL = context.portfolio.pnl)

There was a runtime error.

6 responses

Can someone who knows margin vet this? I added a fee and logging for a margin call. Maybe it can lead to a correct version.

schedule_function(margin_overnight, date_rules.every_day(), time_rules.market_open())
context.margin_ttl = 0
context.margin_cost = 0.0    # Fees for overnight margin, 3% ? Not sure.
context.mdays = 0.0          # a count of days for margin average

def margin_overnight(context, data):
    ''' Chart overnight margin, current, total, average per day and an assigned cost.
        Intraday margin is ignored.
    '''
    c = context
    if c.mdays:
        margin_one_night = abs(min(0, c.portfolio.cash))  # Margin held overnight
        c.margin_cost += margin_one_night * .03
        c.margin_ttl += margin_one_night
        margin_average = c.margin_ttl / c.mdays
        record(Night_Margin = margin_one_night)    # Last night's carried margin
        record(Ttl_Cost_Mrgn = c.margin_cost)      # Fee on overnight margin
        record(Avg_Mrgn = margin_average)          # Average overnight margin
        record(PnL = c.portfolio.pnl)              # Profit without margin
        # Profit if overnight margin were 3%. right? wrong?
        record(PnL_w_Mrgn = c.portfolio.pnl - c.margin_cost)
        #record(Ttl_Mrgn = c.margin_ttl)    # Total overnight margin
        if margin_one_night > .5 * context.portfolio.portfolio_value:    # anywhere close?
            log.info('margin call, margin {} vs 50% of portfolio = {}'.format(
                int(margin_one_night), int(.5 * context.portfolio.portfolio_value)))
    c.mdays += 1.0

I feel like this method is creating false margin calls on days when there are large gains while holding a long position. (EDIT: Though.. Maybe not.. I can't read the equations very well because I don't understand all the code yet. I just know I'm getting tons of calls on an algo that doesn't drawdown more than 30%. I don't understand how it could be generating calls if I haven't lost more than 50% of my original purchase price?)

When you purchase a long position on margin you only put up 50% of the value at purchase. As the market value increases your DEBIT stays the same. What changes as the market value changes is your equity. So Equity = Long Market Value - Debit Balance.

So, if I buy 1000 shares of Widget Corp at $20.00 a share for a total of $20,000 I will have a DEBIT balance of $10,000. (Reg T requires 50% for between days. You are also maintenance margin called if LMV hits 25% of your purchase price during intraday trading.) If the stock goes up say 20% I have a new LMV of $24,000. So my account now looks like this:

LMV of $24,000
Debit of $10,000
$14,000 of equity ($4,000 of EXCESS equity)

I could use the excess equity to purchase more securities. When you do that the debit created is 100% of the purchase price. So say I did that. My account would now look like this (I'll buy 166 more shares @ $24):

LMV of $27,984 ($24,000 + $3,984 of the new purchase using excess equity.)
Debit of $13,984 ($10,000 + $3,984 of new debit from the purchase using excess equity.)
Equity of $14,000 (No net change)
Excess equity of $6. (Equity - Debit)
Also, margin interest is ANNUAL so to get the true cost of margin you need to calculate the daily rate (Annual Interest Rate/365)=Daily interest rate.
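For example, here is a minimal sketch of how that daily rate could replace the flat 3%-per-night charge in the code above; the 6% annual rate is only an assumed placeholder, not any broker's actual schedule:

ANNUAL_MARGIN_RATE = 0.06                      # assumed annual rate, placeholder only
DAILY_MARGIN_RATE = ANNUAL_MARGIN_RATE / 365.0 # daily interest rate

# inside margin_overnight(), instead of c.margin_cost += margin_one_night * .03
margin_one_night = abs(min(0, c.portfolio.cash))        # overnight debit balance
c.margin_cost += margin_one_night * DAILY_MARGIN_RATE   # one day of interest on that debit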
I think it may be hard to code this in a proper way without it getting entirely too complex. I feel like we would need to create order types. Three types.
Long (Cash) - No extra requirements
Long (Margin) - 50% due at purchase and 50% as a debit for Reg T. 25% Maintenance margin.
Short (Margin) - 50% due at purchase and 50% as a credit. Keep in mind that maintenance margin is 30% on shorts instead of 25%
I'm trying to think of ways to do this, but I'm still very new to this whole coding thing.
Here's a good reference for LONG margin.
http://www.dummies.com/test-prep/series-7/how-to-calculate-the-numbers-in-long-margin-accounts-on-the-series-7-exam/
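Not a full implementation of those three order types, but here is a minimal sketch of the long-side arithmetic described above, using the 50% Reg T and 25%/30% maintenance figures quoted in this thread (real broker house requirements differ, and the excess-equity/SMA bookkeeping is left out):

REG_T_INITIAL = 0.50    # 50% of a margin purchase due up front
MAINT_LONG    = 0.25    # 25% maintenance margin on long positions
MAINT_SHORT   = 0.30    # 30% maintenance margin on shorts (listed for reference, not modeled here)

def equity_long(lmv, debit):
    # Equity = Long Market Value - Debit Balance
    return lmv - debit

def maintenance_call_long(lmv, debit):
    # Maintenance call when equity drops below 25% of the current long market value
    return equity_long(lmv, debit) < MAINT_LONG * lmv

# Widget Corp example from above: 1000 shares at $20, half borrowed, stock up 20%
lmv, debit = 24000.0, 10000.0
print(equity_long(lmv, debit))              # 14000.0
print(maintenance_call_long(lmv, debit))    # False, no call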
Great vetting, so with that added info now I hope someone will write some code for modeling margin costs. Thanks WF.
I can help with equations and work flow for addressing this, however until I learn more about code I'm not very good at building something out myself. I currently work at a trade desk and am studying for my series 66 when I'm not at the desk. Hopefully soon I'll take that exam and then I can devote more time to learning more about python. If you want to prod me for any info let me know. I've been building strategies in excel for awhile now cause at work we have plugins that pull in our market feeds, but now I have to learn how to build in an "actual" language.
ALSO.. THANK YOU so much for your PvR tools. I use them on the regular now when I'm looking at algos. Your PVR tool has actually helped me catch a few double purchasing bugs in some of the code I was trying to build.
Hopefully someone into margin and python will roll up their sleeves and dig in. Good to hear PvR was helpful and thanks for mentioning it.
I think part of the problem might lie in the way Quantopian places the orders. No matter how I align my orders, Quantopian seems to put them in alphabetical order and process them. So, this results in buys before sells a LOT, which would cause the cash in my account to drop considerably for a moment. Does anyone know how to override this behavior?
|
{}
|
[XeTeX] \sloppy or \tolerance with fontspec
Sat Mar 7 20:21:27 CET 2009
On 7 Mar 2009, at 14:04, John Hughes wrote:
> When I compile a LaTeX document with XeLaTeX and the fontspec package,
> \sloppy, \sloppypar and \tolerance (set to any value) have no effect.
> The commands work fine with LaTeX, pdfLaTeX, and XeLaTeX without
> fontspec. I am using MikTeX 2.7 on Windows. Is this a feature or a bug
> in fontspec? Is there a way of changing the tolerance in XeTeX with
> fontspec?
I don't think fontspec touches \tolerance. With an example like:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\documentclass{article}
\usepackage{fontspec}
\setmainfont{Times New Roman}
\usepackage{lipsum}
\begin{document}
\lipsum[4]
\sloppy
\lipsum[4]
\end{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%
I see \sloppy having the expected effect (the first paragraph has an
overfull hbox; the second doesn't, but has some worse spacing).
Perhaps the particular combinations of text and fonts you were using
just didn't happen to come out any different with changed \tolerance?
JK
|
{}
|
# volume of the ellipsoid (triple integral) plz help =)
• Sep 29th 2009, 05:59 AM
sinni8
volume of the ellipsoid (triple integral) plz help =)
Find the volume of the ellipsoid
x^2/a^2 +y^2/b^2 +z^2/c^2 =1
by solving the triple integral AFTER having made the transformation
x = au, y = bv, and z = cw.
Thanks for ur help!
• Sep 29th 2009, 06:09 AM
Krizalid
This is a straightforward problem, and the answer is given below.
we have $dx\,dy\,dz=abc\,du\,dv\,dw$ and the triple integral is $\iiint\limits_{u^{2}+v^{2}+w^{2}\le 1}{abc\,du\,dv\,dw},$ and this is a unit ball, so the expected volume is $\frac43\pi abc.$
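For the step that usually causes trouble: the factor $abc$ is the absolute value of the Jacobian determinant of the substitution,

$\dfrac{\partial(x,y,z)}{\partial(u,v,w)}=\det\begin{pmatrix}a&0&0\\0&b&0\\0&0&c\end{pmatrix}=abc,$

so the integral is $abc$ times the volume of the unit ball $u^{2}+v^{2}+w^{2}\le 1,$ which is $\frac43\pi,$ giving $V=\frac43\pi abc.$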
|
{}
|
# Chapter 6 - Section 6.1 - Circles and Related Segments and Angles - Exercises - Page 286: 16b
TV$\approx$11.5
#### Work Step by Step
TV = 2RV
TV = 2$\sqrt {33}$
TV $\approx$ 11.5
|
{}
|