# Iosevka - an awesome font for Emacs
Before my foray into Emacs, I purchased applications like IAWriter (classic), Marked 2, and Texts (cross-platform Mac/Windows), and I have also tried almost all the recommended apps for longer-form writing. I am a fan of zen writing apps. In particular, the font and environment provided by IAWriter are conducive to focused writing. There are also apps like Hemingway that help check the quality of your writing.
Zen writing apps are so called because they offer a distinctive combination of font, background color, line spacing, and overall text width - all of which enable a streamlined and focused flow of words onto the screen. Any customisation required towards this end is possible in Emacs.
The Texts app has some nifty features (besides being cross-platform), but its font and appearance are not as beautiful as IAWriter's. Both IAWriter (classic) and Texts have minimal settings for further customisation. See the comparison below:
While everybody’s style and approach vary, there are many authors who swear by archaic text editors and tools that enable distraction free writing. One example is Tony Ballantyne’s post on writing tools, and several more examples are available in this blog post.
The next best thing to a clear Retina display on a MacBook Pro is a beautiful font face to take you through the day, enhancing the viewing pleasure and thus the motivation to work longer.
In Emacs, writeroom-mode can be customised to mimic IAWriter. In this regard, Iosevka is a great font to try. This old Emacs Reddit thread has many more suggestions. One post said of Iosevka: “it doesn’t look like much, but after a few hours it will be difficult to use any other font.” This is exactly what happened to me.
There’s still a lot of tweaking to be done with writeroom-mode, but this is certainly a workable result. My nascent configuration for writeroom-mode in Emacs is as follows (munged off the internet!). It’s remarkable how much was achieved with a few lines of code!
(with-eval-after-load 'writeroom-mode
  ;; Keybindings for adjusting the writing-area width from within writeroom-mode.
  (define-key writeroom-mode-map (kbd "C-s-,") #'writeroom-decrease-width)
  (define-key writeroom-mode-map (kbd "C-s-.") #'writeroom-increase-width))
---
If $y(x)$ satisfies the differential equation
$(\sin x) \dfrac{\mathrm{d}y }{\mathrm{d} x} + y \cos x = 1,$
subject to the condition $y(\pi/2) = \pi/2,$ then $y(\pi/6)$ is
1. $0$
2. $\frac{\pi}{6}$
3. $\frac{\pi}{3}$
4. $\frac{\pi}{2}$
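A quick solution sketch: the left-hand side is the derivative of the product $y\sin x$, so the equation integrates directly.
$$(\sin x)\,\frac{\mathrm{d}y}{\mathrm{d}x} + y\cos x = \frac{\mathrm{d}}{\mathrm{d}x}\bigl(y\sin x\bigr) = 1 \quad\Longrightarrow\quad y\sin x = x + C.$$
The condition $y(\pi/2)=\pi/2$ gives $C=0$, so $y = x/\sin x$ and $y(\pi/6) = \dfrac{\pi/6}{1/2} = \dfrac{\pi}{3}$, which is option 3.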
---
# Computational techniques to locate crossing/sliding regions and their sets of attraction in non-smooth dynamical systems
* Corresponding author: Luciano Lopez
Several models in the applied sciences are characterized by instantaneous changes in the solutions or discontinuities in the vector field. Knowledge of the geometry of interaction of the flow with the discontinuities can give new insights on the behaviour of such systems. Here, we focus on the class of piecewise smooth systems of Filippov type. We describe some numerical techniques to locate crossing and sliding regions on the discontinuity surface, and to compute the sets of attraction of these regions together with the mathematical form of the separatrices of such sets. Some numerical tests will illustrate our approach.
Mathematics Subject Classification: Primary: 65L05; Secondary: 34L99.
Figure 1. System (13): Crossing and attractive sliding region for $\lambda = -10$ (up) and $\lambda = 15$ (down)
Figure 2. System (13). Up: behaviour of the singular sets with respect to $\lambda$ and for $\lambda = -5$. Down: projection of the singular sets on the $x_1\lambda$ plane and the crossing/sliding region
Figure 3. System (15). Crossing and sliding regions for $F = -1$ (up) and $F = 1$ (down)
Figure 4. System (15). Localization of the singular sets in the $x_1x_2F$ space for $F = 0$ (up) and $F = 1$ (down)
Figure 5. 2D continuation method: Computation of the first patch
Figure 6. Advancing the front: General step. Incomplete patch centered at a frontal point (A) and completed patch (B) are colored in yellow
Figure 7. Boundaries of initial sets for Example 2.1
Figure 8. Reconstruction of the separatrices surfaces from some collection of points (blue and red dots on the surfaces). The black line illustrates a piece of a trajectory starting from the initial point $[1,1,5]^{\top}$. The chosen initial condition belongs to the subregion of points in $R_2$ starting from which any trajectory (forward in time) reaches the crossing region on the discontinuity surface $x_2 = 0$
Figure 9. Projections of the numerical solution obtained solving (15) from the initial point $[1,1,5]^{\top}$. The red and blue curves on the sliding surface $x_2 = 0$ (the first plot) represent the singular curves depicted in the selected region and obtained using the continuation algorithm
Figure 10. Reconstruction of some portions of the separatrix surfaces $m_1$ in $R_1$ (red surface) and $m_2$ in $R_2$ (blue surface), obtained by interpolating a collection of points (red and blue dots on the surfaces, respectively) randomly chosen from trajectories obtained by integrating the PWS system backward in time
Figure 11. Separatrix surface $m_2$ in $R_2$ and exit curves on the discontinuity surface $\Sigma$
Figure 12. 3D example via 2D continuation, first portion
Figure 13. 3D example via 2D continuation, second portion
Table 1. Values of the interpolated separatrix surface at different points in the interpolation domain

| Separatrix | Point | Interpolated surface value |
|---|---|---|
| $m_1$ | $[1,1,5]^{\top}$ | $>0$ |
| $m_1$ | $[0.5, -0.5, 10]^{\top}$ | $>0$ |
| $m_1$ | $[0,1,-0.8]^{\top}$ | $<0$ |
| $m_2$ | $[1,1,5]^{\top}$ | $<0$ |
| $m_2$ | $[0.5, 0.5, 10]^{\top}$ | $<0$ |
| $m_2$ | $[0,1,20]^{\top}$ | $>0$ |
---
# France's Bouygues hikes outlook for H2
Nov 19 (Reuters) - French conglomerate Bouygues on Thursday raised its outlook for the second half of the year on the back of a strong third quarter.
The group now expects its current operating margin in the second half of 2020 to be slightly higher than in the second half of 2019.
It had originally forecast significant profitability in the second half of 2020, although without reaching the particularly high levels of the second half of 2019. ($1 = 0.8442 euros) (Reporting by Charles Regnier; Editing by Jacqueline Wong)
---
The motivation for this plot is the function graphics::smoothScatter, basically a plot of a two-dimensional density estimator. In the following I want to reproduce its features with ggplot2.
## smoothScatter
To have some data, I draw some random numbers from a two-dimensional normal distribution:
```r
library(ggplot2)
library(MASS)

set.seed(2)
dat <- data.frame(mvrnorm(n = 1000, mu = c(0, 0),
                          Sigma = matrix(rep(c(1, 0.2), 2), nrow = 2, ncol = 2)))
names(dat) <- c("x", "y")
```
smoothScatter is basically a scatter plot combined with a two-dimensional density estimation. This is especially nice in the case of many observations, and for outlier detection.
```r
par(mfrow = c(1, 2))
plot(dat$x, dat$y)
smoothScatter(dat$x, dat$y)
```
## smoothScatter in ggplot2
OK, very pretty; let's reproduce this feature in ggplot2. The first thing is to add the necessary layers which, as already mentioned, are a two-dimensional density estimation combined with the geom called ‘tile’. I also use the fill aesthetic to add colour, with a different palette:
```r
ggplot(data = dat, aes(x, y)) +
  stat_density2d(aes(fill = ..density..^0.25), geom = "tile",
                 contour = FALSE, n = 200) +
  scale_fill_continuous(low = "white", high = "dodgerblue4")
```
I add one additional layer: a simple scatter plot. To make the points transparent I choose alpha to be 1/10, a value chosen relative to the number of observations.
```r
last_plot() + geom_point(alpha = 0.1, shape = 20)
```
A similar approach is also discussed on StackOverflow. Actually that version is closer to smoothScatter.
## Changing the theme
The last step is to tweak the theme elements. Not that the following adds any information, but it looks nice. Starting from a standard theme, theme_classic, which is close to where I want to get, I get rid of all labels, axes, and the legend.
```r
last_plot() +
  theme_classic() +
  theme(
    legend.position = "none",
    axis.line = element_blank(),
    axis.ticks = element_blank(),
    axis.text = element_blank(),
    text = element_blank(),
    plot.margin = unit(c(-1, -1, -1, -1), "cm")
  )
```
The last thing is to save the plot in the correct format for display:
```r
ggsave(
  "scatterFinal.png",
  plot = last_plot(),
  width = 14,
  height = 8,
  dpi = 300,
  units = "in"
)
```
And that’s it, a nice picture which used to be a statistical graph.
---
# Mathematical Markup Language (MathML)

Developed by: World Wide Web Consortium
Type of format: Markup language
Extended from: XML
Mathematical Markup Language (MathML) is an application of XML for describing mathematical notations and capturing both their structure and content. It aims at integrating mathematical formulae into World Wide Web documents. It is a recommendation of the W3C math working group.
## History
The specification of version 1.01 of the format was released in July 1999, and version 2.0 appeared in February 2001. In October 2003, the second edition of MathML Version 2.0 was published as the final release by the W3C math working group. In June 2006 the W3C rechartered the MathML Working Group to produce a MathML 3 Recommendation by February 2008.
MathML was originally designed before the finalization of XML namespaces. As such, MathML markup is often not namespaced, and applications that deal with MathML, such as the Mozilla browsers, do not require a namespace. For applications that wish to namespace MathML, the recommended namespace URI is http://www.w3.org/1998/Math/MathML.
## Presentation and semantics
MathML deals not only with the presentation but also the meaning of formula components (the latter part of MathML is known as “Content MathML”). Because the meaning of the equation is preserved separately from the presentation, how the content is communicated can be left up to the user. For example, web pages with MathML embedded in them can be viewed as normal web pages with many browsers, but visually impaired users can also have the same MathML read to them through the use of screen readers (e.g. using the MathPlayer plugin for Internet Explorer, Opera 9.50 build 9656+, or the Fire Vox extension for Firefox).
## Example
$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$
would be marked up using LaTeX syntax like this:
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
in troff/eqn like this:
x={-b +- sqrt{b sup 2 - 4ac}} over 2a
in OpenOffice.org Math like this (both forms are valid):
x={-b plusminus sqrt {b^2 - 4 ac}} over {2 a}
x={-b +- sqrt {b^2 - 4ac}} over 2a
The above equation could be represented in Presentation MathML as an expression tree made up from layout elements like mfrac or msqrt elements:
```xml
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mi>x</mi>
  <mo>=</mo>
  <mfrac>
    <mrow>
      <mrow>
        <mo>-</mo>
        <mi>b</mi>
      </mrow>
      <mo>±</mo>
      <msqrt>
        <msup>
          <mi>b</mi>
          <mn>2</mn>
        </msup>
        <mo>-</mo>
        <mrow>
          <mn>4</mn>
          <mo>⁢</mo>
          <mi>a</mi>
          <mo>⁢</mo>
          <mi>c</mi>
        </mrow>
      </msqrt>
    </mrow>
    <mrow>
      <mn>2</mn>
      <mo>⁢</mo>
      <mi>a</mi>
    </mrow>
  </mfrac>
</math>
```
The <annotation> element can be used to embed a semantic annotation in non-XML format, for example to store the formula in the format used by an equation editor. Alternatively, the equation could be represented in Content MathML as an expression tree for the functional structure elements like apply (for function application) or eq (for the equality relation) elements:
```xml
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <apply>
    <eq/>
    <ci>x</ci>
    <apply>
      <frac/>
      <apply>
        <csymbol definitionURL="http://www.example.com/mathops/multiops.html#plusminus">
          <mo>±</mo>
        </csymbol>
        <apply>
          <minus/>
          <ci>b</ci>
        </apply>
        <apply>
          <power/>
          <apply>
            <minus/>
            <apply>
              <power/>
              <ci>b</ci>
              <cn>2</cn>
            </apply>
            <apply>
              <times/>
              <cn>4</cn>
              <ci>a</ci>
              <ci>c</ci>
            </apply>
          </apply>
          <cn>0.5</cn>
        </apply>
      </apply>
      <apply>
        <times/>
        <cn>2</cn>
        <ci>a</ci>
      </apply>
    </apply>
  </apply>
  <annotation encoding="TeX">
    x=\frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
  </annotation>
  <annotation encoding="StarMath 5.0">
    x={-b plusminus sqrt {b^2 - 4 ac}} over {2 a}
  </annotation>
</math>
```
In the expression tree above, elements like times are defined by the MathML specification and stand for mathematical functions that are applied to sibling expressions that are interpreted as arguments. The csymbol element is a generic extension element that means whatever is specified in the document referred to in the definitionURL attribute.
Although less compact than TeX, the XML structuring promises to make it widely usable, allows for instant display in applications such as web browsers, and facilitates a straightforward interpretation of its meaning in mathematical software products. MathML is not intended to be written or edited directly by humans. [1]
## Software support
### Editors
Some editors with native MathML support (including copy and paste of MathML) are Publicon from Wolfram Research and SciWriter from soft4science.
MathML is also supported by major office products such as OpenOffice.org, KOffice, and MS Office 2007, as well as by mathematical software products such as Mathematica and the Windows version of the Casio ClassPad 300. The W3C browser/editor Amaya can also be mentioned as a WYSIWYG MathML-as-is editor.
### Conversion
Several utilities for converting mathematical expressions to MathML are available, including converters [1] between TeX and MathML. ConTeXt does the reverse and uses TeX for typesetting MathML (usually resulting in PDF documents). MathType from Design Science allows users to create equations in a WYSIWYG window and export them as MathML. Also, Wolfram Research provides a web page to convert typed mathematical expressions to MathML.
GNU TeXmacs is a what-you-see-is-what-you-get editor with extensive support for mathematics. Converters exist for Presentation MathML in both directions. TeXmacs can be used to write mathematical articles which are exported to XHTML with embedded MathML. Another WYSIWYG MathML-as-is editor, Formulator MathML Weaver [2], provides a means for importing/exporting MathML with support for some abstract entities such as ⅇ and ⅆ.
### Web browsers
Of the major web browsers, those that directly support the format are recent versions of Gecko browsers (e.g. Firefox and Camino) [3][4], and the Opera web browser since version 9.5.
For Gecko-based browsers, the user is currently required to download special fonts in order to display MathML correctly; this is likely to change soon with the release of the STIX fonts.
Opera, since version 9.5, supports MathML for CSS profile [5][6]; before that, it required User JavaScript to emulate MathML support. [7]
Other browsers do not support the format, and require third-party plugins.
### Web conversion
ASCIIMath [8] provides a JavaScript library to rewrite a convenient wiki-like text syntax, used inline in web pages, into MathML on the fly; it works in browsers with MathML support or plug-ins. LaTeXMathML [9] does the same for (a subset of) the standard LaTeX mathematical syntax.
Blahtex is a TeX-to-MathML converter intended for use with MediaWiki.
Equation Server for .NET from soft4science can be used on the server side (ASP.NET) for TeX-Math (a subset of LaTeX math syntax) to MathML conversion. It can also create bitmap images (PNG, JPG, GIF, ...) from TeX-Math or MathML input.
### Support of software developers
Support for the MathML format accelerates software application development in various areas, such as computer-aided education (distance learning, electronic textbooks and other classroom materials), automated creation of attractive reports, computer algebra systems, and authoring, training, and publishing tools (both web- and desktop-oriented), among many other applications for mathematics, science, business, and economics. Several software vendors offer a component edition of their MathML editors, providing an easy way for software developers to add mathematics rendering/editing/processing functionality to their applications. For example, Formulator ActiveX Control [10] from Hermitech Laboratory can be incorporated into an application as a MathML-as-is editor, and Design Science offers a toolkit for building web pages that include interactive math (WebEQ Developers Suite [11]).
## Other standards
Another standard, called OpenMath, which has been designed (largely by the same people who devised Content MathML) more specifically for storing formulae semantically, can also be used to complement MathML.
The OMDoc format has been created for markup of larger mathematical structures than formulae, from statements like definitions, theorems, proofs, and examples, to theories and textbooks. Formulae in OMDoc documents can be written either in Content MathML or in OpenMath; for presentation, they are converted to Presentation MathML.
The Office Open XML (OOXML) standard defines a different XML math syntax, derived from Microsoft Office products. However, it is partially compatible [2] through relatively simple XSL Transformations.
---
August 2019, 39(8): 4797-4840. doi: 10.3934/dcds.2019196
## Convergence and center manifolds for differential equations driven by colored noise
1. School of Mathematics, Sichuan University, Chengdu, Sichuan 610064, China
2. Department of Mathematics, Brigham Young University, Provo, Utah 84602, USA
3. Department of Mathematics, New Mexico Institute of Mining and Technology, Socorro, NM 87801, USA
* Corresponding author: Jun Shen, junshen85@163.com
Received: November 2018. Published: May 2019.
Fund Project: This work was supported by NSFC (11501549, 11331007, 11831012), NSF (1413603), the Fundamental Research Funds for the Central Universities (YJ201646), and the International Visiting Program for Excellent Young Scholars of SCU.
In this paper, we study the convergence and pathwise dynamics of random differential equations driven by colored noise. We first show that the solutions of the random differential equations driven by colored noise with a nonlinear diffusion term uniformly converge in mean square to the solutions of the corresponding Stratonovich stochastic differential equation as the correlation time of colored noise approaches zero. Then, we construct random center manifolds for such random differential equations and prove that these manifolds converge to the random center manifolds of the corresponding Stratonovich equation when the noise is linear and multiplicative as the correlation time approaches zero.
Citation: Jun Shen, Kening Lu, Bixiang Wang. Convergence and center manifolds for differential equations driven by colored noise. Discrete & Continuous Dynamical Systems - A, 2019, 39 (8) : 4797-4840. doi: 10.3934/dcds.2019196
---
## Calculus with Applications (10th Edition)
Published by Pearson
# Chapter 1 - Linear Functions - Chapter Review - Review Exercises: 57
#### Answer
$y=7.2t+11.9$
#### Work Step by Step
Let $y$ be in billions of dollars and let $t$ be the number of years after 2000; the points on the graph of this approximately linear function are $(t, y(t))$. Given points: $(1, 19.1)$ and $(8, 69.7)$. Slope: $m=\displaystyle \frac{69.7-19.1}{8-1}=\frac{50.6}{7}\approx 7.22857\approx 7.2$. Point-slope equation: $y-19.1=7.2(t-1)$, so $y-19.1=7.2t-7.2$; adding $19.1$ to both sides gives $y=7.2t+11.9$.
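As a quick numeric check of the slope and intercept, here is a throwaway Python snippet (not part of the textbook's solution):

```python
t1, y1 = 1, 19.1    # data point for 2001
t2, y2 = 8, 69.7    # data point for 2008

m = (y2 - y1) / (t2 - t1)     # 50.6 / 7 = 7.22857... -> rounds to 7.2
b = y1 - round(m, 1) * t1     # intercept from the point-slope form
print(round(m, 1), round(b, 1))   # 7.2 11.9
```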
---
# [NTG-context] small caps italic and font switching inside math
Taco Hoekwater taco at elvenkind.com
Sun Jul 2 10:44:23 CEST 2006
Mojca Miklavec wrote:
>
>>>It works perfectly except in a single case: \title{\molecule{SF_6}}
>>
>>I had seen that, but not yet bothered to fix it. Still, it is
>>fairly easy to change the macro, try the version below.
Sorry, I thought you were talking about the spacing between F and 6.
> Didn't work in titles either (or I did something strange) :(
> But If I write a couple of explicit \lohi-s, it will still be OK.
Good, but it can be fixed by changing the definition of \domolecule to:
\def\domolecule#1%
{\expandafter\scantokens\expandafter
{\detokenize{#1\finishchem}}\egroup}
This re-tokenizes the argument (needed because it was grabbed under
different catcodes).
> I didn't really understand the \iffluor-part of the code ... but don't
> bother too much.
It is there to trigger a negative italic superscript correction
(TeX doesn't have a primitive for that :-))
> Thanks a lot for the trickery again (I'm still impressed by the
> \uppercase part),
That is actually a fairly standard trick, not something I invented.
Greetings, Taco
---
# Is there such a thing as an order-$k$ RNN?
In HMMs it's common to include edges from previous layers of the model. Looking back at the previous $k$ layers creates an order-$k$ Markov model.
Is this commonly done in RNNs? Have you ever seen this? What was the advantage for that specific application?
In probabilistic graphical models like HMMs, order-$k$ models can be reduced to order-1 models on transformed variables with higher-dimensional support (i.e., the tree decomposition of the order-$k$ graph). Is there a corresponding equivalence for order-$k$ RNNs, where the RNN accepts edges only from the previous layer, but where each layer has greater complexity?
• This is doable, but it's not common. The assumption in an RNN is that the hidden state already carries with it information from all previous states, which isn't true in (e.g.) an HMM with the Markov assumption. I don't think it would have much value, unless you're trying to include a specific inductive bias of dependency on specifically $k$ steps back. Nov 27 '21 at 23:03
The assumption in an RNN is that the hidden state already carries with it information from all previous states. This isn't true in (e.g.) a $k$th-order HMM: by the Markov assumption, you can only look back $k$ steps.
I don't think the higher-order RNN would have much value, unless you're trying to include a specific inductive bias of dependency on specifically $k$ steps back. What you've provided is another avenue for information to flow, but unlike the HMM, there was already one avenue.
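For concreteness, here is a minimal NumPy sketch of the idea being discussed: a hidden state that receives direct edges from each of the $k$ previous hidden states. All names are illustrative; this is not a standard layer in any framework. Note that concatenating the $k$ stored states into one larger state vector turns this back into an order-1 recurrence, mirroring the HMM reduction mentioned in the question.

```python
import numpy as np

def order_k_rnn_step(x_t, h_hist, W_x, W_hs, b):
    """One step of a hypothetical order-k RNN:
    h_t = tanh(W_x x_t + sum_i W_hs[i] h_{t-1-i} + b),
    where h_hist holds the k previous hidden states, most recent first."""
    pre = W_x @ x_t + b
    for W_h, h in zip(W_hs, h_hist):
        pre += W_h @ h                      # direct edge from each earlier state
    return np.tanh(pre)

rng = np.random.default_rng(0)
k, d_in, d_h = 2, 3, 4                      # order 2, toy dimensions
W_x = rng.normal(size=(d_h, d_in))
W_hs = [rng.normal(size=(d_h, d_h)) for _ in range(k)]
b = np.zeros(d_h)

h_hist = [np.zeros(d_h) for _ in range(k)]  # zero-initialised history
for x_t in rng.normal(size=(5, d_in)):      # a length-5 input sequence
    h_t = order_k_rnn_step(x_t, h_hist, W_x, W_hs, b)
    h_hist = [h_t] + h_hist[:-1]            # slide the k-state window
```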
---
# Discount rate in DCF
The WACC incorporates the average rate of return that shareholders in the firm are expecting for the given year. While most practitioners use a uniform discount rate in DCF valuation of shares, there are situations when it may be more accurate to use multiple discount rates. This rate is often a company’s Weighted Average Cost of Capital (WACC), required rate of return, or the hurdle rate that investors expect to earn relative to the risk of the investment. Discounted cash flow (DCF) helps determine the value of an investment based on its future cash flows; the DCF model is one common way to value an entire company and, by extension, its shares of stock. If the project had cost $14 million, the NPV would have been -$693,272, indicating that the cost of the investment would not be worth it. As such, a DCF analysis is appropriate in any situation where a person is paying money in the present with expectations of receiving more money in the future. First, you forecast the expected cashflows from the investment. Discounted cash flow valuation for emerging-markets companies can be much tougher than valuing companies in developed markets. Discount rates for cash flows paid by financial securities are likely to be calculated against market rates using CAPM or a similar model. Bear in mind that different businesses have different methods of arriving at a discount rate, probably because each business accounts for its own specific situation and investment realities. Terminal value is defined as the value of an investment at the end of a specific time period, including a specified rate of interest. You can vary every aspect of the analysis, including the growth rate, discount rate, type of simulation, and the number of DCF cycles to perform. The DCF has limitations, primarily that it relies on estimates of future cash flows, which could prove to be inaccurate. Since money in the future is worth less than money today, you reduce the present value of each of these cashflows by your 10% discount rate. The discount rate represents risk and potential returns, so a higher rate means more risk but also higher potential returns. The effect of the discount rate compounds with time: the further out a cash flow sits, the more heavily it is discounted. Normally, you use something called WACC, or the “Weighted Average Cost of Capital,” to calculate the discount rate. And if a company goes public in an IPO, the shares it issues, also called “Equity,” are a form of capital. Calculating the DCF of an investment involves three basic steps, and to do this you need to decide upon a discount rate. If the value calculated through DCF is higher than the current cost of the investment, the opportunity should be considered. On the page about …
and the discount rate r. We begin with the latter. If you don’t know what that sentence means, be sure to first check out How to Calculate Intrinsic Value. Similarly, if a $1 payment is delayed for a year, its present value is $0.95, because it cannot be put in your savings account to earn interest. The Cost of Debt represents returns on the company’s Debt, mostly from interest, but also from the market value of the Debt changing: just like share prices can change, the value of Debt can also change. Estimating cash flows too low, making an investment appear costly, could result in missed opportunities. In this second free tutorial, you’ll learn what the Discount Rate means intuitively, how to calculate the Cost of Equity and WACC, and how to use the Discount Rate in a DCF analysis to value a company’s future cash flows. Adding up these three cashflows, you conclude that the DCF of the investment is $248.68. To mitigate possible complexities in determining the net present value, account for the discount rate in the NPV formula. Once again, the main question here is: “Which values do we use for the percentages of Equity, Debt, and Preferred Stock?” The cash flows for the projected period under FCFF are computed as follows; these cash flows are then discounted by the Weighted Average Cost of Capital (WACC), which is the cost of the different components of financing used by the firm, weighted by their …
The Equity Risk Premium (ERP) is the amount the stock market is expected to return each year, on average, above the yield on “safe” government bonds. The investor must also determine an appropriate discount rate for the DCF model, which will vary depending on the project or investment under consideration, such as the company or investor's risk profile and the conditions of the capital markets. A DCF, or Discounted Cash Flow, model is a formula to estimate the value of future free cash flows, discounted at a certain cost of capital to account for risk, inflation, and opportunity cost. Levered Beta reflects both inherent business risk and risk from leverage. So, WACC = 10% * 80% + 4.5% * 20% = 8.9%, or $89 per year on a $1,000 investment. The discount rate is a required component of any analysis utilizing cash flows to all capital holders in a DCF valuation, and there is not a one-size-fits-all approach to determining it. The discount factor is used in DCF analysis to calculate the present value of future cash flows. DCF is a direct valuation technique that values a company by projecting its future cash flows and then using the Net Present Value (NPV) method to value those cash flows. The company’s current percentages, or those of peer companies? Dividend discount models, such as the Gordon Growth Model (GGM), for valuing stocks are examples of using discounted cash flows. To calculate the Cost of Equity, we’ll need the Risk-Free Rate, the Equity Risk Premium, and Levered Beta: Cost of Equity = Risk-Free Rate + Equity Risk Premium * Levered Beta. The definition of a discount rate depends on the context: it is either the interest rate used to calculate net present value or the interest rate charged by the Federal Reserve Bank. If the discounted cash flow (DCF) is above the current cost of the investment, the opportunity could result in positive returns. That is why our meeting went from a discussion … If discounting: $105.00 / (1 + 0.05) = $100.00 (here 5% is the discount rate, i.e., the growth rate applied in reverse). How should we think of the discount rate? In a discounted cash flow analysis, the sum of all future cash flows (C) over some holding period (N) is discounted back to the present using a rate of return (r). To illustrate, suppose you have a discount rate of 10% and an investment opportunity that would produce $100 per year for the following three years.
Specifically, the first year’s cashflow is worth $90.91 today, the second year’s cashflow is worth $82.64 today, and the third year’s cashflow is worth $75.13 today. We use VLOOKUP in Excel to find the Debt, Equity, and Preferred Stock for each company in the “Public Comps” tab, but you could find these figures on Google Finance and other sources if you don’t have the time or resources to extract them manually. For example, assuming a 5% annual interest rate, $1.00 in a savings account will be worth $1.05 in a year. This 300% is their cost of capital, and the investor’s required return or discount rate; riskier things have higher discount rates. So, we must “un-lever Beta” for each company to determine the “average” inherent business risk for these types of companies: Unlevered Beta = Levered Beta / (1 + Debt/Equity Ratio * (1 – Tax Rate) + Preferred/Equity Ratio). Hopefully this article has clarified and improved your thinking about the discount rate. This is the fair value that we’re solving for. The first step is forecasting expected free cash flows over a projection period. The DCF method is a widely used way to determine the value of a startup or a larger company by looking at its future cash flows; there are multiple methods to value a startup, and one of them is the Discounted Cash Flow (DCF) method. In finance, discounted cash flow analysis is a method of valuing a security, project, company, or asset using the concepts of the time value of money: DCF analysis attempts to figure out the value of an investment today, based on projections of how much money it will generate in the future. Errors in estimating the discount rate or mismatching cashflows and discount rates can lead to serious errors in valuation. Here r is the discount rate in decimal form; the discount rate is by how much you discount a cash flow in the future, and this is different for different people. For SaaS companies using DCF to calculate a more accurate customer lifetime value (LTV), suggested discount rates are: 1. 10% for public companies; 2. 15% for private companies that are scaling predictably (say above $10m in ARR, and growing greater than 40% year on year); 3. 20% for private companies that have not yet reached scale and predictable growth. Is there an argument to be made that startup SaaS companies shouldn’t be using a different discount rate to public SaaS compani… The purpose of a discounted cash flow is to find the sum of the future cash flows of the business and discount them back to present value.
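As a sanity check, the discounting arithmetic in the three-year example above is easy to reproduce; here is a minimal Python sketch (not tied to any of the tools mentioned in this section):

```python
cashflows = [100, 100, 100]   # $100 per year for three years
r = 0.10                      # 10% discount rate

# Present value of each year's cash flow: CF / (1 + r)^t
pvs = [cf / (1 + r) ** t for t, cf in enumerate(cashflows, start=1)]

print([round(pv, 2) for pv in pvs])                # [90.91, 82.64, 75.13]
print(round(sum(round(pv, 2) for pv in pvs), 2))   # 248.68, as quoted above
```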
$$DCF=\frac{CF_1}{(1+r)^1}+\frac{CF_2}{(1+r)^2}+\cdots+\frac{CF_n}{(1+r)^n}$$

where $CF_t$ is the cash flow for year $t$ ($CF_1$ for year one, $CF_2$ for year two, $CF_n$ for additional years) and $r$ is the discount rate. Apply the discounted cash flow method to convert each cash flow into present value terms. The Cost of Preferred Stock is similar because Preferred Stock works similarly to Debt, but Preferred Stock Dividends are not tax-deductible and overall rates tend to be higher, making it more expensive. For one, an investor would have to correctly estimate the future cash flows from an investment or project. A money-weighted rate of return is the rate of return that will set the present values of all cash flows equal to the value of the initial investment. Discounted cash flow analysis is widely used in investment finance, real estate development, corporate financial management, and patent valuation. At an intuitive level, the discount rate used should be consistent with both the riskiness and the type of cashflow being discounted. One method we employ for fair market value studies is the discounted cash flow analysis (DCF), a commonly accepted method for valuing companies. The easy way to do a DCF for a SaaS company is to say something like: “OK, I’m investing $15 million into Acme Corp. Acme does $10 mm in revenues, and is valued at 25x revenues, or $200 mm.” CF is the total cash flow for a given year. For example, if the company’s dividends are 3% of its current share price, and its stock price has increased by 6-8% each year historically, then its Cost of Equity might be between 9% and 11%. CF/r (with r the WACC) gives you the value of a constant cash flow over an infinite horizon. How do you evaluate the value of a company? The first part is projecting out free cash flow (for 3-5 years) to all stakeholders. The denominator gets bigger each year, so the Present Value is a lower and lower percentage of the Future Value as time goes by. A discount rate is the percentage by which the value of a cash flow in a discounted cash flow (DCF) valuation is reduced for each time period by which it is removed from the present. Capital investment analysis is a budgeting procedure that companies use to assess the potential profitability of a long-term investment.
Using the principles of the time value of money, the DCF applies a discount rate to adjust the value of money to be earned in the future, and one can calculate the present value of each cash flow manually using the discount factor. For example, if you have a cash flow of $1,000 to be received in five years' time with a discount rate of 10 percent, you would divide $1,000 by 1.1 raised to the power of five.

Discount rate estimation: traditionally, DCF models assume that the capital asset pricing model can be used to assess the riskiness of an investment and set an appropriate discount rate; the rate applied in this method is higher than the risk-free rate. The Risk-Free Rate (RFR) is what you might earn on "safe" government bonds in the same currency as the company's cash flows. Michael Hill earns in CAD, NZD, and AUD, but reports everything in AUD, so we use the yield on 10-Year Australian government bonds, which was 2.10% at the time of this case study. Beta measures how a stock moves with the market: if it's 1.0, then the stock follows the market perfectly and goes up by 10% when the market goes up by 10%; if it's 2.0, the stock goes up by 20% when the market goes up by 10%.

When a company looks to analyze whether it should invest in a certain project or purchase new equipment, it usually uses its weighted average cost of capital (WACC) as the discount rate when evaluating the DCF:

WACC = Cost of Equity * % Equity + Cost of Debt * (1 - Tax Rate) * % Debt + Cost of Preferred Stock * % Preferred Stock.

The Cost of Debt is multiplied by (1 - Tax Rate) because interest paid on Debt is tax-deductible (though you might ask: what if the historical rate doesn't represent the cost to issue *new* Debt?). The Cost of Preferred Stock is similar because Preferred Stock works similarly to Debt, but Preferred Stock Dividends are not tax-deductible and overall rates tend to be higher, making it more expensive. To see why the weights matter, suppose you decide to invest $1,000 in the company proportionally to its capital structure: you put $800 into its Equity, or its shares, and $200 into its Debt. The discounted cash flow formula then sums the cash flow in each period divided by one plus the discount rate (WACC) raised to the power of the period number, converting each cash flow into present value terms. The main caveat is that an investor has to correctly estimate the future cash flows from the investment or project.
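Because the WACC formula above is just a weighted average, it transcribes directly into a few lines of Python; the capital-structure weights below are hypothetical placeholders, while the 8.9% and 4.5% costs echo the Cost of Equity and Cost of Debt figures mentioned in this case study.

def wacc(cost_equity, cost_debt, cost_pref, w_equity, w_debt, w_pref, tax_rate):
    # WACC = CoE * %E + CoD * (1 - Tax Rate) * %D + CoP * %P
    assert abs(w_equity + w_debt + w_pref - 1.0) < 1e-9, "weights must sum to 1"
    return (cost_equity * w_equity
            + cost_debt * (1 - tax_rate) * w_debt
            + cost_pref * w_pref)

# Hypothetical mix: 80% Equity at 8.9%, 20% Debt at 4.5%, 30% tax, no Preferred
print(round(wacc(0.089, 0.045, 0.0, 0.80, 0.20, 0.0, 0.30), 4))  # -> 0.0775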
On that $1,000, the company's investors might expect a blended annual return of roughly $89, or about 8.9%. That does not mean we will earn $89 in cash per year from this investment; it just means that if we count everything, interest, dividends, and eventually selling the shares at a higher price in the future, the annualized average might be around $89.

Estimating inputs: while discount rates obviously matter in DCF valuation, they don't matter as much as most analysts think they do. WACC is more about being "roughly correct" than "precisely wrong," so the rough range, such as 10% to 12% vs. 5% to 7%, matters a lot more than the exact number. At an intuitive level, the discount rate used should be consistent with both the riskiness and the type of cashflow being discounted; it is related to many factors such as Beta (a measurement of risk), the risk-free rate, the market return, and the capital structure.

The value of one euro today is not comparable to the same euro in a future period; in other words, money received in the future is not worth as much as an equal amount received today. This is why the discounted cash flow method is one of the most used in the valuation of companies in general, and why business valuation of privately-held companies is commonly performed for strategic planning, potential acquisitions, gift and estate tax, stock compensation, financial reporting, corporate restructuring, litigation, among other purposes. "Capital" just means "a source of funds": if a company borrows money in the form of Debt to fund its operations, that Debt is a form of capital.

The main limitation of DCF is that it requires making many assumptions. An investor would have to correctly estimate the future cash flows from an investment or project, and choosing a discount rate for the model is also an assumption that would have to be estimated correctly for the model to be worthwhile. Without knowing your discount rate, you can't precisely calculate the difference between the value-return on an investment in the future and the money to be invested in the present.

Because cash flows cannot be forecast forever, the projection is capped with a terminal value. For example, if a $10 cash flow grows at a constant annual rate of 2 percent and the discount rate is 5 percent, the terminal value is about $333.33: 10/(0.05 - 0.02). The constant growth rate (g) must be less than the discount rate (r). The H-model is basically an upgraded version of the Gordon growth model: instead of assuming the business grows at one single rate, it models two growth rates (a short-term higher growth and a lower perpetual growth rate), using PV = CF at terminal year x (1 + terminal growth rate) / (discount rate - terminal growth rate).

The third and final step is to discount the forecasted cashflows back to the present day, using a financial calculator, a spreadsheet, or a manual calculation. Finally, we can return to the DCF spreadsheet, link in this number, and use it to discount the company's Unlevered FCFs to their Present Values using this formula: Present Value of Unlevered FCF in Year N = Unlevered FCF in Year N / ((1 + Discount_Rate)^N).
Finding the percentages is basic arithmetic; the hard part is estimating the "cost" of each one, especially the Cost of Equity. To calculate the Discount Rate in Excel, we need a few starting assumptions: the Cost of Debt here is based on Michael Hill's Interest Expense / Average Debt Balance over the past fiscal year. If you've ever taken a finance class, you've learned that you use a company's weighted average cost of capital (WACC) as the discount rate when building a discounted cash flow (DCF) model. To complicate matters, there is unfortunately more than one way to think of the discount rate.

Is DCF the same as NPV? No, although the two are closely related concepts: DCF produces the present value of the estimated future cash flows, while net present value additionally subtracts the upfront cost of the investment. Discount the cash flows to calculate a net present value; if the result is positive, say an NPV of $11 million, the cost of the investment today is worth it, as the project will generate discounted cash flows above the initial cost. The purpose of DCF analysis is to estimate the money an investor would receive from an investment, adjusted for the time value of money. (Levered Free Cash Flow is the variant that takes debt into account.)

It is not possible to forecast cash flow over the whole life of a business, so cash flows are usually forecast for a period of 5-7 years and supplemented by incorporating a Terminal Value for the period thereafter.
There is no one-size-fits-all approach to determining the appropriate discount rate, but the Cost of Equity goes back to that big idea about valuation and the most important finance formula: Cost of Equity = Risk-Free Rate + Equity Risk Premium * Levered Beta. The Equity Risk Premium reflects the stock market of the countries the company operates in (mostly Australia here); at the time of this case study, the Australian ERP was 5.96%, based on Damodaran's data, which is the best free resource for this input. Note that Unlevered Beta is always less than or equal to Levered Beta, since un-levering removes the risk from leverage; re-levering the median peer Beta then gives a range of plausible values. Plugging everything in gives a Cost of Equity of roughly 8.9%, the figure behind the $89 expectation mentioned earlier.

Several judgment calls remain. Which values do we use for the capital structure: the company's current percentages or the median of its peers? Does the historical Cost of Debt of 4.5% represent the cost to issue *new* Debt? Should we build a company-specific estimate or use a consistent discount rate for all the companies we value? For emerging-markets companies, what currency should be used in the forecasts? And when you cap the projection with a terminal value, be sure to pair the terminal growth assumptions consistently with your discount rate. Hopefully this article has clarified and improved your thinking about the discount rate.
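Putting the pieces together, the following minimal Python sketch implements the discounting mechanics described above: explicit-period cash flows discounted at a rate r, plus a Gordon-growth terminal value. The cash flows and rates are made-up illustration numbers, not figures from the Michael Hill case study.

def dcf_value(cash_flows, r, g):
    # Present value of explicit-period cash flows plus a terminal value.
    # cash_flows: projected free cash flows, year 1 first
    # r: discount rate (decimal); g: perpetual growth rate, must be below r
    if g >= r:
        raise ValueError("growth rate must be below the discount rate")
    # Discount each explicit-period cash flow: CF_t / (1 + r)^t
    pv_explicit = sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows, 1))
    # Gordon-growth terminal value at the end of year n, discounted back
    terminal = cash_flows[-1] * (1 + g) / (r - g)
    pv_terminal = terminal / (1 + r) ** len(cash_flows)
    return pv_explicit + pv_terminal

# Three $100 cash flows at 10%: the explicit part reproduces the
# $90.91 + $82.64 + $75.13 = $248.68 figure quoted earlier.
print(round(dcf_value([100, 100, 100], r=0.10, g=0.02), 2))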
|
{}
|
We introduce a game-theoretic model for network formation inspired by earlier stochastic models that mix localized and long-distance connectivity. In this model, players may purchase edges at distance d at a cost of d^α, and wish to minimize the sum of their edge purchases and their average distance to other players. In this model, we show there is a striking “small world” threshold phenomenon: in two dimensions, if α < 2 then every Nash equilibrium results in a network of constant diameter (independent of network size), and if α > 2 then every Nash equilibrium results in a network whose diameter grows as a root of the network size, and thus is unbounded. We contrast our results with those of Kleinberg [8] in a stochastic model, and empirically investigate the “navigability” of equilibrium networks. Our theoretical results all generalize to higher dimensions.
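As one concrete reading of the cost function, the sketch below (Python with the networkx library) evaluates a single player's objective, the d^α price of the edges it buys plus its average graph distance to the other players, on a small two-dimensional grid. Taking d to be the Manhattan distance between grid positions, as well as all names and parameter values, are my own illustrative assumptions, not the paper's code.

import networkx as nx

def player_cost(G, v, bought, alpha):
    # Player v's cost: sum of d^alpha over the edges v purchased,
    # plus v's average hop distance to every other player.
    manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
    edge_cost = sum(manhattan(v, u) ** alpha for u in bought)
    hops = nx.single_source_shortest_path_length(G, v)
    avg_dist = sum(hops[u] for u in G if u != v) / (len(G) - 1)
    return edge_cost + avg_dist

# 5x5 grid of players with nearest-neighbour links already present;
# player (0, 0) additionally buys one long-range edge to (4, 4).
G = nx.grid_2d_graph(5, 5)
G.add_edge((0, 0), (4, 4))
print(player_cost(G, (0, 0), bought=[(4, 4)], alpha=1.5))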
|
{}
|
# Information Technology.............................. 1 answer below »
Describe the major components of a desktop or laptop personal computer made later than year 2005. There should be four categories. Each category has many components. Some components, such as the CPU, have many more sub-components. You must provide at least a complete sentence to describe each component. [Hint: Dr. Scoggins’ WK1 slides and HW1_Helpful_Info.doc] (30%)
A1: The 4 major computer components are listed below. Each component has 5% grade.
1. Hardware Components:
• CPU: describe what cpu is and its 3 major sub-components
• Sub-component 1
• Sub-component 2
• Sub-component 3
• Memory: describe what memory is and the following major types of memory:
• RAM
• ROM
• Flash
• Cache
• DMA
• Hard disk drive (HD)
• Solid state drive (SSD)
• Other hardware components such as monitor, keyboard, etc. No need to give each subcomponent a sentence. Just provide a list of them.
1. Software Components:
• OS
• Applications
• Drivers
• Utilities, system services:
1. Communication Components:
• Communication Hardware components:
• Layer 1: e.g. twisted pair, copper cable, optical fiber, etc.
• Layer 2: e.g. Ethernet switch, WiFi switch, etc.
• System, devices: e.g. routers, gateway, GPS, cell phones
• Communication Software components (we will learn more in detail later of the class)
• Layer 2: e.g. WiFi, Ethernet, DSL, Cable,
• Layer 3: IP
• Layer 4: TCP, UDP, etc.
• Application layer: HTTP, XML, SOAP, RPC, etc.
1. Data Components:
• File system
• Data at rest
• Data in transition
• Types of media: e.g. voice, video, data, fax, image, etc.
• Encryption, compression
• Data encoding: analog to digital (A/D), digital to analog (D/A)
Q2: Draw the layout of the following components: CPU, cache, ROM, RAM, DMA, and I/O modules. Use the line to show the interaction between every two components. For example there is a line between CPU and cache and another line between CPU and ROM. You can’t draw a line in the middle with all components hanging from the line. No other components should be included in the answer. No need to provide additional description. [Hint: Dr. Scoggins’ HW1_Helpful_Info.doc] (15%)
A2: Below is a PPT drawing of the figures. Students need to draw a line between two components that have an interface. You can copy the figure to a PPT file and ungroup it to add the lines/interfaces. If you don’t have MS PPT application, you can print it out, manually add lines to it. Take a screen shot and save it as a pdf file or jpeg file.
Interfaces hints:
• CPU has an interface to all components up to I/O modules, but not to individual I/O device.
• I/O modules are the software interact with the I/O devices.
• DMA moves data from the I/O modules to RAM.
• CPU moves data from RAM to/from ROM.
Q3: Convert the following hexadecimal characters into ASCII characters and binary:
0x44, 0x72, 0x65, 0x61, and 0x6D.
Convert the following ASCII characters into hexadecimals:
Dark Waltz. (note that there is a space between two words).
A3:
Hexadecimal #   ASCII character   Binary #
0x44            D                 0100 0100
0x72            r                 0111 0010
0x65            e                 0110 0101
0x61            a                 0110 0001
0x6D            m                 0110 1101
(The hexadecimal characters spell "Dream".) For the second part, "Dark Waltz" in hexadecimal is: 0x44 0x61 0x72 0x6B 0x20 0x57 0x61 0x6C 0x74 0x7A.
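A quick way to check (or regenerate) these conversions is a couple of lines of Python; this is just a verification aid, not part of the required answer format.

for h in (0x44, 0x72, 0x65, 0x61, 0x6D):          # hex -> ASCII and binary
    print(f"0x{h:02X}  {chr(h)}  {h:08b}")

print(" ".join(f"0x{ord(c):02X}" for c in "Dark Waltz"))  # ASCII -> hex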
Q4: Download and install a Virtual Machine (VM) (e.g. Virtualbox, VMWare) and an Operating System for that VM (e.g. Centos). After completion, open a terminal and run three commands:
$ whoami
$ hostname
$ ifconfig (if the VM is Mac or Linux)
$ ipconfig (if the VM is Windows)
$ ip addr show (some Windows VMs need this command)

Then screenshot the VM and send the PDF file as part of the HW1 answer. The snapshot can be from a phone or camera; submit it as a PDF or JPEG file. The screenshot must show the VM logo (such as Oracle VM). The screenshot can’t be from a host computer. (40%)

------------------------------------------------------------------------------------------------

Instructions: The instructions below are based on Virtualbox as the VM and Centos as the VM OS on a MAC OS 10.10.5. Virtualbox is like virtual computer hardware, and Centos is the operating system for the Virtualbox.

Do a Google search for Oracle Virtualbox and a free Centos download for your platform. Be sure to get the right ones for your platform, for example “*.dmg” for MAC. Some downloads are OS-version specific, for example, MAC OS 10.10.x.

Install your VM. In the case of Virtualbox for MAC, just double click “Virtualbox-xxx.dmg”. From the pop-up window, drag “Virtualbox” onto the “Application” folder. Close the pop-up window. Double click the “Virtualbox” icon to launch Virtualbox.

In the Virtualbox window there is no OS installed yet. You can install many OSes on the VM, but only one OS can run at once. To toggle between the Virtualbox and the host environment, press the “command” key on the MAC.

On the top panel click “New”. A smaller window pops up. Enter a “Name” for the VM OS you are about to create, e.g. “Centos_1”. Choose a “Type” for the OS; Centos’ type is “Linux”. For “Version”, choose “Red Hat (64-bit)”. Adjust “Memory size” to 8GB. For “Hard Disk”, choose:
• “Create a virtual hard disk now”, if you never had one. This is the case for most of you.
• “Use an existing virtual hard disk file”, if you are using one from another person or a previously created one.

Click “Create”. Another window pops up. For “file location”, leave it alone, or click the icon next to it to choose your own location. For “Hard Disk File Type”, choose:
• “VDI (Virtual Disk Image)”, if you don’t need to port it to another computer.
• “VMDK (Virtual Machine Disk)”, if you want to port it to another computer.

Click “Create”. Another window pops up asking where the bootable file is. Click the icon next to it and choose the Centos-xxx.iso file (a CD/DVD disk image) or any virtual OS image that you downloaded. This will create the virtual OS on the VM.

For Centos, you will need to go through configuration to choose the files and features you need. On the second configuration window, be sure to create a root password and a user account other than root. Give it administrator privileges; that will make it easier to do things later on. But be aware that this user can also damage the system if she/he doesn’t know what to do. The good thing is that you can always create another VM and VM OS.

The alternative to a Centos VM is to use VMware or Oracle VM. Windows 10 seems to have difficulty installing a Centos VM or VMWare, but it should have no problem installing Oracle VM.

Once the system is configured and ready to use, right click to open a terminal. Type in the three commands and take the screenshot from within the VM. The screenshot must contain the VM logo and name on it, preferably also with the host in the background, but this is not mandatory.

$ whoami
$ hostname
$ ifconfig (if the VM is Mac or Linux)
$ ipconfig (if the VM is Windows)
$ ip addr show (some Windows VMs need this command)
Attachments:
## Plagiarism Checker
Submit your documents and get free Plagiarism report
Free Plagiarism Checker
|
{}
|
# Complexity Zoo:M
Complexity classes by letter: Symbols - A - B - C - D - E - F - G - H - I - J - K - L - M - N - O - P - Q - R - S - T - U - V - W - X - Y - Z
Lists of related classes: Communication Complexity - Hierarchies - Nonuniform
##### MA: Merlin-Arthur
The class of decision problems solvable by a Merlin-Arthur protocol, which goes as follows. Merlin, who has unbounded computational resources, sends Arthur a polynomial-size purported proof that the answer to the problem is "yes." Arthur must verify the proof in BPP (i.e. probabilistic polynomial-time), so that
1. If the answer is "yes," then there exists a proof such that Arthur accepts with probability at least 2/3.
2. If the answer is "no," then for all proofs Arthur accepts with probability at most 1/3.
An alternative definition requires that if the answer is "yes," then there exists a proof such that Arthur accepts with certainty. However, the definitions with one-sided and two-sided error can be shown to be equivalent (see [FGM+89]).
Contains NP and BPP (in fact also ∃BPP), and is contained in AM and in QMA.
Also contained in Σ2P ∩ Π2P.
There exists an oracle relative to which BQP is not in MA [Wat00].
Equals NP under a derandomization assumption: if E requires exponentially-sized circuits, then PromiseBPP = PromiseP, implying that MA = NP [IW97].
Shown in [San07] that MA/1 does not have circuits of size $n^k$ for any $k>0$. In the same paper, the result was used to show that MA/1 cannot be solved on more than a $1/2 + 1/{n^k}$ fraction of inputs having length $n$ by any circuit of size $n^k$. Finally, it was shown that MA does not have arithmetic circuits of size $n^k$.
##### MAcc: Communication Complexity MA
Here, Alice and Bob collectively constitute "Arthur", and the cost of an MAcc communication protocol is defined as the bit length of Merlin's message plus the communication cost of the ensuing randomized protocol between Alice and Bob. (Not charging for the length of Merlin's message would enable every function to be computed with constant cost in this model.)
Does not contain coNPcc (first shown by [Kla03]).
It is open to prove that there exists an explicit two-party function whose MAcc-type communication complexity is ω(n^{1/2}).
##### MA': Sparse MA
The subclass of MA such that for each input size n, there is a sparse set Sn that Merlin's proof string always belongs to (no matter what the input is).
Defined in [KST93], where it is also observed that if graph isomorphism is in P/poly, then the complement of graph isomorphism is in MA'.
##### MAC0: Majority of AC0
Same as AC0, except now we're allowed a single unbounded-fanin majority gate at the root.
Defined in [JKS02].
MAC0 is strictly contained in TC0 [ABF+94].
##### MAE: Exponential-Time MA With Linear Exponent
Same as MA, except now Arthur is E instead of polynomial-time.
If MAE = NEE then MA = NEXPcoNEXP [IKW01].
##### MAEXP: Exponential-Time MA
Same as MA, except now Arthur is EXP instead of polynomial-time, and the message from Merlin can be exponentially long.
There is a problem in MAEXP that does not have polynomial-size circuits [BFT98]. On the other hand, there is an oracle relative to which every problem in MAEXP does have polynomial-size circuits.
[MVW99] considered the best circuit lower bound obtainable for a problem in MAEXP, using current techniques. They found that this bound is half-exponential: i.e. a function f such that f(f(n)) = 2^n. Such functions exist, but are not expressible using standard asymptotic notation.
##### mAL: Monotone AL
Defined in [GS90]. Equals mP by definition.
##### MAPOLYLOG: MA With Polylog Verifier
Identical to MA except for that Arthur (the verifier) has random access to the proof string given by Merlin, and is limited to running times of order $O(\textrm{poly}(\log n))$.
This class was used by [SM03] to show that if EXP has circuits of polynomial size, then EXP = MA.
##### MaxNP: Maximization NP
Has the same relation to NP as MaxSNP does to SNP.
Contains MaxPB.
The closure of MaxNP under PTAS reduction is APX [KMS+99], [CT94].
##### MaxPB: MaxNP Polynomially Bounded
The subclass of MaxNP problems for which the cost function is guaranteed always to be bounded by a polynomial.
MinPB can be defined similarly.
Defined in [KT94].
The closure of MaxPB under PTAS reductions equals NPOPB [CKS+99].
##### MaxSNP: Maximization SNP
The class of optimization problems reducible by an "L-reduction" to a problem in MaxSNP0. (Note: 'L' stands for linear -- this is not the same as an L reduction! For more details see [PY88].)
Defined in [PY88], where the following was also shown:
• Max3SAT is MaxSNP-complete. (Max3SAT is the problem of finding an assignment that maximizes the number of satisfied clauses in a CNF formula with at most 3 literals per clause.)
• Any problem in MaxSNP can be approximated to within a fixed ratio.
The closure of MaxSNP under PTAS reduction is APX [KMS+99], [CT94].
##### MaxSNP0: Generating Class of MaxSNP
The class of function problems expressible as "find a relation such that the set of k-tuples for which a given SNP predicate holds has maximum cardinality."
For example (see [Pap94]), the Max-Cut problem can be expressed as follows:
Given a graph G, find a subset S of vertices that maximizes the number of pairs (u,v) of vertices such that u is in S, and v is not in S, and G has an edge from u to v.
Defined in [PY88].
##### mcoNL: Complement of mNL
Defined in [GS90], where it was also shown that mcoNL does not equal mNL.
##### MinPB: MinNP Polynomially Bounded
Same as MaxPB but for minimization instead of maximization problems.
##### MIP: Multi-Prover Interactive Proof
Same as IP, except that now the verifier can exchange messages with many provers, not just one. The provers cannot communicate with each other during the execution of the protocol, so the verifier can "cross-check" their assertions (as with suspects in separate interrogation rooms).
Defined in [BGK+88].
Let MIP[k] be the class of decision problems for which a "yes" answer can be verified with k provers. Then for all k>2, MIP[k] = MIP[2] = MIP [BGK+88].
MIP equals NEXP [BFL91]; this is a famous non-relativizing result.
##### MIP*: MIP With Quantum Provers
Same as MIP, except that the provers can share arbitrarily many entangled qubits. The verifier is classical, as are all messages between the provers and verifier.
Defined in [CHT+04], where evidence was given suggesting that MIP* does not "obviously" equal NEXP.
MIP* contains NEXP [IV12]. By contrast, MIP, the corresponding class without entanglement, equals NEXP (and even MIP[2,1] with two provers and one round equals NEXP).
Even MIP*[4,poly] and MIP[5,1] contain NEXP [IV12].
MIP*[2,1] contains XOR-MIP*[2,1].
In 2012 it was shown that QMIP = MIP* [RUV12]
##### MIPns: MIP with Non-Signaling Provers
Same as MIP, except that the provers can have non-signaling strategies.
MIPns with two provers is equal to PSPACE [Ito10]. MIPns with polylogarithmically many provers is equal to EXP [KRR13].
##### MIPEXP: Exponential-Time Multi-Prover Interactive Proof
The exponential-time analogue of MIP.
In the unrelativized world, equals NEEXP.
There exists an oracle relative to which MIPEXP equals the intersection of P/poly, PNP, and ⊕P [BFT98].
##### (Mk)P: Acceptance Mechanism by Monoid Mk
A monoid is a set with an associative operation and an identity element (so it's like a group, except that it need not have inverses).
Then (Mk)P is the class of decision problems solvable by an NP machine with the following acceptance mechanism. The ith computation path (under some lexicographic ordering) outputs an element m_i of M_k. Then the machine accepts if and only if m_1m_2...m_s is the identity (where s is the number of paths).
Defined by Hertrampf [Her97], who also showed the following (in the special case M is a group):
• If G is any nonsolvable group (for example S5, the symmetric group on 5 elements), then (G)P = PSPACE.
• (Zk)P = coModkP, where Zk is the cyclic group on k elements.
• If |G|=k, then (G)P contains coModkP.
##### mL: Monotone L
The class of decision problems solvable by a family of monotone log-width polynomial-size leveled circuits. (A leveled circuit is one where gates on each level can depend only on the level immediately below it.)
Defined in [GS90], who raise as an open problem to define a uniform version of mL.
Strictly contains mNC1 [GS91].
Contained in (nonuniform versions of) mNL and mcoNL.
##### MM: Problems reducible to matrix multiplication
The set of all problems reducible to matrix multiplication. That is, the set of problems $P$ such that $P$ can be reduced in linear time to the multiplication of two square matrices, and the multiplication of two square matrices can be reduced to $P$ in linear time.
Currently, the best known algorithm for multiplying two $n\times n$ matrices is the Coppersmith–Winograd algorithm, which has a time complexity of $O(n^{2.376})$ [CW90]. Note that for the general problem, a lower bound of $\Omega(n^2)$ is trivial from the number of elements being considered.
##### MMSNP: Monotone Monadic SNP Without Inequality
Defined in [FV93] as a subclass of SNP. There are three syntactic restrictions defining the subclass MMSNP, based on the form of the SNP formula defining the language:
1. The second order existentially quantified variables, known as the proof relations, are restricted to be monadic. (Monadic relations can be treated as sets.)
2. Any relations in the formula other than the proof relations must occur only negated (the formula is monotone).
3. No inequality relations can occur in the formula.
MMSNP seems to obey dichotomy, by excluding languages that are NP-intermediate. This is still open but widely believed. Dropping any of the restrictions monotone/monadic/without inequalities allows NP-intermediate languages unless P = NP, since any problem in NP is polynomial time equivalent to a problem in each of these broader classes. MMSNP therefore seems to be a maximal fragment of NP where NP-intermediate languages are excluded.
Every constraint satisfaction problem with a fixed target structure is expressible in MMSNP, and there is a polynomial time Turing reduction from every MMSNP query to finitely many constraint satisfaction problems. MMSNP therefore seems to capture the class of constraint satisfaction problems with fixed templates, CSP.
##### mNC1: Monotone NC1
The class of decision problems solvable by a family of monotone NC1 circuits (i.e. AND and OR gates only).
A uniformity condition could also be imposed.
Defined in [GS90].
Strictly contained in mNL [KW88], and indeed in mL [GS91].
Strictly contains mTC0 [Yao89].
##### mNL: Monotone NL
See mP for the definition of a monotone nondeterministic Turing machine, due to [GS90].
mNL is the class of decision problems solvable by a monotone nondeterministic log-space Turing machine.
mNL does not equal mcoNL [GS90], in contrast to the case for NL and coNL.
Also, mNL strictly contains mNC1 [KW88].
##### mNP: Monotone NP
The class of decision problems for which a 'yes' answer can be verified in mP (that is, monotone polynomial-time). The monotonicity requirement applies only to the input bits, not to the bits that are guessed nondeterministically. So, in the corresponding circuit, one can have NOT gates so long as they depend only on the nondeterministic guess bits.
Defined in [GS90], where it was also shown that mNP is 'trivial': that is, it contains exactly the monotone problems in NP.
Strictly contains mP [Raz85].
##### ModkL: Mod-k L
Has the same relation to L as ModkP does to P.
For any prime k, ModkL contains SL [KW93].
For any prime k, ModkL^ModkL = ModkL [HRV00].
For any k>1, contains LogFew [BDH+92].
##### ModL: ModL
A language $L\in\mathsf{ModL}$ if there are functions $f\in\mathsf{GapL}$ and $g\in\mathsf{FL}$ such that for all strings $x$:
• There exists a prime $p$ and a natural number $\alpha$ such that $g(x)=0^{p^{\alpha}}$.
• $x\in L$ if and only if $f(x)\equiv 0 \pmod{\left|g(x)\right|}$.
Thus, for any prime $p$ and natural number $\alpha$, $\mathsf{Mod}_{p^{\alpha}}\mathsf{L}\subseteq\mathsf{ModL}$. Moreover, FL^ModL = FL^GapL [AV04].
Defined in [AV04].
##### ModkP: Mod-k Polynomial-Time
For any k>1: The class of decision problems solvable by an NP machine such that the number of accepting paths is divisible by k, if and only if the answer is "no."
Mod2P is more commonly known as ⊕P "parity-P."
For every k, ModkP contains graph isomorphism [AK02].
Defined in [CH89], [Her90].
[Her90] and [BG92] showed that ModkP is the set of unions of languages in ModpP for each prime p that divides k. In particular, if p is prime, then ModpP = Modp^mP for all positive integers m. A further fact is that ModpP is closed under union, intersection, and complement for p prime.
On the other hand, if k is not a prime power, then there exists an oracle relative to which ModkP is not closed under intersection or complement [BBR94].
For prime p, there exists an oracle relative to which ModpP does not contain EQP [GV02].
##### ModP: ModkP With Arbitrary k
The class of decision problems solvable by a ModkP machine where k can vary depending on the input. The only requirement is that 0^k be computable in polynomial time.
Defined in [KT96], where it was also shown that ModP is contained in AmpMP.
##### ModZkL: Restricted ModkL
The class of decision problems solvable by a nondeterministic logspace Turing machine, such that
1. If the answer is "yes," then the number of accepting paths is not congruent to 0 mod k.
2. If the answer is "no," then there are no accepting paths.
Defined in [BDH+92], where it was also shown that ModZkL contains LogFewNL for all k>1.
Contained in ModkL and in NL.
##### mP: Monotone P
The definition of this class, due to [GS90], is not obvious. First, a monotone nondeterministic Turing machine is one such that, whenever it can make a transition with a 0 on its input tape, it can also make that same transition with a 1 on its input tape. (This restriction does not apply to the work tape.) A monotone alternating Turing machine is subject to the restriction that it can only reference an input bit x by, "there exists a z at most x," or "for all z at least x."
Then applying the result of [CKS81] that P = AL, mP is defined to be mAL: the class of decision problems solvable by a monotone alternating log-space Turing machine.
Actually there's a caveat: A monotone Turing machine or circuit can first negate the entire input, then perform a monotone computation. That way it becomes meaningful to talk about whether a monotone complexity class is closed under complement.
Strictly contained in mNP [Raz85].
Deciding whether a bipartite graph has a perfect matching, despite being both a monotone problem and in P, requires monotone circuits of superpolynomial size [Raz85b]. Letting MONO be the class of monotone problems, it follows that mP is strictly contained in MONO ∩ P.
##### MP: Middle-Bit P
The class of decision problems such that for some #P function f, the answer on input x is 'yes' if and only if the middle bit of f(x) is 1.
Defined in [GKR+95].
Contains AmpMP and PH.
MP with ModP oracle equals MP with #P oracle [KT96].
##### MPC: Monotone Planar Circuits
The class of decision problems solvable by a family of monotone stratified planar circuits (a uniformity condition may also be imposed).
Such a circuit can contain only AND and OR gates of bounded fanin. It must be embeddable in the plane with no wires crossing. Furthermore, the input bits can only be accessed at the bottom level, where they are listed in order (x1,...,xn).
Defined in [DC89].
[BLM+99] showed that we can assume without loss of generality that the circuit has width n and depth n^3.
##### mP/poly: Monotone P/poly
The class of decision problems solvable by a nonuniform family of polynomial-size Boolean circuits with only AND and OR gates, no NOT gates. (Or rather, following the definitions of [GS90], the entire input can be negated as long as there are no other negations.)
More straightforward to define than mP.
##### mTC0: Monotone TC0
The class of decision problems solvable by a family of monotone TC0 circuits (i.e. constant-depth, polynomial-size, AND, OR, and threshold gates, but no NOT gates).
A uniformity condition could also be imposed.
Defined in [GS90].
Strictly contained in mNC1 [Yao89].
|
{}
|
# Forbidding keywords in listings
I would like to force the listings package to be more selective when highlighting keywords. I'm currently including C source files in my document, and while I'm happy that ordinary occurrences of the float keyword are properly highlighted, I'd like the float in the following line to be typeset as plain text rather than as a highlighted keyword:
#include <float.h>
In a similar spirit, a bare else is fine, but the else in #else should not be highlighted.
I tried solutions like using deletekeywords={float.}, but to no avail (unsurprisingly, according to the documentation: "(...) by default some characters are not allowed inside keywords, for example ‘-’, ‘:’, ‘.’, and so on. (...)"). Playing with deletedirectives led nowhere either.
-
The keyword is float without dot, not float.:
\documentclass{article}
\usepackage{listings}
\begin{document}
\begin{lstlisting}[language=C, deletekeywords={float}]
#include <float.h>
\end{lstlisting}
\end{document}
It is more difficult to have both. The literate feature seems to do the trick:
\documentclass{article}
\usepackage{listings}
\begin{document}
\begin{lstlisting}[
language=C,
literate={float.}{float.}6,
]
#include <float.h>
float a = 1.0;
\end{lstlisting}
\end{document}
If you do not want a special formatting of directives, it can be disabled by an empty directivestyle:
\documentclass{article}
\usepackage{listings}
\begin{document}
\begin{lstlisting}[
language=C,
directivestyle={},
literate={float.}{float.}6,
columns=flexible,
]
#include <float.h>
float a = 1.0;
#ifdef foo
#else
#endif
\end{lstlisting}
\end{document}
-
"The keyword is float without dot, not float. (...)": yes, but the one with the dot was the one I wanted to remove. Other than that, the literate trick works perfectly, thanks! – Anthony Labarre May 14 '13 at 17:29
The solution to this problem turned out to be subtle.
\lstset{
language=C,
directivestyle=\color[HTML]{006E28},
deletedelim=*[directive]\#,
moredelim=[directive][directivestyle]\#,
}
From the listings manual:
moredelim=[\*[*]] [<type>] [[<style>]]<delimiter(s)>
If you use one optional star, the package will detect keywords, comments, and strings inside the delimited code. With both optional stars, additionally the style is applied cumulatively.
-
|
{}
|
# How to Use the Commands to Reboot Your Linux System
Rebooting is the go-to option that many of us rely on whenever a system starts misbehaving.
Linux offers the reboot command to restart or reboot a system, whether local or remote, and it has you covered in all these situations.
Let us look at the various commands available with Linux.
## The Linux reboot command
Reboot command fits best for your local computer and remote systems.
General syntax:
sudo reboot [options]
Note: Make sure you use ‘sudo‘ when running these commands.
## Using reboot command on your system
To start using the reboot command, take a look at the following example.
Syntax:
sudo reboot
Output:
After the reboot command is issued, all users are informed that the system is being restarted, and no further user logins are accepted.
You can also use the following command to reboot your system.
/sbin/reboot
## Using reboot command on a remote Linux system
Here I have used the ssh utility to log in to the remote server. In the same command, I have specified that the server should reboot using /sbin/reboot.
General syntax:
ssh root@[remote_server_ip] /sbin/reboot
If you’re a system admin, you can even drop a message (with the --message option) along with the reboot command.
Example:
sudo systemctl --message="Quarterly software maintenance drill" reboot
Here, we have used the systemctl command-line utility. You can use the service command instead of systemctl.
Sample output:
System is rebooting (Quarterly software maintenance drill)
## Checking reboot logs
The system reboot log is stored on your Linux machine. Instead of scrolling through that file, you can use the last command to quickly check your log.
last reboot | less
output:
reboot system boot 4.15.0-112-gener Tue Sep 29 16:30 still running
reboot system boot 4.15.0-112-gener Tue Sep 29 13:21 - 16:30 (03:09)
reboot system boot 4.15.0-112-gener Tue Sep 29 12:07 - 13:21 (01:13)
reboot system boot 4.15.0-112-gener Tue Sep 29 08:51 - 12:06 (03:15)
reboot system boot 4.15.0-112-gener Mon Sep 28 20:22 - 21:00 (00:37)
reboot system boot 4.15.0-112-gener Mon Sep 28 16:27 - 16:45 (00:17)
reboot system boot 4.15.0-112-gener Mon Sep 28 11:22 - 14:16 (02:54)
reboot system boot 4.15.0-112-gener Sun Sep 27 23:04 - 00:22 (01:18)
reboot system boot 4.15.0-112-gener Sun Sep 27 11:25 - 12:29 (01:03)
reboot system boot 4.15.0-112-gener Sat Sep 26 09:52 - 12:15 (02:23)
reboot system boot 4.15.0-112-gener Fri Sep 25 11:12 - 12:15 (1+01:03)
reboot system boot 4.15.0-112-gener Thu Sep 24 11:13 - 17:19 (06:06)
|
{}
|
# Any kind of result giving a sufficient condition for when a measure arises from the Riesz representation theorem?
Is there any sort of result known that gives a set of conditions on a measure space which are sufficient for it to arise from a linear functional on a locally compact Hausdorff space via the Riesz representation theorem? Or, more generally, is there a good place for me to look for work on the classification of measure spaces?
• I guess for starters, you need some condition for a measurable space to be the Borel $\sigma$-algebra of some LCH topology. I would be interested in that in itself. – Nate Eldredge Feb 14 at 16:46
• The "bible" of measure theory is Fremlin's book. It's dense but you could look there. – Nate Eldredge Feb 14 at 16:52
• @PiotrHajlasz I think the question is about the "if you have a measure on a LCH space" part. How do you know that you are in that situation? If I were to give you any measure space, how would you decide whether or not the sigma algebra is the borel-sets of some LCH space? And can you decide it in a way that gives you enough information to see if my measure is Radon or not? – Johannes Hahn Feb 14 at 19:46
• @PiotrHajlasz How do you get $C_0(X)$, if you only know $(X,\Sigma,\mu)$? In particular: how do you get the topology from the measure? In general you can't, since the topology is not uniquely determined by the measure space and there are measure spaces that do not come from a topological space at all. So how do you decide whether you're in such a case? And if you're not, how do you decide if one of the possible compatible topologies is LCH? – Johannes Hahn Feb 14 at 20:10
• @JohannesHahn OK. I misunderstood the question. I was not a careful reader. I am deleting my stupid comments. – Piotr Hajlasz Feb 14 at 21:01
|
{}
|
# Revision history [back]
### strange result of transformation of frame of VideoCapture
I have a code
VideoCapture cap("test.mp4");
if(!cap.isOpened()){
cout << "Error opening video stream or file" << endl;
return -1;
}
while(1){
Mat frame;
// Capture frame-by-frame
cap >> frame;
// If the frame is empty, break immediately
if (frame.empty())
break;
// Display the resulting frame
imshow( "Fr", test(frame ));
// Press ESC on keyboard to exit
char c = (char)waitKey(1);
if( c == 27 )
break;
}
cap.release();
and function test()
Mat test(Mat frame){
for(int i = 0; i < frame.rows; i++){
for(int j = 0; j < frame.cols; j++){
frame.at<uchar>(i,j) = 250;
}
}
return frame;
}
I expected that this function would return a transformed frame, totally white. But instead it returns a frame where only a third of the columns are white:
If I change the inner loop of the code above from
for(int j = 0; j < frame.cols; j++){
to
for(int j = 0; j < 2*frame.cols; j++){
I got this:
and if I change code like this:
for(int j = 0; j < 3*frame.cols; j++){
I eventually get what I expected: a fully white frame.
Why does it behave so strangely? I have a loop over all rows and columns and yet I don't get a totally white picture. If I use this function test() on an ordinary picture, it returns a white Mat. But with a frame from VideoCapture the result is different. Why?
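A likely explanation, assuming the capture decodes to 3-channel BGR frames (the usual case for VideoCapture): frame.at<uchar>(i,j) indexes single bytes, but each row of a CV_8UC3 frame holds 3*cols bytes, so writing cols bytes per row only covers the left third of the image, which is exactly why the 3*frame.cols bound whitens everything. The fix would be to index with frame.at<cv::Vec3b>(i,j), or to convert the frame to grayscale first. This minimal numpy sketch reproduces the arithmetic; the frame dimensions are arbitrary illustration values.

import numpy as np

rows, cols = 4, 6
frame = np.zeros((rows, cols, 3), dtype=np.uint8)  # CV_8UC3: 3 bytes per pixel

bytes_view = frame.reshape(rows, cols * 3)  # each row is really cols*3 bytes
bytes_view[:, :cols] = 250                  # what at<uchar>(i, j) actually writes
print((frame != 0).mean())                  # ~0.333: only a third of the bytes set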
|
{}
|
## Pearson System
Generalizes the differential equation for the Gaussian Distribution

$\frac{dy}{dx} = \frac{y(m-x)}{a}$ (1)

to

$\frac{dy}{dx} = \frac{y(m-x)}{a + bx + cx^2}.$ (2)
Let the two roots of $a + bx + cx^2 = 0$ be denoted $c_1$, $c_2$. Then the possible types of curves are
0. , . E.g., Normal Distribution.
I. , . E.g., Beta Distribution.
II. , , where .
III. , , where . E.g., Gamma Distribution. This case is intermediate to cases I and VI.
IV. , .
V. , where . Intermediate to cases IV and VI.
VI. , where is the larger root. E.g., Beta Prime Distribution.
VII. , , . E.g., Student's t-Distribution.
Classes IX-XII are discussed in Pearson (1916). See also Craig (in Kenney and Keeping 1951). If a Pearson curve possesses a Mode, it will be at $x = m$. Let $y(x)$ vanish at $x = a_1$ and $x = a_2$, where these may be $-\infty$ or $\infty$. If $x^n y$ also vanishes at $a_1$, $a_2$, then the $n$th and lower Moments exist.
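To make equation (2) concrete, here is a minimal Python sketch (using scipy) that integrates the Pearson differential equation numerically; the parameter values are arbitrary illustration choices, and the statement of (2) above is itself reconstructed from the standard form of the Pearson system rather than taken from this copy.

import numpy as np
from scipy.integrate import solve_ivp

# Pearson ODE: dy/dx = y * (m - x) / (a + b*x + c*x**2)
m, a, b, c = 0.0, 1.0, 0.0, 0.0   # b = c = 0 recovers the Gaussian case (type 0)

rhs = lambda x, y: y * (m - x) / (a + b * x + c * x ** 2)
sol = solve_ivp(rhs, (0.0, 3.0), [1.0], dense_output=True)

xs = np.linspace(0.0, 3.0, 4)
print(sol.sol(xs)[0])   # closely matches exp(-x**2 / 2), the unnormalized normal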
(3)
giving
(4)
(5)
also,
(6)
so
(7)
For ,
(8)
so
(9)
For ,
(10)
so
(11)
Now let . Then
(12) (13) (14)
Hence , and so
(15)
For ,
(16)
For ,
(17)
So the Skewness and Kurtosis are
(18) (19)
So the parameters , , and can be written
(20) (21) (22)
where
(23)
References
Craig, C. C. "A New Exposition and Chart for the Pearson System of Frequency Curves." Ann. Math. Stat. 7, 16-28, 1936.
Kenney, J. F. and Keeping, E. S. Mathematics of Statistics, Pt. 2, 2nd ed. Princeton, NJ: Van Nostrand, p. 107, 1951.
Pearson, K. "Second Supplement to a Memoir on Skew Variation." Phil. Trans. A 216, 429-457, 1916.
|
{}
|
MENDELSOHN TRIPLE SYSTEMS EXCLUDING CONTIGUOUS UNITS WITH λ = 1
Title & Authors
MENDELSOHN TRIPLE SYSTEMS EXCLUDING CONTIGUOUS UNITS WITH λ = 1
Cho, Chung-Je;
Abstract
We obtain a necessary and sufficient condition for the existence of Mendelsohn triple systems excluding contiguous units with $\lambda = 1$. Also, we obtain the spectrum for cyclic such systems.
Keywords
triple system sampling plan excluding contiguous units; directed (Mendelsohn) triple system; automorphism; (partial) triple system; Latin square; group divisible design
Language
English
References
1. I. Anderson, Combinatorial Designs: Construction Methods, Ellis Horwood, New York, Halsted Press, 1990
2. F. E. Bennett, Direct constructions for perfect 3-cyclic designs, Algebraic and geometric combinatorics, 63-68, North-Holland Math. Stud., 65, North-Holland, Amsterdam, 1982
3. C. J. Colbourn, Automorphisms of directed triple systems, Bull. Austral. Math. Soc. 43 (1991), no. 2, 257-264
4. C. J. Colbourn and M. J. Colbourn, Every twofold triple system can be directed, J. Combin. Theory Ser. A 34 (1983), no. 3, 375-378
5. C. J. Colbourn and J. J. Harms, Directing triple systems, Ars Combin. 15 (1983), 261-266
6. C. J. Colbourn and A. C. H. Ling, A class of partial triple systems with applications in survey sampling, Comm. Statist. Theory Methods 27 (1998), no. 4, 1009-1018
7. C. J. Colbourn and A. Rosa, Quadratic leaves of maximal partial triple systems, Graphs Combin. 2 (1986), no. 4, 317-337
8. H. Hanani, The existence and construction of balanced incomplete block designs, Ann. Math. Statist. 32 (1961), 361-386
9. J. J. Harms and C. J. Colbourn, An optimal algorithm for directing triple systems using Eulerian circuits, Cycles in graphs (Burnaby, B.C., 1982), 433-438, North-Holland Math. Stud., 115, North-Holland, Amsterdam, 1985
10. S. H. Y. Hung and N. S. Mendelsohn, Directed triple systems, J. Combinatorial Theory Ser. A 14 (1973), 310-318
11. N. S. Mendelsohn, A natural generalization of Steiner triple systems, Computers in number theory (Proc. Sci. Res. Council Atlas Sympos. No. 2, Oxford, 1969), pp. 323-338, Academic Press, London, 1971
12. J. Seberry and D. Skillicorn, All directed BIBDs with k = 3 exist, J. Combin. Theory Ser. A 29 (1980), no. 2, 244-248
13. D. T. Todorov, Three mutually orthogonal Latin squares of order 14, Ars Combin. 20 (1985), 45-47
14. W. D. Wallis, Three orthogonal Latin squares, Adv. in Math. (Beijing) 15 (1986), no. 3, 269-281
15. R. Wei, Cyclic BSEC of block size 3, Discrete Math. 250 (2002), no. 1-3, 291-298
|
{}
|
# Is a topology determined by its convergent sequences?
Just a basic point-set topology question: clearly we can detect differences in topologies using convergent sequences, but is there an example of two distinct topologies on the same set which have the same convergent sequences?
-
In a metric (or metrizable) space, the topology is entirely determined by convergence of sequences. This does not hold in an arbitrary topological space, and Mariano has given the canonical counterexample. This is the beginning of more penetrating theories of convergence given by nets and/or filters. For information on this, see e.g.
http://math.uga.edu/~pete/convergence.pdf
In particular, Section 2 is devoted to the topic of sequences in topological spaces and gives some information on when sequences are "topologically sufficient".
In particular a topology is determined by specifying which nets converge to which points. This came up as a previous MO question. It is not covered in the notes above, but is well treated in Kelley's General Topology.
-
I don't recall if Kelley treats this point, but one interesting subject to read a bit on in this context is that of sequential spaces. – Mariano Suárez-Alvarez Aug 22 '10 at 16:28
@Mariano: I believe Kelley does not, but I do: see Section 2.2. :) – Pete L. Clark Aug 22 '10 at 16:43
To be more precise: Kelley's text was written in 1955. The first significant work on sequential spaces (including identifying them by name) was done by S.P. Franklin in 1965. – Pete L. Clark Aug 22 '10 at 16:46
It MAY be treated in Willard's GENERAL TOPOLOGY,which has a pretty comprehensive treatment of generalized convergence. I'll check and get back to you guys.By the way-a basic but good question that a lot of people don't ask when learning the subject,Tony. – Andrew L Aug 22 '10 at 17:57
As far as I can see, Willard's treatment of sequential convergence is limited to some exercises in Section 10. He does not define or consider sequential spaces per se. – Pete L. Clark Aug 22 '10 at 21:12
The cocountable topology on an uncountable set is indistinguishable from the discrete topology if you can only use sequences.
-
Just another example. Consider the Banach space $\ell^{1}\left(\Gamma\right)$, $\Gamma$ being an infinite set. Then the weak topology and the norm topology have the same convergent sequences (Schur's theorem), while they are clearly distinct.
-
+1 for an example "in nature". – Allen Knutson Aug 22 '10 at 20:38
There is a category of "sequential spaces" in which objects are spaces defined by their convergent sequences and morphisms are those maps which send convergent sequences to convergent sequences.
As stated above, all metric spaces are sequential spaces, but so are all manifolds, all finite topological spaces, and all CW-complexes.
To build this category, one actually just needs to look at the category of right $M$-sets for a certain monoid $M$. Consider first the "convergent sequence space" $S:=\{\tfrac{1}{n} \mid n\in{\mathbb N}\cup\{\infty\}\}\subset {\mathbb R}$. In other words $S$ is a countable set of points converging to $0$, and including $0$. Let $M$ be the monoid of continuous maps $S\to S$ with composition. Then an $M$-set is a "set of convergent sequences" closed under taking subsequences.
The category of $M$-sets is a topos, so it has limits, colimits, function spaces, etc. And every $M$-set has a topological realization which is a sequential space.
-
I'm not sure I'm undertanding you properly. Are the objects in this category merely sequential topological spaces in the sense of Franklin -- i.e., topological spaces such that a set $S$ is closed iff every limit of a convergent sequence of elements in $S$ also lies in $S$ -- or are they something more abstract? – Pete L. Clark Aug 22 '10 at 21:51
One reason I ask is that -- unless I am mistaken -- if $f: X \rightarrow Y$ is a map between topological spaces and $X$ is sequential, then $f$ is continuous iff it preserves convergent sequences, so in particular a candidate definition for the category of sequential spaces would just be the full subcategory of sequential topological spaces and continuous maps. How does this compare to your category? – Pete L. Clark Aug 22 '10 at 21:54
I think it's better to look not at the category of all M-sets, but the subcategory of sheaves for the canonical Grothendieck topology on M. Those sheaves are called "subsequential spaces" and include the category of sequential topological spaces and continuous maps as a full reflective subcategory. Moreover, any sequentially-Hausdorff subsequential space is a sequential topological space. Cf ncatlab.org/nlab/show/subsequential%20space and P.T. Johnstone's article "On a topological topos". – Mike Shulman Aug 23 '10 at 0:13
Oh Mike, yes -- that's what I meant -- it's been a long time since I thought about this. – David Spivak Aug 23 '10 at 0:57
|
{}
|
Search by Topic
Resources tagged with Working systematically similar to Purr-fection:
Filter by: Content type:
Age range:
Challenge level:
There are 71 results
Broad Topics > Using, Applying and Reasoning about Mathematics > Working systematically
Purr-fection
Age 16 to 18 Challenge Level:
What is the smallest perfect square that ends with the four digits 9009?
Latin Squares
Age 11 to 18
A Latin square of order n is an array of n symbols in which each symbol occurs exactly once in each row and exactly once in each column.
LCM Sudoku
Age 14 to 16 Challenge Level:
Here is a Sudoku with a difference! Use information about lowest common multiples to help you solve it.
Star Product Sudoku
Age 11 to 16 Challenge Level:
The puzzle can be solved by finding the values of the unknown digits (all indicated by asterisks) in the squares of the $9\times9$ grid.
LCM Sudoku II
Age 11 to 18 Challenge Level:
You are given the Lowest Common Multiples of sets of digits. Find the digits and then solve the Sudoku.
Diagonal Product Sudoku
Age 11 to 16 Challenge Level:
Given the products of diagonally opposite cells - can you complete this Sudoku?
Age 11 to 16 Challenge Level:
The items in the shopping basket add and multiply to give the same amount. What could their prices be?
Intersection Sudoku 1
Age 11 to 16 Challenge Level:
A Sudoku with a twist.
Ratio Sudoku 2
Age 11 to 16 Challenge Level:
A Sudoku with clues as ratios.
Age 11 to 16 Challenge Level:
Four small numbers give the clue to the contents of the four surrounding cells.
Seasonal Twin Sudokus
Age 11 to 16 Challenge Level:
This pair of linked Sudokus matches letters with numbers and hides a seasonal greeting. Can you find it?
Twin Equivalent Sudoku
Age 16 to 18 Challenge Level:
This Sudoku problem consists of a pair of linked standard Suduko puzzles each with some starting digits
Integrated Product Sudoku
Age 11 to 16 Challenge Level:
This Sudoku puzzle can be solved with the help of small clue-numbers on the border lines between pairs of neighbouring squares of the grid.
Twin Line-swapping Sudoku
Age 14 to 16 Challenge Level:
A pair of Sudoku puzzles that together lead to a complete solution.
LOGO Challenge - Sequences and Pentagrams
Age 11 to 18 Challenge Level:
Explore this how this program produces the sequences it does. What are you controlling when you change the values of the variables?
Magnetic Personality
Age 7 to 16 Challenge Level:
60 pieces and a challenge. What can you make and how many of the pieces can you use creating skeleton polyhedra?
Integrated Sums Sudoku
Age 11 to 16 Challenge Level:
The puzzle can be solved with the help of small clue-numbers which are either placed on the border lines between selected pairs of neighbouring squares of the grid or placed after slash marks...
Difference Sudoku
Age 14 to 16 Challenge Level:
Use the differences to find the solution to this Sudoku.
Age 11 to 16 Challenge Level:
Four numbers on an intersection that need to be placed in the surrounding cells. That is all you need to know to solve this sudoku.
Difference Dynamics
Age 14 to 18 Challenge Level:
Take three whole numbers. The differences between them give you three new numbers. Find the differences between the new numbers and keep repeating this. What happens?
Rainstorm Sudoku
Age 14 to 16 Challenge Level:
Use the clues about the shaded areas to help solve this Sudoku.
Function Pyramids
Age 16 to 18 Challenge Level:
A function pyramid is a structure where each entry in the pyramid is determined by the two entries below it. Can you figure out how the pyramid is generated?
I've Submitted a Solution - What Next?
Age 5 to 18
In this article, the NRICH team describe the process of selecting solutions for publication on the site.
Olympic Logic
Age 11 to 16 Challenge Level:
Can you use your powers of logic and deduction to work out the missing information in these sporty situations?
Rectangle Outline Sudoku
Age 11 to 16 Challenge Level:
Each of the main diagonals of this Sudoku must contain the numbers 1 to 9, and each rectangle the numbers 1 to 4.
Twin Corresponding Sudoku III
Age 11 to 16 Challenge Level:
Two sudokus in one. Challenge yourself to make the necessary connections.
Wallpaper Sudoku
Age 11 to 16 Challenge Level:
A Sudoku that uses transformations as supporting clues.
Pole Star Sudoku 2
Age 11 to 16 Challenge Level:
This Sudoku is based on differences. Using the one clue number, can you find the solution?
Age 11 to 16 Challenge Level:
This is a variation of sudoku which contains a set of special clue-numbers. Each set of 4 small digits stands for the numbers in the four cells of the grid adjacent to this set.
Bochap Sudoku
Age 11 to 16 Challenge Level:
This Sudoku combines all four arithmetic operations.
Colour Islands Sudoku 2
Age 11 to 18 Challenge Level:
In this Sudoku, there are three coloured "islands" in the 9x9 grid. Within each "island" EVERY group of nine cells that form a 3x3 square must contain the numbers 1 through 9.
Twin Corresponding Sudokus II
Age 11 to 16 Challenge Level:
Two sudokus in one. Challenge yourself to make the necessary connections.
Magic Caterpillars
Age 14 to 18 Challenge Level:
Label the joints and legs of these graph theory caterpillars so that the vertex sums are all equal.
Diagonal Sums Sudoku
Age 7 to 16 Challenge Level:
Solve this Sudoku puzzle whose clues are in the form of sums of the numbers which should appear in diagonal opposite cells.
All-variables Sudoku
Age 11 to 18 Challenge Level:
The challenge is to find the values of the variables if you are to solve this Sudoku.
W Mates
Age 16 to 18 Challenge Level:
Show there are exactly 12 magic labellings of the Magic W using the numbers 1 to 9. Prove that for every labelling with a magic total T there is a corresponding labelling with a magic total 30-T.
One Out One Under
Age 14 to 16 Challenge Level:
Imagine a stack of numbered cards with one on top. Discard the top, put the next card to the bottom and repeat continuously. Can you predict the last card?
Ratio Sudoku 1
Age 11 to 16 Challenge Level:
A Sudoku with clues as ratios.
Intersection Sudoku 2
Age 11 to 16 Challenge Level:
A Sudoku with a twist.
Alphabetti Sudoku
Age 11 to 16 Challenge Level:
This Sudoku requires you to do some working backwards before working forwards.
Simultaneous Equations Sudoku
Age 11 to 16 Challenge Level:
Solve the equations to identify the clue numbers in this Sudoku problem.
The Naked Pair in Sudoku
Age 7 to 16
A particular technique for solving Sudoku puzzles, known as "naked pair", is explained in this easy-to-read article.
Twin Corresponding Sudoku
Age 11 to 18 Challenge Level:
This Sudoku requires you to have "double vision" - two Sudokus for the price of one.
Intersection Sums Sudoku
Age 7 to 16 Challenge Level:
A Sudoku with clues given as sums of entries.
Plum Tree
Age 14 to 18 Challenge Level:
Label this plum tree graph to make it totally magic!
Instant Insanity
Age 11 to 18 Challenge Level:
Given the nets of 4 cubes with the faces coloured in 4 colours, build a tower so that on each vertical wall no colour is repeated, that is all 4 colours appear.
Games Related to Nim
Age 5 to 16
This article for teachers describes several games, found on the site, all of which have a related structure that can be used to develop the skills of strategic planning.
Pole Star Sudoku
Age 14 to 18 Challenge Level:
A Sudoku based on clues that give the differences between adjacent cells.
Twin Chute-swapping Sudoku
Age 14 to 18 Challenge Level:
A pair of Sudokus with lots in common. In fact they are the same problem but rearranged. Can you find how they relate to solve them both?
LOGO Challenge - Pentagram Pylons
Age 11 to 18 Challenge Level:
Pentagram Pylons - can you elegantly recreate them? Or, the European flag in LOGO - what poses the greater problem?
|
{}
|
7th degree Diophantine Equation
Prove that the equation $$x^7 + 6x - 14y^7 = 3$$ has no solutions over $$\mathbb{Z}$$.
This problem is from a problem set in my Number Theory class. We discussed linear diophantine equations, how to know when solution exists and how to find all solutions after you find one solution. This 7th degree diophantine equation though is waaay above my head and I'm not sure how to even start proving the lack of solutions. Hints are appreciated. Thanks.
$$x^7+6x-14y^7\equiv x^7-x\equiv0\pmod7$$ by Fermat's little theorem
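Spelling the hint out (a worked version, using $6\equiv-1\pmod7$ and $14\equiv0\pmod7$ together with $x^7\equiv x\pmod7$ from Fermat's little theorem): $$x^7+6x-14y^7\equiv x-x-0\equiv0\pmod7,$$ whereas $3\not\equiv0\pmod7$, so no integers $x,y$ can satisfy the equation.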
• +1 - Your answer came in just $7$ seconds ahead of mine! This is somewhat ironic considering the solutiong involves using modulo $7$. – John Omielan Oct 12 '19 at 4:38
• @VictorS. $x^p \equiv x \pmod p$ for all $x$ and primes $p$. Thus, $x^7 \equiv x \pmod 7$. – John Omielan Oct 12 '19 at 4:49
• @VictorS. $$6\equiv-1\pmod7, 14\equiv0$$ – lab bhattacharjee Oct 12 '19 at 15:02
|
{}
|
# The approx concentration of $H_2O_2$ perhydrol is
(a) 30 gm/litre (b) 300 gm/100 ml (c) 10 gm/litre (d) 300 gm/litre
The approximate concentration of $H_2O_2$ in perhydrol is 300 gm/litre.
Hence (d) is the correct answer.
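As a quick unit check (assuming the usual definition of perhydrol as a 30% w/v solution of $H_2O_2$): $$30\%\;\text{(w/v)}=\frac{30\;\text{g}}{100\;\text{ml}}=\frac{300\;\text{g}}{1000\;\text{ml}}=300\;\text{g/litre}$$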
|
{}
|
# 1015. Reversible Primes (20)
A reversible prime in any number system is a prime whose “reverse” in that number system is also a prime. For example in the decimal system 73 is a reversible prime because its reverse 37 is also a prime.
Now given any two positive integers N (< 10^5) and D (1 < D <= 10), you are supposed to tell if N is a reversible prime with radix D.
Input Specification:
The input file consists of several test cases. Each case occupies a line which contains two integers N and D. The input is finished by a negative N.
Output Specification:
For each test case, print in one line “Yes” if N is a reversible prime with radix D, or “No” if not.
Sample Input:
73 10
23 2
23 10
-2
Sample Output:
Yes
Yes
No
#include <bits/stdc++.h>
using namespace std;
// Trial-division primality test.
bool isprime(int n){
    if(n<2)
        return false;
    for(int i=2;i*i<=n;++i){
        if(n%i==0)
            return false;
    }
    return true;
}
// Reverse the digits of num written in base d and
// return the value of the reversed representation.
int reversedigits(int num,int d){
    int m=0;
    while(num){
        m=m*d+num%d;
        num/=d;
    }
    return m;
}
int main(){
    int n,d;
    while(cin>>n&&n>=0){
        cin>>d;
        if(isprime(n)&&isprime(reversedigits(n,d)))
            puts("Yes");
        else
            puts("No");
    }
    return 0;
}
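A quick trace of the sample cases: 23 in base 2 is 10111, whose digit reversal 11101 has value 29; both 23 and 29 are prime, so the second case prints Yes. In base 10 the reversal of 23 is 32, which is not prime, so the third case prints No.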
|
{}
|
Discovery of a Candidate Binary Supermassive Black Hole in a Periodic Quasar from Circumbinary Accretion Variability
Abstract: Binary supermassive black holes (BSBHs) are expected to be a generic byproduct of hierarchical galaxy formation. The final coalescence of BSBHs is thought to be the loudest gravitational wave (GW) siren, yet no confirmed BSBH is known in the GW-dominated regime. While periodic quasars have been proposed as BSBH candidates, the physical origin of the periodicity has been largely uncertain. Here we report the discovery of a periodicity (P = 1607±7 days) at 99.95% significance (with a global p-value of ∼10⁻³ accounting for the look-elsewhere effect) in the optical light curves of a redshift 1.53 quasar, SDSS J025214.67−002813.7. Combining archival Sloan Digital Sky Survey data with new, sensitive imaging from the Dark Energy Survey, the total ∼20-yr time baseline spans ∼4.6 cycles of the observed 4.4-yr (rest-frame 1.7-yr) periodicity. The light curves are best fit by a bursty model predicted by hydrodynamic simulations of circumbinary accretion disks. The periodicity is likely caused by accretion rate modulation by a milli-parsec BSBH emitting GWs, dynamically coupled to the circumbinary accretion disk. A bursty hydrodynamic variability model is statistically preferred over a smooth, sinusoidal model expected from relativistic Doppler boost, a kinematic effect proposed for PG1302−102. Furthermore, the frequency dependence of the variability amplitudes disfavors Doppler […]
NSF-PAR ID: 10225457
Journal Name: Monthly Notices of the Royal Astronomical Society
ISSN: 0035-8711
Abstract: Periodically variable quasars have been suggested as close binary supermassive black holes. We present a systematic search for periodic light curves in 625 spectroscopically confirmed quasars with a median redshift of 1.8 in a 4.6 deg² overlapping region of the Dark Energy Survey Supernova (DES-SN) fields and the Sloan Digital Sky Survey Stripe 82 (SDSS-S82). Our sample has a unique 20-year long multi-color (griz) light curve enabled by combining DES-SN Y6 observations with archival SDSS-S82 data. The deep imaging allows us to search for periodic light curves in less luminous quasars (down to r ∼ 23.5 mag) powered by less massive black holes (with masses $\gtrsim 10^{8.5}\,M_\odot$) at high redshift for the first time. We find five candidates with significant (at >99.74% single-frequency significance in at least two bands with a global p-value of ∼7 × 10⁻⁴–3 × 10⁻³ accounting for the look-elsewhere effect) periodicity with observed periods of ∼3–5 years (i.e., 1–2 years in rest frame) having ∼4–6 cycles spanned by the observations. If all five candidates are periodically variable quasars, this translates into a detection rate of ${\sim }0.8^{+0.5}_{-0.3}$% or ${\sim }1.1^{+0.7}_{-0.5}$ quasar per deg². Our detection rate is 4–80 times larger than those found by previous searches using shallower surveys over larger […]
|
{}
|
## $file kills setup script

It is possible to change important and used variables inside the setup model while installing. I implemented a data-setup script today. It starts with defining a variable:

$file = 'var/import/sample.csv';
# The error
When Magento was called, I got this error:
[message:protected] => Warning: Illegal string offset 'toVersion' in app/code/core/Mage/Core/Model/Resource/Setup.php on line 641
[string:Exception:private] =>
[code:protected] => 0
[file:protected] => app/code/core/Mage/Core/functions.php
[line:protected] => 245
[trace:Exception:private] => Array
[...]
I opened the Setup model and found this code snippet:
foreach ($files as $file) {
    [...]
    try {
        switch ($fileType) {
            case 'php':
                $conn = $this->getConnection();
                $result = include $fileName;
                break;
            [...]
        }
        if ($result) {
            $this->_setResourceVersion($actionType, $file['toVersion']);
        }
    } catch (Exception $e) {
        [...]
    }
    [...]
}
# Conclusion
Be careful with defining variables in setup scripts. I wanted to write a "bugfix", but even after thinking about the problem for 10 minutes I didn't come up with a solution. There is no way in PHP to prevent this variable from being overwritten.
HAH! There is. I thought about changing the scope but didn't find a simple solution - but just implementing a method should do it!
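A minimal sketch of that idea (hypothetical class and method names, not the actual Mage_Core_Model_Resource_Setup code): wrapping the include in its own method gives the setup script a fresh local scope, so a $file defined inside the script can no longer clobber the $file loop variable.

<?php
// Hypothetical sketch only - not the real Magento core implementation.
class SetupScopeSketch
{
    // The included setup script executes inside this method's local scope,
    // so any $file it defines stays local to this call.
    protected function _includeFile($fileName)
    {
        return include $fileName;
    }

    public function runScripts(array $files)
    {
        foreach ($files as $file) {
            // $file (an assumed array with 'fileName' and 'toVersion' keys)
            // now survives the include untouched.
            $result = $this->_includeFile($file['fileName']);
            if ($result) {
                // e.g. $this->_setResourceVersion($actionType, $file['toVersion']);
            }
        }
    }
}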
Blog writing brings solutions. Thanks audience for help.
Hopefully you all know what rubber ducking is?
|
{}
|
# Why is the graph of induced emf against angle a sine graph for an ac generator?
I realise that similar questions have been asked on this site; however, those questions yielded answers which were not helpful. According to Faraday's law, the induced emf in a conductor is directly proportional to the rate of change of magnetic flux. In the image below we see a direct-current motor armature, but let's just think of it as an ac generator, since it is the rotation of the armature that concerns me and not the configuration of the device.
Moving from right to left, we see that in the first picture the coil is perpendicular to the flux, indicating maximum flux; let's take this to be our initial position, so the angle on our graph would be zero degrees. The induced emf is zero since the magnetic flux has not changed with respect to time. As the armature moves to the second position, the flux has changed from maximum to minimum, indicating a large change in flux and hence a large induced emf. Now as the coil moves from being parallel to the field back to being perpendicular again, the flux should change from a minimum to a maximum, indicating a large induced emf; however, the sine graph of induced emf against angle for an alternating current generator moves back down to zero. Why is this so? Please note I have no knowledge of angular momentum or torque.
The magnitude of the flux $\int \vec B\cdot d\vec A$ is maximum in figures A and C, but 0 in figures B and D, because $\vec B$ is parallel to the vector normal to the surface in A and C but perpendicular to it in B and D.
The flux $\Phi(t)$ is $$\Phi(t)=\int_A \vec B\cdot d\vec A= A B \cos\omega t$$ since $\vert \vec B\vert$ is constant and the angle between $d\vec A$ and $\vec B$ changes as $\cos\omega t$.
Since the EMF ${\cal E}$ is $-d\Phi(t)/dt$ one immediately gets $${\cal E}=A B \omega \sin\omega t\, .$$ Note that the EMF is zero precisely where the flux is at an extremum: what matters is the rate of change of the flux, which vanishes at the maxima and minima, not the value of the flux itself. That is why the graph returns to zero each time the coil is perpendicular to the field again.
|
{}
|
## Calculus (3rd Edition)
We are given the function $f(x)= \cos x^2$. For any $x\in R$ we have $x^2\in R$, and both $x^2$ and $\cos x$ are continuous, so by Theorem 3 the composite $f(x)= \cos x^2$ is a continuous function.
|
{}
|
References of "Scientific journals" in Complete repository Arts & humanities Archaeology Art & art history Classical & oriental studies History Languages & linguistics Literature Performing arts Philosophy & ethics Religion & theology Multidisciplinary, general & others Business & economic sciences Accounting & auditing Production, distribution & supply chain management Finance General management & organizational theory Human resources management Management information systems Marketing Strategy & innovation Quantitative methods in economics & management General economics & history of economic thought International economics Macroeconomics & monetary economics Microeconomics Economic systems & public economics Social economics Special economic topics (health, labor, transportation…) Multidisciplinary, general & others Engineering, computing & technology Aerospace & aeronautics engineering Architecture Chemical engineering Civil engineering Computer science Electrical & electronics engineering Energy Geological, petroleum & mining engineering Materials science & engineering Mechanical engineering Multidisciplinary, general & others Human health sciences Alternative medicine Anesthesia & intensive care Cardiovascular & respiratory systems Dentistry & oral medicine Dermatology Endocrinology, metabolism & nutrition Forensic medicine Gastroenterology & hepatology General & internal medicine Geriatrics Hematology Immunology & infectious disease Laboratory medicine & medical technology Neurology Oncology Ophthalmology Orthopedics, rehabilitation & sports medicine Otolaryngology Pediatrics Pharmacy, pharmacology & toxicology Psychiatry Public health, health care sciences & services Radiology, nuclear medicine & imaging Reproductive medicine (gynecology, andrology, obstetrics) Rheumatology Surgery Urology & nephrology Multidisciplinary, general & others Law, criminology & political science Civil law Criminal law & procedure Criminology Economic & commercial law European & international law Judicial law Metalaw, Roman law, history of law & comparative law Political science, public administration & international relations Public law Social law Tax law Multidisciplinary, general & others Life sciences Agriculture & agronomy Anatomy (cytology, histology, embryology...) & physiology Animal production & animal husbandry Aquatic sciences & oceanology Biochemistry, biophysics & molecular biology Biotechnology Entomology & pest control Environmental sciences & ecology Food science Genetics & genetic processes Microbiology Phytobiology (plant sciences, forestry, mycology...) 
Veterinary medicine & animal health Zoology Multidisciplinary, general & others Physical, chemical, mathematical & earth Sciences Chemistry Earth sciences & physical geography Mathematics Physics Space science, astronomy & astrophysics Multidisciplinary, general & others Social & behavioral sciences, psychology Animal psychology, ethology & psychobiology Anthropology Communication & mass media Education & instruction Human geography & demography Library & information sciences Neurosciences & behavior Regional & inter-regional studies Social work & social policy Sociology & social sciences Social, industrial & organizational psychology Theoretical & cognitive psychology Treatment & clinical psychology Multidisciplinary, general & others Showing results 1001 to 1100 of 56333 6 7 8 9 10 11 12 13 14 15 16 Qualifications professionnelles : l’obligation de reconnaître les qualifications à leur juste valeur dans les professions différemment réglementées d’un État membre à l’autreDemoulin, Iris in Revue Trimestrielle de Droit Européen (2016), 1(Janvier-mars), 200-202Detailed reference viewed: 8 (2 ULg) Xenobiotic and Immune-Relevant Molecular Biomarkers in Harbor Seals as Proxies for Pollutant Burden and Effects?Lehnert, Kristina; Ronnenberg, Katrin; Weijs, Liesbeth et alin Archives of Environmental Contamination & Toxicology (2016), 70(1), 106-120Harbor seals are exposed to increasing pressure caused by anthropogenic activities in their marine environment. Persistent organic pollutants (POPs) and trace elements are hazardous contaminants which ... [more ▼]Harbor seals are exposed to increasing pressure caused by anthropogenic activities in their marine environment. Persistent organic pollutants (POPs) and trace elements are hazardous contaminants which accumulate in tissues of harbor seals. POPs and trace elements can negatively affect the immune-system and have been reported e.g. to increase susceptibility to viral infections in seals. Biomarkers of the xenobiotic metabolism, cytokines and heat-shock-protein as cell mediators of the immune-system were established to evaluate the impact of environmental stressors on harbor seals. Harbor seals (n=54) were captured on sandbanks in the North Sea during 2009 to 2012. Medicals including hematology were performed, RNAlater blood samples were taken and analyzed using quantitative Polymerase Chain Reaction. Normalized transcript copy numbers were correlated to hematology and POP concentration in blood, and trace metals in blood and fur. [less ▲]Detailed reference viewed: 43 (16 ULg) L’infraction d’obstacle à la surveillance sous le spectre du droit au silence et du droit de ne pas s’auto-incriminerMichiels, Olivier ; Falque, Géraldine in Orientations : la Revue du Droit Social et de la Gestion du Personnel (2016), (2006/1), 2-11L'article fait le point sur le contrôle de l’application des lois sociales par les inspecteurs sociaux et le délit d’obstacle à la surveillance envisagé sous l'angle du droit au silence et du droit de ne ... [more ▼]L'article fait le point sur le contrôle de l’application des lois sociales par les inspecteurs sociaux et le délit d’obstacle à la surveillance envisagé sous l'angle du droit au silence et du droit de ne pas s'auto-incriminer. Il envisage encore la cohabitation entre le délit d'obstacle et le droit au silence sous l’angle de la jurisprudence de la Cour européenne des droits de l’homme et du droit interne. 
[less ▲]Detailed reference viewed: 60 (8 ULg) Populatieonderzoek naar de Hazelmuis in de VoerstreekVerbeylen, Goedele; Mouton, Alice ; Driessens, Gerald et alin De Levende Natuur (2016), (numero 1), The Hazel dormouse (Muscardinus avellanarius) is a habitat specialist that prefers dense shrub and tree vegetation and needs a high diversity of food plants on a limited area. Due to habitat degradation ... [more ▼]The Hazel dormouse (Muscardinus avellanarius) is a habitat specialist that prefers dense shrub and tree vegetation and needs a high diversity of food plants on a limited area. Due to habitat degradation and fragmentation the Flemish distribution area of this critically endangered species has been reduced to the eastern part of the municipality of Voeren, where it forms a cross-border population with the Dutch Hazel dormice. Since 2003 the Mammal Working Group of Natuurpunt studies the remaining Flemish population to find out more about population parameters, habitat use, the influence of habitat quality and management actions. In 2007 a standardised monitoring started by counting autumnal nests along fixed transects. In 2013 an intensive capture-mark-recapture study was set up based on nest box and nest tube checks and live-trapping on custom made hanging platforms; also a first test with radiocollars took place. The information gathered from the combination of all these methods will be used to validate the monitoring method and to formulate better protection measures. First results show that late summer and autumn should not be considered as the main reproduction period, at least not in an early year like 2014 when first young were already born at the beginning of May. Hazel dormice do not only cross significant barriers like the railway during dispersal, but also do this regularly during their nightly movements within their home range. Expansion of the population on a location with sufficient connectivity seems to be hampered by a too low population density resulting from insufficient habitat quality and (in this case) a high predation pressure by house cats. On the scale of the Meuse-Rhine Euregion, genetic analyses carried out by the University of Liège reveal four genetically isolated clusters, for which a vision to interconnect these was computed in the Interreg-project ‘Habitat Euregio’. [less ▲]Detailed reference viewed: 26 (0 ULg) Statistical analysis and multi-instrument overview of the quasi-periodic 1-hour pulsations in Saturn's outer magnetospherePalmaerts, Benjamin ; Roussos, E.; Krupp, N. et alin Icarus (2016), 271Detailed reference viewed: 17 (6 ULg) archiDART: an R package for the automated computation of plant root architectural traitsDelory, Benjamin ; Baudson, Caroline ; Brostaux, Yves et alin Plant and Soil (2016), 398(1), 351-365Background and Aims In order to analyse root system architectures (RSAs) from captured images, a variety of manual (e.g. Data Analysis of Root Tracings, DART), semi-automated and fully automated software ... [more ▼]Background and Aims In order to analyse root system architectures (RSAs) from captured images, a variety of manual (e.g. Data Analysis of Root Tracings, DART), semi-automated and fully automated software packages have been developed. These tools offer complementary approaches to study RSAs and the use of the Root System Markup Language (RSML) to store RSA data makes the comparison of measurements obtained with different (semi-) automated root imaging platforms easier. 
The throughput of the data analysis process using exported RSA data, however, should benefit greatly from batch analysis in a generic data analysis environment (R software). Methods We developed an R package (archiDART) with five functions. It computes global RSA traits, root growth rates, root growth directions and trajectories, and lateral root distribution from DART-generated and/or RSML files. It also has specific plotting functions designed to visualise the dynamics of root system growth. Results The results demonstrated the ability of the package’s functions to compute relevant traits for three contrasted RSAs (Brachypodium distachyon [L.] P. Beauv., Hevea brasiliensis Müll. Arg. and Solanum lycopersicum L.). Conclusions This work extends the DART software package and other image analysis tools supporting the RSML format, enabling users to easily calculate a number of RSA traits in a generic data analysis environment. [less ▲]Detailed reference viewed: 99 (19 ULg) Small-Angle Neutron Scattering investigation of cholesterol-doped DMPC liposomes interacting with β-cyclodextrinJoset, Arnaud ; Grammenos, Angeliki; Hoebeke, Maryse et alin Journal of Inclusion Phenomena and Macrocyclic Chemistry (2016), 84(1), 153-161The Small-Angle Neutron Scattering technique (SANS) has been applied to characterize the influence of a randomly methylated β–cyclodextrin (CD), called RAMEB, on dimyristoylphosphatidylcholine (DMPC ... [more ▼]The Small-Angle Neutron Scattering technique (SANS) has been applied to characterize the influence of a randomly methylated β–cyclodextrin (CD), called RAMEB, on dimyristoylphosphatidylcholine (DMPC) liposomes doped with cholesterol. From the modelling of the experimental neutron scattering crosssections, the detailed response of the vesicle structure upon addition of increasing amounts of RAMEB up to 30 mM has been assessed. This study has been performed below and above the DMPC bilayer phase transition temperature and shows that cholesterol extraction by RAMEB is linked to a decrease of the average radius and of the aggregation number of the vesicles. This extraction takes place in a dose-dependent way until a more monodisperse population of cholesterol-free liposomes was obtained. In addition, the bilayer thickness evolution was inferred, as well as the liposome coverage by RAMEB. [less ▲]Detailed reference viewed: 25 (5 ULg) Varicella paediatric hospitalisations in Belgium: a 1-year national survey.Blumental, Sophie; Sabbe, Martine; Lepage, Philippe et alin Archives of disease in childhood (2016), 101(1), 16-22BACKGROUND: Varicella universal vaccination (UV) has been implemented in many countries for several years. Nevertheless, varicella UV remains debated in Europe and few data are available on the real ... [more ▼]BACKGROUND: Varicella universal vaccination (UV) has been implemented in many countries for several years. Nevertheless, varicella UV remains debated in Europe and few data are available on the real burden of infection. We assessed the burden of varicella in Belgium through analysis of hospitalised cases during a 1-year period. METHODS: Data on children admitted to hospital with varicella were collected through a national network from November 2011 to October 2012. Inclusion criteria were either acute varicella or related complications up to 3 weeks after the rash. RESULTS: Participation of 101 hospitals was obtained, covering 97.7% of the total paediatric beds in Belgium. 552 children were included with a median age of 2.1 years. 
Incidence of paediatric varicella hospitalisations reached 29.5/10(5) person-years, with the highest impact among those 0-4 years old (global incidence and odds of hospitalisation: 79/10(5) person-years and 1.6/100 varicella cases, respectively). Only 14% (79/552) of the cohort had an underlying chronic condition. 65% (357/552) of children had >/=1 complication justifying their admission, 49% were bacterial superinfections and 10% neurological disorders. Only a quarter of children (141/552) received acyclovir. Incidence of complicated hospitalised cases was 19/10(5) person-years. Paediatric intensive care unit admission and surgery were required in 4% and 3% of hospitalised cases, respectively. Mortality among Belgian paediatric population was 0.5/10(6) and fatality ratio 0.2% among our cohort. CONCLUSIONS: Varicella demonstrated a substantial burden of disease in Belgian children, especially among the youngest. Our thorough nationwide study, run in a country without varicella UV, offers data to support varicella UV in Belgium. [less ▲]Detailed reference viewed: 11 (3 ULg) The Relationship between the Profile of Manager and Management Accounting Practices in Tunisian SMIsGhorbel, Jihene in International Journal of Academic Research in Accounting, Finance and Management Sciences (2016), 6(1), 61-72In today’s changing environment, SMEs must strengthen their competitiveness and credibility with all their external partners. So to survive and prosper, managers need adequate management practices in ... [more ▼]In today’s changing environment, SMEs must strengthen their competitiveness and credibility with all their external partners. So to survive and prosper, managers need adequate management practices in order to provide relevant accounting information. This paper examines the effect of profile of manager on management accounting practices which was defined in terms of traditional management accounting, modern management accounting and management accounting practices related to export. As to the profile of manager, it is defined by age, type of training and experience. To validate the hypotheses, a multiple linear regression method is used. The data is collected by questionnaire from 221 Tunisian manufacturing SMEs. The findings indicate that the profile of manager affect in part the use of management accounting practices. [less ▲]Detailed reference viewed: 32 (4 ULg) L'intentionnalité cognitive et ses modes. Reinach critique de BrentanoDewalque, Arnaud in Philosophie (2016), 128Dans cet article, je suggère que la théorie reinachienne du jugement peut être vue comme une contribution à une théorie générale de l’intentionnalité cognitive et de ses modes. À cette fin, je focaliserai ... [more ▼]Dans cet article, je suggère que la théorie reinachienne du jugement peut être vue comme une contribution à une théorie générale de l’intentionnalité cognitive et de ses modes. À cette fin, je focaliserai mon attention sur certaines divergences significatives entre Reinach et Brentano. Après quelques réflexions introductives (§§ 1-2), j’examinerai plus exactement les questions suivantes : quels sont les motifs théoriques qui ont conduit Reinach à s’écarter de la théorie brentanienne du jugement (§§ 3-4) ? Et que retenir de ses analyses aujourd’hui (§§ 5-6) ? 
[less ▲]Detailed reference viewed: 101 (3 ULg) Characterizing the morphology of suburban settlements: a method based on a semi-automatic classification of building clustersde Smet, Fabian; Teller, Jacques in Landscape Research (2016), 41(1), 113-130Urban sprawl is transforming our landscapes and rural areas at a spectacular pace. Measuring the strength of the phenomenon and proposing dynamic ways to delineate suburban areas have been the object of ... [more ▼]Urban sprawl is transforming our landscapes and rural areas at a spectacular pace. Measuring the strength of the phenomenon and proposing dynamic ways to delineate suburban areas have been the object of much debate amongst scientists. The present article takes the view that, beyond measuring and delineating suburban areas, more efforts should be directed to qualifying the morphology of built settlements within these areas. Therefore it proposes a method based on a semi-automatic classification system of building clusters, designed to describe and interpret the phenomenon from a morphological perspective. This method is based on a combination of field surveys with numerical analyses of digital land cadastre maps. The application of this classification system to the suburban area around Liege reveals that, far from developing in a complete indifference of local conditions, contemporary suburban settlements are influenced by landscape structures inherited from the past. [less ▲]Detailed reference viewed: 118 (15 ULg) The effects of a documentary film about schizophrenia on cognitive, affective and behavioural aspects of stigmatisationThonon, Bénédicte ; Pletinx, Amandine; Grandjean, Allison et alin Journal of Behavior Therapy and Experimental Psychiatry (2016), 50Background and Objectives: Stereotypes about schizophrenia may lead to prejudicial attitudes and discrimination with debilitating effects on people diagnosed with schizophrenia. There is thus a need to ... [more ▼]Background and Objectives: Stereotypes about schizophrenia may lead to prejudicial attitudes and discrimination with debilitating effects on people diagnosed with schizophrenia. There is thus a need to develop interventions aiming to prevent, reduce or eliminate such stereotypes. The aim of this study was to evaluate the effects of a documentary film on schizophrenia on cognitive, affective and behavioural aspects of stigmatisation. Methods: Forty-nine participants were assessed on measures of stereotypes and social distance, and on the Model of Stereotype Content, which includes measures of stereotypes, emotional reactions and behavioural tendencies. Participants were randomly assigned into either a condition in which they viewed the documentary film (Film group), or into a control condition in which no intervention was conducted (Control group). Results: Only participants in the Film group revealed a significant decrease of negative stereotypes (Dangerousness and Unpredictability) and desired Social distance, and a significant increase in the perception of sociability in persons with schizophrenia. Limitations: Small sample size and its reduced generalizability are the main limitations in this study. Conclusions: These findings suggest that a documentary film promoting indirect contact with people diagnosed with schizophrenia is a promising tool to prevent and reduce stigmatisation regarding schizophrenia. 
[less ▲]Detailed reference viewed: 17 (0 ULg) Cost-effectiveness of personalized supplementation with vitamin D-rich dairy products in the prevention of osteoporotic fracturesEthgen, Olivier ; Hiligsmann, Mickaël; Burlet, Nansa et alin Osteoporosis International (2016), 27Summary: Titrated supplementations with vitamin D-fortified yogurt, based on spontaneous calcium and vitamin D intakes, can be cost-effective in postmenopausal women with or without increased risk of ... [more ▼]Summary: Titrated supplementations with vitamin D-fortified yogurt, based on spontaneous calcium and vitamin D intakes, can be cost-effective in postmenopausal women with or without increased risk of osteoporotic fractures. Introduction: The objective of this study is to assess the costeffectiveness of the vitamin D-fortified yogurt given to women with and without an increased risk of osteoporotic fracture. Methods: Avalidated cost-effectiveness microsimulation Markov model of osteoporosis management was used. Three personalized supplementation scenarios to reflect the Ca/Vit D needs taking into account the well-known variations in dietary habits and a possible pharmacological supplementation in Ca/ Vit D, given above or in combination with anti-osteoporosis medications: one yogurt per day, i.e., 400 mg of Ca+200 IU of Vit D (scenario 1 U), two yogurts per day, i.e., 800 mg of Ca+ 400 IU of Vit D (scenario 2 U), or three yogurts per day, i.e., 1, 200 mg of Ca+600 IU of Vit D (scenario 3 U). Results: One yogurt is cost-effective in the general population above the age of 70 years and in all age groups in women with low bone mineral density (BMD) or prevalent vertebral fracture (PVF). The daily intake of two yogurts is cost-effective above 80 years in the general population and above 70 years in the two groups of women at increased risk of fractures. However, an intake of three yogurts per day is only cost-effective above 80 years old in the general population, as well as in women with low BMD or PVF. Conclusions: Our study is the first economic analysis supporting the cost-effectiveness of dairy products, fortified with vitamin D, in the armamentarium against osteoporotic fractures. [less ▲]Detailed reference viewed: 19 (7 ULg) Pre-employement examination for low back risk in workers exposesd to manual handling of loads : French guidelinesPETIT, Audrey; ROUSSEAU, Sandrine; HUEZ, Jean-François et alin International Archives of Occupational and Environmental Health (2016), 89Detailed reference viewed: 37 (1 ULg) Electrical resistivity tomography to monitor enhanced biodegradation of hydrocarbons with Rhodococcus erythropolis T902.1 at a pilot scaleMasy, Thibaut ; Caterina, David; Tromme, Oliver et alin Journal of Contaminant Hydrology (2016), 184Petroleum hydrocarbons (HC) represent the most widespread contaminants and in-situ bioremediation remains a competitive treatment in terms of cost and environmental concerns. However, the efficiency of ... [more ▼]Petroleum hydrocarbons (HC) represent the most widespread contaminants and in-situ bioremediation remains a competitive treatment in terms of cost and environmental concerns. However, the efficiency of such a technique (by biostimulation or bioaugmentation) strongly depends on the environment affected and is still difficult to predict a priori. In order to overcome these uncertainties, Electrical Resistivity Tomography (ERT) appears as a valuable non-invasive tool to detect soil heterogeneities and to monitor biodegradation. 
The main objective of this study was to isolate an electrical signal linked to an enhanced bacterial activity with ERT, in an aged HC-contaminated clayey loam soil. To achieve this, a pilot tank was built to mimic field conditions. Compared to a first insufficient biostimulation phase, bioaugmentation with Rhodococcus erythropolis T902.1 led to a HC depletion of almost 80% (6900 to 1600 ppm) in 3 months in the center of the contaminated zone, where pollutants were less bioavailable. In the meantime, lithological heterogeneities and microbial activities (growth and biosurfactant production) were successively discriminated by ERT images. In the future, this cost-effective technique should be more and more transferred to the field in order to monitor biodegradation processes and assist in selecting the most appropriate remediation technique. [less ▲]Detailed reference viewed: 100 (33 ULg) Does the Budyko curve reflect a maximum power state of hydrological systems? A backward analysisWesthoff, Martijn ; Zehe, Erwin; Archambeau, Pierre et alin Hydrology and Earth System Sciences (2016), 20Almost all catchments plot within a small envelope around the Budyko curve. This apparent behaviour suggests that organizing principles may play a role in the evolution of catchments. In this paper we ... [more ▼]Almost all catchments plot within a small envelope around the Budyko curve. This apparent behaviour suggests that organizing principles may play a role in the evolution of catchments. In this paper we applied the thermodynamic principle of maximum power as the organizing principle. In a top-down approach we derived mathematical formulations of the relation between relative wetness and gradients driving runoff and evaporation for a simple one-box model. We did this in an inverse manner such that when the conductances are optimized with the maximum power principle, the steady state behaviour of the model leads exactly to a point on the asymptotes of the Budyko curve. Subsequently, we added dynamics in forcing and actual evaporations, causing the Budyko curve to deviate from the asymptotes. Despite the simplicity of the model, catchment observations compare reasonably well with the Budyko curves subject to observed dynamics in rainfall and actual evaporation. Thus by constraining the model with the asymptotes of the Budyko curve we were able to derive more realistic values of the aridity and evaporation index without any calibration parameter. Future work should focus on better representing the boundary conditions of real catchments and eventually adding more complexity to the model. [less ▲]Detailed reference viewed: 52 (14 ULg) Nos différences, une richesseRADERMECKER, Régis in Revue de l'Association Belge du Diabète (2016), 59Detailed reference viewed: 6 (1 ULg) Dynamical thermalization in Bose-Hubbard systemsSchlagheck, Peter ; Shepelyansky, Dima L.in Physical Review. E : Statistical, Nonlinear, and Soft Matter Physics (2016), 93We numerically study a Bose-Hubbard ring of finite size with disorder containing a finite number of bosons that are subject to an on-site two-body interaction. Our results show that moderate interactions ... [more ▼]We numerically study a Bose-Hubbard ring of finite size with disorder containing a finite number of bosons that are subject to an on-site two-body interaction. Our results show that moderate interactions induce dynamical thermalization in this isolated system. 
In this regime the individual many-body eigenstates are well described by the standard thermal Bose-Einstein distribution for well-defined values of the temperature and the chemical potential, which depend on the eigenstate under consideration. We show that the dynamical thermalization conjecture works well at both positive and negative temperatures. The relations to quantum chaos, quantum ergodicity, and the Åberg criterion are also discussed. [less ▲]Detailed reference viewed: 13 (1 ULg) Re-visiting plant plasma membrane lipids in tobacco: a focus on sphingolipidsCacas, Jean Luc; Buré, Corinne; Grosjean, Kevin et alin Plant Physiology (2016), 170Detailed reference viewed: 24 (2 ULg) FLOR-ID: an interactive database of flowering-time gene networks in Arabidopsis thalianaBouché, Frédéric ; Lobet, Guillaume ; Tocquin, Pierre et alin Nucleic Acids Research (2016), 44(Database), 11671171Flowering is a hot topic in Plant Biology and important progress has been made in Arabidopsis thaliana toward unravelling the genetic networks involved. The increasing complexity and the explosion of ... [more ▼]Flowering is a hot topic in Plant Biology and important progress has been made in Arabidopsis thaliana toward unravelling the genetic networks involved. The increasing complexity and the explosion of literature however require development of new tools for information management and update. We therefore created an evolutive and interactive database of flowering time genes, named FLOR-ID (Flowering-Interactive Database), that is available freely at http://www.flor-id.org. The hand-curated database contains information on 306 genes and links to 1595 publications gathering the work of more than 4500 authors. Gene function and interactions within the flowering pathways were inferred from the analysis of related publications, included in the database and translated into interactive manually drawn snapshots. [less ▲]Detailed reference viewed: 219 (55 ULg) Projecting alternative urban growth patterns: The development and application of a remote sensing assisted calibration framework for the Greater Dublin AreaVan de Voorde, Tim; van der Kwast, Johannes; Poelmans, Lien et alin Ecological Indicators (2016), 60Land use change models are powerful tools that allow planners and policy makers to assess the long-term spatial and environmental impacts of their decisions. In order for these models to produce a ... [more ▼]Land use change models are powerful tools that allow planners and policy makers to assess the long-term spatial and environmental impacts of their decisions. In order for these models to produce a realistic output, they should be properly calibrated. This is usually achieved by comparing simulated land-use maps of dates in the past to reference land-use maps of a corresponding date. As land-use data are often not readily or frequently available, we propose a two-stage calibration framework that includes existing land-use maps as well as remote sensing derived maps of the urban extent. Urban growth patterns for the Dublin area represented by remote sensing based maps were compared to simulated growth using spatial metrics in order to fine-tune the calibration of the MOLAND urban growth model of Dublin. We then used the calibrated model to forecast future urban growth according to four urban planning scenarios that have been defined for the Strategic Environmental Assessment of the Greater Dublin Area. 
We examined a selection of spatial metrics in order to determine their sensitivity to differences in spatial patterns between simulated and remote sensing derived data. We also investigated whether these metrics are useful to characterise future changes in the urban spatial structure that ensue from the planning scenarios. We found that with the exception of some metrics that strongly respond to differences in the amount of urban land, most metrics showed similar trends for simulated and remote sensing derived maps. Most metrics were also able to distinguish the growth patterns induced by the different spatial planning scenarios. The “business as usual scenario” in particular showed a clearly distinct trend compared to the other scenarios. We could also conclude that the urban growth pattern of Dublin as observed from both the remote sensing derived maps and the simulated maps of future land use seems to confirm the theory of alternating phases of diffusive growth and coalescence. [less ▲]Detailed reference viewed: 40 (3 ULg) 2D dynamic studies combined with the surface curvature analysis to predict Arias Intensity amplificationTorgoev, Almazbek ; Havenith, Hans-Balder in Journal of Seismology (2016)A 2D elasto-dynamic modelling of the pure topographic seismic response is performed for six models with a total length of around 23.0 km. These models are reconstructed from the real topographic settings ... [more ▼]A 2D elasto-dynamic modelling of the pure topographic seismic response is performed for six models with a total length of around 23.0 km. These models are reconstructed from the real topographic settings of the landslide-prone slopes situated in the Mailuu-Suu River Valley, Southern Kyrgyzstan. The main studied parameter is the Arias Intensity (Ia, m/sec), which is applied in the GIS-based Newmark method to regionally map the seismically-induced landslide susceptibility. This method maps the Ia values via empirical attenuation laws and our studies investigate a potential to include topographic input into them. Numerical studies analyse several signals with varying shape and changing central frequency values. All tests demonstrate that the spectral amplification patterns directly affect the amplification of the Ia values. These results let to link the 2D distribution of the topographically amplified Ia values with the parameter called as smoothed curvature. The amplification values for the low-frequency signals are better correlated with the curvature smoothed over larger spatial extent, while those values for the high-frequency signals are more linked to the curvature with smaller smoothing extent. The best predictions are provided by the curvature smoothed over the extent calculated according to Geli’s law. The sample equations predicting the Ia amplification based on the smoothed curvature are presented for the sinusoid-shape input signals. These laws cannot be directly implemented in the regional Newmark method, as 3D amplification of the Ia values addresses more problem complexities which are not studied here. Nevertheless, our 2D results prepare the theoretical framework which can potentially be applied to the 3D domain and, therefore, represent a robust basis for these future research targets. [less ▲]Detailed reference viewed: 14 (1 ULg) Comment je traite... 
la sténose aortique asymptomatiqueMEURICE, Caroline ; DULGHERU, Raluca Elena ; PIERARD, Luc in Revue Médicale de Liège (2016), (71), 6-10Detailed reference viewed: 34 (10 ULg) Provenance of Ichthyosaura alpestris (Caudata: Salamandridae) introductions to France and New Zealand assessed by mitochondrial DNA analysisArntzen, Jan W.; King, Tania M.; Denoël, Mathieu et alin The Herpetological Journal (2016), 26(1), 49-56The last century has seen an unparalleled movement of species around the planet as a direct result of human activity, which has been a major contributor to the biodiversity crisis. Amphibians represent a ... [more ▼]The last century has seen an unparalleled movement of species around the planet as a direct result of human activity, which has been a major contributor to the biodiversity crisis. Amphibians represent a particularly vulnerable group, exacerbated by the devastating effects of chytrid fungi. We report the malicious translocation and establishment of the alpine newt (Ichthyosaura alpestris) to its virtual antipode in North Island of New Zealand. We use network analysis of mitochondrial DNA haplotypes to identify the original source population as I. a. apuana from Tuscany, Italy. Additionally, a population in southern France, presumed to be introduced, is identified as I. a. alpestris from western Europe. However, the presence of two differentiated haplotypes suggests a mixed origin. This type of analysis is made possible by the recent availability of a phylogenetic analysis of the species throughout its natural range. We discuss the particulars of both introductions. [less ▲]Detailed reference viewed: 401 (25 ULg) Impact of three phtalate esters on the sexual reproduction of the Monogont rotifer, Brachionus calyciflorusCruciani, Valentina ; Iovine, Catherine; Thomé, Jean-Pierre et alin Ecotoxicology (2016), 25(1), 192-200Phthalate esters are widespread contaminants that can cause endocrine disruption in vertebrates. Studies showed that molecules with hormonal activities in vertebrates and invertebrates can affect asexual ... [more ▼]Phthalate esters are widespread contaminants that can cause endocrine disruption in vertebrates. Studies showed that molecules with hormonal activities in vertebrates and invertebrates can affect asexual and sexual reproduction in rotifers. We investigated the impact of di-hexylethyl phthalate (DEHP), di-butyl phthalate (DBP) and butylbenzyl phthalate (BBP), on the asexual and sexual reproduction of the freshwater monogonont rotifer Brachionus calyciflorus in order to determine a potential environmental risk for sexual reproduction. We observed that DEHP has no significant impact on both asexual and sexual reproduction up to 2 mg/L. DBP has a positive effect on asexual reproduction at concentrations from 0.05 to 1mg/L, but depresses it at 2 mg/L. Sexual reproduction is only affected with 2mg/L and the impact observed is negative. BBP displayed a negative impact on both asexual and sexual reproduction at 1 and 2 mg/L. However we showed that the impacts of BBP on fertilization rates observed are due to the decrease in population growth rates at these concentrations and are not due to a direct impact of BBP on the fertilization process. Similar conclusions are likely applicable to the impacts on mixis and production of cysts as well. Our results show that sexual reproduction in B. 
calyciflorus is not more sensitive than asexual reproduction to any of the substances tested which indicates the mode of action of these molecules is related to general toxicity, not to an interference with potential endocrine regulation of sexual reproduction. Comparison of effect concentrations and surface water contamination by phthalate esters suggests these compounds do not constitute a risk for these environments. [less ▲]Detailed reference viewed: 13 (3 ULg) Compte-rendu de : GÉRARD E., DE RIDDER W., MULLER F., Qui a tué Julien Lahaut ? Les ombres de la Guerre Froide en Belgique, Bruxelles, La Renaissance du Livre, 2015, 352 p.Genin, Vincent in Bulletin d'Information de l'Association Belge d'Histoire Contemporaine = Mededelingenblad van de Belgische Vereniging voor Nieuwste Geschiedenis (2016), XXXVII(2015/4), 21-24Detailed reference viewed: 33 (2 ULg) Performance characteristics of the VIDAS 25-OH Vitamin D total assay - comparison with four immunoassays and two liquid chromatography-tandem mass spectrometry methods in a multicentric studyMoreau, E; Bächer, S; Mery, S et alin Clinical Chemistry & Laboratory Medicine (2016)Background: The study was conducted to evaluate the analytical and clinical performance of the VIDAS® 25-OH Vitamin D Total assay. The clinical performance of the assay was compared with four other ... [more ▼]Background: The study was conducted to evaluate the analytical and clinical performance of the VIDAS® 25-OH Vitamin D Total assay. The clinical performance of the assay was compared with four other immunoassays against the results of two different liquid chromatography/ mass spectrometry methods (LC-MS/MS) standardized to NIST reference materials. Methods: VIDAS® 25-OH Vitamin D Total assay precision, linearity, detection limits and sample matrix comparison were assessed following CLSI guidelines. For method comparison, a total of 150 serum samples ranging from 7 to 92 ng/mL were analyzed by all the methods. Correlation was studied using Passing-Bablok regression and Bland- Altman analysis. The concordance correlation coefficient (CCC) was calculated to evaluate agreement between immunoassays and the reference LC-MS/MS method. In addition, samples containing endogenous 25(OH)D2 were used to assess each immunoassay’s ability to detect this analyte. Pregnancy and hemodialysis samples were used to the study the effect of vitamin D binding protein (DBP) concentration over VIDAS® assay performance. Results: The VIDAS® 25-OH Vitamin D Total assay showed excellent correlation to the LC-MS/MS results (y = 1.01x+0.22 ng/mL, r = 0.93), as obtained from two different sites and distinct LC-MS/MS methods. The limit of quantification was determined at 8.1 ng/mL. Cross-reactivity for 25(OH)D2 was over 80%. At concentrations of 10.5, 26 and 65.1 ng/mL, within-run CVs were 7.9%, 3.6% and 1.7%, while total CVs (between runs, calibrations, lots and instruments) were 16.0%, 4.5% and 2.8%. The VIDAS® performance was not influenced by altered DBP levels, though under-recovery of 25(OH)D as compared to LC-MS/ MS was observed for hemodialysis samples. Conclusions: The VIDAS® 25-OH Vitamin D Total assay is therefore considered suitable for assessment of vitamin D status in clinical routine. 
Keywords: assay performance; liquid chromatography/ [less ▲]Detailed reference viewed: 34 (13 ULg) Breakdown of Anderson localization in the transport of Bose-Einstein condensates through one-dimensional disordered potentialsDujardin, Julien ; Engl, Thomas; Schlagheck, Peter in Physical Review A (2016), 93We study the transport of an interacting Bose–Einstein condensate through a 1D correlated disorder potential. We use for this purpose the truncated Wigner method, which is, as we show, corresponding to ... [more ▼]We study the transport of an interacting Bose–Einstein condensate through a 1D correlated disorder potential. We use for this purpose the truncated Wigner method, which is, as we show, corresponding to the diagonal approximation of a semiclassical van Vleck–Gutzwiller representation of this many-body transport process. We also argue that semiclassical corrections beyond this diagonal approximation are vanishing under disorder average, thus confirming the validity of the truncated Wigner method in this context. Numerical calculations show that, while for weak atom-atom interaction strengths Anderson localization is preserved with a slight modification of the localization length, for larger interaction strengths a crossover to a delocalized regime exists due to inelastic scattering. In this case, the transport is fully incoherent. [less ▲]Detailed reference viewed: 27 (6 ULg) Apprendre à faire semblant. A propos de "Furyo" de Nagisa OshimaTomasovic, Dick in Mondes du cinéma (2016), 08Une analyse des préceptes qui gouvernent la mise en scène du film "Furyo" d’Oshima, en insistant particulièrement sur l'analyse de la bande sonore et la présence à l'écran de David Bowie et Ryuichi ... [more ▼]Une analyse des préceptes qui gouvernent la mise en scène du film "Furyo" d’Oshima, en insistant particulièrement sur l'analyse de la bande sonore et la présence à l'écran de David Bowie et Ryuichi Sakamoto. [less ▲]Detailed reference viewed: 12 (1 ULg) Effects of seed traits variation on seedling performance of the invasive weed, Ambrosia artemisiifolia L.Ortmans, William ; Mahy, Grégory ; Monty, Arnaud in Acta Oecologica: International Journal of Ecology (2016), 71Seedling performance can determine the survival of a juvenile plant and impact adult plant performance. Understanding the factors that may impact seedling performance is thus critical, especially for ... [more ▼]Seedling performance can determine the survival of a juvenile plant and impact adult plant performance. Understanding the factors that may impact seedling performance is thus critical, especially for annuals, opportunists or invasive plant species. Seedling performance can vary among mothers or populations in response to environmental conditions or under the influence of seed traits. However, very few studies have investigated seed traits variations and their consequences on seedling performance. Specifically, the following questions have been addressed by this work: 1) How the seed traits of the invasive Ambrosia artemisiifolia L. vary among mothers and populations, as well as along the latitude; 2) How do seed traits influence seedling performance; 3) Is the influence on seedlings temperature dependent. With seeds from nine Western Europe ruderal populations, seed traits that can influence seedling development were measured. The seeds were sown into growth chambers with warmer or colder temperature treatments. During seedling growth, performance-related traits were measured. 
A high variability in seed traits was highlighted. Variation was determined by the mother identity and the population, but not by latitude. Together, the temperature, the population and the identity of the mother had an effect on seedling performance. Seed traits had a relative impact on seedling performance, but this did not appear to be temperature dependent. Seedling performance exhibited a strong plastic response to temperature, was shaped by the identity of the mother and the population, and was influenced by a number of seed traits.

Automatic artifacts and arousals detection in whole-night sleep EEG recordings
Coppieters't Wallant, Dorothée; Muto, Vincenzo; Gaggioni, Giulia et al., in Journal of Neuroscience Methods (2016), 258.
In sleep electroencephalographic (EEG) signals, artifact and arousal marking is usually part of the processing. This visual inspection by a human expert has two main drawbacks: it is very time consuming and subjective. To detect artifacts and arousals in a reliable, systematic and reproducible automatic way, we developed an automatic detection based on time and frequency analysis with adapted thresholds derived from the data themselves. The automatic detection performance is assessed using 5 statistical parameters, on 60 whole-night sleep recordings coming from 35 healthy volunteers (male and female) aged between 19 and 26. The proposed approach proves its robustness against inter- and intra-subject and inter- and intra-rater scoring variability. The agreement with human raters is rated overall from substantial to excellent, and the method is significantly more reliable than agreement between human raters. Existing methods detect only specific artifacts or only arousals, and/or are validated on short episodes of sleep recordings, making comparison with our whole-night results difficult. The method works on a whole-night recording and is fully automatic, reproducible, and reliable. Furthermore, the implementation of the method will be made available online as open source code.
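As a concrete, purely illustrative reading of "adapted thresholds derived from the data themselves", the following minimal Python sketch flags epochs whose amplitude exceeds a robust, recording-specific limit; the RMS feature, the constant k and the epoch length are hypothetical choices of ours, not the authors' algorithm:

    import numpy as np

    def flag_artifact_epochs(eeg, fs, epoch_s=2.0, k=4.0):
        """Flag epochs whose RMS amplitude exceeds a data-derived threshold
        (median + k * scaled MAD), i.e. a limit adapted to the recording itself."""
        n = int(epoch_s * fs)                      # samples per epoch
        n_epochs = len(eeg) // n
        epochs = eeg[: n_epochs * n].reshape(n_epochs, n)
        rms = np.sqrt((epochs ** 2).mean(axis=1))  # simple time-domain feature
        med = np.median(rms)
        mad = np.median(np.abs(rms - med))         # robust spread estimate
        return rms > med + k * 1.4826 * mad        # boolean mask of flagged epochs

    # Synthetic check: 60 s of noise at 256 Hz with one movement-like burst
    rng = np.random.default_rng(0)
    signal = rng.normal(0.0, 10.0, 60 * 256)
    signal[256 * 30 : 256 * 32] += 200.0
    print(np.where(flag_artifact_epochs(signal, fs=256))[0])   # -> [15]

A full detector of the kind the paper describes would combine several such features, including spectral ones, and handle arousals separately.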
Le cas tunisien
Nachi, Mohamed, in Revue Nouvelle (2016), 71(1), 61-68.
A discussion highlighting the importance of the role of compromise in the Tunisian democratic transition.

Shaping Pulses to Control Bistable Systems: Analysis, Computation and Counterexamples
Sootla, Aivar; Oyarzun, Diego; Angeli, David et al., in Automatica (2016), 63.

Quasi-periodic injections of relativistic electrons in Saturn's outer magnetosphere
Roussos, E.; Krupp, N.; Mitchell, D. G. et al., in Icarus (2016), 263.

What Is the Role of Minimally Invasive Mitral Valve Surgery in High-Risk Patients? A Meta-Analysis of Observational Studies.
Moscarelli, Marco; Fattouch, Khalil; Casula, Roberto et al., in The Annals of Thoracic Surgery (2016).
BACKGROUND: Minimally invasive valve surgery is related to certain better postoperative outcomes. We aimed to assess the role of minimally invasive mitral valve surgery in high-risk patients. METHODS: A systematic literature review identified eight studies, of which seven fulfilled the criteria for meta-analysis. Outcomes for a total of 1,254 patients (731 conventional standard sternotomy and 523 minimally invasive mitral valve surgery) were submitted to meta-analysis using random effects modeling. Heterogeneity and subgroup analysis with quality scoring were assessed. The primary end point was early mortality. Secondary end points were intraoperative and postoperative outcomes and long-term follow-up. RESULTS: Minimally invasive mitral valve surgery conferred comparable early mortality to standard sternotomy (p = 0.19); it was also associated with a lower number of units of blood transfused (weighted mean difference, -1.93; 95% confidence interval [CI], -3.04 to -0.82; p = 0.0006) and a lower atrial fibrillation rate (odds ratio, 0.49; 95% CI, 0.32 to 0.74; p = 0.0007); however, cardiopulmonary bypass time was longer (weighted mean difference, 20.88; 95% CI, -1.90 to 43.65; p = 0.07). There was no difference in terms of valve repair rate (odds ratio, 1.51; 95% CI, 0.89 to 2.54; p = 0.12), and the incidence of stroke was significantly lower in the high-quality analysis, with no heterogeneity (odds ratio, 0.35; 95% CI, 0.15 to 0.82; p = 0.02; chi², 1.67; I², 0%; p = 0.43). CONCLUSIONS: Minimally invasive mitral valve surgery is a safe and comparable alternative to standard sternotomy in patients at high risk, with similar early mortality and repair rate and better postoperative outcomes, although a longer cardiopulmonary bypass time is required.

Combinaison fixe atorvastatine-ezetimibe (Atozet®).
Scheen, André, in Revue Médicale de Liège (2016), 71(1), 47-52.
Cardiovascular prevention in subjects at high or very high risk requires a drastic reduction in LDL cholesterol according to the concept "the lower, the better". The combination of an inhibitor of cholesterol synthesis and a selective inhibitor of intestinal absorption results in a complementary and synergistic LDL-lowering activity. Besides a first fixed combination, ezetimibe-simvastatin (Inegy®), a new fixed combination is presented, Atozet®, which combines atorvastatin and ezetimibe. Because atorvastatin is more potent than simvastatin, this novel fixed combination should facilitate reaching therapeutic goals in terms of LDL cholesterol among patients with severe hypercholesterolaemia and/or at high or very high cardiovascular risk.

Examen de jurisprudence (2010-2013) - Les sociétés commerciales (première partie)
Caprasse, Olivier; Dieux, Xavier; Lambrecht, Philippe, in Revue Critique de Jurisprudence Belge [= RCJB] (2016).

Flexible estimation in cure survival models using Bayesian P-splines
Bremhorst, Vincent; Lambert, Philippe, in Computational Statistics & Data Analysis (2016), 93.
In the analysis of survival data, it is usually assumed that any unit will experience the event of interest if it is observed for a sufficiently long time. However, it can be explicitly assumed that an unknown proportion of the population under study will never experience the monitored event. The promotion time model, which has a biological motivation, is one of the survival models taking this feature into account. The promotion time model assumes that the failure time of each subject is generated by the minimum of N independent latent event times with a common distribution independent of N. An extension which allows the covariates to influence simultaneously the probability of being cured and the latent distribution is presented. The latent distribution is estimated using a flexible Cox proportional hazard model where the logarithm of the baseline hazard function is specified using Bayesian P-splines. Introducing covariates in the latent distribution implies that the population hazard function might not have a proportional hazard structure. However, the use of P-splines provides a smooth estimation of the population hazard ratio over time. The identification issues of the model are discussed and a restricted use of the model when the follow-up of the study is not sufficiently long is proposed. The accuracy of our methodology is evaluated through a simulation study and the model is illustrated on data from a melanoma clinical trial.
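For context, in its usual Poisson formulation (an assumption on our part; the abstract does not spell it out), the promotion time model described above yields the population survival function

\[
S_{\mathrm{pop}}(t \mid x) \;=\; \exp\!\bigl\{-\theta(x)\,F(t)\bigr\},
\qquad \theta(x) = \exp\bigl(x^{\top}\beta\bigr),
\]

so that the cure probability is \(\lim_{t\to\infty} S_{\mathrm{pop}}(t \mid x) = \exp\{-\theta(x)\}\). The extension presented in the paper additionally lets covariates enter the latent distribution \(F\), whose log baseline hazard is modelled with Bayesian P-splines.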
A review of the characteristics of Small-Leaved Lime (Tilia cordata Mill.) and their implications for silviculture in a changing climate
De Jaegere, Tanguy; Hein, Sébastien; Claessens, Hugues, in Forests (2016), 7(3), 56.
Tilia cordata Mill. is a minor European broadleaved species with a wide but scattered distribution. Given its scarcity and low value in the wood market, it has received little attention from researchers and forest managers. This review summarizes the main aspects of T. cordata ecology and growth. Its main limiting factor is its need for warm summer temperatures to ensure successful seed production. It has a height growth pattern relatively similar to that of Acer pseudoplatanus L., with a slight delay in the early stages. Yield tables report great productivity, especially in eastern Europe. T. cordata used to be a major species in Europe, in contrast to its present distribution, but it is very likely to receive renewed interest in the future. Indeed, with the potential change of competition between species in some regions and the need for important diversification in others, T. cordata may play an important role in forest adaptation to climate change, especially owing to its wide ecological tolerance and its numerous ecosystem services. It is necessary to increase our knowledge about its regeneration and its responses to environmental and silvicultural factors, to establish clear management recommendations.

Large-Eddy Simulations of microcarrier exposure to potentially damaging eddies inside mini-bioreactors
Collignon, Marie-Laure; Delafosse, Angélique; Calvo, Sébastien et al., in Biochemical Engineering Journal (2016).
Mechanically stirred vessels equipped with rotating impellers generate heterogeneous transitional or turbulent flows. However, some cells, such as animal cells or human mesenchymal stem cells (hMSC) adhering to microcarriers, are known to be sensitive to hydromechanical stresses arising from stirring. Many publications, especially those using Computational Fluid Dynamics (CFD), characterize spatial fields of velocity and turbulence inside bioreactors, but the frequency of exposure to these stresses is never taken into account in descriptions of animal cell culture bioreactors. To fill this gap, this study used both CFD Reynolds-Averaged and Large-Eddy Simulations to characterize the hydrodynamics inside 250 mL mini-bioreactors, which is a relevant volume for hMSC cultures. Five impeller geometries were studied. From the calculated velocity and turbulence fields, an energy dissipation/circulation function, related to both the frequency and the intensity of potentially damaging hydrodynamic events for the cells, was determined for various operating conditions. Based on the simulation results, the marine propeller operating in up-pumping mode seems to be the most adapted condition, since it exhibits a low frequency of exposure to an acceptable intensity of the turbulent dissipation rate. From a general point of view, the new methodology proposed should be used in the future to screen the bioreactor geometry most adapted to biological constraints.

Stakeholders' perceptions, attitudes and practices towards risk prevention in the food chain
Lupo, C.; Wilmart, O.; Van Huffel, X. et al., in Food Control (2016), 66.

Transatlantic editorial: A comparison between European and North American guidelines on myocardial revascularization.
Kolh, Philippe; Kurlansky, Paul; Cremer, Jochen et al., in The Journal of Thoracic and Cardiovascular Surgery (2016).

Prognostic relevance of epilepsy at presentation in glioblastoma patients.
Berendsen, Sharon; Varkila, Meri; Kroonen, Jerome et al., in Neuro-oncology (2016).
BACKGROUND: Epileptogenic glioblastomas are thought to convey a favorable prognosis, either due to early diagnosis or potential antitumor effects of antiepileptic drugs. We investigated the relationship between survival and epilepsy at presentation, early diagnosis, and antiepileptic drug therapy in glioblastoma patients. METHODS: Multivariable Cox regression was applied to survival data of 647 consecutive patients diagnosed with de novo glioblastoma between 2005 and 2013 in order to investigate the association between epilepsy and survival in glioblastoma patients. In addition, we quantified the association between survival and valproic acid (VPA) treatment. RESULTS: Epilepsy correlated positively with survival (HR: 0.75 (95% CI: 0.61-0.92), P < .01). This effect is independent of age, sex, performance status, type of surgery, adjuvant therapy, tumor location, and tumor volume, suggesting that this positive correlation cannot be attributed solely to early diagnosis. For patients who presented with epilepsy, the use of the antiepileptic drug VPA did not associate with survival when compared with patients who did not receive VPA treatment.
CONCLUSION: Epilepsy is an independent prognostic factor for longer survival in glioblastoma patients. This prognostic effect is not solely explained by early diagnosis, and survival is not associated with VPA treatment.

Review of Christophe CHANDEZON, Julien DU BOUCHET (eds.), Artémidore de Daldis et l'interprétation des rêves. Quatorze études
Berg, Tatiana, in Antiquité Classique : Revue Interuniversitaire d'Etudes Classiques (2016), 85.

A multi-scale magnetotail reconnection event at Saturn and associated flows: Cassini/UVIS observations
Radioti, Aikaterini; Grodent, Denis; Jia, X. et al., in Icarus (2016).

Les catégories au seuil de la phénoménologie
Gauvry, Charlotte; Fagniez, Guillaume, in Les Études Philosophiques (2016).
The turn of the twentieth century was marked by a return of the problem of categories to the forefront of philosophy, whether with Franz Brentano and his students, with Wilhelm Dilthey, with the Neo-Kantians of the Baden school, or with the first pioneers of phenomenology. So much so that a certain philosophy (that of Emil Lask above all) could present itself as a doctrine of categories in its own right. The aim of this issue is to examine the historical place and the conceptual role that these formal questions played in the emergence of phenomenology. Kant had admittedly revived the Aristotelian question of categories in order to postulate the existence of "pure concepts of the understanding", or "categories", alone capable of "providing unity to the various representations in a judgment and also of giving unity to the mere synthesis of various representations in an intuition" (KRV, §10). Does the proto-phenomenological interest in categories present itself, however, as a return, beyond German idealism, to this Kantian transcendental problematic, or rather as a return, beyond Kant himself, to the Aristotelian question? If the philosophers of the early twentieth century took up the question of categories again, it seems that they at the very least reformulated, if not abandoned, the method of "deduction". It is this reworking of the transcendental deduction that the issue also intends to clarify. Can these categories still be said to be "deduced" from the logical forms of judgment, from logical objects, or from the very movement of life? If a "genetic" process of the category is to be brought to light, what are the implications of such a shift for the validity of the categorial? More generally, the very status of these "categories" will be analyzed. Once their domain of application has been redefined, can they be determined as pure transcendental concepts capable of unifying sensible experience? Should the categories not rather be rethought as logical, semantic or even ontological forms? It is thus the strategic stakes of the resurgence of this categorial problematic, and its direct repercussions on the emergence of nascent (Husserlian and Heideggerian) phenomenology, that this issue will examine.
Deletion of Murid Herpesvirus 4 ORF63 Affects the Trafficking of Incoming Capsids toward the Nucleus.
Latif, Muhammad Bilal; Machiels, Bénédicte; Xiao, Xue et al., in Journal of Virology (2016), 90(5), 2455-72.
Gammaherpesviruses are important human and animal pathogens. Despite the fact that they display the classical architecture of herpesviruses, the function of most of their structural proteins is still poorly defined. This is especially true for tegument proteins. Interestingly, a potential role in immune evasion has recently been proposed for the tegument protein encoded by Kaposi's sarcoma-associated herpesvirus open reading frame 63 (ORF63). To gain insight into the roles of ORF63 in the life cycle of a gammaherpesvirus, we generated null mutations in the ORF63 gene of murid herpesvirus 4 (MuHV-4). We showed that disruption of ORF63 was associated with a severe MuHV-4 growth deficit both in vitro and in vivo. The latter deficit was mainly associated with a defect of replication in the lung but did not affect the establishment of latency in the spleen. From a functional point of view, inhibition of caspase-1 or the inflammasome did not restore the growth of the ORF63-deficient mutant, suggesting that the observed deficit was not associated with the immune evasion mechanism identified previously. Moreover, this growth deficit was also not associated with a defect in virion egress from the infected cells. In contrast, it appeared that MuHV-4 ORF63-deficient mutants failed to address most of their capsids to the nucleus during entry into the host cell, suggesting that ORF63 plays a role in capsid movement. In the future, ORF63 could therefore be considered a target to block gammaherpesvirus infection at a very early stage of the infection. IMPORTANCE: The important diseases caused by gammaherpesviruses in human and animal populations justify a better understanding of their life cycle. In particular, the role of most of their tegument proteins is still largely unknown. In this study, we used murid herpesvirus 4, a gammaherpesvirus infecting mice, to decipher the role of the protein encoded by the viral ORF63 gene. We showed that the absence of this protein is associated with a severe growth deficit both in vitro and in vivo that was mainly due to impaired migration of viral capsids toward the nucleus during entry. Together, our results provide new insights about the life cycle of gammaherpesviruses and could allow the development of new antiviral strategies aimed at blocking gammaherpesvirus infection at the very early stages.

The Robust Economic Statistical Design of the Hotelling's T² Chart
Faraz, Alireza; Chalaki, Kamyar; Saniga, Erwin et al., in Communications in Statistics: Theory & Methods (2016).
Economic statistical designs aim at minimizing the cost of process monitoring when a specific scenario or a set of estimated process and cost parameters is given.
However, in practical situations the process may be affected by more than one scenario, which may lead to severe cost penalties when the true scenario is misjudged. This paper presents the robust economic statistical design (RESD) of the T² chart to reduce these monetary losses when there are multiple distinct scenarios. The genetic algorithm optimization method is employed to minimize the total expected monitoring cost across all distinct scenarios. The proposed method is illustrated through two numerical examples. Simulation studies indicate that robust economic statistical designs should be encouraged in practice.
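For reference, the chart in question monitors the classical Hotelling statistic (standard definition, assuming a known in-control mean \(\mu_0\) and covariance \(\Sigma\) for \(p\) quality characteristics and samples of size \(n\)):

\[
T^2 \;=\; n\,(\bar{x} - \mu_0)^{\top}\,\Sigma^{-1}\,(\bar{x} - \mu_0),
\]

with a signal raised whenever \(T^2\) exceeds a control limit \(h\). The sample size \(n\), the sampling interval and the limit \(h\) are precisely the design parameters that an economic statistical design tunes, here simultaneously over several cost scenarios.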
Clinical effects of computed tomography-guided lumbosacral facet joint, transforaminal epidural, and translaminar epidural injections of methylprednisolone acetate in healthy dogs.
Liotta, Annalisa Pia; Girod, Maud; Peeters, Dominique et al., in American Journal of Veterinary Research (2016), 77(10), 1132-9.
OBJECTIVE To determine clinical effects of CT-guided lumbosacral facet joint, transforaminal epidural, and translaminar epidural injections of methylprednisolone acetate in healthy dogs. ANIMALS 15 healthy Beagles. PROCEDURES Dogs were randomly assigned to 3 groups (5 dogs/group) and received a single CT-guided lumbosacral facet joint, transforaminal epidural, or translaminar epidural injection of methylprednisolone acetate (0.1 mg/kg). Contrast medium was injected prior to injection of methylprednisolone to verify needle placement. Neurologic examinations were performed 1, 3, 7, and 10 days after the injection. In dogs with neurologic abnormalities, a final neurologic examination was performed 24 days after the procedure. RESULTS Methylprednisolone injections were successfully performed in 14 of the 15 dogs. In 1 dog, vascular puncture occurred, and the methylprednisolone injection was not performed. No major or minor complications were identified during or immediately after the procedure, other than mild transient hyperthermia. During follow-up neurologic examinations, no motor, sensory, or postural deficits were identified, other than mild alterations in the patellar, withdrawal, cranial tibial, and perineal reflexes in some dogs. Overall, altered reflexes were observed in 11 of the 14 dogs, during 27 of 65 neurologic examinations. CONCLUSIONS AND CLINICAL RELEVANCE Results suggested that CT-guided lumbosacral facet joint, transforaminal epidural, and translaminar epidural injections of methylprednisolone acetate were associated with few complications in healthy dogs. However, the number of dogs evaluated was small, and additional studies are needed to assess the clinical efficacy and safety of these procedures.

Des situations-limites au dépassement de la situation : phénoménologie d'un concept sartrien
Cormann, Grégory; Englebert, Jérôme, in Sartre Studies International (2016), 22(1), 99-116.
In this article, we explore the concept of situation, both before and beyond its thematization by Sartre, starting from L'imaginaire, in the Carnets de la drôle de guerre and then in L'être et le néant. Our approach rests on a twofold movement: on the one hand, an archaeology of the concept in the context of the first breakthrough of phenomenology in France in the 1930s; on the other, an attention to its possible overcoming two decades later. In close relation with the developments of the psychopathology of his time, the aim is first to situate the claim of the Esquisse d'une théorie des émotions to lay the foundations of a phenomenology of "man in situation", and then to shed light on the critical reworking of the concept in Questions de méthode, in the exposition of the progressive-regressive method. In short, the aim is to show how a "hermeneutics of existence" is progressively elaborated in Sartre's work, one that does justice to freedom and then equips itself to follow its effective action in the world.

Is the Nociception Coma Scale-Revised a Useful Clinical Tool for Managing Pain in Patients with Disorders of Consciousness?
Chatelle, Camille; De Val, Marie Daniele; Catano, Antonio et al., in The Clinical Journal of Pain (2016).
OBJECTIVES: Our objective was to assess the clinical interest of the Nociception Coma Scale-Revised (NCS-R) in the pain management of patients with disorders of consciousness. METHODS: Thirty-nine patients with potentially painful conditions (e.g., due to fractures, decubitus ulcers or spasticity) were assessed during nursing care before and after the administration of an analgesic treatment tailored to each patient's clinical status. In addition to the NCS-R, the Glasgow Coma Scale (GCS) was used before and during treatment in order to observe fluctuations in consciousness. Twenty-three of them had no analgesic treatment prior to the assessment, whereas the analgesic treatment had been adapted in the other 16 patients. We performed non-parametric Wilcoxon tests to investigate the difference in the NCS-R and GCS total scores, but also in the NCS-R subscores, before versus during treatment. The effects of the level of consciousness and the etiology were assessed using a Mann-Whitney U test. RESULTS: NCS-R total scores were statistically lower during treatment when compared to the scores obtained before treatment. We also found that the motor, verbal and facial expression subscores were lower during treatment than before treatment. On the other hand, we found no difference between the GCS total scores obtained before versus during treatment. DISCUSSION: Our results suggest that the NCS-R is an interesting clinical tool for pain management. Besides, this tool seems useful when a balance is needed between reduced pain and a preserved level of consciousness in patients with disorders of consciousness.
Middle Devonian jawed polychaete fauna from the type Eifel area, western Germany, and its biogeographical and evolutionary affinities
Tonarova, Petra; Hints, Olle; Königshof, Peter et al., in Palaeontology (2016).

A Statistically adaptive sampling policy to the Hotelling's T² Control Chart: Markov Chain Approach
Seif, A.; Faraz, Alireza; Heuchenne, Cédric et al., in Communications in Statistics: Theory & Methods (2016).

Electric Field Control of Jahn-Teller Distortions in Bulk Perovskites
Varignon, Julien; Bristowe, Nicholas; Ghosez, Philippe, in Physical Review Letters (2016), 116.
The Jahn-Teller distortion, by its very nature, is often at the heart of the various electronic properties displayed by perovskites and related materials. Despite the Jahn-Teller mode being nonpolar, we devise and demonstrate, in the present Letter, an electric field control of Jahn-Teller distortions in bulk perovskites. The electric field control is enabled through an anharmonic lattice mode coupling between the Jahn-Teller distortion and a polar mode. We confirm this coupling and quantify it through first-principles calculations. The coupling will always exist within the Pb21m space group, which is found to be the favored ground state for various perovskites under sufficient tensile epitaxial strain. Intriguingly, the calculations reveal that this mechanism is not restricted to Jahn-Teller active systems, promising a general route to tune or induce novel electronic functionality in perovskites as a whole.
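Schematically (our notation; the paper identifies the precise lattice invariant within the Pb21m ground state), the mechanism requires a lowest-order free-energy term that is linear in both the polar mode \(P\) and the Jahn-Teller mode \(Q_{\mathrm{JT}}\), for instance a trilinear coupling with a third lattice mode \(Q_R\):

\[
\mathcal{F} \;\supset\; \lambda\, P\, Q_{\mathrm{JT}}\, Q_R \;-\; E\,P .
\]

An applied field \(E\) then displaces \(P\), and through the \(\lambda\) term this exerts a force proportional to \(\lambda P Q_R\) on the Jahn-Teller mode, shifting its equilibrium amplitude even though the mode itself is nonpolar.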
The role of personal goals in autonoetic experience when imagining future events
Lehner, Edith; D'Argembeau, Arnaud, in Consciousness & Cognition (2016), 42.

Free public transport: A socio-cognitive analysis
Cools, Mario; Fabbro, Yannick; Bellemans, Tom, in Transportation Research Part A: Policy & Practice (2016), 86.
In this study, the modal shift potential of introducing a free alternative (free public transportation) and of changing the relative prices of transportation is examined. The influence of a cognitive analysis on the zero-price effect is also analyzed. The data used for the analysis stem from a stated preference survey with a sample of approximately 670 respondents that was conducted in Flanders, Belgium. The data are analyzed using a mixed logit model. The modeling results confirm the existence of a zero-price effect in transport, which is in line with the literature. This zero-price effect is increased by the forced cognitive analysis for shopping trips, although not for work/school or recreational trips. The results also demonstrate the importance of the current mode choice in hypothetical mode choices and the importance of car availability. The influence of changing relative prices on the modal shift is found to be insignificant. This might be partially because the price differences were too small to matter. Hence, an increase in public transport use can be facilitated by the introduction of free public transport, particularly when individuals evaluate the different alternatives in a more cognitive manner. These findings should be useful to policy makers evaluating free public transport and considering how best to target and promote relevant policy.

Chasing the Dioxin Detection Dragon
Focant, Jean-François, in The Analytical Scientist (2016), 0316-701.

Quadratization of symmetric pseudo-Boolean functions
Anthony, Martin; Boros, Endre; Crama, Yves et al., in Discrete Applied Mathematics (2016), 203.
A pseudo-Boolean function is a real-valued function $f(x) = f(x_1, x_2, \ldots, x_n)$ of $n$ binary variables, that is, a mapping from $\{0,1\}^n$ to $\mathbb{R}$. For a pseudo-Boolean function $f(x)$ on $\{0,1\}^n$, we say that $g(x,y)$ is a quadratization of $f$ if $g(x,y)$ is a quadratic polynomial depending on $x$ and on $m$ auxiliary binary variables $y_1, y_2, \ldots, y_m$ such that $f(x) = \min \{ g(x,y) : y \in \{0,1\}^m \}$ for all $x \in \{0,1\}^n$. By means of quadratizations, minimization of $f$ is reduced to minimization (over its extended set of variables) of the quadratic function $g(x,y)$. This is of some practical interest because minimization of quadratic functions has been thoroughly studied for the last few decades, and much progress has been made in solving such problems exactly or heuristically. A related paper initiated a systematic study of the minimum number of auxiliary $y$-variables required in a quadratization of an arbitrary function $f$ (a natural question, since the complexity of minimizing the quadratic function $g(x,y)$ depends, among other factors, on the number of binary variables). In this paper, we determine more precisely the number of auxiliary variables required by quadratizations of symmetric pseudo-Boolean functions $f(x)$, those functions whose value depends only on the Hamming weight of the input $x$ (the number of variables equal to 1).
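As an illustration of the definition (a classic single-auxiliary-variable construction from this literature, not the paper's own contribution, which concerns how few $y$-variables symmetric functions require), a negative monomial of degree $d$ admits the quadratization

\[
-\prod_{i=1}^{d} x_i \;=\; \min_{y \in \{0,1\}} \; y\Bigl((d-1) - \sum_{i=1}^{d} x_i\Bigr),
\]

e.g. $-x_1x_2x_3 = \min_y \, y\,(2 - x_1 - x_2 - x_3)$: the minimum equals $-1$ exactly when all $x_i = 1$ (take $y = 1$), and $0$ otherwise (take $y = 0$).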
Proposal for standardized ultrasound descriptors of abnormally invasive placenta (AIP)
Collins, Sally; Ashcroft, Anna; Braun, Thorsten et al., in Ultrasound in Obstetrics & Gynecology (2016).

Open Science: a revolution in sight?
Rentier, Bernard, in Interlending & Document Supply (2016), 44(4).
• Purpose: This article aims at describing the evolution of scientific communication, largely represented by the publication process. It notes the disappearance of the traditional publication on paper and its progressive replacement by electronic publishing, a new paradigm implying radical changes in the whole mechanism. It also aims at warning the scientific community about the dangers of some new avenues and at explaining why, rather than subcontracting an essential part of its work, it must take back full control of its production.
• Design/methodology/approach: The article reviews the emerging concepts in scholarly publication and aims to answer frequently asked questions concerning free access to scientific literature as well as to data, science and knowledge in general.
• Findings: The article provides new observations concerning the level of compliance with institutional open access mandates and the poor relevance of journal prestige for the quality evaluation of research and researchers. The results of introducing an open access policy at the University of Liège are noted.
• Social implications: Open access is, for the first time in human history, an opportunity to provide free access to knowledge universally, regardless of either the wealth or the social status of the potentially interested readers. It is an essential breakthrough for developing countries.
• Value: Open access and Open Science in general must be considered as common values that should be shared freely. Free access to publicly generated knowledge should be explicitly included in universal human rights. There are still a number of obstacles hampering this goal, mostly the greed of intermediaries who persuade researchers to give their work away for free, in exchange for prestige. The worldwide cause of Open Knowledge is thus a major universal issue for the 21st century.

New data about anatomy, branching, and inferred growth patterns in the Early Devonian plant Armoricaphyton chateaupannense, Montjean-sur-Loire, France
Gerrienne, Philippe; Gensel, P.G., in Review of Palaeobotany and Palynology (2016), 224(Part 1), 38-53.
This study provides detailed information, based on serial peels, on the anatomy of the primary and secondary xylem, lateral branch formation, and proposed growth pattern of Armoricaphyton Strullu-Derrien et al., an Early Devonian (Pragian) basal euphyllophyte. The centrarch primary xylem strand is circular to oval in cross section and includes P-type tracheids. The pattern of lateral branch departure is described in detail based on serial peels. The secondary xylem, illustrated in transverse, radial and tangential longitudinal sections, includes P-type tracheids with similar pitting in radial and tangential walls. The presence of one-walled spaces interpreted as rays is confirmed. This study also documents the earliest occurrence of secondary xylem at the bases of the next higher order of branches. A model of growth for this unusual Early Devonian plant is presented, with documented epidogenetic, menetogenetic and possible apoxogenetic phases. The difference in the proportion of primary to secondary xylem is postulated to be a result of this mode of growth. The possible implications of xylem size and organization in the lateral organ in Armoricaphyton relative to the early evolution of the megaphyllous leaf are briefly discussed.

Wicking through a confined micropillar array
Darbois-Texier, Baptiste; Laurent, Philippe; Stoukatch, Serguei et al., in Microfluidics and Nanofluidics (2016), 20.
Pétrone et le "sardisme". Remarques à propos de la langue d'Herméros dans la Cena Trimalchionis
Rochette, Bruno, in Antiquité Classique : Revue Interuniversitaire d'Etudes Classiques (2016), 85.

Outplacement adequacy and benefits: The mediating role of overall justice.
Marzucco, Laurence; Hansez, Isabelle, in Journal of Employment Counseling (2016), 53.
Despite rapid growth and an ongoing need for outplacement services, little is yet known about the perceived adequacy and benefits of these services for the redundant employees using them. We surveyed 360 Belgian redundant employees (i.e., clients) using outplacement services provided by a public employment agency. The results indicate that an outplacement experience perceived as adequate by clients fosters their overall impressions of justice towards the dismissing organization; this in turn leads to benefits for them: reduction of negative emotions, and enhancement of their perceived well-being, future perspectives, and job-seeking activities, confirming the mediating role of overall justice.

What predicts stigmatisation about schizophrenia? Results from a general population survey examining its underlying cognitive, affective and behavioural factors
Thonon, Bénédicte; Larøi, Frank, in Psychosis: Psychological, Social and Integrative Approaches (2016).
Stigmatisation towards individuals diagnosed with schizophrenia (SZ individuals) remains an important problem, yet few studies to date have examined a theoretically comprehensive set of predictors of stigmatisation. This study aimed to evaluate cognitive, emotional and behavioural aspects of stigmatisation towards SZ individuals in the Belgian general population in order to better understand its underlying factors. A sample of 544 participants completed online questionnaires assessing common stereotypes regarding schizophrenia, desired social distance, level of contact and the Behaviours from Intergroup Affect and Stereotypes map. Most respondents believed SZ individuals are unpredictable and have a poor prognosis. Around 10% believed that they are dangerous. The most frequently reported emotions were pity and fear. Around 65% of the sample indicated that they would have positive behavioural reactions (passive/active facilitation). Around 33% of the sample indicated that they would distance themselves from SZ individuals, and around 20% would flee if in contact with a SZ individual. Fear and stereotypes of dangerousness and incompetence best predicted these fleeing and avoidance reactions. Fear was also explained by stereotypes of dangerousness and unpredictability. These factors should be accounted for when developing anti-stigma campaigns. The effect of contact should be further investigated.
Is adult translocation a credible way to accelerate the recolonization process of Chondrostoma nasus in a rehabilitated river?
Ovidio, Michaël; Hanzen, Céline; Gennotte, Vincent et al., in Cybium (2016), 40(1), 43-49.
The decline of the patrimonial rheophilic nase, Chondrostoma nasus (Linnaeus, 1758), populations was mainly caused by the construction of dams and hydroelectric power plants, together with the straightening and artificialization of river banks and water pollution. In this study, we tested whether the translocation of a few adult nase individuals from one river stretch to another upstream may be a credible way to accelerate the recolonization process of the species in the Amblève River (southern Belgium). In February and March 2011, just before their spawning period, eight adult nases (462-509 mm; 1546-2002 g; presumed males and females) were captured in the lower part of the River Amblève. Fin clip samples were stored in alcohol for further genetic analysis. The fish were equipped with a 14 g radio transmitter and translocated upstream into an 18 km river stretch where the species had disappeared decades earlier due to river anthropization. They were manually located two to five times per week using mobile receivers until June 2012 at the latest (n = 977 locations). River temperature and flow were recorded hourly during the entire tracking period. The tagged nase individuals displayed various mobility patterns, exploited different areas of the river stretch, occupied longitudinal home ranges from 3.4 to 36.1 km (one individual finally left the new river stretch) and travelled total distances from 12.2 to 186.6 km. The tagged individuals were most of the time apart from one another, but most individuals grouped together in potential spawning areas in late March-early April 2011, suggesting an attempt to reproduce. In September 2011, electric fishing in two potential detected spawning sites allowed the capture of 16 juvenile (0+) nases, demonstrating the existence of spawning activity in the newly occupied river stretch. Individual genetic characterization was performed in 2014 in order to reveal a possible direct lineage between juveniles and adults. The allelic distribution of 22 microsatellite markers unambiguously identified the 16 juveniles as full-sib progeny descending from two of the translocated adults. This demonstrated that the adult nases succeeded in finding spawning areas and that offspring arose from the translocated individuals.

Relationship between grey matter integrity and executive abilities in aging
Manard, Marine; Bahri, Mohamed Ali; Salmon, Eric et al., in Brain Research (2016), 1642.
This cross-sectional study was designed to investigate grey matter changes that occur in healthy aging and the relationship between grey matter characteristics and executive functioning. Thirty-six young adults (18 to 30 years old) and 43 seniors (60 to 75 years old) were included.
A general executive score was derived from a large battery of neuropsychological tests assessing three major aspects of executive functioning (inhibition, updating and shifting). Age-related grey matter changes were investigated by comparing young and older adults using voxel-based morphometry and voxel-based cortical thickness methods. A widespread difference in grey matter volume was found across many brain regions, whereas cortical thinning was mainly restricted to central areas. Multivariate analyses showed age-related changes in brain regions relatively similar to those of the respective univariate analyses, but the changes appeared more limited. Finally, in the older adult sample, a significant relationship was observed between global executive performance and decreased grey matter volume in anterior (i.e. frontal, insular and cingulate cortex) but also some posterior brain areas (i.e. temporal and parietal cortices) as well as subcortical structures. The results of this study highlight the distribution of age-related effects on grey matter volume and show that cortical atrophy does not appear primarily in "frontal" brain regions. From a cognitive viewpoint, age-related executive functioning seems to be related to grey matter volume but not to cortical thickness. Therefore, our results also highlight the influence of methodological aspects (from preprocessing to statistical analysis) on the pattern of results, which could explain the lack of consensus in the literature.

A systematic review of geological evidence for Holocene earthquakes and tsunamis along the Nankai-Suruga Trough, Japan
Garrett, Ed; Fujiwara, Osamu; Garrett, Philip et al., in Earth-Science Reviews (2016).
The Nankai-Suruga Trough, the subduction zone that lies immediately south of Japan's densely populated southern coastline, generates devastating great earthquakes (magnitude > 8) characterised by intense shaking, crustal deformation and tsunami generation. Forecasting the hazards associated with future earthquakes along this >700 km long fault requires a comprehensive understanding of past fault behaviour. While the region benefits from a long and detailed historical record, palaeoseismology has the potential to provide a longer-term perspective and additional crucial insights. In this paper, we summarise the current state of knowledge regarding geological evidence for past earthquakes and tsunamis along the Nankai-Suruga Trough. Incorporating literature originally published in both Japanese and English and enhancing available results with new age modelling approaches, we summarise and critically evaluate evidence from a wide variety of sources. Palaeoseismic evidence includes uplifted marine terraces and biota, marine and lacustrine turbidites, liquefaction features, subsided marshes and tsunami deposits in coastal lakes and lowlands. While 75 publications describe proposed evidence from more than 70 sites, only a limited number provide compelling, well-dated evidence. The best available records enable us to map the most likely rupture zones of twelve earthquakes occurring during the historical period.
This spatiotemporal compilation suggests that the AD 1707 earthquake ruptured almost the full length of the subduction zone and that the earthquakes of AD 1361 and 684 may have been predecessors of similar magnitude. Intervening earthquakes were of lesser magnitude, highlighting the variability in rupture mode that characterises the Nankai-Suruga Trough. Recurrence intervals for ruptures of the same seismic segment range from less than 100 to more than 450 years during the historical period. Over longer timescales, palaeoseismic evidence suggests intervals between earthquakes ranging from 100 to 700 years; however, these figures reflect a range of thresholds controlling the creation and preservation of evidence at any given site, as well as genuine earthquake recurrence intervals. At present, there are no geological data suggesting the occurrence of an earthquake of larger magnitude than that experienced in AD 1707; however, few studies have sought to establish the relative magnitudes of different earthquake and tsunami events along the Nankai-Suruga Trough. Alongside the lack of research designed to quantify the maximum magnitude of past earthquakes, we emphasise issues over alternative hypotheses for proposed palaeoseismic evidence, the paucity of robust chronological frameworks and insufficient appreciation of changing thresholds of evidence creation and preservation over time as key issues that must be addressed by future research.

Les soins de santé primaires, plus que des soins de première ligne
Crismer, André; Belche, Jean; van der Vennet, Jean, in Santé Publique : Revue Multidisciplinaire pour la Recherche et l'Action (2016).
Primary health care is often spoken about, but rarely defined. This article explores the concept and finds two different understandings of primary health care, both stemming from the Alma Ata Declaration. As long as the expression "primary health care" may correspond to two types of content, either a level of care or a global approach to the health system, it will be useful to clarify which meaning is intended when using it.

Longitudinal survey of Clostridium difficile presence and gut microbiota composition in a Belgian nursing home
Rodriguez Diaz, Cristina; Taminiau, Bernard; Korsak Koulagenko, Nicolas et al., in BMC Microbiology (2016), 16(229).
Background: Increasing age, several co-morbidities, environmental contamination, antibiotic exposure and other intestinal perturbations appear to be the greatest risk factors for C. difficile infection (CDI). Therefore, elderly care home residents are considered particularly vulnerable to the infection. The main objective of this study was to evaluate and follow the prevalence of C. difficile in 23 elderly care home residents weekly during a 4-month period. A C. difficile microbiological detection scheme was performed along with an overall microbial biodiversity study of the faeces content by 16S rRNA gene analysis. Results: Seven out of 23 (30.4 %) residents were (at least one week) positive for C. difficile.
C. difficile was detected in 14 out of 30 diarrhoeal samples (43.7 %). The most common PCR-ribotype identified was 027. MLVA showed that there was a clonal dissemination of C. difficile strains within the nursing home residents. 16S-profiling analyses revealed that each resident has his or her own bacterial imprint, which was stable during the entire study. Significant changes were observed in C. difficile positive individuals in the relative abundance of a few bacterial populations, including Lachnospiraceae and Verrucomicrobiaceae. A decrease of Akkermansia in C. difficile-positive subjects was repeatedly found. Conclusions: A high C. difficile colonisation in nursing home residents was found, with a predominance of the hypervirulent PCR-ribotype 027. Positive C. difficile status is not associated with microbiota richness or biodiversity reduction in this study. The link between Akkermansia, gut inflammation and C. difficile colonisation merits further investigation.

Cassini in situ observations of long duration magnetic reconnection in Saturn's magnetotail
Arridge, C.S.; Eastwood, J.P.; Jackman, C.M. et al., in Nature (2016).

Dispositif en ligne d'entraînement à la résolution de problèmes de physique
Marique, Pierre-Xavier; Jacquet, Maud; Georges, François et al., in RDST - Recherches en Didactique des Sciences et des Technologies (2016).
As part of projects aiming to reduce failure during bachelor studies, LabSET and physics teachers of the University of Liège developed an online package designed to improve skills in solving physics problems. The research focuses, on the one hand, on diagnosing whether the cognitive processes used in solving physics problems are mastered by the user or not and, on the other hand, on the link between effective online training and students' performance at solving examination problems. Analyses rely on a study conducted with 876 first-year medicine students. They were performed on the basis of subjective (self-evaluation of process and product) and objective (number of connections, results on specific questions for each cognitive process, passing rate and examination results) data. At the end of the research, we observed that the analysis process is the one with which the students show most difficulties. Almost 50 % of the students are aware of this. Although the strength of association is weak to medium, dependence is observed between online work and success at solving the examination problems. The results of students who worked online are higher than those of students who did not. However, considering that students who passed the physics examination also passed the examinations in other scientific subjects, it is difficult to establish a causal link between online work and performance.

Asymmetrical magnetic fabrics in the Egersund doleritic dike swarm (SW Norway) reveal sinistral oblique rifting before the opening of the Iapetus
Montalbano, Salvatrice; Bolle, Olivier; Diot, Hervé, in Journal of Structural Geology (2016), 85.

Calf health: Cholesterol deficiency causing calf illthrift and diarrhoea
Duff, J. P.; Passant, S.; Wessels, M. et al., in The Veterinary Record (2016), 178(17), 424-5.

Effect of wheel traffic on the physical properties of a Luvisol
Destain, Marie-France; Roisin, Christian; Dalcq, Anne-Catherine et al., in Geoderma (2016), 262.
The effects of machine traffic were assessed on a Luvisol in a temperate climate area in Belgium. Soil samples were taken from the topsoil (0.07-0.25 m) and the subsoil (0.35-0.50 m), on plots under long-term reduced tillage (RT) and conventional tillage (CT). Cone index (CI), bulk density (BD) and precompression stress (Pc) were chosen as indicators of mechanical strength. Mercury intrusion porosimetry was used to characterize the soil microporosity structure. It was presented in two forms: (i) cumulative pore volume vs. equivalent pore radius r, from which four classes of porosity were defined: r < 0.2 µm, 0.2 ≤ r < 9 µm, 9 ≤ r < 73 µm and r ≥ 73 µm; (ii) pore-size distributions (PSDs). In the reference situation, where there had been no recent passage of machines, the voids with 0.2 ≤ r < 9 µm were the most important class in the RT topsoil. The voids with r ≥ 73 µm represented the main porosity class in the topsoil of CT. In the subsoil, for both tillage systems, the porosity was almost equally distributed between voids with 0.2 ≤ r < 9 µm and voids with r greater than 9 µm. Machine traffic was carried out when the soil water content was close to the Proctor optimum. Although unfavourable, these wet conditions often occur during the beet harvesting period in Belgium. The largest modifications in soil structure (increase in BD and Pc, reduction of macroporosity r ≥ 73 µm) were observed in the topsoil of CT. More limited modifications were noticed in the soil structure of the RT topsoil and of the subsoil layers, but the latter are problematic in that the soil would no longer be loosened by subsequent tillage. These modifications could lead to soil consolidation as a result of wheel traffic year after year.
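Mercury intrusion porosimetry converts the applied intrusion pressure \(P\) into an equivalent pore radius through the standard Washburn relation (typical mercury parameters shown here; the exact values used in the paper may differ):

\[
r \;=\; -\,\frac{2\gamma \cos\theta}{P},
\]

with surface tension \(\gamma \approx 0.48\ \mathrm{N\,m^{-1}}\) and contact angle \(\theta \approx 140^{\circ}\) (so \(\cos\theta < 0\) and \(r > 0\)). Higher pressures probe progressively finer pores, which is how the four porosity classes listed above are resolved.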
Female Swiss mice were daily injected with 2.5 g/kg ethanol (or saline in the control group) for 7 days and their locomotor activity was recorded. At the end of the sensitization procedure, extracellular recordings were made from dopaminergic neurons in midbrain slices from these mice. Significantly higher spontaneous basal firing rates of dopamine neurons were recorded in ethanol-sensitized mice relative to control mice, but without correlations with the behavioral effects. The superfusion of sulpiride, a dopamine D2 antagonist, induced a stronger increase of dopamine neuron firing rates in ethanol-sensitized mice. This shows that the D2 feedback in dopamine neurons is preserved after chronic ethanol administration and argues against a reduced D2 feedback as an explanation for the increased dopamine neuron basal firing rates in ethanol-sensitized mice. Finally, ethanol superfusion (10–100 mM) significantly increased the firing rates of dopamine neurons and this effect was of higher magnitude in ethanol-sensitized mice. Furthermore, there were significant correlations between such a sensitization of dopamine neuron activity and ethanol behavioral sensitization. These results support the hypothesis that changes in brain dopamine neuron activity contribute to the behavioral sensitization of the stimulant effects of ethanol. [less ▲]Detailed reference viewed: 49 (19 ULg) Biosecurity measures applied in the United Arab Emirates: a comparative study between livestock and wildlife sectors - Biosecurity measures applied in UAE farmsChaber, AL; Saegerman, Claude in Transboundary and Emerging Diseases (2016)Detailed reference viewed: 26 (1 ULg) Growth and Freeze-Drying Optimization of Bifidobacterium crudilactisTanimomo, Jean; Delcenserie, Véronique ; Taminiau, Bernard et alin Food and Nutrition Sciences (2016), 7Bifidobacterium crudilactis FR62/b/3 belongs to a new population of bifidobacteria isolated from raw milk and raw milk cheese. The objective of this work was to study the large scale culture of the stain ... [more ▼]Bifidobacterium crudilactis FR62/b/3 belongs to a new population of bifidobacteria isolated from raw milk and raw milk cheese. The objective of this work was to study the large scale culture of the stain and its stability in a dry formulation. Growth rate of Bifidobacterium crudilactis FR62/b/3 was optimal at a pH of 5.0 and a temperature of 37˚C. At a temperature growth of 33˚C and a pH of 5.0, the stationary phase was reached after 22 h, the viable cell number and the mean dry biomass concentration were respectively of 8.3 × 109 CFU/mL and of 2.1 g/L. Resistance of Bifidobacterium crudilactis FR62/b/3 to freeze-drying and effect of a variety of cryoprotectants to maintain the viability were also evaluated. Sorbitol was the most suitable cryoprotectant for freeze-drying as well as storage whereas sucrose and monosodium glutamate were only efficient during storage. 
[less ▲]Detailed reference viewed: 11 (3 ULg) Deciphering how LIP2 and POX2 promoters can optimally regulate recombinant protein production in the yeast Yarrowia lipolytica.Sassi, Hosni; Delvigne, Frank ; Nicaud, Jean-Marc et alin Microbial Cell Factories (2016), 15(1), 159Detailed reference viewed: 18 (3 ULg) Comparative virucidal efficacy of seven disinfectants against murine norovirus and feline calicivirus, surrogates of human norovirusZonta, William ; Mauroy, Axel ; Farnir, Frédéric et alin Food and Environmental Virology (2016)Human noroviruses (HuNoV) are the leading cause of acute nonbacterial gastroenteritis in humans and can be transmitted either by person-to-person contact or by consumption of contaminated food. Knowledge ... [more ▼]Human noroviruses (HuNoV) are the leading cause of acute nonbacterial gastroenteritis in humans and can be transmitted either by person-to-person contact or by consumption of contaminated food. Knowledge of an efficient disinfection for both hands and food contact surfaces is helpful for the food sector and provides precious information for public health. The aim of this study was to evaluate the effect of seven disinfectants belonging to different groups of biocides (alcohol, halogen, oxidizing agents, quaternary ammonium compounds, aldehyde and biguanide) on infectious viral titre and on genomic copy number. Due to the absence of a cell culture system for HuNoV, two HuNoV surrogates such as murine norovirus (MNV) and feline calicivirus (FCV), were used and the tests were performed in suspension, on gloves and on stainless steel discs. When, as criteria of efficacy, a log reduction > 3 of the infectious viral titre on both surrogates and in the three tests is used, the most efficacious disinfectants in this study appear to be biocidal products B, C and D, representing the halogens, the oxidizing agents group and a mix of QAC, alcohol and aldehyde, respectively. In addition, these three disinfectants also elicited a significant effect on genomic copy number for both surrogate viruses and in all three tests. The results of this study demonstrate that a halogen compound, oxidizing agents and a mix of QAC, alcohol and aldehyde are advisable for HuNoV disinfection of either potentially contaminated surfaces or materials in contact with foodstuffs. [less ▲]Detailed reference viewed: 38 (8 ULg) Integrating sediment biogeochemistry into 3D oceanic models: A study of benthic-pelagic coupling in the Black SeaCapet, Arthur ; Meysman, Filip; Akoumianaki, Ioanna et alin Ocean Modelling (2016), 101Three-dimensional (3D) ecosystem models of shelf environments should properly account for the biogeochemical cycling within the sea floor. However, a full and explicit representation of sediment ... [more ▼]Three-dimensional (3D) ecosystem models of shelf environments should properly account for the biogeochemical cycling within the sea floor. However, a full and explicit representation of sediment biogeochemistry into 3D ocean models is computationally demanding. Here, we describe a simplified approach to include benthic processes in 3D ocean models, which includes a parameterization of the different pathways for organic matter mineralization and allows for organic matter remobilization by bottom currents and waves. This efficient approach enables decadal simulations that resolve the inertial contribution of the sea floor to the biogeochemical cycling in shelf environments. 
The model was implemented to analyze the benthic-pelagic coupling in the northwestern shelf of the Black Sea. Three distinct biogeochemical provinces were identified on the basis of fluxes and rates associated with benthic-pelagic coupling. Our model simulations suitably capture the seasonal variability of in situ flux data as well as their regional variation, which stresses the importance of incorporating temporally varying sediment biogeochemistry and resuspension/redeposition cycles in shelf ecosystem models. [less ▲]Detailed reference viewed: 17 (1 ULg) Yellow nail syndrome after allogeneic haematopoietic stem cell transplantation in two patients with multiple myelomaGrégoire, Céline ; GUIOT, Julien ; Vertenoeil, Gaëlle et alin Acta Clinica Belgica (2016)Objective and Importance: Yellow nail syndrome (YNS) is a rare disorder of unknown aetiology characterized by the triad of yellow nails, lymphoedema and respiratory manifestations. About 200 cases have ... [more ▼]Objective and Importance: Yellow nail syndrome (YNS) is a rare disorder of unknown aetiology characterized by the triad of yellow nails, lymphoedema and respiratory manifestations. About 200 cases have been reported, but a lot of patients probably elude proper diagnosis because of both variability of symptoms and ignorance of this syndrome by many physicians. The pathogenesis remains unclear, and could involve functional lymphatic abnormalities, microvasculopathy or lymphocyte deficiency, but none of these hypotheses seems fully satisfactory. Clinical Presentation: We report for the first time two cases of YNS associated with multiple myeloma relapsing after non-myeloablative haematopoietic cell transplantation (HCT). In these two cases, onset or worsening of YNS symptoms followed graft-versus-host disease (GvHD) manifestations. Intervention: Corticosteroids given to treat GvHD also improved YNS manifestations. Conclusion: YNS after HCT might be a microvascular manifestation of endothelial GvHD and corticosteroids might be an effective treatment. [less ▲]Detailed reference viewed: 23 (4 ULg) Revised diagnosis and severity criteria for sinusoidal obstruction syndrome/veno-occlusive disease in adult patients: a new classification from the European Society for Blood and Marrow TransplantationMohty, M.; Malard, F.; Abecassis, M. et alin Bone Marrow Transplantation (2016)Sinusoidal obstruction syndrome, also known as veno-occlusive disease (SOS/VOD), is a potentially life threatening complication that can develop after hematopoietic cell transplantation. Although SOS/VOD ... [more ▼]Sinusoidal obstruction syndrome, also known as veno-occlusive disease (SOS/VOD), is a potentially life threatening complication that can develop after hematopoietic cell transplantation. Although SOS/VOD progressively resolves within a few weeks in most patients, the most severe forms result in multi-organ dysfunction and are associated with a high mortality rate (480%). Therefore, careful attention must be paid to allow an early detection of SOS/VOD, particularly as drugs have now proven to be effective and licensed for its treatment. Unfortunately, current criteria lack sensitivity and specificity, making early identification and severity assessment of SOS/VOD difficult. The aim of this work is to propose a new definition for diagnosis, and a severity-grading system for SOS/VOD in adult patients, on behalf of the European Society for Blood and Marrow Transplantation. 
[less ▲]Detailed reference viewed: 16 (1 ULg) METABOLIC CORRELATES OF VISUAL EVOKED POTENTIALS HABITUATION IN MIGRAINEURS AND CONTROLSD'Ostilio, Kevin ; Lisicki Martinez, Marco ; SCHOENEN, Jean et alin Cephalalgia : An International Journal of Headache (2016), 36(1S), 60Detailed reference viewed: 13 (2 ULg) Review of Jan Wolters, Die rechtsstaatlichen Grenzen des ‘more economic approach’ im Lichte der europäischen Rechtsprechung. Eine Untersuchung des Handelns der Kommission auf dem Gebiet des europaïschen Wettbewerbrechts am Maßstab übergeordeneter Vertrags- und Verfassungsgrundsätze. Baden-Baden: Nomos, 2015. 371 pages. ISBN: 9783848717361. EUR 99Van Cleynenbreugel, Pieter in Common Market Law Review (2016), 52(5), 1469-1471A book review of the German language PhD thesis by Jan WoltersDetailed reference viewed: 12 (0 ULg) Revealing the formation of copper nanoparticles from a homogeneous solid precursor by electron microscopyVan den Berg, Roy; Elkjaer, Christian; Gommes, Cédric et alin Journal of the American Chemical Society (2016)The understanding of processes leading to the formation of nanometer-sized particles is important for tailoring of their size, shape and location. The growth mechanisms and kinetics of nanoparticles from ... [more ▼]The understanding of processes leading to the formation of nanometer-sized particles is important for tailoring of their size, shape and location. The growth mechanisms and kinetics of nanoparticles from solid precursors are, however, often poorly described. Here we employ transmission electron microscopy (TEM) to examine the formation of copper nanoparticles on a silica support during the reduction by H2 of homogeneous copper phyllosilicate plates, as a prototype precursor for a co-precipitated catalyst. Specifically, time-lapsed TEM image series acquired of the material during the reduction provide a direct visualization of the growth dynamics of an ensemble of individual nanoparticles and enable a quantitative evaluation of the nucleation and growth of the nanoparticles. This quantitative information is compared with kinetic models and found to be best described by a nucleation-and-growth scenario involving autocatalytic reduction of the copper phyllosilicate followed by diffusion-limited or reaction-limited growth of the copper nanoparticles. The plate-like structure of the precursor restricted the diffusion of copper and the autocatalytic reduction limited the probability for secondary nucleation. The combination of a uniform size of precursor particles and the autocatalytic reduction thus offers means to synthesize nanoparticles with well-defined sizes in large amounts. In this way, in situ observations made by electron microscopy provide mechanistic and kinetic insights into the formation of metallic nanoparticles, essential for the rational design of nanomaterials. [less ▲]Detailed reference viewed: 210 (12 ULg) LES CONDITIONS DU COMMUN À L’ÉPREUVE DE SA CONSTRUCTION ; restitution d’une enquête en cours sur la vallée de la Vilaine, Rennes.Bodart, Céline ; Pihet, Valériein Lieux Communs (2016), 18(automne), Dans cette proposition d'article, il sera question de mettre en avant l’originalité et la pertinence de l’approche de Cuesta en tant que lieu de recherche non-institutionnalisé, à partir d’une ... 
[more ▼]Dans cette proposition d'article, il sera question de mettre en avant l’originalité et la pertinence de l’approche de Cuesta en tant que lieu de recherche non-institutionnalisé, à partir d’une expérimentation actuellement menée sur le territoire rennais. Cuesta est une structure coopérative récemment formée par une urbaniste et une productrice, et dont l'objectif est de faire intervenir les processus artistiques en amont des projets d'aménagement, pour leur permettre d'impacter sur ceux-ci. Cet article proposera alors une « relecture » des intentions de Cuesta à partir de la théorie politique du philosophe américain, John Dewey, afin de voir comment considérer ce type de travail comme la mise en place de dispositifs d'enquête visant à reconstruire un cadre spécifique d'expérience pour qu'un public puisse s'auto-constituer autour de problématiques situées. De ce croisement entre théorie et pratique, le but sera d'interroger les logiques de constitution du laboratoire au regard de cette approche innovante considérée comme nouveau type de recherche territoriale. [less ▲]Detailed reference viewed: 20 (2 ULg) VALIDATION OF THE FRENCH VERSION OF THE PARENTING STRESS INDEX–SHORT FORM (FOURTH EDITION)Toucheque, Malorie ; Etienne, Anne-Marie ; Stassart, Céline et alin Journal of Community Psychology (2016), 44(4), 419-425Introduction: The Parenting Stress Index (PSI) is widely used to measure stress related to the demands of parenthood. Unfortunately, no French adaptation and validation of the PSI Short Form (Fourth ... [more ▼]Introduction: The Parenting Stress Index (PSI) is widely used to measure stress related to the demands of parenthood. Unfortunately, no French adaptation and validation of the PSI Short Form (Fourth Edition) had yet been published. This study examine the factor structure and the psychometric properties of the French translation of the PSI-4-SF. Method: Sample consists of 210 parents of children aged 5 to 12 years old. Factor structure of the PSI-4-SF was investigated using confirmatory factor analysis (CFA). Three models were tested, with one, two and three dimensions. Internal consistency reliability and validity were also examined. Results: CFA showed that the original three-factor solution most closely approximated our data despite the imperfection of the model fit indices. Results also indicated a good internal consistency reliability and validity. Conclusion: The French version of the PSI-4-SF shows promise as a valuable instrument for assessing parenting stress in the French-speaking population. [less ▲]Detailed reference viewed: 47 (8 ULg) Detecting urban road network accessibility problems using taxi GPS dataCui, JianXun; Liu, Feng; Janssens, Davy et alin Journal of Transport Geography (2016), 51Urban population growth and economic development have led to the creation of new communities, jobs and services at places where the existing road network might not cover or efficiently handle traffic ... [more ▼]Urban population growth and economic development have led to the creation of new communities, jobs and services at places where the existing road network might not cover or efficiently handle traffic. This generates isolated pockets of areas which are difficult to reach through the transport system. To address this accessibility problem, we have developed a novel approach to systematically examine the current urban land use and road network conditions as well as to identify poorly connected regions, using GPS data collected from taxis. 
This method is composed of four major steps. First, city-wide passenger travel demand patterns and travel times are modeled based on GPS trajectories. Upon this model, high density residential regions are then identified, and measures to assess accessibility of each of these places are developed. Next, the regions with the lowest level of accessibility among all the residential areas are detected, and finally the detected regions are further examined and specific transport situations are analyzed. By applying the proposed method to the Chinese city of Harbin, we have identified 20 regions that have the lowest level of accessibility by car among all the identified residential areas. A serious reachability problem to petrol stations has also been discovered, in which drivers from 92.6% of the residential areas have to travel longer than 30 min to refill their cars. Furthermore, the comparison against a baseline model reveals the capacity of the derived measures in accounting for the actual travel routes under divergent traffic conditions. The experimental results demonstrate the potential and effectiveness of the proposed method in detecting car-based accessibility problems, contributing towards the development of urban road networks into a system that has better reachability and more reduced inequity. [less ▲]Detailed reference viewed: 26 (1 ULg) Bott type periodicity for the higher octonionsKreusch, Marie in Journal of Noncommutative Geometry (2016), 9(4), 1041-1393We study the series of complex nonassociative algebras$\bbO_n$and real nonassociative algebras$\bbO_{p,q}$introduced in~\cite{MGO2011}. These algebras generalize the classical algebras of octonions ... [more ▼]We study the series of complex nonassociative algebras$\bbO_n$and real nonassociative algebras$\bbO_{p,q}$introduced in~\cite{MGO2011}. These algebras generalize the classical algebras of octonions and Clifford algebras. The algebras$\bbO_{n}$and$\bbO_{p,q}$with$p+q=n$have a natural$\Z_2^n$-grading, and they are characterized by cubic forms over the field$\Z_2$. We establish a periodicity for the algebras~$\bbO_{n}$and$\bbO_{p,q}$similar to that of the Clifford algebras$\mathrm{Cl}_{n}$and~$\mathrm{Cl}_{p,q}\$. [less ▲]Detailed reference viewed: 163 (53 ULg) Distribution of Injectate and Sensory-Motor Blockade after Adductor Canal BlockGautier, P. E.; Hadzic, A.; LECOQ, Jean-Pierre et alin Anesthesia and Analgesia (2016), 122(1), 279-282BACKGROUND: The analgesic efficacy reported for the adductor canal block may be related to the spread of local anesthetic outside the adductor canal. METHODS: Fifteen patients undergoing knee surgery ... [more ▼]BACKGROUND: The analgesic efficacy reported for the adductor canal block may be related to the spread of local anesthetic outside the adductor canal. METHODS: Fifteen patients undergoing knee surgery received ultrasound-guided injections of local anesthetic at the level of the adductor hiatus. Sensory-motor block and spread of contrast solution were assessed. RESULTS: Sensation was rated as "markedly diminished" or "absent" in the saphenous nerve distribution and "slightly diminished" in the sciatic nerve territory without motor deficits. Contrast solution was found in the popliteal fossa. CONCLUSIONS: The spread of injectate to the popliteal fossa may contribute to the analgesic efficacy of adductor canal block. © 2015 International Anesthesia Research Society. 
[less ▲]Detailed reference viewed: 19 (2 ULg) Photocatalytic decomposition of hydrogen peroxide over nanoparticles of TiO2 and Ni(II)-porphyrin doped TiO2: a relationship between activity and porphyrin anchoring modeTasseroul, Ludivine ; Pàez Martinez, Carlos ; Lambert, Stéphanie et alin Applied Catalysis B : Environmental (2016), 182The nickel tetra(4-carboxyphenyl)porphyrin (TCPPNi) was chimisorbed on Degussa P25 TiO2 at different concentrations. Diffuse reflectance spectroscopy in the UV/vis region, Fourier transform infrared ... [more ▼]The nickel tetra(4-carboxyphenyl)porphyrin (TCPPNi) was chimisorbed on Degussa P25 TiO2 at different concentrations. Diffuse reflectance spectroscopy in the UV/vis region, Fourier transform infrared spectroscopy and thermal gravimetry combined with differential scanning calorimetry measurements allowed the determination of the TCPPNi anchoring mode. At low TCPPNi concentrations, this anchoring on Degussa P25 TiO2 took place through all four carboxylic groups, while at higher concentrations the anchoring occurred through one or two carboxylic groups. For the firsttime,the effect of UV/vis light irradiation on the H2O2-degradation activity of TiO2 and TCPPNi-doped TiO2 was studied using the method of following the production of O2 by gas pressure monitoring. The activity of seven different catalysts was related to the TCPPNi anchoring mode and the percentage of TiO2 Degussa P25 coverage. An optimum degradation of H2O2 was observed for 0.0115 mol TCPPNi × g−1 P25. In that case, the TCPPNi was anchored through the four carboxylic groups, corresponding to a strong interaction with Degussa P25 TiO2. Moreover, the TCPPNi did not cover the surface completely, therefore allowing the light to reach and activate the TiO2. [less ▲]Detailed reference viewed: 59 (12 ULg) Risks and some prevention solution in the chicken raising: a case study in Kim Dong District, Hung Yen Province, VietnamBui Thi, Nga; Lebailly, Philippe in Journal of Scientific and Engineering Research (2016), 3(4), 506-510Chicken raising in Vietnam is still spontaneous, small scale, scattered, backward, low productivity, and especially faces many risks which would reduce the efficiency and profits of the farmers. This ... [more ▼]Chicken raising in Vietnam is still spontaneous, small scale, scattered, backward, low productivity, and especially faces many risks which would reduce the efficiency and profits of the farmers. This study aims to analyze risks and suggest some prevention solutions in the chicken raising in a case study of the Kim Dong district, Hung Yen province based on the questionnaire survey data of 40 representative chicken raising farmers. The results showed that, the most popular risks of input came from the high feed price (80%), and the easily engaged in diseases breed (72.5%). In the chicken raising period, they got the risks of diseases (90%), 35% had experienced with massive death. There were 80% farmers found difficulty or slow consumption, 70% was at risk of reducing or low output prices, and 65% responded that the market sometimes was distorted by the traders. 
The study suggests that, farmers need to have a strong cooperative to create a greater power in the negotiation with feed providing companies; select good prestige, ensured quality breed providers; reorganize their production process, regularly vaccinate, carefully monitored for early detection of epidemics, quarantine sick chickens from the flock, treat the diseases completely; update the market information to raise at the suitable production size. Government and local authorities should support them about market information so they can make the most appropriate and beneficial decision about production. [less ▲]Detailed reference viewed: 12 (0 ULg) A consensus statement on the European Society for Clinical and Economic Aspects of Osteoporosis and Osteoarthritis (ESCEO) algorithm for the management of knee osteoarthritis - From evidence-based medicine to the real-life setting.Bruyère, Olivier ; Cooper, C.; Pelletier, J.P. et alin Seminars in Arthritis & Rheumatism (2016), 45(4 Suppl), 3-11The European Society for Clinical and Economic Aspects of Osteoporosis and Osteoarthritis(ESCEO) published a treatment algorithm for the management of knee osteoarthritis (OA) in 2014,which provides ... [more ▼]The European Society for Clinical and Economic Aspects of Osteoporosis and Osteoarthritis(ESCEO) published a treatment algorithm for the management of knee osteoarthritis (OA) in 2014,which provides practical guidance for the prioritization of interventions. Further analysis of real-world data for OA provides additional evidence in support of pharmacological interventions,in terms of management of OA pain and function, avoidance of adverse events, disease-modifying effects and long-term outcomes, e.g., delay of total joint replacement surgery, and pharmacoeconomic factors such as reduction in healthcare resource utilization. This article provides an updated assessment of the literature for selected interventions in OA, focusing on real-life data, with the aim of providing easy-to-follow advice on how to establish a treatment flow in patients with knee OA in primary care clinical practice, in support of the clinicians’ individualized assessment of the patient. In step 1, background maintenance therapy with symptomatic slow-acting drugs for osteoarthritis (SYSADOAs) is recommended, for which high-quality evidence is provided only for the prescription formulations of patented crystalline glucosamine sulfate and chondroitin sulfate. Paracetamol may be added for rescue analgesia only,due to limited efficacy and increasing safety signals. Topical non-steroidal anti-inflammatory drugs (NSAIDs) may provide additional symptomatic treatment with the same degree of efficacy as oral NSAIDs without the systemic safety concerns. Oral NSAIDs maintain a central role in step2 Advanced management of persistent symptoms. However, oral NSAIDs are highly heterogeneous in terms of gastrointestinal and cardiovascular safety profile, and patient stratification with careful treatment selection is advocated to maximize the risk: benefit ratio. Intra-articular hyaluronic acid as a next step provides sustained clinical benefit with effects lasting up to 6 months after a short-course of weekly injections. As a last step before surgery, thes low titration of sustained-release tramadol, aweak opioid, affords sustained analgesia with improved tolerability. 
[less ▲]Detailed reference viewed: 27 (12 ULg) In silico regenerative medicine: how computational tools allow regulatory and financial challenges to be addressed in a volatile market.Geris, Liesbet ; Guyot, Y.; Schrooten, J. et alin Interface focus (2016), 6(2), 20150105The cell therapy market is a highly volatile one, due to the use of disruptive technologies, the current economic situation and the small size of the market. In such a market, companies as well as ... [more ▼]The cell therapy market is a highly volatile one, due to the use of disruptive technologies, the current economic situation and the small size of the market. In such a market, companies as well as academic research institutes are in need of tools to advance their understanding and, at the same time, reduce their R&D costs, increase product quality and productivity, and reduce the time to market. An additional difficulty is the regulatory path that needs to be followed, which is challenging in the case of cell-based therapeutic products and should rely on the implementation of quality by design (QbD) principles. In silico modelling is a tool that allows the above-mentioned challenges to be addressed in the field of regenerative medicine. This review discusses such in silico models and focuses more specifically on the bioprocess. Three (clusters of) examples related to this subject are discussed. The first example comes from the pharmaceutical engineering field where QbD principles and their implementation through the use of in silico models are both a regulatory and economic necessity. The second example is related to the production of red blood cells. The described in silico model is mainly used to investigate the manufacturing process of the cell-therapeutic product, and pays special attention to the economic viability of the process. Finally, we describe the set-up of a model capturing essential events in the development of a tissue-engineered combination product in the context of bone tissue engineering. For each of the examples, a short introduction to some economic aspects is given, followed by a description of the in silico tool or tools that have been developed to allow the implementation of QbD principles and optimal design. [less ▲]Detailed reference viewed: 6 (1 ULg) compte rendu de : Serena Ammirati, Sul libro latino antico. Ricerche bibliologiche e paleografiche, Pisa-Roma, Fabrizio Serra Editore, 2015Rochette, Bruno in Athenaeum : Studii Periodici di Letteratura e Storia (2016)Detailed reference viewed: 56 (0 ULg) New features on the environmental regulation of metabolism revealed by modeling the cellular proteomic adaptations induced by light, carbon and inorganic nitrogen in Chlamydomonas reinhardtiiGerin, Stéphanie ; Leprince, Pierre ; Sluse, Francis et alin Frontiers in Plant Science (2016), 7Microalgae are currently emerging to be very promising organisms for the production of biofuels and high-added value compounds. Understanding the influence of environmental alterations on their metabolism ... [more ▼]Microalgae are currently emerging to be very promising organisms for the production of biofuels and high-added value compounds. Understanding the influence of environmental alterations on their metabolism is a crucial issue. Light, carbon and nitrogen availability have been reported to induce important metabolic adaptations. So far, the influence of these variables has essentially been studied while varying only one or two environmental factors at the same time. 
The goal of the present work was to model the cellular proteomic adaptations of the green microalga Chlamydomonas reinhardtii upon the simultaneous changes of light intensity, carbon concentrations (CO2 and acetate) and inorganic nitrogen concentrations (nitrate and ammonium) in the culture medium. Statistical design of experiments (DOE) enabled to define 32 culture conditions to be tested experimentally. Relative protein abundance was quantified by two dimensional differential in-gel electrophoresis (2D-DIGE). Additional assays for respiration, photosynthesis, and lipid and pigment concentrations were also carried out. A hierarchical clustering survey enabled to partition biological variables (proteins + assays) into eight co-regulated clusters. In most cases, the biological variables partitioned in the same cluster had already been reported to participate to common biological functions (acetate assimilation, bioenergetic processes, light harvesting, Calvin cycle and protein metabolism). The environmental regulation within each cluster was further characterized by a series of multivariate methods including principal component analysis and multiple linear regressions. This metadata analysis enabled to highlight the existence of a clear regulatory pattern for every cluster and to mathematically simulate the effects of light, carbon and nitrogen. The influence of these environmental variables on cellular metabolism is described in details and thoroughly discussed. This work provides an overview of the metabolic adaptations contributing to maintain cellular homeostasis upon extensive environmental changes. Some of the results presented here could be used as starting points for more specific fundamental or applied investigations. [less ▲]Detailed reference viewed: 39 (3 ULg) Examen de jurisprudence (2010-2013) - Les sociétés commerciales (première partie)Léonard, Laura ; Caprasse, Olivier; Dieux, Xavier et alin Revue Critique de Jurisprudence Belge [= RCJB] (2016), (1), Detailed reference viewed: 29 (3 ULg)
|
{}
|
# How does production cost of absorbent paper compare to that of printer paper?
I know that right now an A4 page sells for 0.8 cents at Walgreens. So presumably the thing is dirt cheap at the factory gates. For toilet paper I don't have any handy store prices to quote, but I do have a nagging suspicion of being overcharged.
So, generally speaking, what is the economic difference between making absorbent paper for toilet paper and making regular paper for printing? Is absorbent paper inherently more expensive or cheaper to make, per kilogram, per square metre, or per whatever relevant metric? Also, let's throw paper-towel-type absorbent paper into the comparison as well, since it is somewhat different in properties from toilet paper, in case the cost of making paper towel stock differs significantly from the cost of making toilet paper.
This is not a direct answer to my question above, but nevertheless an info tidbit that may shed some extra light on the question. It turns out there is a recently built Japanese invention called the "White Goat", a machine that converts office paper into toilet paper (apparently for recycling purposes or something like that). Relevant quote from http://www.dailymail.co.uk/sciencet...e-machine-turns-office-paper-toilet-roll.html :
The oddly named 'White Goat' machine is 6ft-tall and weighs a hefty 94st.
It works using an automatic system that creates one new roll from 40 sheets of A4 in just 30 minutes.
Waste paper is fed into a shredder on the machine, which is then untangled and dissolved in a pulper. Any foreign matter is removed and the wet paper is thinned out and dried. Finally this is wound into toilet rolls, which emerge one at a time.
....
The machine shreds the paper and puts it through a process of pulping, thinning and drying
....
White Goat is going on sale this summer for the eye-watering price of $100,000 (£63,000). The machine would have to churn out 200,000 recycled rolls to break even. This would take at least 11 years if it was on constantly.

What to make of it in the context of this thread? I am not sure. I guess on the naive/intuitive level it suggests to me that toilet paper is somehow "simpler" than office paper. Usually simple things can be made from more complex ones using relatively low-key processes (like a $100K machine that will get cheaper in subsequent iterations), while the reverse (like turning "simple" wood pulp into "complex" paper) often involves huge capital-intensive factories.
|
{}
|
Tag Info
$d'$ (also called sensitivity index) is a measure used in signal detection theory to quantify how well a signal can be distinguished from noise.
Given that a signal may be present or not, and the receiver may assert that the signal is present or not, there are four possibilities:
| | Signal present | Signal not present |
|---|---|---|
| Receiver asserts 'Present' | Hit | False alarm |
| Receiver asserts 'Not present' | Miss | Correct rejection |
The number of Hits divided by (Hits + Misses) is the hit rate ($h$), and the number of False alarms divided by (False alarms + Correct rejections) is the false alarm rate ($fa$). These can be decomposed into the sensitivity ($d'$) of the receiver:

$$d' = \Phi^{-1}(h) - \Phi^{-1}(fa)$$

To completely specify a receiver's behavior, sensitivity is usually paired with bias ($c$):

$$c = \frac{\Phi^{-1}(h) + \Phi^{-1}(fa)}{2}$$
Although the conceptual background is slightly different, it is interesting to note that sensitivity / $d'$ here is computed the same as the sensitivity that is used to assess classification performance.
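As a small illustration (my own sketch, not part of the tag wiki), the two formulas above can be computed directly from raw counts; `scipy.stats.norm.ppf` plays the role of $\Phi^{-1}$. Note that hit or false alarm rates of exactly 0 or 1 give infinite values and are usually corrected before use.

```python
# A minimal sketch of the formulas above; norm.ppf is the inverse normal CDF.
from scipy.stats import norm

def sensitivity_and_bias(hits, misses, false_alarms, correct_rejections):
    """Return (d', c) computed from raw response counts, as defined above."""
    h = hits / (hits + misses)                                # hit rate
    fa = false_alarms / (false_alarms + correct_rejections)   # false alarm rate
    d_prime = norm.ppf(h) - norm.ppf(fa)
    c = (norm.ppf(h) + norm.ppf(fa)) / 2
    return d_prime, c

# Hypothetical counts giving h = 0.9 and fa = 0.2 -> d' ~ 2.12, c ~ 0.22
print(sensitivity_and_bias(45, 5, 10, 40))
```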
|
{}
|
OpenGL Linker error when running source code from superbible third edition.
Hey, I hope I have put this question in the right forum. I'm trying to run this code from the superbible, using Dev-C++ 5. All previous code from the book up to now has compiled fine, but this one in particular is giving me a linker error. The program declares a data type that is defined in a separate header file, GLTools.h. I have added the file to my project; the #include section in the .cpp file looks as follows:
#include <cmath>
#include <cstdlib>
#include <iostream>
#include <gl/glut.h>
//#include "OpenGLSB.h"
#include <stdlib.h>
The header file (gltools.h) with the functions which are not being identified:
// vector functions in VectorMath.c
void gltTransformPoint(const GLTVector3 vSrcPoint, const GLTMatrix mMatrix, GLTVector3 vPointOut);
void gltAddVectors(const GLTVector3 vFirst, const GLTVector3 vSecond, GLTVector3 vResult);
void gltSubtractVectors(const GLTVector3 vFirst, const GLTVector3 vSecond, GLTVector3 vResult);
void gltScaleVector(GLTVector3 vVector, const GLfloat fScale);
GLfloat gltGetVectorLengthSqrd(const GLTVector3 vVector);
GLfloat gltGetVectorLength(const GLTVector3 vVector);
void gltNormalizeVector(GLTVector3 vNormal);
void gltGetNormalVector(const GLTVector3 vP1, const GLTVector3 vP2, const GLTVector3 vP3, GLTVector3 vNormal);
void gltCopyVector(const GLTVector3 vSource, GLTVector3 vDest);
GLfloat gltVectorDotProduct(const GLTVector3 u, const GLTVector3 v);
void gltVectorCrossProduct(const GLTVector3 vU, const GLTVector3 vV, GLTVector3 vResult);
I guess I'm somehow not linking the header file correctly, but apart from including gltools.h in my main file I don't know what else I can do. Any help with this problem will be appreciated! Thanks
What's the exact error you get? Have you added the source file containing the implementation of the functions in gltools.h to the project?
It would help if you posted the actual linker error...
Ok, the linker error only says:
[Linker Error] undefined reference to 'gltTransformPoint'
[Linker Error] undefined reference to 'gltTransformPoint'
[Linker Error] undefined reference to 'gltRotationMatrix'
There are no line references.
Quote:
Have you added the source file containing the implementation of the functions in gltools.h to the project?
I went back to gltools.h, and at the bottom found the text:
/////////////////////////////////////////////////////////////////////////////////////
// Functions, need to be linked to your program. Source file for function is given
After changing the extension to .cpp and adding it to the project, I get a set of different errors.
LoadTGA.cpp: In function `GLbyte* gltLoadTGA(const char*, GLint*, GLint*, GLint*, GLenum*)':
LoadTGA.cpp:48: error: `GL_BGR_EXT' undeclared (first use this function)
LoadTGA.cpp:48: error: (Each undeclared identifier is reported only once for each function it appears in.)
LoadTGA.cpp:84: error: invalid conversion from `void*' to `GLbyte*'
LoadTGA.cpp:105: error: `GL_BGRA_EXT' undeclared (first use this function)
How would I add the source files of the implementations? I noticed that gltools.h not only references LoadTGA.c, but also a set of different files like vectorMath.c that I quoted in my first post. Would I need to add all those files to my project? Or is there a quicker way? They all happen to be located in the same folder on the CD that came with the book.
|
{}
|
# Question #2aed0
Feb 18, 2015
The new pressure of the sample will be $\text{0.494 atm}$, or half of what the initial pressure was.
The reason this happens is that, according to Boyle's law, pressure and volume have an inverse relationship if temperature is constant.
Mathematically, this is expressed as
${P}_{1} {V}_{1} = {P}_{2} {V}_{2}$
Since no mention of temperature is made, I assume that it is being held constant. This implies that the pressure in the new container will be
$P_2 = \frac{V_1}{V_2} \cdot P_1 = \frac{\text{1.00 L}}{\text{2.00 L}} \cdot \text{0.988 atm} = \text{0.494 atm}$
When the volume is doubled, the pressure is halved.
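A one-line check of the arithmetic (plain Python, nothing assumed beyond the numbers in the question):

```python
# P1*V1 = P2*V2 at constant temperature, so P2 = (V1/V2)*P1
p1, v1, v2 = 0.988, 1.00, 2.00   # atm, L, L
p2 = v1 / v2 * p1
print(p2)                        # 0.494 atm: halving follows from doubling V
```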
|
{}
|
# Analytic solution to a nested discrete Lyapunov equation?

Given is a symmetric and positive-definite solution $C$ to the discrete Lyapunov equation,
$WCW^T = C - M$
where $M$ is again symmetric and positive-definite. Is there an analytic solution to
$WXW^T = X - C$
in terms of $C$? $W$ is big ($n > 1000$) and sparse.
Of course you can vectorize the equation using Kronecker products, or write the solution as an infinite sum $X = \sum_{k \ge 0} W^k C (W^T)^k$, but I have the feeling there is a much simpler and more elegant solution to it.
Thanks!
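Not an answer to the closed-form question, but one observation worth making explicit: the second equation is itself just a standard discrete Lyapunov (Stein) equation with $C$ in the role of the constant term, so any Lyapunov solver applies directly. A minimal numeric sketch, assuming SciPy; `solve_discrete_lyapunov` works with dense matrices, so for the sparse $n > 1000$ case this is only a sanity check, not a recommendation:

```python
# The nested equation W X W^T = X - C is the Stein equation W X W^T - X + C = 0,
# i.e. the same form as the first equation with C playing the role of M.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

rng = np.random.default_rng(0)
n = 50
W = rng.standard_normal((n, n))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()      # scale W to be Schur-stable

M = rng.standard_normal((n, n))
M = M @ M.T + np.eye(n)                            # symmetric positive-definite

C = solve_discrete_lyapunov(W, M)                  # solves W C W^T - C + M = 0
X = solve_discrete_lyapunov(W, C)                  # solves W X W^T - X + C = 0

print(np.allclose(W @ X @ W.T, X - C))             # True
```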
|
{}
|
# CONTROLLABILITY FOR TRAJECTORIES OF SEMILINEAR FUNCTIONAL DIFFERENTIAL EQUATIONS
• Jeong, Jin-Mun (Department of Applied Mathematics Pukyong National University) ;
• Kang, Yong Han (Institute of Liberal Education Catholic University of Daegu)
• Accepted : 2017.02.24
• Published : 2018.01.31
#### Abstract
In this paper, we first consider the existence and regularity of solutions of the semilinear control system under natural assumptions such as the local Lipschitz continuity of the nonlinear term. Thereafter, we will also establish the approximate controllability for the equation when the corresponding linear system is approximately controllable.
#### Acknowledgement
Supported by: National Research Foundation of Korea (NRF)
|
{}
|
# Swaps Question - 722.2
#### Eustice_Langham
##### Active Member
Hi David, I have a question re the headed question. I have copied the answer below but I have the following questions re the solution:
"We can value this swap as two bonds (see upper panel below). Although it might be easier to recognize that the exchange in three months will be zero, such that only the cash flow at nine months needs to be evaluated: The floating rate pays 0.5 * $400 million * 2*[exp(0.0220/2)-1] =$4.42429 million and the fixed rate pays $6.00; so the future (and final) net cash flow exchange is$1.57571, which has a present value of $1.54993 million." My questions are as follows: 1. Why does only the 9 months CF need to be evaluated? I would have thought that the 3 months CF should be evaluated as well. 2. Why is the fixed rate of only$6.00, this is only the interest cost, if a final payment should the amount be $406m? I have calculated the final fixed payment as follows: 406m/(1+.02)^.75] 3. I notice that your solution also contains discount factors, how were they calculated and finally, the PV amount of 1.54993, can you clarify how this was arrived at? Thanks #### Nicole Seaman ##### Director of FRM Operations Staff member Subscriber Hi David, I have a question re the headed question. I have copied the answer below but I have the following questions re the solution: "We can value this swap as two bonds (see upper panel below). Although it might be easier to recognize that the exchange in three months will be zero, such that only the cash flow at nine months needs to be evaluated: The floating rate pays 0.5 *$400 million * 2*[exp(0.0220/2)-1] = $4.42429 million and the fixed rate pays$6.00; so the future (and final) net cash flow exchange is $1.57571, which has a present value of$1.54993 million."
My questions are as follows:
1. Why does only the 9 months CF need to be evaluated? I would have thought that the 3 months CF should be evaluated as well.
2. Why is the fixed rate of only $6.00, this is only the interest cost, if a final payment should the amount be$406m? I have calculated the final fixed payment as follows: 406m/(1+.02)^.75]
3. I notice that your solution also contains discount factors, how were they calculated and finally, the PV amount of 1.54993, can you clarify how this was arrived at?
Thanks
Hello @Eustice_Langham
This is one of our paid practice questions for those who have paid for an FRM study package. David has discussed this practice question extensively in the paid section of the forum here: https://forum.bionicturtle.com/thre...and-basic-interest-rate-swap-valuation.10657/. If you would like access to the paid section of the forum, you would need to purchase one of our study packages. I was not able to find a previous order from you (maybe you purchased under a different username). I was going to look up your account to see when you purchased so I could provide you with information on an extension, but I cannot find an order from you.
#### Eustice_Langham
##### Active Member
Nicole, you are correct in that I don't have a paid subscription. This question, i.e. 722.2, is a question that I downloaded from your website as a sample question that BT provides for free. I can send through the actual document if you would like to see it. If, however, it's still not something that you can discuss, then I understand. Thank you.
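For anyone landing here from a search, the arithmetic quoted in the first post can be reproduced in a few lines. This is my own reconstruction, not BT's official solution: it assumes a flat 2.20% continuously compounded rate (used both to set the floating coupon, converted to its semiannual equivalent, and to discount the nine-month flow) and a fixed leg of 3.00% per annum paid semiannually on the $400 million notional, which is what the quoted $6.00 million implies.

```python
# A sketch reproducing the figures quoted above (assumptions listed in the text).
import math

notional = 400.0      # $ millions
r_cc = 0.0220         # continuously compounded annual rate (assumed flat)
fixed_cpn = 6.0       # $ millions per semiannual period (3.00%/yr assumed)

r_semi = 2 * (math.exp(r_cc / 2) - 1)       # semiannual equivalent of r_cc
floating_cpn = 0.5 * notional * r_semi      # 4.42429
net = fixed_cpn - floating_cpn              # 1.57571, exchanged at 9 months
pv = net * math.exp(-r_cc * 0.75)           # 1.54993

print(round(floating_cpn, 5), round(net, 5), round(pv, 5))
```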
|
{}
|
What Is A Strain Gauge?
A strain gauge is an electrical sensor which is used to accurately measure strain in a test piece. Strain gauges are usually based on a metallic foil pattern. The gauge is attached to the test piece with a special adhesive. As the test piece is deformed, so the adhesive deforms equally and thus the strain gauge deforms at the same rate and amount as the test piece. It’s for this reason that the adhesive must be carefully chosen. If the adhesive cracks or becomes detached from the test piece any test results will be useless.
Strain gauges are used not just for metals; they have been connected to the retina of the human eye, insects, plastics, concrete and indeed any material where strain is under investigation. Modern composite materials like carbon fibre when under development are often constructed with strain gauges between the layers of the material.
The strain gauge is effectively a resistor. As the strain increases so the resistance increases.
In a basic sense a strain gauge is simply a long piece of wire. Gauges are mostly made from copper or aluminium (Figure 1). As the wire in the strain gauge is mostly laid from end to end, the strain gauge is only sensitive in that direction.
When an electrical conductor is stretched within the limits of its elasticity it will become thinner and longer. It is important to understand that strain gauges actually deform only a very small amount; the wire is not stretched anywhere near its breaking point. As it becomes thinner and longer its electrical characteristics change. This is because resistance is a function of both cable length and cable diameter.
The formula for resistance in a wire is
$R = \frac{\rho L}{A}$

where $\rho$ is the resistivity in ohm metres, $L$ is the length in metres and $A$ is the cross-sectional area in square metres.

For example, the resistance of a copper wire which has a resistivity of 1.8 × 10⁻⁸ ohm metres, is 1 metre long and has a square cross section of 2 mm by 2 mm (an area of 0.002² m², i.e. 4 mm²) would be

$R = \frac{1.8\times10^{-8} \times 1}{0.002^2} = \frac{0.000000018}{0.000004} = 0.0045\,\Omega$
Resistivity is provided by the manufacturer of the material in question and is a measurement of how strongly the material opposes the flow of current. It is measured in ohm metres.
If in our example the cable was then put under strain, its length would extend to 2 metres; as it was stretched longer it would get thinner, and its cross sectional area would decrease, in this example to 0.5 mm by 0.5 mm (0.25 mm²). The resistance now would be

$R = \frac{1.8\times10^{-8} \times 2}{0.0005^2} = \frac{0.000000036}{0.00000025} = 0.144\,\Omega$
As can clearly be seen the resistance is now different, but the resistances in question are very small. This example shows only the difference when the characteristics of the copper wire have changed. It is not practically possible to stretch and extend a piece of copper wire by these amounts. The example merely shows how resistance changes with respect to length and cross sectional area and demonstrates that strain gauges, by their very nature, exhibit small resistance changes with respect to strain upon them.
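The two worked examples are easy to check numerically; a small sketch in plain Python, using only the figures above:

```python
# R = rho * L / A, with rho in ohm metres, L in metres, A in square metres.
def wire_resistance(rho, length, area):
    return rho * length / area

rho_copper = 1.8e-8                                   # ohm metres
print(wire_resistance(rho_copper, 1.0, 0.002 ** 2))   # 0.0045 ohm (2 mm x 2 mm)
print(wire_resistance(rho_copper, 2.0, 0.0005 ** 2))  # 0.144 ohm (0.5 mm x 0.5 mm)
```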
These small resistance changes are very difficult to measure. So, in a practical sense, it is difficult to measure a strain gauge, which is just a long wire. Whatever is used to measure the strain gauge's resistance will itself have its own resistance. The resistance of the measuring device would almost certainly obscure the resistance change of the strain gauge.
Figure 2: A Wheatstone bridge
Figure 3: With shunt resistor
The solution to this problem is to use a Wheatstone bridge to measure the resistance change. A Wheatstone bridge is a device used to measure an unknown electrical resistance. It works by balancing two halves of a circuit, where one half of the circuit includes the unknown resistance. Figure 2 shows a classical Wheatstone bridge; Rx represents the strain gauge.
Resistors R2, R3 and R4 are known resistances. Normally, $120\Omega$, $350\Omega$ or $1000\Omega$ are used depending on the application. Knowing the supply voltage and the returned signal voltage it’s possible to calculate the resistance of Rx very accurately.
For example if R2, R3 and R4 are $1000\Omega$ and if the measured signal voltage between measurement points A and B was 0 Volts then the resistance of Rx is
$\frac{Rx}{R2} = \frac{R3}{R4}\quad\text{or}\quad Rx = \frac{R3}{R4} \times R2$
For our example we get
$Rx = \frac{1000\Omega}{1000\Omega} * 1000\Omega = 1000 \Omega$
This implies a perfectly balanced bridge. In practice, because the strain gauge goes through different strain levels and its resistance changes, the measured signal level between measurement points A and B is not zero. This is not a problem when using a system like the Prosig P8000 as it can accurately measure the voltage between measurement points A and B.
It is necessary to know the relationship between resistance and voltage. Only then can the measured voltage be related to a resistance and, hence, a strain value.
Figure 3 shows the addition of another resistor RS, called the shunt resistor. The shunt resistor is a known fixed value, normally $126,000\Omega$.
The Shunt resistor is added for calibration purposes and is a very high precision resistor. By measuring the voltage between measurement points A and B with the shunt resistor across Rx, a voltage with the shunt resistor in place is known. Then by removing the shunt resistor, which is a known $126,000\Omega$ and measuring the voltage between measurement points A and B again, it’s possible to relate the measured voltage change to a known resistance change. Therefore the volt per ohm value is known for this particular bridge and this particular Rx.
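To make the shunt-calibration step concrete, here is a sketch of the arithmetic (my own illustration, assuming the shunt is switched in parallel with the active arm Rx; the measured voltage step below is a hypothetical value, not a P8000 specification):

```python
# Volts-per-ohm calibration: the shunt changes the effective resistance of the
# Rx arm by rx - (rx*rs)/(rx+rs) = rx**2/(rx+rs); dividing the measured voltage
# step by this known resistance step gives the bridge's volts-per-ohm factor.
def shunt_resistance_step(rx, r_shunt):
    parallel = rx * r_shunt / (rx + r_shunt)   # Rx with the shunt in place
    return rx - parallel

rx, rs = 1000.0, 126_000.0
dr = shunt_resistance_step(rx, rs)   # about 7.874 ohm
dv = 0.0049                          # hypothetical measured voltage step, volts
print(dv / dr)                       # volts per ohm for this particular bridge
```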
In order to go one step further and calculate the strain from the resistance, the gauge factor must be known. This is a calibrated number provided by the manufacturer of the strain gauge. With this information the sensitivity of the whole sensor can be calculated. That is, the volt per strain is known.
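For orientation, the widely used quarter-bridge small-signal approximation ties these pieces together: for an initially balanced bridge, $V_{out}/V_{ex} \approx GF \cdot \epsilon / 4$. A sketch follows (my own illustration with a hypothetical gauge factor of 2.0, not a description of the P8000's internal computation):

```python
# Invert Vout/Vex ~ GF * strain / 4 to recover strain from the bridge output.
def strain_from_bridge(v_out, v_excitation, gauge_factor=2.0):
    return 4.0 * v_out / (v_excitation * gauge_factor)

eps = strain_from_bridge(v_out=2.5e-3, v_excitation=5.0)   # 2.5 mV on 5 V
print(f"{eps:.6f} strain = {eps * 1e6:.0f} microstrain")   # 0.001000 = 1000
```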
Inside the P8000 the resistors used to complete the bridge are very high precision. This allows the Prosig P8000 to calculate the resistance, and therefore, strain with a high degree of accuracy.
Strain gauge readings can be affected by variations in the temperature of the strain gauge or test piece. The wire in the strain gauge will expand or contract as an effect of thermal expansion, which will be detected as a change in strain levels by the measuring system as it will manifest itself as a resistance change. In order to address this most strain gauges are made from constantan or karma alloys. These are designed so that temperature effects on the resistance of the strain gauge cancel out the resistance change of the strain gauge due to the thermal expansion of the test piece. Because different materials have different thermal properties they therefore have differing amounts of thermal expansion.
So, where temperature change during the test is an issue, temperature compensating strain gauges can be used. However this requires correctly matching the strain gauge alloy with the thermal expansion properties of the test piece and the temperature variation during the test. In certain circumstances temperature compensating strain gauges are either not practical nor cost effective. Another more commonly used option is to make use of the Wheatstone bridge for temperature compensation.
When using a Wheatstone bridge constructed of four strain gauges, it is possible to attach the four gauges in a fashion to remove the changes in resistance caused by temperature variation. This requires attaching the strain gauge Rx in the direction of interest and then attaching the remaining strain gauges, R2, R3 and R4, perpendicular to this. The piece under test however must only exhibit strain in the direction across Rx and not in the perpendicular direction.
It’s important to understand that the R2, R3 and R4 strain gauges should not be under strain, hence their direction. This means the whole bridge is subject to the same temperature variations and therefore stays balanced from a thermal expansion point of view. As the resistance changes due to temperature, all the resistances in all four gauges change by the same amount. So the voltage at measurement point A and B stays constant due to temperature fluctuations. Only the strain in the desired direction, across Rx, in the test piece affects the measured voltage readings.
The Prosig P8000 system has multi-pin inputs, these allow for the connection of strain gauges in all the various different bridge configurations.
Figure 4: Quarter bridge. Figure 5: Half bridge. Figure 6: Full bridge.
Strain gauges can be used in the following configurations:
Quarter Bridge is the most common strain gauge configuration. As can be seen in Figure 4 it is actually a three wire configuration. The rest of the bridge as shown in Figure 2 is completed inside the Prosig P8000 system. Quarter Bridge uses three wires to allow for accurate measurement of the actual voltage across S1.
Half Bridge is a less often used strain gauge configuration. As can be seen in Figure 5, it is actually a five wire configuration. The rest of the bridge as shown in Figure 2 is completed inside the Prosig P8000 system. The main advantage of the Half Bridge configuration is that both strain gauges, S1 and S2, can be attached to the test piece, but perpendicular to each other, which, as previously discussed, allows for temperature compensation.
Full Bridge is used for situations where the fullest degree of accuracy is required. The Full Bridge configuration is a six wire system, as shown in Figure 6. It is the most accurate in terms of temperature variation because it can have two active gauges, S1 and S4. The gauges can be configured with S1 and S4 in the direction of interest on the test piece and S2 and S3 perpendicular to this. Further, the voltage sense lines carry no effective current flow and therefore suffer no voltage drop, so the voltage measured by the Prosig P8000 system is the actual voltage exciting the bridge. This matters because strain gauges are often on long wires, and all wires have their own resistance. The Prosig P8000 system could be exciting the gauge with 5 Volts, for example, but the voltage at the active part of the bridge might be 4.95 Volts because of the resistance of the wires carrying the supply voltage. Once measured using the sense lines, this small difference can be allowed for automatically in the strain calculations inside the data acquisition system.
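As a rough sketch of why the sense lines matter, consider the lead-wire voltage drop with some assumed numbers; the lead resistance and bridge current below are invented, and only the 5 V / 4.95 V figures mirror the example in the text.

```python
# Why remote sense matters: bridge output scales with the excitation
# that actually reaches the bridge, not the voltage the instrument drives.
v_supply = 5.00      # excitation driven by the instrument (volts)
r_lead = 2.5         # one-way lead resistance (ohms, assumed)
i_bridge = 0.01      # current drawn by the bridge (amps, assumed)

v_at_bridge = v_supply - 2 * r_lead * i_bridge   # 4.95 V in this example

# Using v_supply instead of the sensed v_at_bridge overstates strain:
error_pct = (v_supply / v_at_bridge - 1) * 100
print(f"uncorrected readings would be ~{error_pct:.1f}% high")
```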
Strain gauge measurements with direction
Figure 7: Strain gauge rosette
Strain gauges can be configured in a particular pattern that allows for the calculation of the overall strain component; this is often referred to as a strain gauge rosette. As shown in Figure 7, three strain gauges are placed either very close together or, in some cases, on top of each other. These can be used to measure a complex strain: the strain is complex because it has both an amplitude and a direction. Using the Prosig DATS software it is possible to calculate the principal component of the strain, the amplitude over time, and the direction as an angle from the reference X axis over time.
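For a flavour of that calculation, here is the textbook principal-strain formula for a rectangular (0°/45°/90°) rosette. The layout and the microstrain readings below are assumptions for illustration, since the article does not specify which rosette geometry DATS is given.

```python
import math

def principal_strain(e1, e2, e3):
    """Principal strains and direction from a 0/45/90-degree rosette.

    e1, e2, e3 are the strains read from the three gauges; the angle is
    measured from the 0-degree gauge axis.
    """
    center = (e1 + e3) / 2.0
    radius = math.sqrt((e1 - e2) ** 2 + (e2 - e3) ** 2) / math.sqrt(2.0)
    theta = 0.5 * math.atan2(2 * e2 - e1 - e3, e1 - e3)
    return center + radius, center - radius, math.degrees(theta)

# Assumed readings of 400, 250 and -100 microstrain:
e_max, e_min, angle = principal_strain(400e-6, 250e-6, -100e-6)
print(f"{e_max * 1e6:.0f} / {e_min * 1e6:.0f} microstrain at {angle:.1f} deg")
```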
James Wren
Solutions Engineer and Sales & Marketing Manager at Prosig
James Wren is a Solutions Engineer and the Sales & Marketing Manager for Prosig Ltd. James graduated from Portsmouth University in 2001, with a Masters degree in Electronic Engineering. He is a Chartered Engineer and a registered Eur Ing. He has been involved with motorsport from a very early age with special interest in data acquisition. James is a founder member of the Dalmeny Racing team.
44 thoughts on “What Is A Strain Gauge?”
1. Jeremy Church
The picture of the gage at the top can be misleading by showing only 2 lead wires. One of the bigger sources of error due to temperature is the lead wires. This can be compensated for with a 3-wire lead configuration. In a quarter bridge setup there should be two lead wires from one side of the gage and one from the other. These leads should all run together to experience whatever temperature delta is present. The resistance change is then canceled by the bridge circuit. It is stated that 3-wire should be used under the quarter bridge, but it is not stated how it should be laid out. A simple jumper at the connector would do the job but would not properly compensate for the temperature and would introduce error.
I'm having a hard time downloading and actually browsing the PDF file. It took me forever, so I finally quit.
3. james Post author
Mr Church,
Your points are well made and correct, sir. The picture was intentionally a 2-wire system in order to illustrate that the strain gauge, at a very basic level, is a very long single piece of wire, or more simply a resistor. The first part of the article attempts to explain to the reader what a strain gauge is and how it works in a very basic sense before moving on to more complex real-world issues.
As the article tries to show in Figure 4, the classical quarter bridge configuration is in fact a 3-wire system. The wires connecting to the gauge in Figure 4 should be the same length and, as you rightly state, they should follow the same route, for the reasons you give. By following these points the bridge will be balanced, by virtue of the fact that the resistance of the lead wires will be the same.
Your point about the lead wiring running together is a valid one; in most cases this sort of point is glossed over, but with experience and wisdom with strain gauges these things are learnt. Thank you for sharing your knowledge with our readers.
4. mok
A strain gauge has two fixed resistors R3 and R4 of $150\Omega$ each and a variable resistor R2 which is $110\Omega$ at zero strain and $110.75\Omega$ with the strain (R1 = Rg). The gauge factor is 2.54. How do I determine the strain where the strain gauge is attached? Can you help me with this problem? Thank you sir…
5. James Wren
Hello Mok,
It really sounds like you're trying to do it all in one giant step. Perhaps you should break the problem down into smaller chunks.
A strain gauge is like a resistor, a strain gauge bridge is made of 4 resistive elements.
All the resistors in the bridge should be of the same value, 120 Ohms is often used in industry. The resistors must be the same value to balance the bridge. There are other techniques to balance a bridge, but for clarity in this case we’ll assume the bridge must be balanced by the four resistors having the same resistance.
When you have setup your bridge you should attach the active strain gauge (assuming you have only one active element in your bridge) to the area where you are interested in knowing the strain. I am afraid we cannot offer advice about where to attach your gauge.
You should then be able to read back a value of zero volts from your bridge. Then, when your material under test has some force applied, producing a strain in the area where your gauge is attached, you'll see the voltage from the bridge change to something other than zero. This voltage change is proportional to the strain in that gauge.
You simply then use the bridge and gauge factor, supply voltage, output voltage and non-deformed and deformed gauge resistance values to calculate the strain.
6. Alex
What could cause large spikes (+ and -, some to infinity) in strain gauge readings? I am using strain gauges to measure the force required to cycle through exercise bikes at different levels of resistance. The resulting graphs reveal trends of required force but there are so many spikes and variations it is not accurate enough. The gauges are electrically grounded. They are subjected to vibration during the testing.
7. James Wren Post author
Hello Alex,
Thank you for asking a question on our blog.
Strain gauges are effectively wires and long wires at that. They can behave like aerials and pick up electrical signals from any other sources, for example mains electricity at 50Hz is quite a common source of noise.
The sample rate you choose is very important. You must use a sample rate that reflects what you’re looking for. If you are studying the human body and motion on a bicycle we are talking a few Hz at most. For example, an effective bandwidth of 10Hz would need a sample rate of 24Hz. If you sample higher you will simply be collecting noise, you will gain no useful information below 10Hz.
Now perhaps 24Hz is not high enough for your application or is not practically possible. In this case you should consider using a low pass filter on your captured data. This will remove the effects of the high frequency noise you are seeing. The result will be the dynamic strain you are looking to study.
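For readers who want to try this, a minimal SciPy sketch of the suggested low-pass filtering; the 500 Hz sample rate and the synthetic signal are assumptions for illustration, not Alex's actual data.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500.0      # assumed sample rate (Hz)
cutoff = 10.0   # keep the low-frequency pedalling dynamics

# 4th-order Butterworth low-pass; filtfilt gives zero phase shift.
b, a = butter(4, cutoff / (fs / 2), btype="low")

t = np.arange(0, 5, 1 / fs)
noisy = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.random.randn(t.size)
smooth = filtfilt(b, a, noisy)   # spikes and mains pickup attenuated
```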
8. Nick
Hello!
I would like to ask you a question. Is it possible to measure vibrations with the strain gauges? I do not want to use an accelerometer because they are too big for my application (I wanted something like the surface bonded strain gauges).
9. James Wren Post author
Hello Nick,
Thank you for asking a question on our blog.
You pose an interesting question.
Strain and Acceleration are not commonly compatible types.
I am sceptical that you will be successful in this endeavour.
Acceleration, Displacement and Velocity are all related. For example an accelerometer will traditionally measure displacement and convert it to acceleration internally. The strain in the material is not really related to the acceleration.
A Strain Gauge will measure the strain in the material it is adhered to. This is not necessarily the acceleration of the component.
There may be a relationship between strain and vibration. You could measure the strain and draw a conclusion on the possible acceleration level. But you would need to first measure and categorise the relationship between the strain and vibration.
In my experience I would advise against it and try to find a way to use an accelerometer for your application.
10. ayesha
hi
I am doing mechanical engineering, in my final year. We are trying to fabricate a dynamometer to measure torque using electrical strain gauges, but we are facing the problem of slip rings: we cannot afford them. Could you please suggest how to use strain gauges on a rotating shaft to measure torque with any alternative to slip rings, or any other technique? Thanks.
11. James Wren Post author
Hello Ayesha,
Thanks for asking a question on our blog.
I think you may have hit a technical barrier there, you have to pass the signals through a medium that allows for the mechanical rotation, if you just used cables they would soon become twisted and fail.
I have colleagues who have used wireless sensors, but these are even more expensive.
I would suggest that you need to find a mechanism in your budget.
12. Stephen Barnes
Thanks for putting this together! I noticed a small detail in the beginning. Resistivity has units of resistance * length. Normally it’s listed as ohm*meters (or micro-ohm*centimeters). The formula you listed says ohms per meter which is incorrect.
13. James Wren Post author
Hello Stephen,
Thanks for making a note on our blog.
You are quite correct, we have updated the article, thank you for pointing this out.
If you have any further comments or questions, please feel free to share them.
14. Jon Wilson
Excellent article explaining how strain gages work in bridge circuits.
Many strain gages, used in transducers, are silicon chips doped to optimize their strain constants. They have much greater sensitivity than metal gages, but also are more difficult to match and to temperature compensate. Most recent transducers use four strain gages plus various compensation resistors in the bridge circuit.
15. pankaj
hello.
I am doing electronics engineering. I am working on a real-time weight measurement experiment using a strain gauge with LabVIEW software and data acquisition cards. Please let me know how the strain gauge works in this experiment.
16. James Wren Post author
Hi Pankaj,
It sounds like you're using the strain gauges as part of a weigh scale.
Basically, the strain gauges are put under load by the item you're measuring, which changes their resistance values. The load puts the material the gauges are on under strain.
When you put a known mass, like 1 kg for example, on the scale, you monitor the voltage change and therefore the resistance change in the elements of the bridge; hence you have a known voltage change for a known mass. Thus you can calculate how the voltage will change for any mass, and therefore you have a linear sensitivity in volts per kg or unit.
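A minimal sketch of that calibration logic, with assumed voltage readings:

```python
# Two-point calibration of a strain gauge weigh scale. Values assumed.
v_zero = 0.0000        # bridge output with no load (volts)
v_reference = 0.0042   # bridge output with a 1 kg reference mass
sensitivity = (v_reference - v_zero) / 1.0   # volts per kg

def mass_from_voltage(v):
    """Assumes the bridge stays within its linear region."""
    return (v - v_zero) / sensitivity

print(mass_from_voltage(0.0105))   # ~2.5 kg for this assumed reading
```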
1. Bruce Hefford
Dear James,
Sorry to be pedantic, but the second equation for the resistance of a piece of copper wire appears to be incorrect. The wire is stretched to 2m and its cross-sectional area decreases to 0.5mm^2, which equates to 0.0005m^2, not 0.005m^2. So the resistance is 0.144 Ohms, not 0.00144 Ohms.
Yours sincerely,
Bruce Hefford
1. James Wren Post author
Hello Bruce,
You are quite correct, thank you for pointing this out.
I have corrected the article.
The formula is only an example and is not actually correct in any case; it is just intended to show the process in action rather than any hard results.
I don’t think being correct is being pedantic!
17. Teer
Hello,
I am working with strain gauge measurements on electrical steel laminations. Is the strain reading affected if the lamination vibrates while I am measuring?
Thanks.
1. James Wren Post author
Hello Teer,
Thank you for asking a question on our blog.
Your question is quite simple: if there is a piece of metal which is vibrating, does this vibration induce any strain in the metal?
The answer is not so simple: it might or might not be inducing some strain. It depends on the vibration, whether the metal is supported well or not, the size of the metal mass, and so on.
The basic rule is this: if the metal is bending in any way then there will be some stress/strain.
I would expect any piece of metal that is vibrating to move around as its own structure, and therefore I would expect various amounts and directions of stress and strain in the material.
18. selva
Sir, I am studying electronics engineering in my final year. I am interested in doing a project with a strain gauge. My doubt is how to measure the input value and output value of the strain gauge, and also how to use it in LabVIEW. I need a full explanation of this. Please help me…
1. James Wren Post author
Hello Selva,
Thank you for asking a question on our blog.
With regard to how to measure the input and output of a gauge, you would need a Wheatstone bridge as detailed in this article; I would suggest the 6-wire version for you. The 4 resistors, at least 1 of which is your strain gauge, would accept an input voltage as the excitation of the bridge, and the output voltage would be proportional to the strain in your gauge, once calibrated. These steps are detailed in the above article.
With regard to how to measure in LabVIEW, we would struggle to assist. Prosig software and Prosig hardware are designed to be user friendly and straightforward to use: you simply use the equipment and obtain your results. I believe with National Instruments (NI) equipment you have to design the data capture front end and then design and build the signal processing software yourself. We would recommend using Prosig equipment to someone in your position.
19. MECHRI
Hellow Mr. James Wren
Please, I would like to know which kind of strain will be measured by the strain gauges:
engineering strain or true strain.
Beginner
Regards
1. James Wren Post author
Hello Mechri,
Thank you for asking a question on our blog.
I think you're asking whether a strain gauge measures strain accurately or not.
The answer is quite simple: a strain gauge measures the strain in the surface of the material. It does this by measuring the displacement of the surface of that material with respect to itself. Does the part under test get longer or shorter, essentially.
The strain is assumed to be uniform through the material under test. In practice, however, the strain may not be uniform in the material.
But it is not possible to measure the strain inside the material unless the strain gauge is put into the material when the part is created.
For example on a modern Formula One racing car strain gauges are embedded into the layers of carbon fibre as they are formed. Then it is possible for the engineers to measure the strain levels inside the material as well as the surface.
Additionally there is the issue of strain measurement and temperature variation, as strain gauges are effectively resistor, but this is discussed in the article above.
20. Arash
For electrical steel laminates (especially grain oriented laminates), it's a bit tricky to polish the surface to mount strain gages. These samples are very thin, and by sanding you can significantly change the thickness of the laminates right at the spot where you are mounting the gage. How do you prepare the surface of electrical steel laminates for good bonding of strain gages?
1. James Wren Post author
Hello Arash,
Thank you for posting on our blog.
In those situations we would simply not polish the surface of the material.
Sounds simple, doesn't it? You know yourself that the mounting of strain gauges is a little more like an art than a process that is right or wrong.
If the polishing process would change the thickness of the material substantially, then we would consider other ways to mount the gauge: another type of adhesive or another type of gauge, for example.
If dealing with laminates you could also consider producing the material with the strain gauges embedded in it, on one of the layers of the laminate; I accept this is rarely possible though.
21. karthik
Hello.
What kind of wires can we use to connect the lead wires of the strain gauge? We use a 2 mm gauge length strain gauge. And how can we decide the sampling rate so that we don't pick up noise?
1. James Wren Post author
Hello Karthik,
Thank you for asking a question on our blog.
Generally you would want to use wires with a low resistance, typically about $0.5\Omega$ per metre. Something with a gauge of about $0.1mm^2$ would be fine.
Additionally, they must be insulated, as humidity and moisture can greatly affect the measured values.
With regards to noise, you should sample only as high as needed to capture what you need and no higher; if you sample higher you will of course pick up more noise.
So consider what frequency content you require, then sample at least 2.5 times faster than that.
22. Neda Nsh
Now I have a question. I need to design a sensor that can measure two forces, fx and fy, and a moment, but I have some vibration around these forces and moment (there is a cantilever beam, and all of the forces and the moment act on it). Does a strain gauge sense the vibration? How can I deal with that? Do you have any ideas? I have searched a lot about this problem. I used one strain gauge in the longitudinal direction and one in the lateral direction for measuring fx, fy and M. I would be glad if someone could help me.
1. James Wren Post author
Hello Neda,
Thank you for asking a question on our blog.
Strain is defined as the deformation of a solid material due to stress; stress, in turn, is force per area.
A strain gauge will measure the strain in a material, usually at the surface where the gauge is mounted. If the vibration you mention causes a strain in the material then you’ll see the effect of the vibration in the strain signals. You can not measure vibration directly with a strain gauge.
I’d suggest using an accelerometer to measure the vibration and a strain gauge to measure the strain, which you can then relate to stress if you so desire.
23. mukund
Sir,
I want to know about the role of the gauge factor in strain calculation. We are using a 3-element strain gauge and a quarter bridge configuration at the data logger side.
1. James Wren Post author
Hello Mukund,
Thank you for asking a question on our blog.
Before I begin on the subject of the gauge factor, I think we have to clarify: you always have 4 elements in a bridge. If you have a quarter bridge configuration, it means that 1 element is outside the front end and the other 3 elements are internal to the front end.
That is why it is called strain gauge completion, the completion is inside the front end.
A quarter bridge (1/4) configuration will have 3 elements inside the front end to perform the completion.
A half bridge (1/2) configuration will have 2 elements inside the front end to perform the completion.
A full bridge configuration will have 0 elements inside the front end as the whole bridge is completed outside of the front end.
Now going back to your original question,
The gauge factor is the ratio of the relative change in resistance to the actual mechanical strain of the gauge in question. So the gauge factor is key to correctly calculating the actual strain in the material under test, as in reality the measurement system is measuring the change in resistance, and therefore the balanced or unbalanced state of the bridge.
24. Daniel Jones
Hi,
Great blog. I have been doing some research and it looks like a fundamental parameter of the strain gauge is its sensitivity to strain, which is the gauge factor. I believe the gauge factor is defined as the ratio of the relative change in resistance to the change in length (strain). The gauge factor for metallic strain gages is typically around 2, or so it seems.
Can you comment on this please?
1. James Wren Post author
Hi Daniel,
We are glad to hear you're finding our blog useful; it is very pleasing to hear from people who post and give us their feedback.
The basics of what you have said are correct, so well done on the research! With regards to metallic gauges, I believe that is a general rule of thumb which holds in almost all situations; of course, there are always exceptions!
Feel free to post back if you have further questions at all.
25. Alfred
Hi,
I have some questions. Does a strain gauge have a minimum strain range, below which the strain gauge cannot give any response? And what is the difference in accuracy between a 2-element gauge (plane and cross) and two single gauges?
I guess some small vibration of the sample may not deform the gauge, so is there any data or specification of the strain gauge relating to that?
1. James Wren Post author
Hello Alfred,
Thank you for posting on our blog.
Strain gauges are easily saturated, generally speaking. That means they can usually only measure from no strain up to a particular level of strain; after that level is reached they just report the same reading. They do tend to be very good at low levels though.
To be honest, it depends more on the material properties than on the strain gauge. How much will the gauge deform? Well, how much will the material deform!
You ask about accuracy with different types of gauge; in short, the accuracy is the same. However, compensation is not the same, and that is key for strain gauges. The plane-and-cross arrangement, as you call it, is for temperature compensation; the simple straight gauge has no temperature compensation. So if the temperature is going to change during the test, the method with compensation will be more accurate.
With regards to vibration, you're asking the wrong question. You're asking if a gauge can detect vibrations; you should be asking: does the material deform under certain vibrations? If so, by how much?
Would a gauge then be able to measure that?
26. Nikhil Gupta
Hi,
Please tell me the option in DATS which is used to remove spike in strain gauge signal.
Thanks,
Nikhil.
1. James Wren Post author
Hello Nikhil,
Thanks for posting a question on our blog, we are always happy to hear from new readers.
There are many, many tools in DATS for the automated removal of spikes, there is even a function called ‘Spike Removal’ for just this purpose.
How to do it without an advanced signal processing package like DATS is more complex. There are many ways to remove spikes and which to use really depends on the data type and the characteristics of the spikes that are to be removed.
Can you let us know some more details of what you're trying to do?
27. James Wren Post author
Hello Deepak,
Thanks for asking a question on our blog.
Fundamentally, strain gauges measure the change in surface strain of a material.
So really you have to ask yourself a more structured question.
Does the surface strain of the pressure container change with the pressure of the gas/fluid inside?
The answer is almost always yes. But you would have to understand the structure of the container and the pressures you're trying to analyse over. Perhaps small pressure changes will not make detectable changes to the container's surface strain.
Vibration is more complex. As a basic concept, as stated above, if the surface strain of a material changes, then you can measure it with a strain gauge.
So would a vibrating material have a changing surface strain?
Unit 6.3: Find the Quotient of Powers
Students will simplify exponential expressions over the operation of division. Simplified expressions will use only positive exponents.
$$\frac{x^7}{x^2}$$
Students will simplify exponential expressions over the operation of division. Simplified expressions will use positive or negative exponents.
$$\frac{x^5 y^3}{x^9 y^4}$$
Students will simplify exponential expressions over the operation of division. Simplified expressions will have coefficients and use positive or negative exponents.
$$\frac{4x^{10} y^7}{2x^{11} y^2}$$
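For reference, worked solutions to the three sample expressions (my additions, not part of the original exercises), using the quotient rule $x^a / x^b = x^{a-b}$:

$$\frac{x^7}{x^2} = x^{5} \qquad \frac{x^5 y^3}{x^9 y^4} = x^{-4} y^{-1} = \frac{1}{x^4 y} \qquad \frac{4x^{10} y^7}{2x^{11} y^2} = 2x^{-1}y^{5} = \frac{2y^5}{x}$$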
# Year 2020 Resolution Log
The high level goals and expectations for the year 2020.
status: in-progress; reading time: 20 mins
# Preamble
A New Year's resolution is a common western tradition where, at the start of the new calendar year, a commitment to lasting change is established. These resolutions typically include changing an undesired behaviour (quitting smoking), accomplishing a personal goal (going to the gym consistently), or otherwise improving one's life (becoming less offended by others).
By documenting my resolutions, I intend to have an accountable record to refer to when I review the events of the year 2020. The resolutions that are proposed minimize dependence on external factors, and therefore remain primarily within my realm of control.
## Master’s Degree
By the end of 2020, I aim to have a significant portion of my master’s degree thesis completed. I will adhere to the thesis preparation, requirements, and deadlines set by the University of Alberta Faculty of Graduate Studies and Research.
If possible, my final oral examination (thesis defense) will have been scheduled/completed.
## At Least One App
By the end of the year, at least one high quality mobile application will be available to download from the popular mobile application distribution platforms (Google Play Store, Apple App Store).
## Commitment to Health and Wellness
Throughout the year, I aim to commit to the following lifestyle changes:
1. Continue on the Fast-5 daily intermittent fasting schedule. Currently, I have successfully fasted since December 22nd, 2019 with an eating window between 2pm and 7pm.
2. Maintain morning fitness routine with Bonnie. I aim to complete at least one ten minute FitnessBlender exercise per day, at least five days a week. Two days of arbitrary flex time per week are provided.
3. Sustain current bouldering (rock climbing) activity at the University. The Wilson Climbing Centre is free for students and walking distance from my lab. While completing my MSc. thesis, I will attend the climbing wall two times per week, if available.
# Rubric
| Resolution | Unsatisfactory | Satisfactory | Good | Excellent |
| --- | --- | --- | --- | --- |
| Master's Degree | Thesis largely incomplete. Significant effort required (>3 months) for written dissertation. | Thesis requires major revision. Between 1-3 months of effort is necessary for dissertation pass and oral examination. | Thesis requires minor revision. Less than one month of effort is necessary for degree completion. | Thesis complete. Defense scheduled or completed. Convocation scheduled. |
| Mobile App | No mobile application is published on any distribution platform. | One application is published onto the major distribution platforms. No other criteria satisfied. | One or more applications have been published. Applications have generally positive user feedback and reviews. | More than one application has been published. Applications have positive user feedback, reviews, and monetary income. |
| Health & Wellness | Over 90 inexcusable skips or absences of the Fast-5, FitnessBlender, and Climbing Centre activities. | Between 60 and 90 inexcusable combined skips or absences. | Between 30 and 60 inexcusable combined skips or absences. | Fewer than 30 inexcusable skips or absences. |
## Health and Wellness Penalty Calculation
Scores are calculated every Sunday. Skipping one full week of fasting, fitness blender, and climbing would incur a penalty of:
$$7_\text{Fast} + 5_\text{Fitness} + 2_\text{Climb} = 14_\text{Penalty Points}$$
Although what counts as an inexcusable absence is itself subjective, the intention is to distinguish days where I did not perform a health and wellness activity due to laziness.
# Resolution Log
I will update this post every week until the end of 2020.
## January
January 2020 penalty summary:
$$\text{Fast} = 1 + 1 \text{; Fitness} = 0 \text{; Climb} = 2$$
It is eye opening how long recovery from a cold takes.
### Week 1: Wed, Jan 1 - Sun, Jan 5
Fast-5 accomplished each day; climbed on Thursday, Jan 2nd. Skipped one day of Fitness Blender on Sunday, Jan 5. The most difficult part of this journey is the fast, especially in social situations where other people are eating.
### Week 2: Mon, Jan 6 - Sun, Jan 12
Skipped two days of Fitness Blender. Fast-5 failed on Sunday due to dinner with my girlfriend's family; I ate outside of the eating window, finishing dinner around 8:30pm. Climbed on Monday and on Friday. Weighed myself: roughly 69kg (152lb).
### Week 3: Mon, Jan 13 - Sun, Jan 19
I’ve been run down with a terrible cold this week. What had begun on Friday, January 17th as an unusually pervasive sore throat had turned into a weekend of progressively becoming weaker. A full blown hallucinogenic nightmare of chills, cold sweats, and phlegm occurred by Monday. That being said, given the time period specified in the header, I was able to climb twice, I never broke the fast, and I skipped two days of Fitness Blender (within my 5 days per week criteria).
### Week 4: Mon, Jan 20 - Sun, Jan 26
Most of this week was spent in cold/sickness recovery. Tuesday, I had a meeting at the CN tower to discuss the ECG research. Friday, the only day I went to the lab, I was able to reverse engineer the ECG traces from a SQL database dump. I broke my fasting window Friday the 24th due to Chinese New Year evening dinners at my grandmother’s. Also no wall climbing was done this week due to illness.
### Week 5: Mon, Jan 27 - Sun, Feb 2
I did not adhere to my fast on Saturday February 1st. I also did not climb at all this week with no valid excuse other than laziness and poor time management. I performed fitness blender activities at least five times this week.
## February
### Week 6: Mon, Feb 3 - Sun, Feb 9
I had a moment of weakness this Saturday morning and indulged in two hotdogs during a grocery run to Costco. Additionally, this Sunday was my Grandmother’s birthday dinner. I continued eating until well after the 7pm window close. I missed three days of Fitness blender. I climbed twice this week.
### Week 7: Mon, Feb 10 - Sun, Feb 16
Many lapses in the fasting window occurred this week: Friday, Valentine's Day dinner; Saturday, post-V-Day hot pot dinner with friends; Sunday, dinner with Bonnie's family. I was able to climb twice this week and I performed Fitness Blender every day.
Speaking to a friend about my fasting window and the strictness with which I apply my rubric, I was offered a suggestion to begin recording the deviation in time from the window, rather than just marking failure cases. One behaviour I see is that if I break the window, I no longer have an incentive to continue the fast for the remainder of the day. More fine-grained logging of my eating habits may provide more insight into the effectiveness of the fast.
### Week 8: Mon, Feb 17 - Sun, Feb 23
The climbing wall is closed from Feb 15 to Feb 26 due to Adaptability wall construction. I did not track my eating windows this week, but noted three days in which I had not followed my fasting window schedule of 2-7pm. I fitness blended daily, with the exception of Thursday.
### Week 9: Mon, Feb 24 - Sun, Mar 1
Climbed twice this week. Performed daily fitness blender exercises. Maintained consistent fasting window, eating only between 2-7pm.
Starting to allow myself to be less strict about fasting, as results are very apparent (target weight achieved, have abs again). Social eating (late dinners in particular) is very problematic for the fasting window. Most times my breakfast meal is delayed until 3pm to compensate. Should I stop tracking the fasting schedule and just delay meals as habit? That would be problematic for the year-long rubric/schedule.
## March
### Week 10: Mon, Mar 2 - Sun, Mar 8
Ate past 7pm on Sunday, March 8th. Otherwise nominal.
### Week 11: Mon, Mar 9 - Sun, Mar 15
Deferred to Week 12
### Week 12: Mon, Mar 16 - Sun, Mar 22
The last two weeks have been quite a blur. The COVID-19 pandemic seemed to arrive quite suddenly. In Week 11 I climbed on the Monday, but have been too worried to go climbing since.
I had a difficult Friday, Mar 20th. My trusted Dell XPS 13 laptop popped a capacitor, and I no longer had a ready-to-go research laptop. The next day, after taking the day to self-reflect and reprioritize, I recalled the importance of being mindful, of having stillness and being focused.
At the beginning of the year, I wrote down a rubric outlining grading criteria with respect to my Master's degree, climbing/fasting/health and wellness, and the mobile app. I began to feel very discouraged and very nit-picky about tracking my dietary fasting windows and climbing. However, I remain satisfied with the long-term habits that I have developed with respect to my eating and exercise. Every morning, with a tolerance of 1 day per week, I do Fitness Blender morning exercises with my girlfriend. I also remain mindful most days, not eating until well into the afternoon. Admittedly, it is not always between 2-7pm: I have limited my evening snacking and pushed my breakfast meal out considerably past my previous limits. I believe that this is the right step for longevity, and the original rigour of keeping to the strict 2-7pm window is no longer appropriate.
I would like to try lists again. I have begun using a task/list application to track necessary todo items. I will try to check off as many of them as I can. Every week, rather than a passive recap of eating windows, I would like to self-reflect on how the lists approach is working. One thing that was surprising to me was how quickly I was able to get my Linux setup up and running again, and how approachable it was when using the list as a guideline. I will work on how best to phrase the todo items mindfully.
As for the rubric, I am no longer very serious about it, but I intend to reflect back and estimate in good faith how I fared. The metrics for the MSc. thesis and the mobile app are still very applicable, as a deliverable is generated (whereas fitness & health is more of a routine).
### Week 13: Mon, Mar 23 - Sun, Mar 29
I’ve found that keeping a todo list that I can check off has been very useful in keeping me accountable for my work. Keeping each task roughly equal to 1 hour’s worth of work appears to be the secret. If the task takes more than an hour to complete, it should be broken down further. Occasionally, I allow myself to write down tasks that I accomplished but have not recorded ahead of time just to have the satisfaction of marking it off of the list.
As a result of the todo task list, I’ve made progress in the ECG classification problem posed by the Physionet 2020 challenge organizers. I will submit my new model this week, as well as update the research log in the wiki.
In terms of eating, I am satisfied with my current schedule, which has naturally shifted from a ‘Fast-5’ to a ‘Lean-8’, where my allowed eating time starts around 11am and ends around 7pm. I have not been keeping track of it diligently.
### Week 14: Mon, Mar 30 - Sun, Apr 5
This week has been less productive than I would have liked. I experimented with not taking my morning daily ADHD medication, just to see the impact on my day-to-day activities. I found that I was less able to prioritize and focus on my tasks. It was difficult for me to plan detailed, itemized tasks for the next days. I felt like I had greater difficulty getting out of bed, that I was sleeping in considerably more, and that it was harder for me to get into the flow of productive work.
That being said, I was still able to make progress in the ECG research. I submitted one successful attempt at the PhysioNet challenge using the newer feature extraction methods. Will update again next week.
## April
### Week 15: Mon, Apr 6 - Sun, Apr 12
Perhaps the most important thing for getting things done is not a framework (lists, spreadsheets, tracking software) but rather the people you surround yourself with, who push and motivate you to become a better individual. I seem to have hit a bit of a plateau using the todo lists to dictate my daily tasks. It has become difficult to adequately plan the next day's activities, other than setting a general "Do 1 hour of research" and then actually accomplishing 4 hours. One of the challenges with specific, deliverable-based tasks is that it is hard to know what needs doing before discovering what the problem is. Most of research is simply churn, hyperparameter tuning, and waiting.
On an aside, I've shaved my head bald. I figured that since COVID-19 related restrictions are not going to ease up within the next week, I would take hair matters into my own hands and clip all of it off. It was beginning to get difficult to manage. I am amazed that people with long hair don't drive themselves crazy with the itching and shedding.
My daily fitness blender habits and lean-8 meal times are still going strong. The lean-8 is easier to enforce than the fitness blender, but for that I am grateful to have a motivating partner who starts the exercise each morning.
### Week 16: Mon, Apr 13 - Sun, Apr 19
This list/todo approach to task completion has been underutilized this week. Instead, three solid days were dedicated towards ECG classification research with promising results and figures. Lots of additional experimentation was done given new label-wise plots of F1-scores.
One blocker: my MacBook Pro battery has begun to swell. Where the four laptop feet once rested flat against the table, the bottom of the laptop now bulges out. I've ordered a new battery, but Amazon's reduced capacity means that it would arrive in early May (est. May 12th). So much of my productivity depends on reliable access to technology; having these laptop issues is a major source of recent anxiety.
### Week 17: Mon, Apr 20 - Sun, Apr 26
I really enjoy having the todo-list as a fallback, even if I don’t use it with careful intention. When there are long running tasks (that recur weekly, for instance) it imposes a much needed sense of routine and order that I find missing in my life, especially since the ‘norm’ of post self-isolation has not fully settled in yet. I am thankful that I have kombucha that I need to make every two weeks. I am grateful that I have structured finances and some assets that I need to manage each Monday. I appreciate that the PhysioNet 2020 CinC abstract deadline is Friday, May 1st.
As an aside, this week I've rediscovered Factorio, which has consumed far too much of my life. I need to show a little more restraint when playing that game, perhaps setting stricter timers (limiting myself to 2 hours per day at most, pending the task list). Additionally, I found myself stuck in this mental loop:
To choose to do nothing is also a choice, which means you cannot truly do nothing
I will complete the abstract, send it off for revision, and work on the opt web application this week.
### Week 18: Mon, Apr 27 - Sun, May 3
Despite making progress on the ECG classification research problem, I've come to realize a few disappointing facets of my research journey. I am hitting a wall of seeing only marginal improvements with my classification model, despite adding various new manual feature extraction techniques. I am using traditional signal feature extraction approaches and appear to be hitting a bottleneck of sorts. My supervisor and my PhD colleague have different opinions on the usefulness of hyperparameter search: my supervisor believes it to be a promising next step, while the PhD colleague insists that we are missing some silver bullet of feature extraction/signal representation that will improve performance. Myself, I see value in both, and it is a toss-up; I do think that setting up an AutoML/hyperparameter tuning experiment using random/genetic search would yield more long-term potential.
This Sunday (May 4th), I hit a bit of a mental and emotional pain point. I watched the DeepMind documentary on AlphaGo, the reinforcement learning program that defeated the top Go player in the world 4 games to 1. I was once again reminded that the start of my research journey was not fueled by maximizing a score on some classification challenge, but by exploring the fundamental inner workings of deep neural networks and how a self-learning agent can be used to improve itself over time. Although I think it is too late for me to switch my research gears again, I feel as if I need to make some more space for this learning; it is inexcusable for me not to pursue this given the time and space gifted to me by the COVID-19 pandemic.
I have made small improvements to opt, namely adding in some stubs for live view user authentication and Create/Read/Update/Delete. One blocker is that Gigalixir's free plan, which I gratefully use to deploy a prototype version of my application, does not support the postgresql citext extension for case-insensitive text datatypes.
This week, I aim to have an AutoML experiment running for ECG hyperparameter selection per classification label. I also aim to add in the Topological Data Analysis Arrhythmia Detection features, even though I am not using deep neural networks in my approach. I strongly want to resume my Reinforcement Learning (RL) journey by continuing where I left off in the Coursera lectures.
## May
### Week 19: Mon, May 4 - Sun, May 10
I've set up all the necessary feature extraction and parameter-passing metadata to enable a full AutoML experiment using microsoft/nni. Unfortunately, despite searching for three full days (3 days of non-stop experiment running), the hyperparameter search seems to give only a minor improvement over what I already had prior to the search. The only concrete finding is that XGBoost's dart booster seems to give better results than gbtree. Other tree parameters, like max_depth, learning_rate, and gamma, still vary a lot with no statistically significant findings. The amount of effort required to port over prior work's feature extraction instructions was non-negligible and consumed the majority of my productive time this week.
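For context, a minimal sketch of what an nni search space over these XGBoost parameters can look like; the ranges and the trial-script outline are illustrative assumptions, not the exact experiment configuration.

```python
# nni search space (JSON-style dict) over the XGBoost knobs mentioned above.
search_space = {
    "booster": {"_type": "choice", "_value": ["gbtree", "dart"]},
    "max_depth": {"_type": "randint", "_value": [3, 12]},
    "learning_rate": {"_type": "loguniform", "_value": [0.01, 0.3]},
    "gamma": {"_type": "uniform", "_value": [0.0, 5.0]},
}

# Trial script outline: nni hands out one sampled config per trial and
# expects a single scalar metric back.
#
#   import nni, xgboost
#   params = nni.get_next_parameter()
#   model = xgboost.XGBClassifier(**params).fit(X_train, y_train)
#   nni.report_final_result(f1_score(y_val, model.predict(X_val)))
```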
I watched a few lecture videos of RL but did not finish a week’s worth of material like I had originally anticipated. I also did not work on my Phoenix framework web application. The tasks list has been kept to a minor flow of maintenance tasks.
Because the official phase of the Physionet classification challenge begins this week, I will continue working on improvements to ECG feature extraction and classification.
### Week 20: Mon, May 11 - Sun, May 17
I have moved away from XGBoost classification of ECG records back to deep learning approaches using PyTorch and PyTorch Lightning. Otherwise this week has been fairly uneventful in terms of research. I’ve setup a workflow for streaming my desktop environment to various streaming services through Open Broadcaster Software (OBS), in addition to recompiling nginx on my home server to support RTMP video.
This week, I will review the updated dataset released by the Physionet Challenge organizers, as the last week’s submission had an error in the data files. Additionally, I will continue with the classification/transformers architecture implementation.
### Week 21: Mon, May 18 - Sun, May 24
I have set up a working multi-label classification PyTorch Lightning experiment in which per-label thresholds are searched to optimize the F-measure. The loss is Binary Cross Entropy with Logits. The forward pass during evaluation calculates probabilities using the sigmoid function. Treating the multi-label classification problem as a set of binary classifiers was very unstable, either resulting in all negative classes, or freezing to output a single value despite changing inputs. This is not resolved by weighting the loss function according to the imbalanced dataset.
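A minimal sketch of the per-label threshold search described above, assuming logits and 0/1 float targets as inputs; the grid spacing and epsilon are illustrative choices rather than the exact experiment code.

```python
import torch

criterion = torch.nn.BCEWithLogitsLoss()   # training loss, as above

def best_thresholds(logits, targets):
    """Pick, per label, the sigmoid threshold that maximises F1."""
    probs = torch.sigmoid(logits)          # evaluation-time probabilities
    grid = torch.linspace(0.05, 0.95, 19)  # assumed search grid
    chosen = []
    for j in range(probs.shape[1]):
        p, y = probs[:, j], targets[:, j]
        f1s = []
        for t in grid:
            pred = (p >= t).float()
            tp = (pred * y).sum()
            # F1 = 2TP / (predicted positives + actual positives)
            f1s.append(2 * tp / (pred.sum() + y.sum() + 1e-8))
        chosen.append(grid[torch.stack(f1s).argmax()])
    return torch.stack(chosen)
```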
I am still investigating the use of the transformer model architecture for deep learning. Potential new research directions: lead classification of a signal into the 12-lead ECG categories? Forecasting other lead signals given a subset of the 12 leads in an ECG?
I played way too much Factorio this week.
### Week 22: Mon May 25 - Sun, May 31
I updated my ECG classification experiments to support an arbitrary number of labels, due to the new task of mapping to ~76 SNOMEDCT codes. A lot of churn, due to the changing challenge requirements, meant that some assumptions I made about the machine learning experimental setup are no longer valid. I am rerunning the experiments with a larger number of parameters; the intuition is that perhaps the transformers and recurrent neural networks are not learning the distribution of the data because they have too few parameters (the SimpleCNN model has over 650k parameters, while the LSTM/Transformer has roughly 250k).
Additional rework of the caching logic needed to occur due to out of memory issues. I spent too much time trying to initialize a Kubernetes cluster. Most of my issues were due to the CRI-O runtime not playing nicely with default settings.
## June
### Week 23: Mon, Jun 1 - Sun, Jun 7
None of my ECG classification experiments have good results using the transformer architecture, despite lots of hyperparameter search. I gave an Edmonton Python Meetup talk on hyperparameter search frameworks on Monday, Jun 8th. We received notification that our abstract was accepted into CinC 2020, and now I need to figure out a way to incorporate the traditional machine learning approaches with the transformer architectures (which are not working anyway).
Feeling particularly anxious and frustrated about the lack of research progress and the dilemma of the abstract not matching the current methodology.
### Week 24: Mon, Jun 8 - Sun, Jun 14
I began writing up the four page conference paper, not using the deep learning approaches, but instead using gradient boosting tree ensembles on manually engineered features inspired by the signal processing and natural language processing domains. Additionally, rather than constructing figures for the paper using an image editing program (such as Inkscape), I opted to try out PlantUML and found its state diagrams more than adequate for my needs. For generating the PDF from the tex files, latexmk, a package recommended to me by Eddie, replaced my old setup of Tectonic and texlive.
The Physionet challenge organizers released over 40,000 records that require analysis. I will do a quick summarization of the data before proceeding with my classification methods (replicating old traditional ML approaches, then continuing to explore the transformer architectures).
### Week 25: Mon, Jun 15 - Sun, Jun 21
I've been having some experimental setup issues now that the number of files within the dataset has increased. These issues include RuntimeError: received 0 items of ancdata, which is referenced briefly in closed PyTorch issues. I took a break from resolving this, but I suspect it will be fixed by increasing the number of file descriptors. My MBP appears to still have the issue with sporadic shutdowns and restarts, despite the new battery and the multiple NVRAM/SRAM resets. I purchased a new Lenovo Yoga C740, which will arrive in the mail by the 26th.
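For the record, the two workarounds commonly suggested for this error, sketched on the assumption that DataLoader workers are exhausting file descriptors; I have not yet confirmed which one my setup needs.

```python
import resource
import torch.multiprocessing

# Option 1: raise this process's open-file limit to the hard maximum.
_soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))

# Option 2: pass tensors between workers via the filesystem instead of
# file descriptors.
torch.multiprocessing.set_sharing_strategy("file_system")
```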
### Week 26: Mon, Jun 22 - Sun, Jun 28
In terms of number of weeks, we are halfway through the year. I've spent a lot of time setting up a new Lenovo C740 laptop and running into various issues. As an aside, in the blog post I mentioned that the display brightness hotkeys did not work, but they have started working (unsure which update fixed this). I have updated my ECG outlier detection approach to be more robust to bad data, utilizing median filtering prior to bandpass filtering to reduce harmonic noise from large peaks.
## July
### Week 27: Mon, Jun 29 - Sun, Jul 5
I discovered a Python library for neurophysiological signal processing called NeuroKit2. There was a minor issue that I caught, and I submitted a pull request that was successfully merged into their development branch. It appears to be a promising feature extractor, taking roughly 1 second per record to extract a variety of heart rate variability metrics. The default signal cleaning function is superior to my raw bandpass filtering and median filtering approaches.
I will finish up writing my ensemble classifier, using these new extracted features.
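A minimal sketch of the NeuroKit2 steps implied above; the simulated signal stands in for a real ECG record, and the sampling rate is an assumption.

```python
import neurokit2 as nk

fs = 500  # assumed sampling rate (Hz)
ecg = nk.ecg_simulate(duration=10, sampling_rate=fs)   # stand-in record

cleaned = nk.ecg_clean(ecg, sampling_rate=fs)          # default cleaner
peaks, info = nk.ecg_peaks(cleaned, sampling_rate=fs)  # R-peak detection
# Short records limit the frequency-domain HRV metrics, but the call
# returns a one-row table of heart rate variability features.
hrv_features = nk.hrv(peaks, sampling_rate=fs)
```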
### Week 28: Mon, Jul 6 - Sun, Jul 12
I made a few code-quality pull requests to the NeuroKit2 library this week, notably refactoring all of the printed warnings to use the built-in warnings module. Additionally, I have a set of classifiers trained (100 experiments), with corresponding plots and feature importances for the team to review. I will write up a set of tests to ensure scoring function quality, as well as some sample re-weighting logic to make use of the non-zero weights across the different labels. Then, I will calculate the overall challenge scores using both approaches (raw labels, and label probabilities). I still need to integrate tsfresh features into the 12-lead ECG record in a sane manner.
##### Practice Final B12
bsmith
Does anyone know how to do this problem? I don't even know where to begin.
Lilyhui: March 19, 2015, 5:29 p.m.
You can draw a horizontal line with left endpoint 0 and right endpoint 3. The probability of landing to the left of 1 on the line is 1/3 (this is your p), and you throw the marble 9 times (this is n). So E[X] = np = 9 × (1/3) = 3.
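Restating that answer in symbols (my formalization, assuming the throws are independent):

$$X \sim \mathrm{Binomial}\left(n = 9,\ p = \tfrac{1}{3}\right), \qquad E[X] = np = 9 \times \tfrac{1}{3} = 3$$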
bsmith: March 19, 2015, 9:19 p.m.
thank u lily hui <3
About this Book
This book constitutes the proceedings of the 17th International Symposium on String Processing and Information Retrieval, SPIRE 2010, held in Los Cabos, Mexico, in October 2010.
The 26 long and 13 short papers presented were carefully reviewed and selected from 109 submissions. The volume also contains 2 invited talks. The papers are structured in topical sections on crowdsourcing and recommendation; indexes and compressed indexes; theory; string algorithms; compressions; querying and search user experience; document analysis and comparison; compressed indexes; and string matching.
Table of Contents
Querying the Web Graph
(Invited Talk)
This paper focuses on using hyperlinks in the ranking of web search results. We give a brief overview of the vast body of work in the area; we provide a quantitative comparison of the different features; we sketch how link-based ranking features can be implemented in large-scale search engines; and we identify promising avenues for future research.
Marc Najork
Incremental Algorithms for Effective and Efficient Query Recommendation
Query recommender systems give users hints on possible interesting queries relative to their information needs. Most query recommenders are based on static knowledge models built on the basis of past user behaviors recorded in query logs. These models should be periodically updated, or rebuilt from scratch, to keep up with the possible variations in the interests of users. We study query recommender algorithms that generate suggestions on the basis of models that are updated continuously, each time a new query is submitted. We extend two state-of-the-art query recommendation algorithms and evaluate the effects of continuous model updates on their effectiveness and efficiency. Tests conducted on an actual query log show that contrasting model aging by continuously updating the recommendation model is a viable and effective solution.
Daniele Broccolo, Ophir Frieder, Franco Maria Nardini, Raffaele Perego, Fabrizio Silvestri
Fingerprinting Ratings for Collaborative Filtering — Theoretical and Empirical Analysis
We consider fingerprinting methods for collaborative filtering (CF) systems. In general, CF systems show their real strength when supplied with enormous data sets. Earlier work already suggests sketching techniques to handle massive amounts of information, but most prior analysis has so far been limited to non-ranking application scenarios and has focused mainly on a theoretical analysis. We demonstrate how to use fingerprinting methods to compute a family of rank correlation coefficients. Our methods allow identifying users who have similar rankings over a certain set of items, a problem that lies at the heart of CF applications. We show that our method allows approximating rank correlations with high accuracy and confidence. We examine the suggested methods empirically through a recommender system for the Netflix dataset, showing that the required fingerprint sizes are even smaller than the theoretical analysis suggests. We also explore the use of standard hash functions rather than min-wise independent hashes, and the relation between the quality of the final recommendations and the fingerprint size.
Yoram Bachrach, Ralf Herbrich
On Tag Spell Checking
Exploiting the cumulative behavior of users is a common technique used to improve many popular online services. We build a tag spell checker using a graph-based model. In particular, we present a novel technique based on the graph of tags associated with objects made available by online sites such as Flickr and YouTube. We show the effectiveness of our approach on the basis of experiments done on real-world data. We show a precision of up to 93% with a recall (i.e., the number of errors detected) of up to 100%.
Franco Maria Nardini, Fabrizio Silvestri, Hossein Vahabi, Pedram Vahabi, Ophir Frieder
Compressed Self-indices Supporting Conjunctive Queries on Document Collections
We prove that a document collection, represented as a unique sequence $T$ of $n$ terms over a vocabulary $\Sigma$, can be represented in $nH_0(T) + o(n)(H_0(T) + 1)$ bits of space, such that a conjunctive query $t_1 \wedge \cdots \wedge t_k$ can be answered in $O(k\delta \log\log|\Sigma|)$ adaptive time, where $\delta$ is the instance difficulty of the query, as defined by Barbay and Kenyon in their SODA'02 paper, and $H_0(T)$ is the empirical entropy of order 0 of $T$. As a comparison, using an inverted index plus the adaptive intersection algorithm by Barbay and Kenyon takes $O(k\delta\log{\frac{n_M}{\delta}})$ time, where $n_m$ and $n_M$ are the lengths of the shortest and longest occurrence lists, respectively, among those of the query terms. Thus, we can replace an inverted index by a more space-efficient in-memory encoding, outperforming the query performance of inverted indices when the ratio $\frac{n_M}{\delta}$ is $\omega(\log|\Sigma|)$.
Diego Arroyuelo, Senén González, Mauricio Oyarzún
String Retrieval for Multi-pattern Queries
Given a collection $\mathcal{D}$ of string documents $\{d_1, d_2, \ldots, d_{|\mathcal{D}|}\}$ of total length $n$, which may be preprocessed, a fundamental task is to retrieve the most relevant documents for a given query. The query consists of a set of $m$ patterns $\{P_1, P_2, \ldots, P_m\}$. To measure the relevance of a document with respect to the query patterns, we may define a score, such as the number of occurrences of these patterns in the document, or the proximity of the given patterns within the document. To control the size of the output, we may also specify a threshold (or a parameter $K$), so that our task is to report all the documents which match the query with score more than the threshold (or respectively, the $K$ documents with the highest scores).
When the documents are strings (without word boundaries), the traditional inverted-index-based solutions may not be applicable. The single pattern retrieval case has been well-solved by [14,9]. When it comes to two or more patterns, the only non-trivial solution for proximity search and common document listing was given by [14], which took $\tilde{O}(n^{3/2})$ space. In this paper, we give the first linear space (and partly succinct) data structures, which can answer multi-pattern queries in $O(\sum |P_i|) + \tilde{O}(t^{1/m} n^{1-1/m})$ time, where $t$ is the number of output occurrences. In the particular case of two patterns, we achieve the bound of $O(|P_1| + |P_2| + \sqrt{nt}\log^2 n)$. We also show space-time trade-offs for our data structures. Our approach is based on a novel data structure called the weight-balanced wavelet tree, which may be of independent interest.
Wing-Kai Hon, Rahul Shah, Sharma V. Thankachan, Jeffrey Scott Vitter
Colored Range Queries and Document Retrieval
Colored range queries are a well-studied topic in computational geometry and database research that, in the past decade, have found exciting applications in information retrieval. In this paper we give improved time and space bounds for three important one-dimensional colored range queries — colored range listing, colored range top-$k$ queries and colored range counting — and, thus, new bounds for various document retrieval problems on general collections of sequences. Specifically, we first describe a framework including almost all recent results on colored range listing and document listing, which suggests new combinations of data structures for these problems. For example, we give the fastest compressed data structures for colored range listing and document listing, and an efficient data structure for document listing whose size is bounded in terms of the high-order entropies of the library of documents. We then show how (approximate) colored top-$k$ queries can be reduced to (approximate) range-mode queries on subsequences, yielding the first efficient data structure for this problem. Finally, we show how a modified wavelet tree can support colored range counting in logarithmic time and space that is succinct whenever the number of colors is superpolylogarithmic in the length of the sequence.
Travis Gagie, Gonzalo Navarro, Simon J. Puglisi
Range Queries over Untangled Chains
We present a practical implementation of the first adaptive data structure for orthogonal range queries in 2D [Arroyuelo et al., ISAAC 2009]. The structure is static, requires only linear space for its representation, and can even be made implicit. The running time for a query is $O(\lg k\lg n + \min(k,m)\lg n + m)$, where $k$ is the number of non-crossing monotonic chains in which we can partition the set of points, and $m$ is the size of the output. The space consumption of our implementation is $2n + o(n)$ words. The experimental results show that this structure is competitive with the state of the art. We also present an alternative construction algorithm for our structure, which in practice outperforms the original proposal by orders of magnitude.
Francisco Claude, J. Ian Munro, Patrick K. Nicholson
Multiplication Algorithms for Monge Matrices
In this paper we study algorithms for the max-plus product of Monge matrices. These algorithms use the underlying regularities of the matrices to be faster than the general multiplication algorithm, hence saving time. A non-naive solution is to iterate the SMAWK algorithm. For specific classes there are more efficient algorithms. We present a new multiplication algorithm (MMT), that is efficient for general Monge matrices and also for specific classes. The theoretical and empirical analysis shows that MMT operates in near optimal space and time. Hence we give further insight into an open problem proposed by Landau. The resulting algorithms are relevant for bio-informatics, namely because Monge matrices occur in string alignment problems.
Luís M. S. Russo
Why Large Closest String Instances Are Easy to Solve in Practice
We initiate the study of the smoothed complexity of the Closest String problem by proposing a semi-random model of Hamming distance. We restrict interest to the optimization version of the Closest String problem and give a randomized algorithm, which we refer to as CSP-Greedy, that computes the closest string on smoothed instances up to a constant factor approximation in time $O(\ell^3)$, where $\ell$ is the string length. Using smoothed analysis, we prove CSP-Greedy achieves a $\left(1 + \frac{\epsilon e}{2^n}\right)^{\ell}$-approximation guarantee, where $\epsilon > 0$ is any small value and $n$ is the number of strings. These approximation and runtime guarantees demonstrate that Closest String instances with a relatively large number of input strings are efficiently solved in practice. We also give experimental results demonstrating that CSP-Greedy runs extremely efficiently on instances with a large number of strings. This counter-intuitive fact that "large" Closest String instances are easier and more efficient to solve gives new insight into this well-investigated problem.
Christina Boucher, Kathleen Wilkie
A PTAS for the Square Tiling Problem
The Square Tiling Problem was recently introduced as equivalent to the problem of reconstructing an image from patches and a possible general-purpose indexing tool. Unfortunately, the Square Tiling Problem was shown to be $\mathcal{NP}$-hard. A 1/2-approximation is known.
We show that if the tile alphabet is fixed and finite, there is a Polynomial Time Approximation Scheme (PTAS) for the Square Tiling Problem with approximation ratio of $(1-{\epsilon\over 2\log n})$ for any given $\epsilon \leq 1$.
Amihood Amir, Alberto Apostolico, Gad M. Landau, Oren Sar Shalom
On the Hardness of Counting and Sampling Center Strings
Given a set $S$ of $n$ strings, each of length $\ell$, and a non-negative value $d$, we define a center string as a string of length $\ell$ that has Hamming distance at most $d$ from each string in $S$. The #Closest String problem aims to determine the number of unique center strings for a given set of strings $S$ and input parameters $n$, $\ell$, and $d$. We show #Closest String is impossible to solve exactly or even approximately in polynomial time, and that restricting #Closest String so that any one of the parameters $n$, $\ell$, or $d$ is fixed leads to an FPRAS. We show equivalent results for the problem of efficiently sampling center strings uniformly at random.
Christina Boucher, Mohamed Omar
Counting and Verifying Maximal Palindromes
A palindrome is a symmetric string that reads the same forward and backward. Let $pals(w)$ denote the set of maximal palindromes of a string $w$ in which each palindrome is represented by a pair $(c, r)$, where $c$ is the center and $r$ is the radius of the palindrome. We say that two strings $w$ and $z$ are pal-distinct if $pals(w) \neq pals(z)$. Firstly, we describe the number of pal-distinct strings, and show that we can enumerate all pal-distinct strings in time linear in the output size, for alphabets of size at most 3. These results follow from a close relationship between maximal palindromes and parameterized matching. Secondly, we present a linear time algorithm which finds a string $w$ such that $pals(w)$ is identical to a given set of maximal palindromes.
Tomohiro I, Shunsuke Inenaga, Hideo Bannai, Masayuki Takeda
Identifying SNPs without a Reference Genome by Comparing Raw Reads
Next generation sequencing (NGS) technologies are being applied to many fields of biology, notably to survey the polymorphism across individuals of a species. However, while single nucleotide polymorphisms (SNPs) are almost routinely identified in model organisms, the detection of SNPs in non model species remains very challenging due to the fact that almost all methods rely on the use of a reference genome. We address here the problem of identifying SNPs without a reference genome. For this, we propose an approach which compares two sets of raw reads. We show that a SNP corresponds to a recognisable pattern in the de Bruijn graph built from the reads, and we propose algorithms to identify these patterns, that we call mouths. We outline the potential of our method on real data. The method is tailored to short reads (typically Illumina), and works well even when the coverage is low, where it reports few but highly confident SNPs. Our program, called KisSnp, is available at http://alcovna.genouest.org/kissnp/.
Pierre Peterlongo, Nicolas Schnel, Nadia Pisanti, Marie-France Sagot, Vincent Lacroix
Dynamic Z-Fast Tries
We describe a dynamic version of the z-fast trie, a new data structure inspired by the research started by the van Emde Boas trees [12] and followed by the development of y-fast tries [13]. The dynamic z-fast trie is a very simple, uniform data structure: given a set $S$ of $n$ variable-length strings, it is formed by a standard compacted trie on $S$ (with two additional pointers per node), endowed with a dictionary of size $n - 1$. With this simple setup, the dynamic z-fast trie provides predecessors/successors in time $O(\log\max\{|x|, |x^+|, |x^-|\})$ ($x^\pm$ is the successor/predecessor of $x$ in $S$) for strings of length linear in the machine-word size $w$. Prefix queries are answered in time $O(\log|x| + k)$, and range queries in time $O(\log\max\{|x|, |y|, |x^-|, |y^+|\} + k)$, where $k$ is the number of elements in the output and $x$ (and $y$) represent the input of the prefix (range) queries. Updates are performed within the same bounds in expectation (or with high probability using an appropriate dictionary). We then show a simple modification that makes it possible to handle strings of length up to $2^w$; in this case, predecessor/successor queries and updates are supported in $O(|x|/w + \log\max\{|x|, |x^+|, |x^-|\})$ time (and $O(|x|/B + \log\max\{|x|, |x^+|, |x^-|\})$ I/Os in the cache-oblivious model) with high probability. The space occupied by a dynamic z-fast trie, beside that necessary to store $S$, is just $12n$ pointers, $n$ integers and, in the "long string" case, $O(n)$ signatures of $O(w)$ bits each.
Djamal Belazzougui, Paolo Boldi, Sebastiano Vigna
Improved Fast Similarity Search in Dictionaries
We engineer an algorithm to solve the approximate dictionary matching problem. Given a list of words $\mathcal{W}$, a maximum distance $d$ fixed at preprocessing time and a query word $q$, we would like to retrieve all words from $\mathcal{W}$ that can be transformed into $q$ with $d$ or less edit operations. We present data structures that support fault tolerant queries by generating an index. On top of that, we present a generalization of the method that eases memory consumption and preprocessing time significantly. At the same time, running times of queries are virtually unaffected. We are able to match in lists of hundreds of thousands of words and beyond within microseconds for reasonable distances.
Daniel Karch, Dennis Luxen, Peter Sanders
Training Parse Trees for Efficient VF Coding
We address the problem of improving variable-length-to-fixed-length codes (VF codes), which have favourable properties for fast compressed pattern matching but moderate compression ratios. The compression ratio of VF codes depends on the parse tree that is used as a dictionary. We propose a method that trains a parse tree by scanning an input text repeatedly, and we show experimentally that it improves the compression ratio of VF codes rapidly to the level of state-of-the-art compression methods.
Takashi Uemura, Satoshi Yoshida, Takuya Kida, Tatsuya Asai, Seishi Okamoto
Algorithms for Finding a Minimum Repetition Representation of a String
A string with many repetitions can be written compactly by replacing $h$-fold contiguous repetitions of substring $r$ with $(r)^h$. We refer to such a compact representation as a repetition representation string or RRS, by which a set of disjoint or nested tandem arrays can be compacted. In this paper, we study the problem of finding a minimum RRS or MRRS, where the size of an RRS is defined to be the sum of its component letter sizes and the sizes needed to describe the repetitions $(\cdot)^h$, which are defined as $w_R(h)$ using a repetition weight function $w_R$. We develop two dynamic programming algorithms to solve the problem. One is CMR that works for any repetition weight function, and the other is CMR-C that is faster but can be applied only when the repetition weight function is constant. CMR-C is an $O(w(n+z))$-time algorithm using $O(n+z)$ space for a given string with length $n$, where $w$ and $z$ are the number of distinct primitive tandem repeats and the number of their occurrences, respectively. Since $w = O(n)$ and $z = O(n\log n)$ in the worst case, CMR-C is an $O(n^2\log n)$-time, $O(n\log n)$-space algorithm, which is faster than CMR by a $((\log n)/n)$-factor.
Atsuyoshi Nakamura, Tomoya Saito, Ichigaku Takigawa, Hiroshi Mamitsuka, Mineichi Kudo
Faster Compressed Dictionary Matching
Given a set $\mathcal{D}$ of $d$ patterns, the dictionary matching problem is to index $\mathcal{D}$ such that for any query text $T$, we can locate the occurrences of any pattern within $T$ efficiently. When $\mathcal{D}$ contains a total of $n$ characters drawn from an alphabet of size $\sigma$, Hon et al. (2008) gave an $nH_k(\mathcal{D}) + o(n \log \sigma)$-bit index which supports a query in $O(|T|(\log^{\epsilon} n + \log d) + occ)$ time, where $\epsilon > 0$ and $H_k(\mathcal{D})$ denotes the $k$th order entropy of $\mathcal{D}$. Very recently, Belazzougui (2010) proposed an elegant scheme, which takes $n\log\sigma + O(n)$ bits of index space and supports a query in optimal $O(|T| + occ)$ time. In this paper, we provide connections between Belazzougui's index and the XBW compression of Ferragina et al. (2005), and show that Belazzougui's index can be slightly modified to be stored in $nH_k(\mathcal{D}) + O(n)$ bits, while query time remains optimal; this improves the compressed index by Hon et al. (2008) in both space and time.
Wing-Kai Hon, Tsung-Han Ku, Rahul Shah, Sharma V. Thankachan, Jeffrey Scott Vitter
Relative Lempel-Ziv Compression of Genomes for Large-Scale Storage and Retrieval
Self-indexes – data structures that simultaneously provide fast search of and access to compressed text – are promising for genomic data but in their usual form are not able to exploit the high level of replication present in a collection of related genomes. Our 'RLZ' approach is to store a self-index for a base sequence and then compress every other sequence as an LZ77 encoding relative to the base. For a collection of $r$ sequences totaling $N$ bases, with a total of $s$ point mutations from a base sequence of length $n$, this representation requires just $nH_k(T) + s\log n + s\log \frac{N}{s} + O(s)$ bits. At the cost of negligible extra space, access to $\ell$ consecutive symbols requires $O(\ell + \log n)$ time. Our experiments show that, for example, RLZ can represent individual human genomes in around 0.1 bits per base while supporting rapid access and using relatively little memory.
Shanika Kuruppu, Simon J. Puglisi, Justin Zobel
Standard Deviation as a Query Hardness Estimator
In this paper a new Query Performance Prediction method is introduced. This method is based on the hypothesis that different score distributions appear for 'hard' and 'easy' queries. Following this hypothesis, we propose a set of measures which try to capture the differences between both types of distributions, focusing on the degree of dispersion among the scores. We have applied some variants of the classic standard deviation and have studied methods to find out the most suitable size of the ranking list for these measures. Finally, we present the results obtained from experiments on two different data-sets.
Joaquín Pérez-Iglesias, Lourdes Araujo
Using Related Queries to Improve Web Search Results Ranking
There are numerous queries for which search engine results are not satisfactory. For instance, the user may submit an ambiguous or misspelled query; or there might be a mismatch between query and document vocabulary, or even character set in some languages. Different automatic methods for query rewriting / refinement have been proposed in the literature, but little work has been done on how to combine the results of these rewrites to find relevant documents. In this paper, we review some techniques efficient enough to be computed online and we discuss their respective assumptions. We also propose and discuss a new model that is theoretically more appealing while still computationally very efficient. Our experiments show that all methods manage to improve the ranking of a leading commercial search engine.
Georges Dupret, Ricardo Zilleruelo-Ramos, Sumio Fujita
Evaluation of Query Performance Prediction Methods by Range
During the last years a great number of Query Performance Prediction methods have been proposed. However, this explosion of prediction method proposals has not been paralleled by an in-depth study of suitable methods to evaluate these estimations. In this paper we analyse the current approaches to evaluate Query Performance Prediction methods, highlighting some limitations they present. We also propose a novel method for evaluating predictors focused on revealing the different performance they have for queries of distinct degrees of difficulty. This goal can be achieved by transforming the prediction performance evaluation problem into a classification task, assuming that each topic belongs to a unique type based on its retrieval performance. We compare the different evaluation approaches, showing that the proposed evaluation exhibits a more accurate performance, making explicit the differences between predictors for different types of queries.
Joaquín Pérez-Iglesias, Lourdes Araujo
Mining Large Query Induced Graphs towards a Hierarchical Query Folksonomy
The human interaction through the web generates both implicit and explicit knowledge. An example of an implicit contribution is searching, as people contribute with their knowledge by clicking on retrieved documents. Thus, an important and interesting challenge is to extract semantic relations among queries and their terms from query logs. In this paper we present and discuss results on mining large query log induced graphs, and how they contribute to query classification and to understanding user intent and interest. Our approach consists of efficiently obtaining a hierarchical clustering for such graphs and, then, a hierarchical query folksonomy. Results obtained with real data provide interesting insights on semantic relations among queries and are compared with conventional taxonomies, namely the ODP categorization.
Alexandre P. Francisco, Ricardo Baeza-Yates, Arlindo L. Oliveira
Finite Automata Based Algorithms for the Generalized Constrained Longest Common Subsequence Problems
The Longest Common Subsequence (LCS) problem is a classic and well-studied problem in computer science. Given strings $S_1$, $S_2$ and $P$, the generalized constrained longest common subsequence problem (GC-LCS) for $S_1$ and $S_2$ with respect to $P$ is to find a longest common subsequence of $S_1$ and $S_2$ which contains (excludes) $P$ as a subsequence (substring). We present finite automata based algorithms with time complexity $O(r(n+m) + (n+m)\log(n+m))$ for a fixed sized alphabet, where $r$, $n$ and $m$ are the lengths of $P$, $S_1$ and $S_2$, respectively.
Effat Farhana, Jannatul Ferdous, Tanaeem Moosa, M. Sohel Rahman
Restricted LCS
The Longest Common Subsequence (LCS) of two or more strings is a fundamental well-studied problem which has a wide range of applications throughout computational sciences. When the common subsequence must contain one or more constraint strings as subsequences, the problem becomes the Constrained LCS (CLCS) problem. In this paper we consider the Restricted LCS (RLCS) problem, where our goal is finding a longest common subsequence between two or more strings that does not contain a given set of restriction strings as subsequences. First we show that in case of two input strings and an arbitrary number of restriction strings the RLCS problem is NP-hard. Afterwards, we present a dynamic programming solution for RLCS and we show that this algorithm implies that RLCS is in FPT when parameterized by the total length of the restriction strings. In the last part of this paper we present two approximation algorithms for the hard variants of the problem.
Zvi Gotthilf, Danny Hermelin, Gad M. Landau, Moshe Lewenstein
Extracting Powers and Periods in a String from Its Runs Structure
A breakthrough in the field of text algorithms was the discovery of the fact that the maximal number of runs in a string of length $n$ is $O(n)$ and that they can all be computed in $O(n)$ time. We study some applications of this result. New simpler $O(n)$ time algorithms are presented for a few classical string problems: computing all distinct $k$th string powers for a given $k$, in particular squares for $k = 2$, and finding all local periods in a given string of length $n$. Additionally, we present an efficient algorithm for testing primitivity of factors of a string and computing their primitive roots. Applications of runs, despite their importance, are underrepresented in existing literature (approximately one page in the paper of Kolpakov & Kucherov, 1999). In this paper we attempt to fill in this gap. We use Lyndon words and introduce the Lyndon structure of runs as a useful tool when computing powers. In problems related to periods we use some versions of the Manhattan skyline problem.
Maxime Crochemore, Costas Iliopoulos, Marcin Kubica, Jakub Radoszewski, Wojciech Rytter, Tomasz Waleń
On Shortest Common Superstring and Swap Permutations
The Shortest Common Superstring (SCS) is a well studied problem, having a wide range of applications. In this paper we consider two problems closely related to it. First we define the Swapped Restricted Superstring (SRS) problem, where we are given a set $S$ of $n$ strings, $s_1, s_2, \ldots, s_n$, and a text $T = t_1 t_2 \cdots t_m$, and our goal is to find a swap permutation $\pi : \{1, \ldots, m\} \to \{1, \ldots, m\}$ to maximize the number of strings in $S$ that are substrings of $t_{\pi(1)} t_{\pi(2)} \cdots t_{\pi(m)}$. We then show that the SRS problem is NP-Complete. Afterwards, we consider a similar variant denoted SRSR, where our goal is to find a swap permutation $\pi : \{1, \ldots, m\} \to \{1, \ldots, m\}$ to maximize the total number of times that the strings of $S$ appear in $t_{\pi(1)} t_{\pi(2)} \cdots t_{\pi(m)}$ (we can count the same string $s_i$ as a substring of $t_{\pi(1)} t_{\pi(2)} \cdots t_{\pi(m)}$ more than once). For this problem, we present a polynomial time exact algorithm.
Zvi Gotthilf, Moshe Lewenstein, Alexandru Popa
A Self-Supervised Approach for Extraction of Attribute-Value Pairs from Wikipedia Articles
Wikipedia is the largest encyclopedia on the web and has been widely used as a reliable source of information. Researchers have been extracting entities, relationships and attribute-value pairs from Wikipedia and using them in information retrieval tasks. In this paper we present a self-supervised approach for autonomously extracting attribute-value pairs from Wikipedia articles. We apply our method to the Wikipedia automatic infobox generation problem and outperform a method presented in the literature by 21.92% in precision, 26.86% in recall and 24.29% in F1.
Wladmir C. Brandão, Edleno S. Moura, Altigran S. Silva, Nivio Ziviani
Temporal Analysis of Document Collections: Framework and Applications
As the amount of generated information increases so rapidly in the digital world, the concept of time as a dimension along which information can be organized and explored becomes more and more important. In this paper, we present a temporal document analysis framework for document collections in support of diverse information retrieval and seeking tasks. Our analysis is not based on document creation and/or modification timestamps but on extracting time from the content itself. We also briefly sketch some scenarios and experiments for analyzing documents from a temporal perspective.
Omar Alonso, Michael Gertz, Ricardo Baeza-Yates
Text Comparison Using Soft Cardinality
The classical set theory provides a method for comparing objects using cardinality and intersection, in combination with well-known resemblance coefficients such as Dice, Jaccard, and cosine. However, set operations are intrinsically crisp: they do not take into account similarities between elements. We propose a new general-purpose method for comparison of objects using a soft cardinality function that accounts for similarities between elements via an auxiliary affinity (similarity) measure. Our experiments with 12 text matching datasets suggest that the soft cardinality method is superior to known approximate string comparison methods in the text comparison task.
Sergio Jimenez, Fabio Gonzalez, Alexander Gelbukh
Hypergeometric Language Model and Zipf-Like Scoring Function for Web Document Similarity Retrieval
The retrieval of similar documents in the Web from a given document is different in many aspects from information retrieval based on queries generated by regular search engine users. In this work, a new method is proposed for Web similarity document retrieval based on generative language models and meta search engines. Probabilistic language models are used as a random query generator for the given document. Queries are submitted to a customizable set of Web search engines. Once all the results are gathered, they are evaluated by a proposed scoring function based on the Zipf law. Results obtained showed that the proposed methodology for query generation and the scoring procedure solve the problem with acceptable levels of precision.
Felipe Bravo-Marquez, Gaston L’Huillier, Sebastián A. Ríos, Juan D. Velásquez
Dual-Sorted Inverted Lists
Several IR tasks rely, to achieve high efficiency, on a single pervasive data structure called the inverted index. This is a mapping from the terms in a text collection to the documents where they appear, plus some supplementary data. Different orderings in the list of documents associated to a term, and different supplementary data, fit widely different IR tasks. Index designers have to choose the right order for one such task, rendering the index difficult to use for others.
In this paper we introduce a general technique, based on wavelet trees, to maintain a single data structure that offers the combined functionality of two independent orderings for an inverted index, with competitive efficiency and within the space of one compressed inverted index. We show in particular that the technique allows combining an ordering by decreasing term frequency (useful for ranked document retrieval) with an ordering by increasing document identifier (useful for phrase and Boolean queries). We show that we can support not only the primitives required by the different search paradigms (e.g., in order to implement any intersection algorithm on top of our data structure), but also that the data structure offers novel ways of carrying out many operations of interest, including space-free treatment of stemming and hierarchical documents.
Gonzalo Navarro, Simon J. Puglisi
CST++
Let $A$ be an array of $n$ elements taken from a totally ordered set. We present a data structure of size $3n + o(n)$ bits that allows us to answer the following queries on $A$ in constant time, without accessing $A$: (1) given indices $i < j$, find the position of the minimum in $A[i..j]$, (2) given index $i$, find the first index to the left of $i$ where $A$ is strictly smaller than at $i$, and (3) same as (2), but to the right of the query index. Based on this, we present a new compressed suffix tree (CST) with $O(1)$-navigation that is smaller than previous CSTs. Our data structure also provides a new (practical) approach to compress the LCP-array.
Enno Ohlebusch, Johannes Fischer, Simon Gog
Succinct Representations of Dynamic Strings
The rank and select operations over a string of length $n$ from an alphabet of size $\sigma$ have been used widely in the design of succinct data structures. In many applications, the string itself must be maintained dynamically, allowing characters of the string to be inserted and deleted. Under the word RAM model with word size $w=\Omega(\lg n)$, we design a succinct representation of dynamic strings using $nH_0 + o(n)\cdot\lg\sigma + O(w)$ bits to support rank, select, insert and delete in $O(\frac{\lg n}{\lg\lg n}(\frac{\lg \sigma}{\lg\lg n}+1))$ time. When the alphabet size is small, i.e. when $\sigma = O(\mathrm{polylog}(n))$, including the case in which the string is a bit vector, these operations are supported in $O(\frac{\lg n}{\lg\lg n})$ time. Our data structures are more efficient than previous results on the same problem, and we have applied them to improve results on the design and construction of space-efficient text indexes.
Meng He, J. Ian Munro
Computing Matching Statistics and Maximal Exact Matches on Compressed Full-Text Indexes
Exact string matching is a problem that computer programmers face on a regular basis, and full-text indexes like the suffix tree or the suffix array provide fast string search over large texts. In the last decade, research on compressed indexes has flourished because the main problem in large-scale applications is the space consumption of the index. Nowadays, the most successful compressed indexes are able to obtain almost optimal space and search time simultaneously. It is known that a myriad of sequence analysis and comparison problems can be solved efficiently with established data structures like the suffix tree or the suffix array, but algorithms on compressed indexes that solve these problem are still lacking at present. Here, we show that
matching statistics and maximal exact matches between two strings $S_1$ and $S_2$ can be computed efficiently by matching $S_2$ backwards against a compressed index of $S_1$.
Enno Ohlebusch, Simon Gog, Adrian Kügel
The Gapped Suffix Array: A New Index Structure for Fast Approximate Matching
Approximate searching using an index is an important application in many fields. In this paper we introduce a new data structure called the gapped suffix array for approximate searching in the Hamming distance model. Building on the well known filtration approach for approximate searching, the use of the gapped suffix array can improve search speed by avoiding the merging of position lists.
Maxime Crochemore, German Tischler
Parameterized Searching with Mismatches for Run-Length Encoded Strings
(Extended Abstract)
Two strings $y$ and $y'$ of equal length over respective alphabets $\Sigma_y$ and $\Sigma_{y'}$ are said to parameterized match if there exists a bijection $\pi : \Sigma_y \to \Sigma_{y'}$ such that $\pi(y) = y'$, i.e., renaming each character of $y$ according to its corresponding element under $\pi$ yields $y'$. (Here we assume that all symbols of both alphabets are used somewhere.) Two natural problems are then parameterized matching, which consists of finding all positions of some text $x$ where a pattern $y$ parameterized matches a substring of $x$, and approximate parameterized matching, which seeks, at each location of $x$, a bijection $\pi$ maximizing the number of parameterized matches at that location.
Alberto Apostolico, Péter L. Erdős, Alpár Jüttner
Fast Bit-Parallel Matching for Network and Regular Expressions
In this paper, we extend the SHIFT-AND approach by Baeza-Yates and Gonnet (CACM 35(10), 1992) to the matching problem for network expressions, which are regular expressions without Kleene-closure and useful in applications such as bioinformatics and event stream processing. Following the study of Navarro (RECOMB, 2001) on the extended string matching, we introduce new operations called Scatter, Gather, and Propagate to efficiently compute $\epsilon$-moves of the Thompson NFA using the Extended SHIFT-AND approach with integer addition. By using these operations and a property called the bi-monotonicity of the Thompson NFA, we present an efficient algorithm for the network expression matching that runs in $O(ndm/w)$ time using $O(dm)$ preprocessing and $O(dm/w)$ space, where $m$ and $d$ are the length and the depth of a given network expression, $n$ is the length of an input text, and $w$ is the word length of the underlying computer. Furthermore, we show a modified matching algorithm for the class of regular expressions that runs in $O(ndm\log(m)/w)$ time.
Yusaku Kaneta, Shin-ichi Minato, Hiroki Arimura
String Matching with Variable Length Gaps
We consider string matching with variable length gaps. Given a string $T$ and a pattern $P$ consisting of strings separated by variable length gaps (arbitrary strings of length in a specified range), the problem is to find all ending positions of substrings in $T$ that match $P$. This problem is a basic primitive in computational biology applications. Let $m$ and $n$ be the lengths of $P$ and $T$, respectively, and let $k$ be the number of strings in $P$. We present a new algorithm achieving time $O((n + m)\log k + \alpha)$ and space $O(m + A)$, where $A$ is the sum of the lower bounds of the lengths of the gaps in $P$ and $\alpha$ is the total number of occurrences of the strings in $P$ within $T$. Compared to the previous results this bound essentially achieves the best known time and space complexities simultaneously. Consequently, our algorithm obtains the best known bounds for almost all combinations of $m$, $n$, $k$, $A$, and $\alpha$. Our algorithm is surprisingly simple and straightforward to implement.
Philip Bille, Inge Li Gørtz, Hjalte Wedel Vildhøj, David Kofoed Wind
Approximate String Matching with Stuck Address Bits
A string $S \in \Sigma^m$ can be viewed as a set of pairs $\{ (s_i , i) \mid s_i\in S,\ i\in \{ 0,\ldots, m-1\} \}$. We follow the recent work on string matching with address errors and consider approximate pattern matching problems arising from the setting where errors are introduced to the location component ($i$), rather than the more traditional setting, where errors are introduced to the content itself ($s_i$). Specifically, we continue the work on string matching in the presence of address bit errors. In this paper, we consider the case where bits of $i$ may be stuck, either in a consistent or transient manner. We formally define the corresponding approximate pattern matching problems, and provide efficient algorithms for their resolution.
Amihood Amir, Estrella Eisenberg, Orgad Keller, Avivit Levy, Ely Porat
Erratum to: Range Queries over Untangled Chains
In the original version of this paper the bound cited in the abstract is incorrect. It should read "The running time for a query is $O(\lg k \lg n + k'\lg n + m)$, where $k$ is the number of non-crossing monotonic chains in which we can partition the set of points, $k' \leq k$ is a value dependent on the query, and $m$ is the size of the output."
This bound also appears at the end of Section 2.
Francisco Claude, J. Ian Munro, Patrick K. Nicholson
|
{}
|
# Is there a way of defining the notion of a variable mathematically?
I know that the notion of "set" is one that cannot be defined mathematically since it is the fundamental data type that is used to define everything else (and the definition which says that "sets" are the objects in any model of set theory is to me circular since models are defined in terms of sets).
It seems to me that there is another fundamental concept just like "set", namely the notion of a "variable". Is this true?
• The word "variable" means many different things, and many of these meanings are non-rigorous. What do you mean by "variable"? – gspr Apr 18 '12 at 19:13
• We called a certain theory "Set theory" and the objects in models of this theory "sets". It would make no sense to say that "sets are the objects in a universe of a model of Kreplach theory". :-) – Asaf Karagila Apr 18 '12 at 19:46
• There was a notion of variable quantity, abbreviated as variable, in the $18$th century, and well into the $19$th. In principle the notion has vanished, has been replaced by function, which is a set of ordered pairs such that $\dots$. But the word variable is still around, just like the symbol $\infty$ is still around, and we all still have an appendix. – André Nicolas Apr 18 '12 at 20:43
• Thinking about it, I'm not sure [set-theory] actually fits this question. It is about "variable" and not about "set". – Asaf Karagila Apr 18 '12 at 21:37
• @Andre: I don't think the notion of "function" has replaced the notion of a "variable quantity" any more than the notion of "set" has replaced the notion of a "mathematical object". – Hurkyl Apr 18 '12 at 23:27
In pure mathematics there is no such thing as a variable; there are only constants. Consider the equality $a = 5$. This supposed "variable" isn't variable at all!
On the other hand, there are those curious letters in the formulas; what do they mean, you ask? Well, those are informal expressions that describe the objects we reason about; however, they do not have any precise meaning, they are not formal. This does not contradict the fact that the description of the object might be perfectly fine; after all, you use natural language to describe the notion of a set, don't you?
Still, there is a way of formalizing this, and the domain which happens to deal with such problems is called semantics, where there actually is something that is called a variable, but all the formal derivations are usually long, tedious and cumbersome. Moreover, semantics is more about computer science, where the precise meaning of an expression is important for the computer that is to evaluate it (it doesn't know anything about our informal notion of variable, so we need to explain everything in the tiniest detail).
In mathematics we deal with those informal expressions and "pattern-match" them with suitable cases. If you do it properly, everyone knows what you mean (i.e. what function you want to define, etc.), so there is no need to overformalize it.
I know what I wrote looks more like a peculiar fairytale than a concrete answer, but that's the way I understand it. Hope that helps, even if only a bit ;-)
• I'd think you'd want to talk about syntax before (or instead of) semantics. – Hurkyl Apr 18 '12 at 23:10
• @Hurkyl Yeah, you are right, but I just wanted to sketch the situation, besides, where is the semantics, the syntax also follows ;-) Still, I think that talking about syntax without semantics in this case is close to useless, and this the reason I skipped it. – dtldarek Apr 18 '12 at 23:16
• I think I understand what you are getting at. If we take an informal expression with variables and make it formal ala metamath, we would get some string of symbols involving those variables, which is just a string in some formal language. Thus, in the language of metamath, a variable is any greek and roman letter that appears in a string of that language. – echoone Apr 19 '12 at 3:43
• @echoone Exactly ;-) – dtldarek Apr 19 '12 at 10:19
dtldarek has given an answer from one point of view. Let me offer another.
A variable in mathematics often means an element in something (a ring, a group, a vector space, ...) which can be specialized to some more specific value.
In modern algebra, this notion becomes formalized in various ways, one of which is by the notion of the free object on (wikipedia says "over", but my own experience is that it is more common to speak of the free object "on") a particular set of variables.
If you haven't seen it before, this notion will probably seem quite abstract (like a lot of formalism the first time you see it!). But it actually provides a rather precise formal match with the intuitive notion of a variable.
Added in response to the OP's comment: E.g. the polynomial ring $\mathbb C[x_1,\ldots,x_n]$ is the free commutative $\mathbb C$-algebra in the variables $x_1,\ldots,x_n$. If $A$ is any other commutative $\mathbb C$-algebra (e.g. $\mathbb C$ itself), then giving a homomorphism $\mathbb C[x_1,\ldots,x_n] \to A$ is the same as choosing $n$ elements $a_1,\ldots,a_n \in A$ ("the values of the variables") and declaring $x_1\mapsto a_1,\ldots, x_n \mapsto a_n$.
This illustrates the general principle that the free object (in some particular context) on the variables $x_1,\ldots,x_n$ is an object in which no relations are imposed between the elements $x_1,\ldots,x_n$, and so it can be mapped to any other object (of the appropriate sort) just by choosing values of the variables $x_1,\ldots,x_n$ in that object.
Variations of this point of view are how ideas about variables and equations between them are implemented in contemporary algebraic geometry, for example.
• I do not see the connection. Can you give me a concrete example? – echoone Apr 19 '12 at 3:48
• @echoone: Dear echoone, I have added an example to my answer. Regards, – Matt E Apr 19 '12 at 4:19
• Thanks for the example. However, I am not convinced this clarifies what a variable is. Calling the symbol $x$ a variable when defining a polynomial ring $F[x]$ does not define what a variable is. – echoone Apr 20 '12 at 2:47
• @echoone: Dear echoone, If you say so. Your question asked whether the notion of variable could be defined. My answer describes a central piece of formalized mathematics which captures the intuition of the classical notion of variable as it is traditionally used in algebra. If you're asking about something else, that's fine; I've probably misinterpreted the question. Regards, – Matt E Apr 20 '12 at 3:21
• @echoone: I think Matt E's answer is spot on. In a sense asking what a variable is, is no different than asking what a set is. Of course you can try to describe with everyday examples what sets are, just like Euclid tried to define a point as something without parts. But in a mathematical foundation like ZFC, sets are a primitive notion, not defined in terms of other notions. Similarly in synthetic Euclidean geometry points are a primitive notion. In the same spirit Matt's answer suggests that variables might be seen as a primitive notion. Elements of some ring etc. – Michael Bächtold Jan 25 '18 at 20:55
Most uses of "variable" should really be replaced by "undetermined", either in the sense "this is a fixed value, but I don't care to give its value right now" or (in equations) "this is a fixed value, which we don't know except that it satisfies...". The first sense is what Matt E's answer describes formally.
I have a different take. I think there is quite a satisfactory definition of variable that works at least in basic algebra, calculus, and analysis (and probably other settings too). It is the same as the definition of "random variable" -- you just have to forget the "random" part. Let me explain.
First, mathematically, a variable is a function. What kind of functions deserve to be called "variables"? Often, we talk about "real" or "complex" variables. This refers to the range of the function. Variables are often valued in "numbers" (for "numeric" variables), often meaning something like a field (thus "real variables" and "complex variables"), but sometimes in the set $$\{0, 1\}$$ for binary variables, or even in more complex mathematical objects, like vector spaces for "vector variables", etc. The domain of the function is a slightly more subtle issue -- actually several related issues. They are related to 1) the ability of functions to restrict and 2) the ability of functions to "pull back" (compose). The first gives us the ability -- which, as Matt E points out in his answer, is critical -- to specialize the variables, meaning restrict the functions to subdomains. The second gives us the ability to transfer variables from one space (domain) to another one, which maps into it. Thus when we talk about a variable, we are at all times talking about a function, but we often change the domain on which we consider this function in the middle of the argument, without being explicit about it. (I think this is what makes the notion of variable useful, but also confusing.)
Stepping aside from formal-ish mathematics, here is how we use "variables" in mathematical modeling of systems in other subjects (physics, chemistry, economics, what have you). We are presented with a "system" (a robot, a gas, an economy, etc.) which has a set of states (modeled as some abstract set $$S$$). These states have numerical characteristics (joint angle of joint number 2 in radians, pressure at location P in Pascals, average price of gas in Paris in euros per liter, etc.), which are -- you guessed it -- variables. Thus a variable $$v$$ is a function from the space of possible states $$S$$ to "a number". Of course there are "vector variables" (like "velocity of the robot's center of mass"), and "categorical" or "binary" ones ("what party does the president of the US belong to"), etc. here as well. [This is, as I promised, the same as a "random variable" except we don't require either the set of states or the set of values to have any extra structure (no sigma algebras here) and correspondingly don't require $$v$$ to be "measurable".]
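To make the "variable = function on a state space" picture concrete, here is a small C++ sketch; the State type, the pressure variable and all the numbers below are invented purely for illustration:
#include <iostream>
#include <functional>

// A toy state space: each State is one possible configuration of the "system".
struct State {
    double volume;      // m^3
    double temperature; // K
};

int main() {
    // A "variable" in the above sense: a function from states to numbers.
    // Here, the pressure of one mole of an ideal gas, p = RT/V with R = 8.314.
    std::function<double(const State&)> pressure = [](const State& s) {
        return 8.314 * s.temperature / s.volume;
    };

    // "Specializing" the variable = evaluating (restricting) it at particular states.
    State s1{0.0224, 273.15};
    State s2{0.0224, 300.0};
    std::cout << pressure(s1) << " Pa\n"; // roughly atmospheric pressure
    std::cout << pressure(s2) << " Pa\n";
    return 0;
}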
Often some particular collection of variables $$v_1, v_2, \ldots$$ is sufficient for our purposes -- it describes the state of our system well enough, so that for the purposes of the analysis that we intend to do we don't need to know anything more than values of those variables (this is akin to a "sufficient statistic" or a "faithful representation"). Then we may think of the system in question as being completely captured by these variables, and consider instead of the system its "model" aka the image $$V(S)$$ of the joint map $$V=(v_1, v_2, \ldots)$$. There are two things that happen now:
1) Any variable "on the system" $$v:S\to X$$ that is of interest is assumed to be a "pull back" of $$w:V(S)\to X$$ via $$V$$, that is, be of the form $$v(p)=w(V(p))$$ for some $$w$$ (this is what it means for the system to be completely captured by $$V$$).
and
2) Some of the variables on $$V(S)$$ (and hence on $$S$$) can be obtained by restricting to $$V(S)$$ a "global" variable. Here a global variable is a variable defined on the "ambient space" $$R$$ -- the product of the ranges of the $$v_i$$s (if each $$v_i$$ is valued in, say, $$\mathbb{R}$$, and there are finitely many of them, then this $$R$$ is $$\mathbb{R}^n$$). Certainly each $$v_i$$ is of this form -- it is a restriction of the projection that takes $$(x_1, x_2, \ldots)$$ to $$x_i$$. These projections are now called "coordinate functions" or just "coordinates".
When we talk about variables, we sometimes refer to functions with domain $$S$$, sometimes with domain $$V(S)$$ and sometimes to the global variables with domain $$R$$. (Confusion between these plagues a number of discussions of the calculus of variations, for example; I believe physicists sometimes refer to calculations with "global" variables as being "off shell" and ones with the restricted variables as being "on shell", but do not trust me on this point too much).
Let's consider some ramifications of this view in algebra/calculus/analytic geometry.
1) Equations: We have variables $$x$$, $$y$$ and $$z$$ that are global, but also restrict to variables on the system in question. When we write something like $$y=x^2$$ (or $$x^2=1$$) we are specifying a subdomain of $$R$$, the set of points $$p$$ (in $$R$$ or in $$V(S)$$ as the case may be) where $$y(p)=x(p)^2$$ (or $$x(p)^2=1$$, respectively), that is, the (largest) subdomain on which $$y$$ and $$x^2$$ restrict to the same function ($$x^2$$ and $$1$$ restrict to the same function). "Solving" $$x^2=1$$ amounts to specifying this subdomain in a different format, usually by giving a simple criterion for membership, like listing all elements ($$x=\{1, -1\}$$, where we implicitly use the coordinate functions, and really mean $$\{p| x(p)=1 \text{ or } x(p)=-1\}$$).
Aside: After a discussion of differentials (which are also functions, but again often simultaneously on tangent spaces of $$R$$ and on restrictions to tangent spaces of submanifolds (curves)), one can view ordinary differential equations like $$dy=ydx$$ in a similar way: as asking for submanifolds (curves) on whose tangent spaces the restrictions of the differentials are equal as functions.
2) Dependent and independent variables: Some $$V(S)$$ have the property that they are (at least locally) in bijection with (some open subset of) the images under a sub-collection $$(v_{i_1}, v_{i_2},\ldots)$$ of (usually coordinate) variables. Then we can consider these $$v_{i_j}$$s as "independent variables" and all other variables as dependent variables, meaning functions (locally) obtained by pulling back functions of the $$v_{i_j}$$s. Of course many $$V(S)$$ have multiple such descriptions: a circle $$x^2+y^2=25$$ near $$(3,4)$$ is locally in bijection with its projection to the $$x$$ coordinate AND with its projection to the $$y$$ coordinate -- we can view either $$x$$ OR $$y$$ as the "independent variable" on the circle near $$(3,4)$$.
3) Relatedly, "a change of variables" or "change of coordinates" is then a change of which variables $$v_i$$ are used to represent $$S$$, often by composing $$V$$ with some "coordinate change map" from $$R$$ to some other range.
Etc.
|
{}
|
# [Solution] Departments (Easy Version) CodeChef Solution
## Problem
This is the easy version of the problem. The only difference between the easy and hard versions is that, in the easy one, additionally, there exists a solution with at least $2$ employees in every department.
ChefCorp has $N$ employees. Each employee belongs to exactly one of the $M$ departments. Each department is headed by exactly one of the employees belonging to that department.
The management structure of ChefCorp is as follows:
• Each employee of a department (including its head) is in contact with every other employee of that department.
• The head of each department is in contact with the heads of all the other departments.
For example, let $N = 7$, $M = 3$ and employees $1, \textbf{2}, 3$ belong to the first department, employees $\textbf{4}, 5$ belong to the second department and employees $\textbf{6}, 7$ belong to the third department (employees in bold represent the heads of the departments). The following pairs of employees are in contact with each other: $(1, 2), (1, 3), (2, 3), (4, 5), (6, 7), (2, 4), (4, 6), (2, 6)$.
However, due to some inexplicable reasons, ChefCorp loses all its data on the departments. Given the total number of employees $N$ and every pair of employees who are in contact with each other, can you help recover the number of departments and the employees belonging to each of the departments?
### Input Format
• The first line contains a single integer $T$ — the number of test cases. Then the test cases follow.
• The first line of each test case contains a single integer $N$ — the total number of employees in ChefCorp.
• The second line of each test case contains a single integer $K$ — the number of pairs of employees in contact with each other.
• $K$ lines follow. The $i^{th}$ of these $K$ lines contains two space-separated integers $u_i$ and $v_i$, denoting that employee $u_i$ is in contact with employee $v_i$.
### Output Format
• For each test case, output the following:
• In the first line output $M$ — the number of departments.
• In the second line, output $N$ space-separated integers $D_1, D_2, \ldots, D_N$ $(1 \leq D_i \leq M)$ — denoting that the $i^{th}$ employee belongs to the $D_i^{th}$ department.
• In the third line, output $M$ space-separated integers $H_1, H_2, \ldots, H_M$ $(1 \leq H_i \leq N, H_i \neq H_j$ when $i \neq j)$ — denoting that the $H_i^{th}$ employee is the head of the $i^{th}$ department.
If there are multiple answers, output any. It is guaranteed that at least one solution always exists.
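One possible line of attack for the easy version (this is a sketch of one workable approach, not necessarily the site's intended editorial solution): since every department has at least $2$ employees, all non-head employees of a department share the same closed neighbourhood, namely the set of that department's members; grouping employees by closed neighbourhood therefore recovers the departments, and the single extra vertex in a group's common neighbourhood is its head. A minimal C++ sketch of this idea:
#include <bits/stdc++.h>
using namespace std;

int main() {
    int T;
    scanf("%d", &T);
    while (T--) {
        int n, k;
        scanf("%d", &n);
        scanf("%d", &k);
        vector<vector<int>> adj(n + 1);
        for (int i = 0; i < k; i++) {
            int u, v;
            scanf("%d %d", &u, &v);
            adj[u].push_back(v);
            adj[v].push_back(u);
        }
        // Group employees by their sorted closed neighbourhood N(u) + {u}.
        map<vector<int>, vector<int>> groups;
        for (int u = 1; u <= n; u++) {
            vector<int> c = adj[u];
            c.push_back(u);
            sort(c.begin(), c.end());
            groups[c].push_back(u);
        }
        vector<int> dept(n + 1, 0);
        vector<int> heads;
        int m = 0;
        for (auto& [c, g] : groups) {
            // |c| == |g|     : one department that already contains its head (M = 1).
            // |c| == |g| + 1 : the non-heads of one department; the extra vertex is the head.
            // otherwise      : g is a lone head, assigned later through its department's group.
            if ((int)c.size() > (int)g.size() + 1) continue;
            m++;
            int head = -1;
            size_t j = 0;
            for (int v : c) {
                dept[v] = m;
                if (j < g.size() && g[j] == v) j++; // v is a non-head of this group
                else head = v;                      // v is the extra vertex: the head
            }
            if (head == -1) head = g[0]; // M = 1: any member may serve as head
            heads.push_back(head);
        }
        printf("%d\n", m);
        for (int u = 1; u <= n; u++) printf("%d ", dept[u]);
        printf("\n");
        for (int h : heads) printf("%d ", h);
        printf("\n");
    }
    return 0;
}
On the worked example above ($N = 7$), the non-heads $\{1, 3\}$, $\{5\}$ and $\{7\}$ each share a closed neighbourhood of size one more than the group, which yields the three departments $\{1,2,3\}$, $\{4,5\}$, $\{6,7\}$ with heads $2$, $4$, $6$.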
|
{}
|
How does fermion degeneracy affect elastic scattering in astrophysical compact objects?
As I currently understand it, absorption processes or inelastic scattering involving fermions in a degenerate gas are strongly inhibited, because to change the fermion energy (or create a new fermion) requires it to be placed in a vacant energy state, which in turn means the fermion must be close enough to the Fermi energy that it can attain such a state.
My question here is how are elastic scattering processes affected? And I want to consider two possible cases.
1. The scattered particle (the one with a much lower mass) is a fermion in a degenerate gas. An example would be an electron in the interior of a white dwarf scattering off a carbon nucleus.
2. The scattering (more massive) particle is a fermion in a degenerate gas. So examples here could include neutrino scattering from degenerate nucleons inside a neutron star or perhaps even photon scattering off degenerate nucleons.
Does the fact that the degenerate fermion must change its momentum, even though its energy stays (almost) the same, inhibit these processes?
This post imported from StackExchange Physics at 2017-05-08 20:26 (UTC), posted by SE-user Rob Jeffries
recategorized May 8, 2017
Take any system which has states labelled by $i$ and occupation numbers labelled by $f_i$. I.e. $f_i$ is the expectation number of particles in the given state and is normalized not to one but as $\sum_i f_i = N$. Now let this system have a probability of transition per unit time from the state $i$ to another one $j$ denoted by $P_{ij}$. This means that $f_i P_{ij}$ particles will jump from $i$ to $j$ in unit time. On the other hand, $f_j P_{ji}$ particles will jump from $j$ to $i$ in the same time. Hence, we can characterize the time derivative of the occupation number as
$$\frac{d f_i}{d t} = \sum_j (P_{ji} f_j - P_{ij}f_i)$$
In many systems, particularly reversible ones we have $P_{ij}=P_{ji}$.
Imagine, however, that we are dealing with identical quantum particles; then we can derive from the permutation symmetries that the transition rates have to be modified as
$$\frac{d f_i}{d t} = \sum_j [P_{ji} f_j(1\pm f_i) - P_{ij}f_i(1 \pm f_j)]$$
where the plus is for Bose statistics and the minus for Fermi statistics. In particular, you can see that for fermions the probability of transition from $j$ to $i$ is zero if the state is already occupied.
The specific case of a gas in astrophysics can be modelled by the Boltzmann equation
$$\partial_t f + \partial_p H \partial_x f - \partial_x H \partial_p f = \delta f_{coll}$$
where $f(p,x)$ now stands for the occupation number in a phase-space cell of volume $\sim \hbar$ at $p,x$, and $H$ is some effective single-particle Hamiltonian which represents the free drifting of the microscopic particles in the macroscopic fields. The right-hand side is the collision term which (within a certain approximation) modulates the behaviour of the occupation number due to collision between particles.
Particles following Boltzmann statistics would have
$\delta f_{coll}(p,x) = \int Q(p,q \to p',q') [f(p,x) f(q,x)- f(p',x)f(q',x) ] dq' dq dp'$
where $Q(p,q \to p',q')$ is a scattering matrix computed for two particles scattering off each other while being alone in the universe. However, in analogy with the previous part of this answer, fermionic particles have
$\delta f_{coll}(p,x) = \int Q(p,q \to p',q') \left[ f(p) f(q)(1-f(p'))(1-f(q')) - f(p')f(q')(1-f(p))(1-f(q)) \right] dq' dq dp'$
where I have written $f(q,x) \to f(q)$ for brevity.
I.e., the probability of scattering into a state occupied with density $f(p,x)$ will be modulated by a $1-f$ factor. If $f$ is close to one, this scattering will be essentially forbidden. In strongly degenerate gases this means that we can essentially neglect any scattering outcome in the Fermi phase-space surface, be it elastic or non-elastic.
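As a rough numerical illustration of that last point (the physics is just the Fermi-Dirac distribution; the particular temperature and energies below are chosen arbitrarily), the following sketch evaluates the occupation $f$ and the Pauli blocking factor $1-f$ for candidate final states below and above the Fermi energy:
#include <cstdio>
#include <cmath>

// Fermi-Dirac occupation for a state of energy e (in units of the Fermi
// energy E_F), with temperature entering through beta*E_F = E_F/(k_B T).
double occupation(double e_over_ef, double betaEf) {
    return 1.0 / (std::exp(betaEf * (e_over_ef - 1.0)) + 1.0);
}

// Blocking factor 1 - f, computed in a numerically stable form
// (1 - 1/(e^x + 1) = 1/(e^{-x} + 1)) to avoid cancellation deep in the sea.
double blocking(double e_over_ef, double betaEf) {
    return 1.0 / (std::exp(-betaEf * (e_over_ef - 1.0)) + 1.0);
}

int main() {
    double betaEf = 100.0; // strongly degenerate: k_B T = 0.01 E_F
    for (double e : {0.5, 0.9, 0.99, 1.0, 1.01, 1.1}) {
        std::printf("E/E_F = %.2f   f = %.3e   1 - f = %.3e\n",
                    e, occupation(e, betaEf), blocking(e, betaEf));
    }
    // Deep in the Fermi sea (E = 0.5 E_F) the blocking factor 1 - f is of
    // order e^{-50}: scattering that would place a fermion there is in effect
    // forbidden, leaving only final states within ~k_B T of the Fermi surface.
    return 0;
}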
answered May 10, 2017 by (1,635 points)
edited May 12, 2017 by Void
If you post this on Physics SE you will likely receive a bounty. Your conclusion is clear enough, but I have just a couple of questions about the notation. Pardon my non-theoretical physicist's lack of knowledge here.
Looking at your initial equation I guess the RHS is actually some sort of term representing a small amount of scattering? Then I have to admit I don't understand what the second and third terms represent on the LHS. Could you fill me in?
Finally I guess Q is a transition probability assuming the initial state is full and the final state empty right?
@RobJeffries I expanded the post and reposted on SE, I think now it should be much clearer. Let me know if it makes more sense now.
Thanks, yes it is. I need to go and do some work to understand the origin of terms 2/3 in your Boltzmann equation. Anyway, I guess this reasonably explains why neutron stars are transparent to the neutrinos they produce, since $f \sim 1$ for most possible final momentum states of the neutron involved in any scattering.
|
{}
|
# C++ - Functions
A function is a block of statements which executes only when it is called somewhere in the program. A function provides reusability of the same code for different inputs, and hence saves time and resources. There are some built-in functions in C++; one of them is main(), which serves as the entry point where program execution begins. Users can also create their own functions, which are termed user-defined functions.
## Create Function
In C++, creating a function starts with the return type of the function, followed by the function's name and parentheses containing the function's parameters, if it has any. Finally, it contains a block of statements, which is also called the body of the function. Please see the syntax below:
### Syntax
//Defining function
return_type function_name(parameters)
{
statements;
}
return_type: A function can return a value. The return_type is the data type of the value the function returns. If a function does not return anything, void is used as the return_type.
### Example: A function with no parameter
In the example below, a function called MyFunction is created to print Hello World!. The function requires no parameters and returns nothing, hence the void keyword is used.
#include <iostream>
using namespace std;
void MyFunction(){
cout<<"Hello World!."<<"\n";
}
int main (){
MyFunction();
return 0;
}
Output
Hello World!.
## Call Function
After defining the function, it can be called anywhere in the program with its name followed by parentheses containing the function's arguments, if it has any, and a semicolon (;). As in the above example, the function is called inside the main() function using the following statement:
MyFunction();
## Function Declaration
In C++, the declaration of a function consists of the return type of the function, followed by the function's name and parentheses containing the function's parameters, if it has any. Please see the syntax below:
### Syntax
//declaration of a function
return_type function_name(parameters);
If the function is defined after the point where it is called in the program, the C++ compiler will report an error, because the function's name is unknown at the call site. To avoid this situation, the function is declared before calling it; the function itself can then be defined anywhere in the program.
### Example:
In the example below, the function called MyFunction is defined after it is called in the program. Without a declaration, the compiler would report an error, so the function is declared before it is called.
#include <iostream>
using namespace std;
//function declaration
void MyFunction();
int main (){
//function calling
MyFunction();
return 0;
}
//function definition
void MyFunction(){
cout<<"Hello World!."<<"\n";
}
Output
Hello World!.
## Parameter
A parameter (also known as an argument; strictly speaking, the parameter is the variable in the function definition, while the argument is the value passed at the call site) is a variable which is used to pass information into a function. In the example above, the function does not have any parameters, but a user can create a function with a single parameter or with multiple parameters. The value of a parameter can then be used by the function to achieve the desired result.
### Example: A function with parameters
In the example below, a function called MyFunction is created which requires two integer numbers as parameters and prints their sum. Please note that the function returns nothing, hence void is used as the return type.
#include <iostream>
using namespace std;
void MyFunction(int x, int y);
int main (){
int a = 15, b = 10;
MyFunction(a, b);
return 0;
}
void MyFunction(int x, int y){
cout<<"Sum of "<<x<<" and "<<y<<" is: "<<x+y<<"\n";
}
Output
Sum of 15 and 10 is: 25
## Function to Return Values
A function can be used to return a value. To achieve this, the user must define the return type in both the declaration and the definition of the function. In the example below, the return type is int.
#include <iostream>
using namespace std;
int MyFunction(int x, int y);
int main (){
int a = 15, b = 10;
int sum = MyFunction(a, b);
cout<<sum<<"\n";
return 0;
}
int MyFunction(int x, int y){
return x+y;
}
Output
25
## Default Parameter Value
A default value can be assigned to a parameter at the time of creating the function. When the function is called without that argument, the default value is used.
### Example:
In the example below, default parameter values are used to perform a sum of two, three, and four integer numbers using the same function, MyFunction. Here, the default values are set while declaring the function. The same can be done while defining the function instead, when no separate function declaration is used in the program (see the sketch after this example).
#include <iostream>
using namespace std;
int MyFunction(int p, int q, int r=0, int s=0);
int main (){
int a = 15, b = 10, c = 5, d = 1;
cout<<MyFunction(a,b)<<"\n";
cout<<MyFunction(a,b,c)<<"\n";
cout<<MyFunction(a,b,c,d)<<"\n";
return 0;
}
int MyFunction(int p, int q, int r, int s){
return p+q+r+s;
}
Output
25
30
31
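As mentioned above, the default values can instead be given in the definition when no separate declaration is used. A minimal sketch (the same illustrative function as above):

#include <iostream>
using namespace std;
// No separate declaration: the defaults are given in the definition itself.
int MyFunction(int p, int q, int r = 0, int s = 0){
    return p + q + r + s;
}
int main (){
    cout<<MyFunction(15, 10)<<"\n";    // prints 25
    cout<<MyFunction(15, 10, 5)<<"\n"; // prints 30
    return 0;
}

Note that when both a declaration and a definition are present, a default value may be specified only once, conventionally in the declaration, and must not be repeated in both.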
## Recursive function
A function which calls itself is known as a recursive function. A recursive function must contain one or more boundary conditions (base cases) at which the recursion stops.
### Example:
A function for calculating the factorial using recursion is shown below.
#include <iostream>
using namespace std;
int factorial(int x);
int main (){
cout<<factorial(3)<<"\n";
cout<<factorial(5)<<"\n";
return 0;
}
int factorial(int x){
if(x==0)
{return 1;}
else
{return x*factorial(x-1);}
}
Output
6
120
|
{}
|
# Lumberjack clothing
The set of Lumberjack clothing is dropped in pieces by Undead Lumberjacks in the Temple Trekking minigame. When worn, Lumberjack clothing gives extra Woodcutting experience for cutting logs. This includes experience gained from chopping ivy, obtaining lumber from a Woodcutting skill plot, and cutting boards for jobs at the sawmill, as well as experience from the Evil Tree Distraction and Diversion, but not the experience for finishing a sawmill job. Level 44 Woodcutting is required to wear Lumberjack clothing. The pieces of Lumberjack clothing are as follows:
| Item | XP Boost |
|---|---|
| Lumberjack hat | 1% |
| Lumberjack top | 1% |
| Lumberjack legs | 1% |
| Lumberjack boots | 1% |
| Sub-total | 4% |
| Set bonus | 1% |
| Full set | 5% |
A full lumberjack set can be stored in the armour case in the Costume Room of a player-owned house.
When using the whole set, the experience per log can be calculated by using the formula $1.05 \times (\text{XP per log})$, rounding down to the nearest tenth of an experience point.
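For illustration (the base experience value here is a made-up example, not taken from the game): with the full set, a log normally worth 100 XP gives 1.05 × 100 = 105 XP, while a log worth 82.5 XP gives 1.05 × 82.5 = 86.625, which rounds down to 86.6 XP.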
The 5% experience boost does stack with bonus experience.
## Obtaining
Lumberjack clothing is obtained during the Temple Trekking and Burgh de Rott Ramble minigame. One of the events players may encounter during Temple Trekking is an area where there is a broken bridge. This event is where the clothing can be obtained. In this event, undead lumberjacks may come out of the water (note, sometimes no lumberjacks appear and trees must instead be used to repair the bridge). They drop planks, which can be used to repair the bridge, but they can also occasionally drop pieces of the Lumberjack set. One or more pieces can be obtained per event.
After the 17 August 2011 revamp, the undead lumberjacks will drop the pieces in order, starting with the boots, then the hat, legs, and top. Missing pieces will be dropped before you get duplicates of the set. The pieces also became far more common drops after this update. All of the Lumberjack items can be obtained on any of the routes. The pieces are dropped in a cycle and additional sets can be obtained once you have completed one.
The fastest way to obtain the Lumberjack set is to guide somebody who has the ability to tell what lies ahead. This gives 3 opportunities to find the broken bridge event at each intersection. Rolayne Twickit has an advantage of being able to see ahead at only level 20, whereas the other NPCs only gain this ability after their third level tier.
Alternatively one can take any person through Route One (the easiest route). This enables the player to skip past most combat events.
## Trivia
• The description for the clothes reads, "You'll certainly be alright wearing these", referring to The Lumberjack Song in the British comedy sketch television show Monty Python's Flying Circus. This is one of many references to Monty Python found in RuneScape.
• On the first day of release, there was a glitch where players were not able to receive the boots as a drop. It was much easier to get the other clothing pieces before this was fixed.
• In the past, Lumberjack clothing failed to apply its experience boost when chopping ivy. This has since been fixed, and the extra experience is now incorporated directly into the total experience earned from chopping.
• The outfit received, as did all the other experience-boosting sets existing at that point, an increase to the boost it provided, from 2.5% to 5%. This update occurred on 26 September 2012 but was made official on 28 September, alongside the 4 new experience-boosting sets.
|
{}
|
1. May 11, 2009

### Imperil

A movie theatre sells tickets for $8.50 each. The manager is considering raising the prices but knows that for every 50 cents the price is raised, 20 fewer people go to the movies. The equation R = -40c^2 = 720c describes the relationship between the cost of tickets, c dollars, and the amount of revenue, R dollars, that the theatre makes. What price should the theatre charge to maximize revenue?

I believe what I need to do is find the maximum vertex of the parabola in order to solve the equation. So I did the following:

R = -40c^2 - 720c
= -40(c^2 - 18c)
= -40(c^2 - 18c + 9^2 - 9^2) <-- complete the square
= -40(c^2 - 18c + 81 - 81)
= -40[(c - 9)^2 - 81]
= -40(c - 9)^2 + 3240

Which would give me a vertex (9, 3240), but this does not make sense to me; I am not sure what I am looking for, to be honest. I believe that the maximum price would be $9.00 to have a revenue of $3240. Is this correct and I am just second-guessing?

2. May 11, 2009

### symbolipoint

You seem to be thinking in the right direction, although I did not analyze your work in detail. One spot of confusion is what you say: "The equation R = -40c^2 = 720c describes the relationship between the cost of tickets, c dollars, and the amount of revenue, R dollars, that the theatre makes" does not make sense. OOOOHHH, you mean -40c^2 - 720c = R, this could be better.

3. May 11, 2009

### nickjer

You pulled out a negative but you left the 2nd term negative as well. Double check the equation you were given, because you miswrote it in the problem, and it could have a mistake when you first started solving it.

4. May 12, 2009

### Imperil

Now I am fairly confused as it really does not make sense to me. I double checked the equation and I was correct in my work that it is the following:

R = -40c^2 - 720c

After correcting my mistake (that was pointed out by nickjer) I now have the following:

R = -40c^2 - 720c
= -40(c^2 + 18c)
= -40(c^2 + 18c + 81 - 81) <-- complete the square
= -40[(c + 9)^2 - 81]
= -40(c + 9)^2 + 3240

Which would give a vertex of (-9, 3240), which makes no sense to me in the context of the question. I am really not sure where to go from here.

5. May 12, 2009

### gabbagabbahey

Surely, this equation should be R = -40c^2 + 720c instead!

6. May 12, 2009

### Imperil

I have triple checked and it is definitely -720c, which is why I am confused.

7. May 12, 2009

### gabbagabbahey

It must be a typo! If the equation were -40c^2 - 720c, then if you charged $1.00 per ticket, you would have a revenue of -$760.00; but revenue is always a positive quantity.
I would assume that the equation is supposed to be -40c^2+720c and just ask your instructor about it when you see him/her.
8. May 12, 2009
### Imperil
I thought this exact same thing but figured maybe I was thinking about it wrong! Thanks for your help, I just contacted my teacher by email regarding this. It is a key problem in my correspondence that I need to hand in, so I am shocked they included this typo.
|
{}
|
# Common English expressions for "I don't know"

Welcome to our Daily English Listening Practice with this week's series: today we talk about common English expressions to replace the phrase "I don't know". Nothing can fluster even the most unflappable professional like being asked a question to which they don't know the answer. Here are a few common phrases and expressions used by native speakers of English in their everyday speech.

- Dunno: can be used alone ("dunno") or in a sentence: "I dunno…"
- Beats me: use this if you've at least tried to help a little bit: "Beats me, I'm sorry I couldn't do anything to help, I tried."
- I don't recall / I can't recall: more formal, but still used commonly; "I don't recall" is very similar to "I don't know", but with "I can't recall" you might be asking for someone to jog your memory.
- No idea: when you really don't know or don't care, and you can't even guess; you can also say "No clue."
- No offense, but…: often said in a rude way because the speaker is frustrated or annoyed with the question, but sometimes it's fair to use it if you shouldn't be expected to know the answer; usually continuing with WHY you don't know the answer is helpful, too.
- Not that I know of…: I'm pretty sure the answer is no; as far as I know… From the Longman Dictionary of Contemporary English: used to say that you think the answer is 'no' but there may be facts that you do not know about: "Did he call earlier?" "Not that I know of." Use this phrase when you want to answer "no" to someone's question but you're not 100% sure that "no" is really the correct answer: simply answering "no" makes it seem that you're completely sure, while answering "I don't know" seems like you don't have any answer at all. Example: "Did anyone call for me?" "Not that I know of."
- For all I know: this expression means "according to the information I currently have" but implies uncertainty, like you don't actually have much information: "I'm not eating those mushrooms you found in the woods. They could be poisonous for all I know."
- I don't know the first thing about…: I have no experience in what you're asking about. If somebody doesn't know the first thing about something, they know absolutely nothing about it; they are completely unfamiliar with it.
- Your guess is as good as mine: sometimes people answer their own questions by guessing; if you really don't know the answer, you can tell them that their guess is as good as yours.
- Couldn't tell ya: more common in the US; it's not that they won't tell you, they just don't know; you can also say "Can't help you with that (piece of information)."
- How should I know?: a rhetorical question, usable for crazy or weird questions.
- I'm not the one to ask, but I think I can find someone to help you: use this when you're not sure of the answer (or don't know at all), but you might be able to find the person who can help; also useful if you've already helped someone with one set of information, or you would like to help but you can't.

You probably won't sound like a native speaker if you use these phrases instead of "I don't know":

- Haven't the foggiest: pretty uncommon but VERY British
- Search me: found in books, not often in conversation
- You've got me there: used sarcastically, usually
- Haven't the faintest: Mark's auntie says this, but we don't hear it much
- I haven't got a clue: this one's pretty long; try "I have no clue" or "no clue" instead

In a professional setting, similar phrases include: "Let me make sure I understand exactly what's being asked…", "I feel comfortable with, and can answer or respond to, this part…", "Because I'm not sure, I should…", "After googling, I'm wondering…", and "I can tell you that…".

Phrases for expressing disagreement include: "I see your point, but…", "I see what you are getting at, but…", "That's one way of looking at it, however…", "Well, I see things rather differently…", "Umm, I'm not sure about that…", "I'm not sure I go along with that…", and "I completely disagree…".

Expressions for not liking something: "It's not my cup of tea" means "to not like": "Horror movies are not my cup of tea." "She's not a fan of sushi; don't ask her to go to the Japanese restaurant!" "To be a fan of something" is an expression lots of people use and most people understand, but to say "I'm not a big fan" is not as common: "I'm not a big fan of the new James Bond." A similar expression meaning "I don't like…" or "I am not interested in…" is "It's not my thing": "Cooking is not my thing."

Other idioms and expressions mentioned: know something like the back of one's hand; be (like) water off a duck's back; be (like) a bird in a gilded cage; turn up like a bad penny; avoid someone/something like the plague; "These cherries are not as sweet as those" (as … as); "The movie was similar to 'Rocky'" (use the adjective form "similar to"); a nosy-parker = a nosy person ("Don't be such a nosy-parker"); a sandwich short of a picnic = a little bit crazy or stupid ("She's alright, but she's a sandwich short of a picnic, if you know what I mean"; similar expressions are "a few bricks short of a load" and "a sausage short of a barbie", where barbie = barbecue); you know the drill = you are already familiar with the procedure ("When you leave, shut off all the lights and lock the room with the safe"); an arm and a leg = something that's very expensive; wouldn't be caught dead = would absolutely not allow myself to do this ("I wouldn't be caught dead wearing a coat that color"); go down in flames = fail in a spectacular way ("After the new model had to be recalled due to the diesel emissions scandal, the entire brand went down in flames"). It might seem like "mute" is the right word in "moot point", given that it means not being able to speak; but no, it's actually "moot point."

French has comparable expressions: "En faire tout un fromage" (literal meaning: to make a whole cheese about it) really means to make a big deal of something; if you make a giant fuss about something, the French would say you're making a whole cheese about it. The French also have the expression "faire porter des cornes." I do not know the origin of the Vietnamese phrase "bi. ca('m su*ng," however; I guess that it came from the French, during their colonization of Vietnam.

Synonyms for "not known" include unidentified, anonymous, unnamed, nameless, unknown, unfamiliar, incognito, mysterious, unmarked and innominate; words meaning "not alike, not capable of comparison" include dissimilar, different, contrary, antonymous, antithetical and disparate.

# Checking whether two expressions are similar

Given two expressions in the form of strings, the task is to compare them and check if they are similar. Expressions consist of lowercase alphabets, '+', '-' and '( )'.

Examples:

Input : exp1 = "-(a+b+c)" exp2 = "-a-b-c" Output : Yes
Input : exp1 = "-(c+b+a)" exp2 = "-c-b-a" Output : Yes
Input : exp1 = "a-b-(c-d)" exp2 = "a-b-c-d" Output : No
|
{}
|
# What are the sequences?
What are sequences and arithmetic progressions? Sequences are also called progressions. If numbers are generated according to a certain rule, such that each consecutive number has a specific relation with its neighbours, then they are said to form a sequence. You can consider a sequence as a list of ordered pairs (n, a_n). Sequences are represented by {a_n}, where n is an integer. The value of n can be any integer, or range over the natural numbers (1, 2, 3, …); this set of values is called the domain of the sequence. Each number in the sequence is called a term.

So, if we have an expression for a sequence, we can generate its terms by substituting different values of n. Consider the following sequence:
$$b_{n}=2+(-1)^{n}$$

Starting from n = 1, we can generate the terms:

$b_{1}=2+(-1)^{1}=1$

$b_{2}=2+(-1)^{2}=3$

$b_{3}=2+(-1)^{3}=1$

$b_{4}=2+(-1)^{4}=3$, and so on. So in sequence form it is written as: 1, 3, 1, 3, …
## What are the finite and infinite sequences?
If the last (final) term of a sequence is known, it is called a finite sequence; if the last term is not known (the sequence continues indefinitely), it is called an infinite sequence.
### Examples
finite sequence: {1,2,3,…..10}
infinite sequence: {5,10,15,20,……}
### What is the Arithmetic Progression?
An arithmetic progression is a type of sequence in which any two consecutive terms have the same common difference. For example, if you look at the following sequence you can see that the common difference is the same for any two consecutive terms:
3,6,9,12,15,……
here 6 - 3 = 9 - 6 = 12 - 9 = 15 - 12 = 3. So, the common difference is 3.
### How to generate the nth term of the Arithmetic Progression (AP)?
As we know, in an arithmetic progression the common difference is the same throughout. We will use this fact to generate the nth term of the sequence.
a2 - a1 = d → a2 = a1 + d

a3 - a2 = d → a3 = a2 + d ——–(i) Substitute the value of a2 from the above equation:

→ a3 = (a1 + d) + d = a1 + 2d

Similarly,

a4 = a1 + 3d

and, in general,

an = a1 + (n-1)d
This is the expression for the general nth term of a sequence, which means that if we have the first term of an A.P and the common difference d, we can find any other term of the A.P.
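As a quick illustration (added here, not part of the original text), the nth-term formula translates directly into code:

#include <iostream>

int nthTerm(int a1, int d, int n) {
    // a_n = a_1 + (n - 1) d
    return a1 + (n - 1) * d;
}

int main() {
    // The A.P 3, 6, 9, 12, 15, ... has a1 = 3 and d = 3.
    std::cout << nthTerm(3, 3, 5) << "\n";  // prints 15, the 5th term
    return 0;
}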
## Solutions of selected problems from Chapter 6, F.Sc Part 1
### The 5th term of an A.P is 16 and the 20th term is 46; find the 12th term

### Solution:

Since a5 = a1 + 4d = 16 and a20 = a1 + 19d = 46, we have

a1 + 4d = 16

a1 + 19d = 46
Solving them simultaneously, we will get
15d = 30 → d = 2
So,

a1 + 4(2) = 16

a1 + 8 = 16

and a1 = 8

Hence,

a12 = a1 + 11d

= 8 + 11(2)

= 30
### Which term of the A.P 11, 14, 17, … is 68?

#### Solution

Here we actually have to find n. We have a1 = 11 and d = 3, and

an = a1 + (n-1)d = 68

Substituting the values of a1 and d, we get
68=11+(n-1)(3)
68-11=3(n-1)
57=3(n-1)
19=n-1
n=20
So, 68 is the 20th term of the A.P.
### If 1/a, 1/b, and 1/c are in A.P then show that common difference is (a-c)/2ac
Solution

Since they are in A.P, the common difference between consecutive terms is the same:

d = 1/b - 1/a, so 1/b = 1/a + d

Similarly,

d = 1/c - 1/b

Substituting the value of 1/b here, you will get

d = (a-acd-c)/ac
acd=a-acd-c
or
acd+acd=a-c
2acd=a-c
or
d=(a-c)/2ac
Hence proved.
### What is the arithmetic mean?
Any number A is said to be an arithmetic mean between two numbers a and b if a,A, b are in A.P.
Since a,A,b are in A.P
So,
d=A-a and d=b-A
by comparing both, we get
A-a=b-A
or
A=(a+b)/2
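For example (an illustrative pair): the arithmetic mean between 4 and 10 is A = (4 + 10)/2 = 7, and indeed 4, 7, 10 form an A.P with common difference 3.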
If three consecutive terms are in A.P, then the middle term is actually the arithmetic mean between the extreme terms.
If $a_{n-1}$, $a_{n}$ and $a_{n+1}$ are consecutive terms of an A.P, then we can write the middle term as:

$$a_{n}=\frac{a_{n-1}+a_{n+1}}{2}$$
### How to find n Arithmetic means between two given numbers?
This is very simple. Let us consider that there are $n$ arithmetic means $A_1, A_2, A_3, \ldots, A_n$ between two numbers $a$ and $b$, such that

$a, A_1, A_2, A_3, \ldots, A_n, b$

is an A.P. If $a_1 = a$ and $a_{n+2} = b$,
then
$$\begin{gathered} a_{n+2}=b=a_{1}+(n+2-1) d \\ b=a_{1}+(n+1) d \\ d=\frac{b-a_{1}}{n+1} \end{gathered}$$
Here $a_{1}=a$
So
$$d=\frac{b-a}{n+1}$$

Hence

$$\begin{gathered} A_{1}=a+d=a+\frac{b-a}{n+1} \\ A_{1}=\frac{a(n+1)+b-a}{n+1} \\ A_{1}=\frac{a n+a+b-a}{n+1} \\ A_{1}=\frac{a n+b}{n+1} \end{gathered}$$
Similarly,
$$\begin{gathered} A_{2}=a+2\left(\frac{b-a}{n+1}\right)=\frac{(n-1) a+2 b}{n+1} \\ A_{3}=a+3\left(\frac{b-a}{n+1}\right)=\frac{(n-2) a+3 b}{n+1} \\ A_{n}=a+n\left(\frac{b-a}{n+1}\right)=\frac{(n-(n-1)) a+n b}{n+1} \\ A_{n}=a+n\left(\frac{b-a}{n+1}\right)=\frac{a+n b}{n+1} \end{gathered}$$
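A short sketch (illustrative, not from the original text) that generates the n arithmetic means directly from d = (b - a)/(n + 1):

#include <iostream>
#include <vector>

std::vector<double> arithmeticMeans(double a, double b, int n) {
    double d = (b - a) / (n + 1);
    std::vector<double> means;
    for (int k = 1; k <= n; ++k)
        means.push_back(a + k * d);  // A_k = a + k d = ((n - k + 1) a + k b)/(n + 1)
    return means;
}

int main() {
    // The three arithmetic means between 2 and 10 are 4, 6, 8.
    for (double m : arithmeticMeans(2, 10, 3))
        std::cout << m << " ";
    std::cout << "\n";
    return 0;
}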
### Proof that the sum of the n arithmetic means between a and b is n(a+b)/2

From the above we have determined the values of the n arithmetic means between the numbers a and b. They are equal to
\begin{gathered}
A_{1}=\frac{a n+b}{n+1} \\
A_{2}=a+2\left(\frac{b-a}{n+1}\right)=\frac{(n-1) a+2 b}{n+1} \\
A_{3}=a+3\left(\frac{b-a}{n+1}\right)=\frac{(n-2) a+3 b}{n+1} \\
A_{n}=a+n\left(\frac{b-a}{n+1}\right)=\frac{(n-(n-1)) a+n b}{n+1} \\
A_{n}=a+n\left(\frac{b-a}{n+1}\right)=\frac{a+n b}{n+1}
\end{gathered}
Now let us find their sum, that is, A1 + A2 + A3 + ... + An:
$$a+\frac{b-a}{n+1}+a+2\left(\frac{b-a}{n+1}\right)+a+3\left(\frac{b-a}{n+1}\right)+\cdots+a+n\left(\frac{b-a}{n+1}\right)$$
So,
$$\begin{gathered} n a+\left(\frac{b-a}{n+1}\right)[1+2+3+\cdots+n] \\ n a+\left(\frac{b-a}{n+1}\right) \frac{n(n+1)}{2} \\ =n a+\frac{n(b-a)}{2} \\ =\frac{2 n a+n b-n a}{2} \\ =\frac{n a+n b}{2} \\ =\frac{n(a+b)}{2} \end{gathered}$$
which is the desired result.
|
{}
|
# Fluid Mechanics for MAP/Differential Analysis of Fluid Flow
## Differential relations for a fluid particle
We are interested in the distribution of field properties at each point in space. Therefore, we analyze an infinitesimal region of a flow by applying the RTT to an infinitesimal control volume or to an infinitesimal fluid system.
## Conservation of mass
The differential control volume dV and the mass flux through its surfaces
The conservation of mass according to RTT
${\displaystyle \displaystyle {\frac {\partial }{\partial t}}\int _{CV}\rho \ dV+\int _{CS}\rho \;{\vec {U}}\cdot {\vec {n}}\ dA=0}$
or in tensor form
${\displaystyle \displaystyle {\frac {\partial }{\partial t}}\int _{CV}\rho \ dV+\int _{CS}\rho \;U_{i}n_{i}\ dA=0}$
The differential volume is selected to be so small that the density ${\displaystyle \displaystyle (\rho )}$ can be taken to be uniform within this volume. Thus the first integral in the conservation of mass equation is:
${\displaystyle \displaystyle {\frac {\partial }{\partial t}}\int _{CV}\rho \ dV\approx {\frac {\partial \rho }{\partial t}}dx_{1}dx_{2}dx_{3}={\frac {\partial \rho }{\partial t}}dV}$
The flux term (second integral term) in the equation of conservation of mass can be analyzed in groups:
${\displaystyle \displaystyle \int _{CS}\rho U_{i}n_{i}\ dA=\int _{CS\,x_{1}}\rho U_{i}n_{i}\;dA+\int _{CS\,x_{2}}\rho U_{i}n_{i}\;dA+\int _{CS\,x_{3}}\rho U_{i}n_{i}\;dA}$
Let's look at the surfaces perpendicular to the ${\displaystyle \displaystyle x_{1}-axis}$
${\displaystyle \displaystyle \int _{CS\,x_{1}}\rho U_{i}n_{i}\ dA=-\rho U_{1}dx_{2}dx_{3}+\left[\rho U_{1}+{\frac {\partial \left(\rho U_{1}\right)}{\partial x_{1}}}dx_{1}\right]dx_{2}dx_{3}}$
${\displaystyle \displaystyle ={\frac {\partial \left(\rho U_{1}\right)}{\partial x_{1}}}dx_{1}dx_{2}dx_{3}={\frac {\partial \left(\rho U_{1}\right)}{\partial x_{1}}}dV}$
Similarly, the flux integrals through surfaces perpendicular to ${\displaystyle \displaystyle x_{2}-axis}$ and ${\displaystyle \displaystyle x_{3}-{\textrm {axis}}}$ are
${\displaystyle \displaystyle \int _{CS\,x_{2}}\rho U_{i}n_{i}\ dA={\frac {\partial \left(\rho U_{2}\right)}{\partial x_{2}}}dV,}$
${\displaystyle \displaystyle \int _{CS\,x_{3}}\rho U_{i}n_{i}\ dA={\frac {\partial \left(\rho U_{3}\right)}{\partial x_{3}}}dV.}$
${\displaystyle \displaystyle \int _{CS}\rho U_{i}n_{i}\ dA={\frac {\partial \left(\rho U_{i}\right)}{\partial x_{i}}}dV}$
The conservation of mass equation becomes:
${\displaystyle \displaystyle {\frac {\partial \rho }{\partial t}}dV+{\frac {\partial \left(\rho U_{i}\right)}{\partial x_{i}}}dV=0}$
Dropping the ${\displaystyle \displaystyle dV}$, we reach the final form of the conservation of mass:
${\displaystyle \displaystyle {\frac {\partial \rho }{\partial t}}+{\frac {\partial \left(\rho U_{i}\right)}{\partial x_{i}}}=0}$
This equation is also called continuity equation. It can be written in vector form as:
${\displaystyle \displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot \left(\rho {\vec {U}}\right)=0\ \ {\text{where}}\ \ \nabla ={\frac {\partial }{\partial x_{1}}}{\vec {e_{1}}}+{\frac {\partial }{\partial x_{2}}}{\vec {e_{2}}}+{\frac {\partial }{\partial x_{3}}}{\vec {e_{3}}}\ \ {\text{gradient operator}}}$
For a steady flow, continuity equation becomes:
${\displaystyle \displaystyle {\frac {\partial \left(\rho U_{i}\right)}{\partial x_{i}}}=0}$
.
For incompresible flow, i.e. ${\displaystyle \displaystyle \rho ={\text{constant}}}$:
${\displaystyle \displaystyle {\frac {\partial U_{i}}{\partial x_{i}}}=0\ \ {\text{i.e.}}\ \ {\frac {\partial U_{1}}{\partial x_{1}}}+{\frac {\partial U_{2}}{\partial x_{2}}}+{\frac {\partial U_{3}}{\partial x_{3}}}=0}$
### Example
For a two-dimensional, steady and incompressible flow in the ${\displaystyle \displaystyle x_{1}x_{2}}$ plane given by:
${\displaystyle \displaystyle \displaystyle U_{1}=Ax_{1}}$
Find the most general ${\displaystyle \displaystyle U_{2}}$ that can exist.
${\displaystyle \displaystyle \rho {\frac {\partial U_{i}}{\partial x_{i}}}=0}$
in two dimensions
${\displaystyle \displaystyle {\frac {\partial U_{1}}{\partial x_{1}}}+{\frac {\partial U_{2}}{\partial x_{2}}}=0\ \ \rightarrow \ \ {\frac {\partial U_{2}}{\partial x_{2}}}=-{\frac {\partial U_{1}}{\partial x_{1}}}}$
Thus,
${\displaystyle \displaystyle {\frac {\partial U_{2}}{\partial x_{2}}}=-A}$
This is an expression for the rate of change of the ${\displaystyle \displaystyle U_{2}}$ velocity while keeping ${\displaystyle \displaystyle x_{1}}$ constant. Therefore the integral of this equation reads
${\displaystyle \displaystyle \displaystyle U_{2}=-Ax_{2}+f(x_{1})}$
Thus, any function ${\displaystyle \displaystyle f(x_{1})}$ is allowable.
### Example
Compressible and unsteady flow inside a piston
Consider one-dimensional flow in the piston. The piston suddenly moves with the velocity ${\displaystyle \displaystyle V_{p}}$ . Assume uniform ${\displaystyle \displaystyle \rho (t)}$ in the piston and a linear change of velocity ${\displaystyle \displaystyle U_{1}}$ such that ${\displaystyle \displaystyle U_{1}=0}$ at the bottom (${\displaystyle \displaystyle x_{1}=0}$) and ${\displaystyle \displaystyle U_{1}=V_{p}}$ on the piston (${\displaystyle \displaystyle x_{1}=L}$), i.e.
${\displaystyle \displaystyle U_{1}={\frac {x_{1}}{L}}V_{p}}$
Obtain a function for the density as a function of time.
The conservation of mass equation is:
${\displaystyle \displaystyle {\frac {\partial \rho }{\partial t}}+{\frac {\partial \left(\rho U_{i}\right)}{\partial x_{i}}}=0}$
For one-dimensional flow and uniform ${\displaystyle \displaystyle \rho }$, this equation simplifies to
${\displaystyle \displaystyle {\frac {\partial \rho }{\partial t}}+\rho {\frac {\partial U_{1}}{\partial x_{1}}}=0}$
${\displaystyle \displaystyle {\frac {\partial \rho }{\partial t}}=-\rho {\frac {\partial U_{1}}{\partial x_{1}}}=-\rho {\frac {V_{p}}{L}}}$
The length of the gas column grows as the piston recedes: ${\displaystyle \displaystyle \displaystyle L=L_{0}+V_{p}t}$, so that
${\displaystyle \displaystyle {\frac {\partial \rho }{\partial t}}={\frac {d\rho }{dt}}=-\rho {\frac {V_{p}}{L_{0}+V_{p}t}}}$
${\displaystyle \displaystyle \int _{\rho _{0}}^{\rho }{\frac {d\rho }{\rho }}=\int _{0}^{t}-{\frac {V_{p}}{L_{0}+V_{p}t}}dt}$
${\displaystyle \displaystyle ln\left({\frac {\rho }{\rho _{0}}}\right)=ln\left({\frac {L_{0}}{L_{0}+V_{p}t}}\right)}$
${\displaystyle \displaystyle \rho (t)=\rho _{0}\left({\frac {L_{0}}{L_{0}+V_{p}t}}\right)}$
The same problem can be solved by using the integral approach with a deforming control volume.
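As a quick numerical cross-check (an added illustration, not part of the original notes; the parameter values are arbitrary), integrating dρ/dt = -ρ V_p/(L_0 + V_p t) with a forward-Euler step reproduces the closed-form result:

#include <cstdio>

int main() {
    const double rho0 = 1.2, L0 = 0.1, Vp = 0.05;  // illustrative values
    double rho = rho0, t = 0.0;
    const double dt = 1e-5, tEnd = 1.0;
    while (t < tEnd) {
        rho += dt * (-rho * Vp / (L0 + Vp * t));   // Euler step of the ODE
        t += dt;
    }
    double exact = rho0 * L0 / (L0 + Vp * tEnd);   // rho(t) = rho0 L0/(L0 + Vp t)
    std::printf("numerical %.6f   exact %.6f\n", rho, exact);  // both ~0.8
    return 0;
}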
## The differential equation of linear momentum
The differential control volume dV and the flux of ${\displaystyle \displaystyle \rho \;U_{i}}$ (momentum per unit volume in i-direction) through the surfaces perpendicular to ${\displaystyle \displaystyle x_{j}}$ axis
The integral equation for the momentum conservation is
${\displaystyle \displaystyle \sum F_{i}={\frac {\partial }{\partial t}}\int _{CV}\rho \;U_{i}\;dV+\int _{CS}\rho \;U_{i}\;U_{j}\;n_{j}dA}$
For the first integral we assume ${\displaystyle \displaystyle \rho }$ and ${\displaystyle \displaystyle U_{i}}$ are uniform within dV, and dV is so small that:
${\displaystyle \displaystyle {\frac {\partial }{\partial t}}\int _{CV}\rho \;U_{i}\;dV\approx \ {\frac {\partial }{\partial t}}(\rho \;U_{i})dx_{1}dx_{2}dx_{3}}$
Analyze the flux of the ${\displaystyle \displaystyle \rho U_{i}}$ momentum terms through the faces perpendicular to the axis:
${\displaystyle \displaystyle \int _{CS}\rho \;U_{i}\;U_{j}\;n_{j}\;dA=\int _{CS\,x_{1}}\rho \;U_{i}\;U_{j}\;n_{j}\;dA\ +\int _{CS\,x_{2}}\rho \;U_{i}\;U_{j}\;n_{j}\;dA\ +\int _{CS\,x_{3}}\rho \;U_{i}\;U_{j}\;n_{j}\;dA}$
First consider the flux of ${\displaystyle \displaystyle \rho \;U_{i}}$ (momentum per unit volume in i-direction) through the surfaces perpendicular to ${\displaystyle \displaystyle x_{1}}$ axis:
${\displaystyle \displaystyle \int _{CS\,x_{1}}\rho \;U_{i}\;U_{j}\;n_{j}\;dA=-\rho \;U_{i}\;U_{1}\;dx_{2}dx_{3}\;+\left[\rho \ U_{i}\ U_{1}\ dx_{2}\ dx_{3}+{\frac {\partial }{\partial x_{1}}}(\rho \ U_{i}\ U_{1})\ dx_{1}\right]dx_{2}\ dx_{3}}$
${\displaystyle \displaystyle ={\frac {\partial }{\partial x_{1}}}(\rho \;U_{i}\;U_{1})dV}$
Similarly, the momentum flux through the surfaces in other directions read
${\displaystyle \displaystyle \int _{CS\,x_{2}}\rho \;U_{i}\;U_{j}\;n_{j}\;dA={\frac {\partial }{\partial x_{2}}}(\rho \;U_{i}\;U_{2})dV}$
,
${\displaystyle \displaystyle \int _{CS\,x_{3}}\rho \;U_{i}\;U_{j}\;n_{j}\;dA={\frac {\partial }{\partial x_{3}}}(\rho \;U_{i}\;U_{3})dV}$
.
Rearranging the equation for ${\displaystyle \displaystyle \sum F_{i}}$ we obtain:
${\displaystyle \displaystyle \sum F_{i}=\left[{\frac {\partial }{\partial t}}(\rho \ U_{i})+{\frac {\partial }{\partial x_{j}}}(\rho \ U_{i}\ U_{j})\right]dV}$
We can simplify further:
${\displaystyle \displaystyle \sum F_{i}=\left[U_{i}{\frac {\partial \rho }{\partial t}}+\rho {\frac {\partial U_{i}}{\partial t}}+U_{i}{\frac {\partial }{\partial x_{j}}}(\rho \;U_{j})+\rho \;U_{j}\;{\frac {\partial U_{i}}{\partial x_{j}}}\right]dV}$
${\displaystyle \displaystyle \sum F_{i}=U_{i}\underbrace {\left[{\frac {\partial \rho }{\partial t}}\;+{\frac {\partial }{\partial x_{j}}}(\rho \;U_{j})\right]} _{continuity\ equation=0}dV\;+\rho \underbrace {\left[{\frac {\partial U_{i}}{\partial t}}\;+U_{j}{\frac {\partial U_{i}}{\partial x_{j}}}\right]} _{{\frac {DU_{i}}{Dt}};\ substantial\ derivative\ of\ U_{i}}dV}$
Hence
${\displaystyle \displaystyle \sum F_{i}=\rho {\frac {DU_{i}}{Dt}}dV}$
Let us now look at the forces exerted on the differential control volume (here ${\displaystyle \displaystyle P_{i}}$ denotes the linear momentum of the fluid element):
${\displaystyle \displaystyle {\frac {dP_{i}}{dt}}=\sum F_{i}=dF_{body\;i}+dF_{surface\;i}=\rho {\frac {DU_{i}}{Dt}}dV}$
Here, only gravitational force is considered as a body force. Thus,
${\displaystyle \displaystyle dF_{body\;i}=\rho \;dV\;g_{i}}$
Differential surface forces
Surface forces are the stresses acting on the control surfaces. ${\displaystyle \displaystyle F_{s}}$ can be resolved into three components. ${\displaystyle \displaystyle dF_{n}}$ is normal to dA. ${\displaystyle \displaystyle dF_{t}}$ are tangent to dA:
${\displaystyle \displaystyle \sigma _{n}=\lim _{dA\rightarrow 0}{\frac {dF_{n}}{dA}}}$
${\displaystyle \displaystyle \sigma _{t}=\lim _{dA\rightarrow 0}{\frac {dF_{t}}{dA}}}$
${\displaystyle \displaystyle \sigma _{n}}$ is a normal stress whereas ${\displaystyle \displaystyle \sigma _{t}}$ is a shear stress. The shear stresses are also designated by ${\displaystyle \displaystyle \tau }$.
Stresses on the surface of differential control volume
Thus, the surface forces are due to stresses on the surfaces of the control surface.
${\displaystyle \displaystyle \sigma _{ij}=\left[{\begin{matrix}\sigma _{11}&\sigma _{12}&\sigma _{13}\\\sigma _{21}&\sigma _{22}&\sigma _{23}\\\sigma _{31}&\sigma _{32}&\sigma _{33}\end{matrix}}\right]}$
The positive stress directions
We define the positive direction for the stress as the positive coordinate direction on those surfaces (e.g. on ABCD) for which the outward normal is in the positive coordinate direction. If the outward normal points in the negative direction (A'B'C'D'), then the stresses are considered positive if directed in the negative coordinate directions.
The stresses on the surface ${\displaystyle \displaystyle (\sigma _{ij})}$ are the sum of pressure plus the viscous stresses which arise from motion with velocity gradients:
${\displaystyle \displaystyle \sigma _{ij}=\left[{\begin{matrix}-P&0&0\\0&-P&0\\0&0&-P\end{matrix}}\right]+\left[{\begin{matrix}\tau _{11}&\tau _{12}&\tau _{13}\\\tau _{21}&\tau _{22}&\tau _{23}\\\tau _{31}&\tau _{32}&\tau _{33}\end{matrix}}\right]=\left[{\begin{matrix}-P+\tau _{11}&\tau _{12}&\tau _{13}\\\tau _{21}&-P+\tau _{22}&\tau _{23}\\\tau _{31}&\tau _{32}&-P+\tau _{33}\end{matrix}}\right]}$
${\displaystyle \displaystyle P}$ has a minus sign since the force due to pressure acts opposite to the outward surface normal.
Stresses on the surface of the differential control volume in the x1 direction
Let us look to the differential surface force in the ${\displaystyle \displaystyle x_{1}}$ direction:
${\displaystyle \displaystyle dF_{surface\ 1}={\frac {\partial \sigma _{11}}{\partial x_{1}}}\;dx_{1}dx_{2}dx_{3}+{\frac {\partial \sigma _{21}}{\partial x_{2}}}\;dx_{1}dx_{2}dx_{3}+{\frac {\partial \sigma _{31}}{\partial x_{3}}}\;dx_{1}dx_{2}dx_{3}}$
Noting that ${\displaystyle \displaystyle dV=dx_{1}dx_{2}dx_{3}}$ and ${\displaystyle \displaystyle \sigma _{ij}=-P\delta _{ij}+\tau _{ij}}$ (4),
${\displaystyle \displaystyle dF_{surface\ 1}=\left(-{\frac {\partial P}{\partial x_{1}}}+{\frac {\partial \tau _{11}}{\partial x_{1}}}+{\frac {\partial \tau _{21}}{\partial x_{2}}}+{\frac {\partial \tau _{31}}{\partial x_{3}}}\right)dV}$
Thus in tensor form the differential surface forces in ${\displaystyle \displaystyle i}$'th direction can be written as
${\displaystyle \displaystyle dF_{surface\ i}=\left(-{\frac {\partial P}{\partial x_{i}}}+{\frac {\partial \tau _{ji}}{\partial x_{j}}}\right)dV}$
Note that ${\displaystyle \displaystyle \tau _{ij}}$ is a symmetric tensor, i.e.
${\displaystyle \displaystyle \tau _{ji}=\tau _{ij}}$
Hence, the differential surface force reads:
${\displaystyle \displaystyle dF_{surface\ i}=\left(-{\frac {\partial P}{\partial x_{i}}}+{\frac {\partial \tau _{ij}}{\partial x_{j}}}\right)dV}$
Inserting ${\displaystyle \displaystyle dF_{body\ i}}$ and ${\displaystyle \displaystyle dF_{surface\ i}}$ into (2),
${\displaystyle \displaystyle \rho {\frac {DU_{i}}{Dt}}dV=\rho \ g_{i}\ dV+\left(-{\frac {\partial P}{\partial x_{i}}}+{\frac {\partial \tau _{ij}}{\partial x_{j}}}\right)\;dV}$
and canceling ${\displaystyle \displaystyle dV}$ we obtain
${\displaystyle \displaystyle \rho {\frac {DU_{i}}{Dt}}=\rho \;g_{i}-{\frac {\partial P}{\partial x_{i}}}+{\frac {\partial \tau _{ij}}{\partial x_{j}}}.}$
Expanding the substantial derivative on the left-hand side,
${\displaystyle \displaystyle \rho {\frac {\partial U_{i}}{\partial t}}+\rho U_{j}{\frac {\partial U_{i}}{\partial x_{j}}}=\rho g_{i}-{\frac {\partial P}{\partial x_{i}}}+{\frac {\partial \tau _{ij}}{\partial x_{j}}}}$
We obtain the most general form of the momentum equation, which is valid for any fluid (Newtonian, non-Newtonian, compressible, etc.). It is non-linear due to the ${\displaystyle \displaystyle 2^{nd}}$ term on the LHS. The effect of Newtonian and non-Newtonian properties appears in the formulation of the viscous stresses ${\displaystyle \displaystyle \tau _{ij}}$. ${\displaystyle \displaystyle \tau _{ij}}$ will also introduce non-linearity when the fluid is non-Newtonian.
It should be noted that these formulations are based on the concept of stresses existing in fluids in motion. However, ${\displaystyle \displaystyle \tau _{ij}}$ can also be expressed as momentum transfer per unit area and time, i.e. as a molecular momentum transport term. Derivations based on this concept require a molecular approach (which is lengthy). Students should be aware that ${\displaystyle \displaystyle \tau _{ij}}$ causes momentum transport whenever there is a gradient of velocity.
## Linear momentum equation for Newtonian Fluid: "Navier-Stokes Equation"
For a Newtonian fluid, the viscous stresses are defined as:
${\displaystyle \displaystyle \tau _{ij}=\mu \left[{\frac {\partial U_{i}}{\partial x_{j}}}+{\frac {\partial U_{j}}{\partial x_{i}}}\right]-{\frac {2}{3}}\delta _{ij}\mu {\frac {\partial U_{k}}{\partial x_{k}}}}$
Note that the derivation of this relation is beyond the scope of this course.
Thus, the momentum equation (6) becomes
${\displaystyle \displaystyle \rho {\frac {DU_{i}}{Dt}}=\rho g_{i}-{\frac {\partial P}{\partial x_{i}}}+{\frac {\partial }{\partial x_{j}}}\left[\mu \left({\frac {\partial U_{i}}{\partial x_{j}}}+{\frac {\partial U_{j}}{\partial x_{i}}}\right)-{\frac {2}{3}}\mu \ \delta _{ij}\ {\frac {\partial U_{k}}{\partial x_{k}}}\right]}$
For a flow with constant viscosity (${\displaystyle \displaystyle \mu ={\textrm {constant}}}$):
${\displaystyle \displaystyle \rho {\frac {DU_{i}}{Dt}}=\rho g_{i}-{\frac {\partial P}{\partial x_{i}}}+\mu {\frac {\partial }{\partial x_{j}}}\left[{\frac {\partial U_{i}}{\partial x_{j}}}+{\frac {\partial U_{j}}{\partial x_{i}}}-{\frac {2}{3}}\delta _{ij}\ {\frac {\partial U_{k}}{\partial x_{k}}}\right]}$
since,
${\displaystyle \displaystyle {\frac {\partial ^{2}U_{i}}{\partial x_{j}\partial x_{i}}}={\frac {\partial ^{2}U_{i}}{\partial x_{i}\partial x_{j}}}={\frac {\partial }{\partial x_{j}}}\left({\frac {\partial U_{i}}{\partial x_{i}}}\right)={\frac {\partial }{\partial x_{i}}}\left({\frac {\partial U_{i}}{\partial x_{j}}}\right)}$
then,
${\displaystyle \displaystyle \rho {\frac {DU_{i}}{Dt}}=\rho \ g_{i}-{\frac {\partial P}{\partial x_{i}}}+\mu {\frac {\partial ^{2}U_{i}}{\partial x_{j}x_{j}}}+\mu {\frac {\partial }{\partial x_{i}}}\left({\frac {\partial U_{j}}{\partial x_{j}}}\right)-{\frac {2}{3}}\delta _{ij}\mu {\frac {\partial }{\partial x_{j}}}\left({\frac {\partial U_{k}}{\partial x_{k}}}\right)}$
For an incompressible flow
${\displaystyle \displaystyle {\frac {\partial U_{k}}{\partial x_{k}}}={\frac {\partial U_{j}}{\partial x_{j}}}=0}$
hence assuming that the viscosity is constant, it can be easily shown that the momentum equation reduces to
${\displaystyle \displaystyle \rho {\frac {DU_{i}}{Dt}}=\rho \ g_{i}-{\frac {\partial P}{\partial x_{i}}}+\mu {\frac {\partial ^{2}U_{i}}{\partial x_{j}^{2}}}}$
## Euler's equation: Inviscid flow
When the velocity gradients in the flow are negligible and/or the Reynolds number takes very high values, the viscous stresses can be neglected:
${\displaystyle \displaystyle \displaystyle \tau _{ij}=0}$
Since, the viscous stresses are proportional to viscosity:
${\displaystyle \displaystyle \tau _{ij}\propto \mu }$
for flows where ${\displaystyle \displaystyle \tau _{ij}}$ is neglected, the flow is called frictionless or inviscid, although the fluid itself still has a finite viscosity. Accordingly, the linear momentum equation reduces to
${\displaystyle \displaystyle \rho {\frac {DU_{i}}{Dt}}=\rho g_{i}-{\frac {\partial P}{\partial x_{i}}}.}$
## Euler's equation in streamline coordinates
Differential control volume along streamline coordinates and the forces on it for an inviscid flow
Euler's equation takes a special form along and normal to a streamline, from which one can see the dependence between the pressure, the velocity and the curvature of the streamline.
To obtain Euler's equation in s-direction, apply Newton's second law in s-direction in the absence of viscous forces.
${\displaystyle \displaystyle \rho dV\left[{\frac {\partial U_{s}}{\partial t}}+U_{s}{\frac {\partial U_{s}}{\partial s}}\right]=-{\frac {\partial P}{\partial s}}dV-\rho \ gsin\beta dV}$
Omitting ${\displaystyle \displaystyle dV}$ would deliver
${\displaystyle \displaystyle {\frac {DU_{s}}{Dt}}=-{\frac {1}{\rho }}{\frac {\partial P}{\partial s}}-gsin\beta }$
Since
${\displaystyle \displaystyle sin\beta \approx {\frac {dx_{2}}{ds}}={\frac {\partial x_{2}}{\partial s}}}$
then Euler's equation along a streamline reads
${\displaystyle \displaystyle {\frac {DU_{s}}{Dt}}=-{\frac {1}{\rho }}{\frac {\partial P}{\partial s}}-g{\frac {\partial x_{2}}{\partial s}}\ \ \ (8)}$
For a steady flow and by neglecting body forces,
${\displaystyle \displaystyle {\frac {1}{\rho }}{\frac {\partial P}{\partial s}}=-U_{s}{\frac {\partial U_{s}}{\partial s}}}$
it can be seen that decrease in velocity means increase in pressure as is indicated by the Bernoulli equation.
To obtain Euler's equation in the n-direction, apply Newton's second law in the absence of viscous forces.
${\displaystyle \displaystyle \rho dV{\frac {DU_{n}}{Dt}}=\left(P-{\frac {\partial P}{\partial n}}{\frac {dn}{2}}\right)ds\ dx_{3}-\left(P+{\frac {\partial P}{\partial n}}{\frac {dn}{2}}\right)ds\ dx_{3}-\rho gdVcos\beta }$
${\displaystyle \displaystyle \rho \ {\frac {DU_{n}}{Dt}}=\left(-{\frac {\partial P}{\partial n}}-\rho \ g\ cos\beta \right)}$
Since,
${\displaystyle \displaystyle cos\beta \approx {\frac {dx_{2}}{dn}}={\frac {\partial x_{2}}{\partial n}}}$
Then,
${\displaystyle \displaystyle {\frac {DU_{n}}{Dt}}=-{\frac {1}{\rho }}{\frac {\partial P}{\partial n}}-g\ {\frac {\partial x_{2}}{\partial n}}}$
In general, the acceleration normal to the streamline has a centripetal part and a local (unsteady) part:

${\displaystyle \displaystyle {\frac {DU_{n}}{Dt}}=-{\frac {U_{s}^{2}}{R}}+{\frac {\partial U_{n}}{\partial t}}}$

For a steady flow, the local term vanishes and the normal acceleration of the fluid is directed towards the center of curvature of the streamline:

${\displaystyle \displaystyle {\frac {DU_{n}}{Dt}}=-{\frac {U_{s}^{2}}{R}}}$

Hence,

${\displaystyle \displaystyle {\frac {U_{s}^{2}}{R}}={\frac {1}{\rho }}{\frac {\partial P}{\partial n}}+g{\frac {\partial x_{2}}{\partial n}}}$
For steady flow neglecting body forces, the Euler's equation normal to the streamline is
${\displaystyle \displaystyle {\frac {1}{\rho }}{\frac {\partial P}{\partial n}}={\frac {U_{s}^{2}}{R}}}$
which indicates that pressure increases in a direction outwards from the center of the curvature of the streamlines. In other words, pressure drops towards the center of curvature, which consequently creates a potential difference in terms of pressure and forces the fluid to change its direction. For a straight streamline ${\displaystyle \displaystyle R\rightarrow \infty }$, there is no pressure variation normal to the streamline.
## Bernoulli equation: Integration of Euler's equation along a streamline for a steady flow
${\displaystyle \displaystyle {\frac {1}{\rho }}{\frac {\partial P}{\partial s}}+g{\frac {\partial x_{2}}{\partial s}}=-U_{s}{\frac {\partial U_{s}}{\partial s}}}$
If a fluid particle moves a distance ${\displaystyle \displaystyle ds}$ along a streamline, every variable becomes a function of ${\displaystyle \displaystyle s}$:
${\displaystyle \displaystyle {\frac {\partial P}{\partial s}}ds=dP{\text{:Change in pressure along s,}}}$
${\displaystyle \displaystyle {\frac {\partial x_{2}}{\partial s}}ds=dx_{2}\ {\text{:Change in elevation along s,}}}$
${\displaystyle \displaystyle {\frac {\partial U_{s}}{\partial s}}ds=dU_{s}\ {\text{:Change in velocity along s.}}}$
Integration of the Euler equation between two locations, 1 and 2, along ${\displaystyle \displaystyle s}$ reads
${\displaystyle \displaystyle \int _{1}^{2}\left({{\frac {1}{\rho }}{\frac {\partial P}{\partial s}}+g{\frac {\partial x_{2}}{\partial s}}+U_{s}{\frac {\partial U_{s}}{\partial s}}}\right)ds=0}$
For incompressible flow ${\displaystyle \displaystyle \rho ={\textrm {constant}}}$ and after changing the notation as: ${\displaystyle \displaystyle U=U_{s}}$ and ${\displaystyle \displaystyle z=x_{2}}$, the integration results in
${\displaystyle \displaystyle {\frac {P_{2}-P_{1}}{\rho }}+g(z_{2}-z_{1})+{\frac {U_{2}^{2}-U_{1}^{2}}{2}}=0{\textrm {\ along\ s}}}$
or in its most beloved form:
${\displaystyle \displaystyle {\frac {P_{1}}{\rho }}+gz_{1}+{\frac {U_{1}^{2}}{2}}={\frac {P_{2}}{\rho }}+gz_{2}+{\frac {U_{2}^{2}}{2}}}$
In other words along a streamline:
${\displaystyle \displaystyle {\frac {P}{\rho }}+gz+{\frac {U^{2}}{2}}={\textrm {constant}}}$
Note that due to the assumptions made during the derivation, the following restrictions apply to this equation: the flow should be steady, incompressible and frictionless, and the equation is valid only along a streamline.
### Different forms of Bernoulli equation
The common forms of Bernoulli equation are as follows:
Energy form (per unit mass)
${\displaystyle \displaystyle \underbrace {\frac {U^{2}}{2}} _{\text{kinetic energy}}+\underbrace {\frac {P}{\rho }} _{\text{pressure energy}}+\underbrace {gz} _{\text{potential energy}}=\zeta }$
Pressure form
${\displaystyle \displaystyle \underbrace {\rho {\frac {U^{2}}{2}}} _{\text{dynamic pressure}}+\underbrace {P} _{\text{static pressure}}+\underbrace {\rho gz} _{\text{geodesic pressure}}=K}$
Head form
${\displaystyle \displaystyle \underbrace {\frac {U^{2}}{2g}} _{\text{velocity head}}+\underbrace {\frac {P}{\rho g}} _{\text{pressure head}}+\underbrace {z} _{\text{geodesic head}}=k}$
### Static, stagnation and dynamic pressures
How do we measure pressure? When the streamlines are parallel to the wall we can use pressure taps.
If the measured location is far from the wall, static pressure measurements can be made by a static pressure probe. The stagnation pressure is the value obtained when a flowing fluid is decelerated to zero velocity by a frictionless flow process. The stagnation pressure can be calculated as follows:
${\displaystyle \displaystyle {\frac {P}{\rho }}+{\frac {U^{2}}{2}}=\;{\textrm {constant}}}$
${\displaystyle \displaystyle {\frac {P_{0}}{\rho }}+{\frac {U_{0}^{2}}{2}}={\frac {P_{1}}{\rho }}+{\frac {U_{1}^{2}}{2}}}$
When ${\displaystyle \displaystyle U_{0}^{2}=0}$,
${\displaystyle \displaystyle P_{0}=P_{1}+\rho {\frac {U_{1}^{2}}{2}},}$
where the last term is the dynamic pressure.
Figure: Pitot tube used in a racing car
Figure: a) Jet impinging on the wall and stagnating at the point of impingement, b) schematics of a pressure tap in a channel, c) static pressure probe
If we know the pressure difference ${\displaystyle \displaystyle P_{0}-P_{1}}$, we can calculate the ${\displaystyle \displaystyle U_{1}}$ velocity.
${\displaystyle \displaystyle U_{1}={\sqrt {\frac {2\left(P_{0}-P_{1}\right)}{\rho }}}}$
The stagnation pressure is measured in the laboratory using a probe that faces directly into the upstream flow.
Such a probe is called a stagnation pressure probe or Pitot tube. Thus, using a pressure tap and a Pitot tube one can measure the local velocity:
${\displaystyle \displaystyle P_{0}=P_{A}+\rho {\frac {U_{A}^{2}}{2}}}$
${\displaystyle \displaystyle P_{0}-P_{A}=\left(P_{A}+\rho {\frac {U_{A}^{2}}{2}}\right)-(P_{A})=\rho {\frac {U_{A}^{2}}{2}}}$
Thus, measuring ${\displaystyle \displaystyle P_{0}-P_{A}}$ one can determine ${\displaystyle \displaystyle U_{A}}$.
Pitot-static tube for velocity measurement
However, in the absence of a wall with well defined location, the velocity can be measured by a Pitot-static tube. The pressure is measured at B and C; assuming ${\displaystyle \displaystyle P_{B}=P_{C}}$. Hence, ${\displaystyle \displaystyle U_{B}={\sqrt {\frac {2(P_{0_{B}}-P_{B})}{\rho }}}.}$
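To make the stagnation-pressure relations concrete, here is a minimal R sketch of the Pitot calculation (my own illustration; the function name and the numeric values are assumptions, not from the text):
# Velocity from a Pitot(-static) measurement: U = sqrt(2 * (P0 - P) / rho).
# Minimal sketch in SI units; the example numbers below are made up.
pitot_velocity <- function(p_stagnation, p_static, rho) {
  dp <- p_stagnation - p_static
  stopifnot(dp >= 0, rho > 0)  # a negative dp would indicate a misaligned probe
  sqrt(2 * dp / rho)
}
# Illustrative example: air (rho ~ 1.2 kg/m^3) and a 250 Pa pressure difference
pitot_velocity(p_stagnation = 101575, p_static = 101325, rho = 1.2)
#> [1] 20.41241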
Euler's equation along a streamline is:
${\displaystyle \displaystyle -{\frac {1}{\rho }}{\frac {\partial P}{\partial s}}-g{\frac {\partial z}{\partial s}}={\frac {DU_{s}}{Dt}}=U_{s}{\frac {\partial U_{s}}{\partial s}}+{\frac {\partial U_{s}}{\partial t}}}$
along ds,
${\displaystyle \displaystyle -{\frac {1}{\rho }}{\frac {\partial P}{\partial s}}ds-g{\frac {\partial z}{\partial s}}ds=U_{s}{\frac {\partial U_{s}}{\partial s}}ds+{\frac {\partial U_{s}}{\partial t}}ds}$
hence,
${\displaystyle \displaystyle -{\frac {1}{\rho }}dP-gdz=U_{s}dU_{s}+{\frac {\partial U_{s}}{\partial t}}ds.}$
Integration between two points along a streamline is:
${\displaystyle \displaystyle -\int _{1}^{2}{\frac {dP}{\rho }}-\int _{1}^{2}{g}dz=\int _{1}^{2}{U_{s}dU_{s}}+\int _{1}^{2}{\frac {\partial U_{s}}{\partial t}}ds}$
For incompressible flow, ${\displaystyle \displaystyle \rho ={\textrm {constant}}}$, thus the integral reads
${\displaystyle \displaystyle {\frac {P_{1}-P_{2}}{\rho }}+g(z_{1}-z_{2})={\frac {U_{2}^{2}-U_{1}^{2}}{2}}+\int _{1}^{2}{{\frac {\partial U_{s}}{\partial t}}ds}}$
The unsteady Bernoulli equation involves the integration of the time derivative of the velocity between two points:
${\displaystyle \displaystyle {\frac {P_{1}}{\rho }}+gz_{1}+{\frac {U_{1}^{2}}{2}}={\frac {P_{2}}{\rho }}+gz_{2}+{\frac {U_{2}^{2}}{2}}+\int _{1}^{2}{{\frac {\partial U_{s}}{\partial t}}ds}}$
|
{}
|
# Definition:Sequence of n Terms
## Definition
A sequence of $n$ terms is a (finite) sequence whose length is $n$.
|
{}
|
Definitions
Covariant transformation
In physics, a covariant transformation is a rule (specified below) that describes how certain physical entities change under a change of coordinate system. In particular the term is used for vectors and tensors. The transformation that describes the new basis vectors in terms of the old basis is defined as a covariant transformation. Conventionally, indices identifying the basis vectors are placed as lower indices, and so are all entities that transform in the same way. The inverse of the covariant transformation is called the contravariant transformation. Vectors transform according to the covariant rule, but the components of a vector transform according to the contravariant rule. Conventionally, indices identifying the components of a vector are placed as upper indices, and so are all indices of entities that transform in the same way. The summation over all indices of a product with the same lower and upper indices is invariant under a transformation.
A vector itself is a geometrical quantity, in principle independent (invariant) of the chosen coordinate system. A vector v is given, say, in components $v^i$ on a chosen basis $\mathbf{e}_i$, related to a coordinate system $x^i$ (the basis vectors are tangent vectors to the coordinate grid). On another basis, say $\mathbf{e}'_i$, related to a new coordinate system $x'^i$, the same vector v has different components $v'^i$ and
$\mathbf{v} = \sum_i v^i \mathbf{e}_i = \sum_i v'^i \mathbf{e}'_i$
(in the so-called Einstein notation the summation sign is often omitted, implying summation over the same upper and lower indices occurring in a product). With v invariant and the $\mathbf{e}_i$ transforming covariantly, the $v^i$ (the set of numbers identifying the components) must transform in a different way, the inverse of it, called the contravariant transformation rule.
If, for example in a 2-dim Euclidean space, the new basis vectors are rotated to the right with respect to the old basis vectors, then it will appear in terms of the new system that the components of the vector look as if the vector was rotated to the left (see figure).
A vector v is described in a given coordinate grid (black lines) on a basis of tangent vectors to the (here rectangular) coordinate grid. The components are $v_x$ and $v_y$. In another coordinate system (dashed and red), the new basis vectors are tangent vectors in the radial direction and perpendicular to it. They appear rotated clockwise with respect to the first basis. The covariant transformation here is a clockwise rotation. The components in red are indicated as $v_r$ and $v_\phi$. If we view the new components with $v_r$ pointed upwards, it appears as if the components are rotated to the left. The contravariant transformation is an anticlockwise rotation.
Examples of covariant transformation
The derivative of a function transforms covariantly
The explicit form of a covariant transformation is best introduced with the transformation properties of the derivative of a function. Consider a scalar function f (like the temperature in a space) defined on a set of points p, identifiable in a given coordinate system $x^i,\ i=0,1,\ldots$ (such a collection is called a manifold). If we adopt a new coordinate system $x'^j,\ j=0,1,\ldots$ then for each i, the original coordinate $x^i$ can be expressed as a function of the new system, so $x^i(x'^j)$. One can express the derivative of f in the new coordinates in terms of the old coordinates, using the chain rule, as
$\frac{\partial f}{\partial x'^i} = \sum_j \frac{\partial f}{\partial x^j} \, \frac{\partial x^j}{\partial x'^i}$
This is the explicit form of the covariant transformation rule. The notation of a normal derivative with respect to the coordinates sometimes uses a comma, as follows
$f_{,i} \stackrel{\mathrm{def}}{=} \frac{\partial f}{\partial x^i}$
where the index i is placed as a lower index, because of the covariant transformation.
Basis vectors transform covariantly
A vector can be expressed in terms of basis vectors. For a certain coordinate system, we can choose the vectors tangent to the coordinate grid. This basis is called the coordinate basis.
To illustrate the transformation properties, consider again the set of points p, identifiable in a given coordinate system $x^i$ where $i=0,1,\ldots$ (manifold). A scalar function f, that assigns a real number to every point p in this space, is a function of the coordinates $f(x^0,x^1,\ldots)$. A curve is a one-parameter collection of points c, say with curve parameter λ, c(λ). A tangent vector v to the curve is the derivative $df/d\lambda$ along the curve, with the derivative taken at the point p under consideration. Note that we can see the tangent vector as an operator which can be applied to a function
$\mathbf{v}[f] \stackrel{\mathrm{def}}{=} \frac{df}{d\lambda} = \frac{d}{d\lambda} f(c(\lambda))$
The parallel between the tangent vector and the operator can also be worked out in coordinates
$\mathbf{v}[f] = \sum_i \frac{dx^i}{d\lambda} \frac{\partial f}{\partial x^i}$
or in terms of operators $\partial/\partial x^i$
$\mathbf{v} = \frac{dx^i}{d\lambda} \frac{\partial}{\partial x^i} = \frac{dx^i}{d\lambda} \mathbf{e}_i$
where we have written $\mathbf{e}_i = \partial/\partial x^i$, the tangent vectors to the curves which are simply the coordinate grid itself.
If we adopt a new coordinate system $x'^i,\ i=0,1,\ldots$ then for each i, the old coordinate $x^i$ can be expressed as a function of the new system, so $x^i(x'^j)$. Let $\mathbf{e}'_i = \partial/\partial x'^i$ be the basis of tangent vectors in this new coordinate system. We can express $\mathbf{e}_i$ in the new system by applying the chain rule on x. As a function of coordinates we find the following transformation
$\mathbf{e}'_i = \frac{\partial}{\partial x'^i} = \frac{\partial x^j}{\partial x'^i} \frac{\partial}{\partial x^j} = \frac{\partial x^j}{\partial x'^i} \mathbf{e}_j$
which indeed is the same as the covariant transformation for the derivative of a function.
Contravariant transformation
The components of a (tangent) vector transform in a different way, called the contravariant transformation. Consider a tangent vector v and call its components $v^i$ on a basis $\mathbf{e}_i$. On another basis $\mathbf{e}'_i$ we call the components $v'^i$, so
$\mathbf{v} = \sum_i v^i \mathbf{e}_i = \sum_i v'^i \mathbf{e}'_i$
in which
$v^i = \frac{dx^i}{d\lambda} \quad \mbox{and} \quad v'^i = \frac{dx'^i}{d\lambda}$
If we express the new components in terms of the old ones, then
$v'^i = \frac{dx'^i}{d\lambda} = \frac{\partial x'^i}{\partial x^j} \frac{dx^j}{d\lambda} = \frac{\partial x'^i}{\partial x^j} v^j$
This is the explicit form of the transformation called the contravariant transformation, and we note that it is different from, and just the inverse of, the covariant rule. In order to distinguish them from the covariant (tangent) vectors, the index is placed on top.
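As a quick numerical check of the two rules (my own R sketch, not part of the original article): rotate the basis of the 2-dim Euclidean example and verify that the components must transform with the inverse matrix for the vector to remain invariant.
# Sketch: a rotated basis (covariant change of basis) forces the components
# of a fixed vector to transform with the inverse rotation (contravariant).
theta <- pi / 6
B <- matrix(c(cos(theta), sin(theta),
              -sin(theta), cos(theta)), nrow = 2)  # new basis vectors as columns
v <- c(1, 2)             # components v^i on the old (standard) basis
v_new <- solve(B) %*% v  # contravariant rule: components transform with B^{-1}
B %*% v_new              # reassembles the invariant vector: c(1, 2) again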
Differential forms transform contravariantly
An example of a contravariant transformation is given by a differential form df. For f as a function of coordinates $x^i$, df can be expressed in terms of $dx^i$. The differentials dx transform according to the contravariant rule since
$dx'^i = \sum_j \frac{\partial x'^i}{\partial x^j} dx^j$
Dual properties
Entities that transform covariantly (like basis vectors) and the ones that transform contravariantly (like components of a vector and differential forms) are "almost the same" and yet they are different. They have "dual" properties. What is behind this is mathematically known as the dual space, which always goes together with a given linear vector space.
Take any linear vector space T and let $\mathbf{e}_i$ be a basis for this space. Consider a linear real-valued function defined on this linear space. If v and w are two vectors in this vector space, then a real function f (with vectors as argument) is called a linear function if, for any v, w and scalar α, both
$f(\mathbf{v}+\mathbf{w}) = f(\mathbf{v}) + f(\mathbf{w})$
$f(\alpha \mathbf{v}) = \alpha f(\mathbf{v})$
A simple example is the function which assigns the value of one of its components (the so-called projection function). It has a vector as argument and assigns a real number, the value of a component.
All such linear functions together form a linear space by themselves. It is called the dual space of T. One can easily see that, indeed, the sum f+g is again a linear function for linear f and g, by applying f+g to a sum v + w. And the same holds for scalar multiplication αf.
We can define a basis, called the dual basis, in this space in a natural way by taking the set of linear functions mentioned above: the projection functions. So those functions ω that produce the number 1 when they are applied to one of the basis vectors $\mathbf{e}_i$. For example $\omega^0$ gives 1 on $\mathbf{e}_0$ and zero elsewhere. Applying this linear function $\omega^0$ to a vector $\mathbf{v} = v^i \mathbf{e}_i$ gives (using its linearity)
$\omega^0(\mathbf{v}) = \omega^0(v^i \mathbf{e}_i) = v^i \omega^0(\mathbf{e}_i) = v^0$
so just the value of the first coordinate. For this reason it is called the projection function.
There are as many dual basis vectors $\omega^i$ as there are basis vectors $\mathbf{e}_i$, so the dual space has the same dimension as the linear space itself. It is "almost the same space", except that the elements of the dual space (called dual vectors) transform contravariantly and the elements of the tangent vector space transform covariantly.
Sometimes an extra notation is introduced where the real value of a linear function σ on a tangent vector u is given as
$\sigma[\mathbf{u}] := \langle \sigma, \mathbf{u} \rangle$
where $\langle \sigma, \mathbf{u} \rangle$ is a real number. This notation emphasizes the bilinear character of the form. It is linear in σ, since that is a linear function, and it is linear in u, since that is an element of a vector space.
Co- and contravariant tensor components
Without coordinates
With the aid of the section on dual space, a tensor of rank $\binom{r}{s}$ is simply defined as a real-valued multilinear function of r dual vectors and s vectors in a point p. So a tensor is defined at a point. It is a linear machine: feed it with vectors and dual vectors and it produces a real number. Since vectors (and dual vectors) are defined coordinate-independently, this definition of a tensor is also free of coordinates and does not depend on the choice of a coordinate system. This is the main importance of tensors in physics.
The notation of a tensor is
$T(\sigma, \ldots, \rho, \mathbf{u}, \ldots, \mathbf{v}) \quad \mbox{or} \quad {T^{\sigma \ldots \rho}}_{\mathbf{u} \ldots \mathbf{v}}$
for dual vectors (differential forms) ρ, σ and tangent vectors $\mathbf{u}, \mathbf{v}$. In the second notation the distinction between vectors and differential forms is more obvious.
With coordinates
Because a tensor depends linearly on its arguments, it is completely determined if one knows its values on a basis $\omega^i \ldots \omega^j$ and $\mathbf{e}_k \ldots \mathbf{e}_l$
$T(\omega^i, \ldots, \omega^j, \mathbf{e}_k, \ldots, \mathbf{e}_l) = {T^{i \ldots j}}_{k \ldots l}$
The numbers ${T^{i \ldots j}}_{k \ldots l}$ are called the components of the tensor on the chosen basis.
If we choose another basis (each new basis vector a linear combination of the original basis vectors), we can use the linear properties of the tensor, and we will find that the tensor components with upper indices transform as dual vectors (so contravariantly), whereas the lower indices transform as the basis of tangent vectors and are thus covariant. For a tensor of rank 2, we can easily verify that
$A'_{ij} = \frac{\partial x^l}{\partial x'^i} \frac{\partial x^m}{\partial x'^j} A_{lm}$ (covariant tensor)
$A'^{ij} = \frac{\partial x'^i}{\partial x^l} \frac{\partial x'^j}{\partial x^m} A^{lm}$ (contravariant tensor)
For a mixed co- and contravariant tensor of rank 2
${A'^i}_j = \frac{\partial x'^i}{\partial x^l} \frac{\partial x^m}{\partial x'^j} {A^l}_m$ (mixed co- and contravariant tensor)
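For a linear change of coordinates these rank-2 rules reduce to plain matrix products; the following R sketch (my own, with an arbitrary invertible Jacobian) illustrates all three cases.
# Sketch: rank-2 transformation rules as matrix products for a linear change
# of coordinates x' = K x, so K[i, j] = dx'^i/dx^j and J = solve(K) holds
# J[l, i] = dx^l/dx'^i. K and A below are arbitrary illustrative choices.
K <- matrix(c(2, 0, 1, 1), nrow = 2)  # Jacobian dx'/dx (invertible)
J <- solve(K)                         # inverse Jacobian dx/dx'
A <- matrix(1:4, nrow = 2)            # components of a rank-2 tensor
t(J) %*% A %*% J                      # covariant rule:     A'_{ij}
K %*% A %*% t(K)                      # contravariant rule: A'^{ij}
K %*% A %*% J                         # mixed rule:         A'^i_j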
|
{}
|
# Definition of Projection. Meaning of Projection. Synonyms of Projection
## Definition of Projection
projection
Orthographic Or`tho*graph"ic, Orthographical Or`tho*graph"ic*al, a. [Cf. F. orthographique, L. orthographus, Gr. ?.]
1. Of or pertaining to orthography, or right spelling; also, correct in spelling; as, orthographical rules; the letter was orthographic.
2. (Geom.) Of or pertaining to right lines or angles.
Orthographic or Orthogonal projection: that projection which is made by drawing lines, from every point to be projected, perpendicular to the plane of projection. Such a projection of the sphere represents its circles as seen in perspective by an eye supposed to be placed at an infinite distance, the plane of projection passing through the center of the sphere perpendicularly to the line of sight.
## Meaning of Projection from wikipedia
- Projection, projections or projective may refer to: The display of images by a projector Map projection, reducing the surface of a three-dimensional planet...
- map projections. All map projections necessarily distort the surface in some fashion. There is no limit to the number of possible map projections. Projections...
- Psychological projection is a defence mechanism in which the human ego defends itself against unconscious impulses or qualities (both positive and negative)...
- The vector projection of a vector a on (or onto) a nonzero vector b (also known as the vector component or vector resolution of a in the direction of b)...
- equirectangular projection (also called the equidistant cylindrical projection, geographic projection, or la carte parallélogrammatique projection, and which...
- Orthographic projection (sometimes orthogonal projection) is a means of representing three-dimensional objects in two dimensions. It is a form of parallel...
- Astral projection (or astral travel) is a term used in esotericism to describe an intentional out-of-body experience (OBE) that assumes the existence of...
- In linear algebra and functional analysis, a projection is a linear transformation P {\displaystyle P} from a vector space to itself such that P 2 = P...
- projection categories, each with its own protocol: parallel projection perspective projection Multiview projection (elevation) Axonometric projection...
- The Gall–Peters projection is a rectangular map projection that maps all areas such that they have the correct sizes relative to each other. Like any equal-area...
|
{}
|
Preprint Article Version 1 This version is not peer-reviewed
# Modeling and Analysis of Population Dynamics of Examination Malpractice Among Students
Version 1 : Received: 11 July 2018 / Approved: 11 July 2018 / Online: 11 July 2018 (11:43:03 CEST)
How to cite: Obasi, C.; Obiora, C.; Mbah, G.; Olawuyi, O. Modeling and Analysis of Population Dynamics of Examination Malpractice Among Students. Preprints 2018, 2018070196 (doi: 10.20944/preprints201807.0196.v1).
## Abstract
While examination malpractice has been a recognized social problem among students, the control of the phenomenon remains a challenge. In this study, we formulate a mathematical model describing the population dynamics of examination malpractice among students. Initial insight into the dynamics of the model is gained by analyzing some important mathematical features of the model, such as the basic malpractice number. The malpractice-free equilibrium and endemic equilibrium points of the model are shown to be locally asymptotically stable when the basic malpractice number is less than unity. This result implies that examination malpractice can be totally eradicated among students when the basic malpractice number is less than unity. To understand the impact of controlling this social problem, we extend the model to incorporate awareness campaigns and disciplinary measures as control strategies for curtailing the act. Our analysis reveals that incorporating control strategies has some influence in reducing examination malpractice among students. Further analysis indicates that considering both control strategies simultaneously yields a better result in reducing examination malpractice, and that examination malpractice will grow faster when control strategies are not introduced.
## Subject Areas
Examination malpractice; stability analysis; basic malpractice number
|
{}
|
Table 1
Amsterdam Orphanage Families’ Possession of “Asiatic” Goods by Asset Categories, 1742–1782
| Household total assets | % with tea/coffee | % with porcelain | % with cotton | % with silk | % with chintz |
| --- | --- | --- | --- | --- | --- |
| Assets < 15 (N=250) | 8.0 | 4.8 | 2.8 | 0.0 | 0.4 |
| 15–200 (N=446) | 70.9 | 42.2 | 27.7 | 20.2 | 11.0 |
| Assets > 200 (N=216) | 90.8 | 65.6 | 39.9 | 53.2 | 38.1 |
|
{}
|
# In the Quadratic Sieve, why restrict the factor base?
In the Quadratic Sieve, when factoring a number $N$, many descriptions and most implementations select as the factor base the set of small primes $p_j$ less than some bound $B$ restricted to having Legendre symbol $\left({N\over p_j}\right)=+1$.
Why this restriction of the factor base to primes $p_j$ such that $N$ is a quadratic residue modulo $p_j$? I wonder, because:
• Wikipedia's introductory description of the Quadratic Sieve does not use such restriction, and works; the restriction is introduced in an example, without justification (in the article as it stands now).
• Such restriction considerably lowers the density of integers having all factors in the factor base, arguably making it harder to find $x$ such that $x^2\bmod N$ has all factors in the factor base (or more generally to find $(x,k)$ with $x^2-k\cdot N$ smooth).
• A find including a factor not matching the restriction can help a successful factorization; borrowing the example in Introduction to Mathematical Cryptography section 3.6, trying to factor $N=9788111$ (by sieving $x^2\bmod N$ for $x$ starting at $\lceil\sqrt N\rceil$, by the same algorithm as in Wikipedia), it is found 20 smooths values of $x^2\bmod N$ for $x>\sqrt N$ among which it is selected (by Gaussian elimination) \begin{align} 3129^2\bmod N&=2\cdot5\cdot11\cdot23\\ 3313^2\bmod N&=2\cdot7^2\cdot17\cdot23\cdot31\\ 3449^2\bmod N&=2\cdot5\cdot7^2\cdot11\cdot17\cdot23\\ 4426^2\bmod N&=2\cdot3\cdot47^2\\ 4651^2\bmod N&=3\cdot23\cdot31^3 \end{align} where the last two relations would have been eliminated by the restriction of the factor base, since $\left({N\over 3}\right)=-1$; however, by taking the product of these relations, it is found that $$(3129\cdot3313\cdot3449\cdot4426\cdot4651)^2-(2^2\cdot3\cdot5\cdot7^2\cdot11\cdot17\cdot23^2\cdot31^2\cdot47)^2$$ is a multiple of $N$, from it is found $2741$, a nontrivial factor of $N$, by computing $$\gcd(N,3129\cdot3313\cdot3449\cdot4426\cdot4651-2^2\cdot3\cdot5\cdot7^2\cdot11\cdot17\cdot23^2\cdot31^2\cdot47)$$
Note: For the purpose of the question, please disregard the improvements of including $-1$ in the factor base; or/and allowing use of one or a few primes larger than the bound $B$; or/and (unless relevant) electing to factor $m\cdot N$ for a small multiplier $m$.
• Legendre symbol of N mod p = 0. From my knowledge in NT, zero is excluded from QR. – Robert NACIRI May 28 '16 at 20:32
• @Robert NACIRI: I changed the notation to use $p_j$ rather than $p$, to clarify that I'm talking of primes in the factor base, not a factor of $N$. – fgrieu May 29 '16 at 9:07
I believe Carl Pomerance (the inventor of the quadratic sieve algorithm) gives a great explanation in:
Pomerance, C. (2008). Smooth numbers and the quadratic sieve. In Algorithmic Number Theory Lattices, Number Fields, Curves and Cryptography (pp. 69-81). Cambridge: Cambridge University Press.
The quote below comes from page 72.
"A number $$m$$ is smooth if all of its prime factors are small. Specifically, we say $$m$$ is $$B$$-smooth if all of its prime factors are $$\le B$$. The first observation is that if a number in our sequence is not smooth, then it is unlikely it will be used in a subsequence with product a square. Indeed, if the number $$m$$ is divisible by the large prime $$p$$, then if $$m$$ is to be used in the square subsequence, then there necessarily must be at least one other term $$m'$$ in the subsequence which also is divisible by $$p$$. (This other term $$m'$$ may be $$m$$ itself, that is, perhaps $$p^2 | m$$.) But given that $$p$$ is large, multiples of $$p$$ will be few and far between, and finding this mate for $$m$$ will not be easy. So, say we agree to choose some cut off $$B$$, and discard any number from the sequence that is not $$B$$-smooth." [emphasis added]
This is in agreement with the answer given by @fgrieu.
The condition $\genfrac(){}{}{a}{b}=+1$ doesn't matter, as the product of 2 QNR is a QR. In fact the idea of looking for numbers satisfying $x^2-y^2=k\cdot n$ was first introduced by Gauss (1801) and was developed by Kraitchik (1920) by searching for special sequences of the form $Q(x_i)=x_i^2 -n$ which factorize over small primes, $\prod Q(x_i)=v^2$. In the example you gave the number of QNR factors is even, so in the product only QR factors are present. Selecting larger primes outside the factor base needs additional resources in storage and computation and doesn't give additional advantage in the form of the Fermat difference.
• So why is it customary to restrict a factor base to QR in any serious implementation or article? – fgrieu May 27 '16 at 11:39
• @fgrieu Right! But some strategies are more or less effective than others. Recall that the main objective is that $\prod Q(x_i)=v^2$ is QR, independently of each $x_i$, in order to satisfy $\prod x_i^2-v^2=0$ [n] – Robert NACIRI May 27 '16 at 13:46
• (As an aside, your link to Wolfram says Legendre symbol, not Jacobi symbol.) HAC 3.2.6 says "Thus the factor base need only contain those primes p for which the Legendre symbol (n/p) is 1", Doesn't that fully answer your question? – Mok-Kong Shen May 28 '16 at 7:32
• @Mok-Kong Shen: your remark helped me fix a serious mistake in the question: I had written $\left({p\over N}\right)=+1$ which requires the Jacobi symbol, rather than $\left({N\over p}\right)=+1$ which is fine with just the Legendre symbol, and is a different filter. Also, your HAC reference helped find a partial answer. Thanks! – fgrieu May 28 '16 at 15:00
Answering my own question, with some degree of uncertainty.
The basic Quadratic Sieve (as in Wikipedia's algorithm and the example quoted in the question) finds smooth integers among $x^2\bmod N$, for $x$ starting at $\lceil\sqrt N\rceil$. Until $x$ reaches $\lceil\sqrt{2N}\rceil$, smooth numbers are searched for among values $x^2-N$. If such a value is divisible by a prime $p_j$, then $x^2\equiv N\pmod{p_j}$. Assuming $p_j$ is not a factor of $N$, it holds that $\left({N\over p_j}\right)=+1$. Thus the restriction of the factor base to such $p_j$ does not reduce the density of smooth integers within the integers sieved, as long as we are sieving integers of the form $x^2-N$.
In the last two relations shown in the question, $x$ got so large that the values sieved are $x^2-2\cdot N$. If we used a factor base restricted to $p_j$ with $\left({N\over p_j}\right)=+1$, there would be very few finds. Hence, no such restriction is made. However, the algorithm has sieved more than $0.4\sqrt N$ values, and is thus very inefficient. In practice, if $N$ is large enough that QS is justified, and the factor base is reasonably chosen (large enough), $x$ will remain below $\lceil\sqrt{2N}\rceil$, only $x^2-N$ will be sieved, and the restriction can be made.
It seems that popular variants of the QS look for smooths only of the form $x^2-N$ with $x=a\cdot y+b$, for appropriately chosen $a$ and $b$; in particular, such that $a$ divides $(a\cdot y+b)^2-N$. That allows restricting the factor base without lowering the performance of sieving. Restricting the factor base is good because it about halves the number of variables in the system of linear equations to be solved, and about halves the number of smooths to be found before a solution to such a system can be found.
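To illustrate the point at toy scale, here is a small R sketch (my own; modpow, legendre and is_smooth are helper names introduced here, and this only works for tiny N): build the factor base restricted to primes with $\left({N\over p}\right)=+1$ and test $x^2-N$ for smoothness by trial division.
# Toy sketch: factor base restricted to odd primes with (N/p) = +1 via
# Euler's criterion, then trial division of x^2 - N. Tiny N only.
modpow <- function(a, e, m) {    # a^e mod m by square-and-multiply
  r <- 1; a <- a %% m
  while (e > 0) {
    if (e %% 2 == 1) r <- (r * a) %% m
    a <- (a * a) %% m
    e <- e %/% 2
  }
  r
}
legendre <- function(n, p) {     # Euler's criterion, p an odd prime not dividing n
  s <- modpow(n %% p, (p - 1) %/% 2, p)
  if (s == p - 1) -1 else s
}
is_smooth <- function(m, base) { # does m factor completely over the base?
  for (p in base) while (m %% p == 0) m <- m %/% p
  m == 1
}
N <- 9788111                     # the toy N from the question
odd_primes <- c(3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47)
base <- c(2, odd_primes[sapply(odd_primes, function(p) legendre(N, p)) == 1])
x <- ceiling(sqrt(N)):(ceiling(sqrt(N)) + 350)  # stays below sqrt(2 * N) here
x[sapply(x^2 - N, is_smooth, base = base)]      # finds e.g. 3129, 3313 and 3449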
|
{}
|
# Introduction to Network Analysis with R
## Creating static and interactive network graphs
Over a wide range of fields, network analysis has become an increasingly popular tool for scholars to deal with the complexity of the interrelationships between actors of all sorts. The promise of network analysis is the placement of significance on the relationships between actors, rather than seeing actors as isolated entities. The emphasis on complexity, along with the creation of a variety of algorithms to measure various aspects of networks, makes network analysis a central tool for digital humanities.1 This post will provide an introduction to working with networks in R, using the example of the network of cities in the correspondence of Daniel van der Meulen in 1585.
There are a number of applications designed for network analysis and the creation of network graphs such as gephi and cytoscape. Though not specifically designed for it, R has developed into a powerful tool for network analysis. The strength of R in comparison to stand-alone network analysis software is threefold. In the first place, R enables reproducible research that is not possible with GUI applications. Secondly, the data analysis power of R provides robust tools for manipulating data to prepare it for network analysis. Finally, there is an ever growing range of packages designed to make R a complete network analysis tool. Significant network analysis packages for R include the statnet suite of packages and igraph. In addition, Thomas Lin Pedersen has recently released the tidygraph and ggraph packages that leverage the power of igraph in a manner consistent with the tidyverse workflow. R can also be used to make interactive network graphs with the htmlwidgets framework that translates R code to JavaScript.
This post begins with a short introduction to the basic vocabulary of network analysis, followed by a discussion of the process for getting data into the proper structure for network analysis. The network analysis packages have all implemented their own object classes. In this post, I will show how to create the specific object classes for the statnet suite of packages with the network package, as well as for igraph and tidygraph, which is based on the igraph implementation. Finally, I will turn to the creation of interactive graphs with the visNetwork and networkD3 packages.
## Network Analysis: Nodes and Edges
The two primary aspects of networks are a multitude of separate entities and the connections between them. The vocabulary can be a bit technical and even inconsistent between different disciplines, packages, and software. The entities are referred to as nodes or vertices of a graph, while the connections are edges or links. In this post I will mainly use the nomenclature of nodes and edges except when discussing packages that use different vocabulary.
The network analysis packages need data to be in a particular form to create the special type of object used by each package. The object classes for network, igraph, and tidygraph are all based on adjacency matrices, also known as sociomatrices.2 An adjacency matrix is a square matrix in which the column and row names are the nodes of the network. Within the matrix a 1 indicates that there is a connection between the nodes, and a 0 indicates no connection. Adjacency matrices implement a very different data structure than data frames and do not fit within the tidyverse workflow that I have used in my previous posts. Helpfully, the specialized network objects can also be created from an edge-list data frame, which do fit in the tidyverse workflow. In this post I will stick to the data analysis techniques of the tidyverse to create edge lists, which will then be converted to the specific object classes for network, igraph, and tidygraph.
An edge list is a data frame that contains a minimum of two columns, one column of nodes that are the source of a connection and another column of nodes that are the target of the connection. The nodes in the data are identified by unique IDs. If the distinction between source and target is meaningful, the network is directed. If the distinction is not meaningful, the network is undirected. With the example of letters sent between cities, the distinction between source and target is clearly meaningful, and so the network is directed. For the examples below, I will name the source column as “from” and the target column as “to”. I will use integers beginning with one as node IDs.3 An edge list can also contain additional columns that describe attributes of the edges such as a magnitude aspect for an edge. If the edges have a magnitude attribute the graph is considered weighted.
Edge lists contain all of the information necessary to create network objects, but sometimes it is preferable to also create a separate node list. At its simplest, a node list is a data frame with a single column — which I will label as “id” — that lists the node IDs found in the edge list. The advantage of creating a separate node list is the ability to add attribute columns to the data frame such as the names of the nodes or any kind of groupings. Below I give an example of minimal edge and node lists created with the tibble() function.
library(tidyverse)
edge_list <- tibble(from = c(1, 2, 2, 3, 4), to = c(2, 3, 4, 2, 1))
node_list <- tibble(id = 1:4)
edge_list
#> # A tibble: 5 x 2
#> from to
#> <dbl> <dbl>
#> 1 1 2
#> 2 2 3
#> 3 2 4
#> 4 3 2
#> 5 4 1
node_list
#> # A tibble: 4 x 1
#> id
#> <int>
#> 1 1
#> 2 2
#> 3 3
#> 4 4
Compare this to an adjacency matrix with the same data.
#> 1 2 3 4
#> 1 0 1 0 0
#> 2 0 0 1 1
#> 3 0 1 0 0
#> 4 1 0 0 0
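One way to build this adjacency matrix from the minimal edge list above, in base R (a sketch of my own that assumes the integer node IDs run from 1 to n):
# Sketch: construct the adjacency matrix shown above from edge_list,
# assuming the node ids are the integers 1..n as in this example.
n <- nrow(node_list)
adj <- matrix(0L, nrow = n, ncol = n,
              dimnames = list(node_list$id, node_list$id))
adj[cbind(edge_list$from, edge_list$to)] <- 1L
adj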
## Creating edge and node lists
To create network objects from the database of letters received by Daniel van der Meulen in 1585 I will make both an edge list and a node list. This will necessitate the use of the dplyr package to manipulate the data frame of letters sent to Daniel and split it into two data frames or tibbles with the structure of edge and node lists. In this case, the nodes will be the cities from which Daniel’s correspondents sent him letters and the cities in which he received them. The node list will contain a “label” column, containing the names of the cities. The edge list will also have an attribute column that will show the amount of letters sent between each pair of cities. The workflow to create these objects will be similar to that I have used in my brief introduction to R and in geocoding with R. If you would like to follow along, you can find the data used in this post and the R script used on GitHub.
The first step is to load the tidyverse library to import and manipulate the data. Printing out the letters data frame shows that it contains four columns: “writer”, “source”, “destination”, and “date”. In this example, we will only deal with the “source” and “destination” columns.
library(tidyverse)
letters
#> # A tibble: 114 x 4
#> writer source destination date
#> <chr> <chr> <chr> <date>
#> 1 Meulen, Andries van der Antwerp Delft 1585-01-03
#> 2 Meulen, Andries van der Antwerp Haarlem 1585-01-09
#> 3 Meulen, Andries van der Antwerp Haarlem 1585-01-11
#> 4 Meulen, Andries van der Antwerp Delft 1585-01-12
#> 5 Meulen, Andries van der Antwerp Haarlem 1585-01-12
#> 6 Meulen, Andries van der Antwerp Delft 1585-01-17
#> 7 Meulen, Andries van der Antwerp Delft 1585-01-22
#> 8 Meulen, Andries van der Antwerp Delft 1585-01-23
#> 9 Della Faille, Marten Antwerp Haarlem 1585-01-24
#> 10 Meulen, Andries van der Antwerp Delft 1585-01-28
#> # ... with 104 more rows
### Node list
The workflow to create a node list is similar to the one I used to get the list of cities in order to geocode the data in a previous post. We want to get the distinct cities from both the “source” and “destination” columns and then join the information from these columns together. In the example below, I slightly change the commands from those I used in the previous post to have the name for the columns with the city names be the same for both the sources and destinations data frames to simplify the full_join() function. I rename the column with the city names as “label” to adopt the vocabulary used by network analysis packages.
sources <- letters %>%
distinct(source) %>%
rename(label = source)
destinations <- letters %>%
distinct(destination) %>%
rename(label = destination)
To create a single dataframe with a column with the unique locations we need to use a full join, because we want to include all unique places from both the sources of the letters and the destinations.
nodes <- full_join(sources, destinations, by = "label")
nodes
#> # A tibble: 13 x 1
#> label
#> <chr>
#> 1 Antwerp
#> 2 Haarlem
#> 3 Dordrecht
#> 4 Venice
#> 5 Lisse
#> 6 Het Vlie
#> 7 Hamburg
#> 8 Emden
#> 9 Amsterdam
#> 10 Delft
#> 11 The Hague
#> 12 Middelburg
#> 13 Bremen
This results in a data frame with one variable. However, the variable contained in the data frame is not really what we are looking for. The “label” column contains the names of the nodes, but we also want to have unique IDs for each city. We can do this by adding an “id” column to the nodes data frame that contains numbers from one to whatever the total number of rows in the data frame is. A helpful function for this workflow is rowid_to_column(), which adds a column with the values from the row ids and places the column at the start of the data frame.4 Note that rowid_to_column() is a pipeable command, and so it is possible to do the full_join() and add the “id” column in a single command. The result is a nodes list with an ID column and a label attribute.
nodes <- nodes %>% rowid_to_column("id")
nodes
#> # A tibble: 13 x 2
#> id label
#> <int> <chr>
#> 1 1 Antwerp
#> 2 2 Haarlem
#> 3 3 Dordrecht
#> 4 4 Venice
#> 5 5 Lisse
#> 6 6 Het Vlie
#> 7 7 Hamburg
#> 8 8 Emden
#> 9 9 Amsterdam
#> 10 10 Delft
#> 11 11 The Hague
#> 12 12 Middelburg
#> 13 13 Bremen
### Edge list
Creating an edge list is similar to the above, but it is complicated by the need to deal with two ID columns instead of one. We also want to create a weight column that will note the amount of letters sent between each set of nodes. To accomplish this I will use the same group_by() and summarise() workflow that I have discussed in previous posts. The difference here is that we want to group the data frame by two columns — “source” and “destination” — instead of just one. Previously, I have named the column that counts the number of observations per group “count”, but here I adopt the nomenclature of network analysis and call it “weight”. The final command in the pipeline removes the grouping for the data frame instituted by the group_by() function. This makes it easier to manipulate the resulting per_route data frame unhindered.5
per_route <- letters %>%
group_by(source, destination) %>%
summarise(weight = n()) %>%
ungroup()
per_route
#> # A tibble: 15 x 3
#> source destination weight
#> <chr> <chr> <int>
#> 1 Amsterdam Bremen 1
#> 2 Antwerp Delft 68
#> 3 Antwerp Haarlem 5
#> 4 Antwerp Middelburg 1
#> 5 Antwerp The Hague 2
#> 6 Dordrecht Haarlem 1
#> 7 Emden Bremen 1
#> 8 Haarlem Bremen 2
#> 9 Haarlem Delft 26
#> 10 Haarlem Middelburg 1
#> 11 Haarlem The Hague 1
#> 12 Hamburg Bremen 1
#> 13 Het Vlie Bremen 1
#> 14 Lisse Delft 1
#> 15 Venice Haarlem 2
Like the node list, per_route now has the basic form that we want, but we again have the problem that the “source” and “destination” columns contain labels rather than IDs. What we need to do is link the IDs that have been assigned in nodes to each location in both the “source” and “destination” columns. This can be accomplished with another join function. In fact, it is necessary to perform two joins, one for the “source” column and one for “destination.” In this case, I will use a left_join() with per_route as the left data frame, because we want to maintain the number of rows in per_route. While doing the left_join, we also want to rename the two “id” columns that are brought over from nodes. For the join using the “source” column I will rename the column as “from”. The column brought over from the “destination” join is renamed “to”. It would be possible to do both joins in a single command with the use of the pipe. However, for clarity, I will perform the joins in two separate commands. Because the join is done across two commands, notice that the data frame at the beginning of the pipeline changes from per_route to edges, which is created by the first command.
edges <- per_route %>%
left_join(nodes, by = c("source" = "label")) %>%
rename(from = id)
edges <- edges %>%
left_join(nodes, by = c("destination" = "label")) %>%
rename(to = id)
Now that edges has “from” and “to” columns with node IDs, we need to reorder the columns to bring “from” and “to” to the left of the data frame. Currently, the edges data frame still contains the “source” and “destination” columns with the names of the cities that correspond with the IDs. However, this data is superfluous, since it is already present in nodes. Therefore, I will only include the “from”, “to”, and “weight” columns in the select() function.
edges <- select(edges, from, to, weight)
edges
#> # A tibble: 15 x 3
#> from to weight
#> <int> <int> <int>
#> 1 9 13 1
#> 2 1 10 68
#> 3 1 2 5
#> 4 1 12 1
#> 5 1 11 2
#> 6 3 2 1
#> 7 8 13 1
#> 8 2 13 2
#> 9 2 10 26
#> 10 2 12 1
#> 11 2 11 1
#> 12 7 13 1
#> 13 6 13 1
#> 14 5 10 1
#> 15 4 2 2
The edges data frame does not look very impressive; it is three columns of integers. However, edges combined with nodes provides us with all of the information necessary to create network objects with the network, igraph, and tidygraph packages.
## Creating network objects
The network object classes for network, igraph, and tidygraph are all closely related. It is possible to translate between a network object and an igraph object. However, it is best to keep the two packages and their objects separate. In fact, the capabilities of network and igraph overlap to such an extent that it is best practice to have only one of the packages loaded at a time. I will begin by going over the network package and then move to the igraph and tidygraph packages.
### network
library(network)
The function used to create a network object is network(). The command is not particularly straightforward, but you can always enter ?network() into the console if you get confused. The first argument is — as stated in the documentation — “a matrix giving the network structure in adjacency, incidence, or edgelist form.” The language demonstrates the significance of matrices in network analysis, but instead of a matrix, we have an edge list, which fills the same role. The second argument is a list of vertex attributes, which corresponds to the nodes list. Notice that the network package uses the nomenclature of vertices instead of nodes. The same is true of igraph. We then need to specify the type of data that has been entered into the first two arguments by specifying that the matrix.type is an "edgelist". Finally, we set ignore.eval to FALSE so that our network can be weighted and take into account the number of letters along each route.
routes_network <- network(edges, vertex.attr = nodes, matrix.type = "edgelist", ignore.eval = FALSE)
You can see the type of object created by the network() function by placing routes_network in the class() function.
class(routes_network)
#> [1] "network"
Printing out routes_network to the console shows that the structure of the object is quite different from data-frame style objects such as edges and nodes. The print command reveals information that is specifically defined for network analysis. It shows that there are 13 vertices or nodes and 15 edges in routes_network. These numbers correspond to the number of rows in nodes and edges respectively. We can also see that the vertices and edges both contain attributes such as label and weight. You can get even more information, including a sociomatrix of the data, by entering summary(routes_network).
routes_network
#> Network attributes:
#> vertices = 13
#> directed = TRUE
#> hyper = FALSE
#> loops = FALSE
#> multiple = FALSE
#> bipartite = FALSE
#> total edges= 15
#> missing edges= 0
#> non-missing edges= 15
#>
#> Vertex attribute names:
#> id label vertex.names
#>
#> Edge attribute names:
#> weight
It is now possible to get a rudimentary, if not overly aesthetically pleasing, graph of our network of letters. Both the network and igraph packages use the base plotting system of R. The conventions for base plots are significantly different from those of ggplot2 — which I have discussed in previous posts — and so I will stick with rather simple plots instead of going into the details of creating complex plots with base R. In this case, the only change that I make to the default plot() function for the network package is to increase the size of nodes with the vertex.cex argument to make the nodes more visible. Even with this very simple graph, we can already learn something about the data. The graph makes clear that there are two main groupings or clusters of the data, which correspond to the time Daniel spent in Holland in the first three-quarters of 1585 and after his move to Bremen in September.
plot(routes_network, vertex.cex = 3)
The plot() function with a network object uses the Fruchterman and Reingold algorithm to decide on the placement of the nodes.6 You can change the layout algorithm with the mode argument. Below, I layout the nodes in a circle. This is not a particularly useful arrangement for this network, but it gives an idea of some of the options available.
plot(routes_network, vertex.cex = 3, mode = "circle")
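As an aside to the note above that network objects can be translated to igraph objects: one route is the intergraph package. It is not used elsewhere in this post, so treat the following as an optional sketch that assumes intergraph is installed.
# Optional aside: convert between network and igraph objects with intergraph.
library(intergraph)
routes_converted <- asIgraph(routes_network)  # network -> igraph
class(routes_converted)
#> [1] "igraph"
# asNetwork() performs the reverse conversion (igraph -> network)
detach(package:intergraph)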
### igraph
Let’s now move on to discuss the igraph package. First, we need to clean up the environment in R by removing the network package so that it does not interfere with the igraph commands. We might as well also remove routes_network since we will no longer be using it. The network package can be removed with the detach() function, and routes_network is removed with rm().7 After this, we can safely load igraph.
detach(package:network)
rm(routes_network)
library(igraph)
To create an igraph object from an edge-list data frame we can use the graph_from_data_frame() function, which is a bit more straightforward than network(). There are three arguments in the graph_from_data_frame() function: d, vertices, and directed. Here, d refers to the edge list, vertices to the node list, and directed can be either TRUE or FALSE depending on whether the data is directed or undirected.
routes_igraph <- graph_from_data_frame(d = edges, vertices = nodes, directed = TRUE)
Printing the igraph object created by graph_from_data_frame() to the console reveals similar information to that from a network object, though the structure is more cryptic.
routes_igraph
#> IGRAPH f84c784 DNW- 13 15 --
#> + attr: name (v/c), label (v/c), weight (e/n)
#> + edges from f84c784 (vertex names):
#> [1] 9->13 1->10 1->2 1->12 1->11 3->2 8->13 2->13 2->10 2->12 2->11
#> [12] 7->13 6->13 5->10 4->2
The main information about the object is contained in DNW- 13 15 --. This tells that routes_igraph is a directed network (D) that has a name attribute (N) and is weighted (W). The dash after W tells us that the graph is not bipartite. The numbers that follow describe the number of nodes and edges in the graph respectively. Next, name (v/c), label (v/c), weight (e/n) gives information about the attributes of the graph. There are two vertex attributes (v/c) of name — which are the IDs — and labels and an edge attribute (e/n) of weight. Finally, there is a print out of all of the edges.
Just as with the network package, we can create a plot with an igraph object through the plot() function. The only change that I make to the default here is to decrease the size of the arrows. By default igraph labels the nodes with the label column if there is one or with the IDs.
plot(routes_igraph, edge.arrow.size = 0.2)
Like the network graph before, the default of an igraph plot is not particularly aesthetically pleasing, but all aspects of the plots can be manipulated. Here, I just want to change the layout of the nodes to use the graphopt algorithm created by Michael Schmuhl. This algorithm makes it easier to see the relationship between Haarlem, Antwerp, and Delft, which are three of the most significant locations in the correspondence network, by spreading them out further.
plot(routes_igraph, layout = layout_with_graphopt, edge.arrow.size = 0.2)
### tidygraph and ggraph
The tidygraph and ggraph packages are newcomers to the network analysis landscape, but together the two packages provide real advantages over the network and igraph packages. tidygraph and ggraph represent an attempt to bring network analysis into the tidyverse workflow. tidygraph provides a way to create a network object that more closely resembles a tibble or data frame. This makes it possible to use many of the dplyr functions to manipulate network data. ggraph gives a way to plot network graphs using the conventions and power of ggplot2. In other words, tidygraph and ggraph allow you to deal with network objects in a manner that is more consistent with the commands used for working with tibbles and data frames. However, the true promise of tidygraph and ggraph is that they leverage the power of igraph. This means that you sacrifice few of the network analysis capabilities of igraph by using tidygraph and ggraph.
library(tidygraph)
library(ggraph)
First, let’s create a network object using tidygraph, which is called a tbl_graph. A tbl_graph consists of two tibbles: an edges tibble and a nodes tibble. Conveniently, the tbl_graph object class is a wrapper around an igraph object, meaning that at its basis a tbl_graph object is essentially an igraph object.8 The close link between tbl_graph and igraph objects results in two main ways to create a tbl_graph object. The first is to use an edge list and node list, using tbl_graph(). The arguments for the function are almost identical to those of graph_from_data_frame() with only a slight change to the names of the arguments.
routes_tidy <- tbl_graph(nodes = nodes, edges = edges, directed = TRUE)
The second way to create a tbl_graph object is to convert an igraph or network object using as_tbl_graph(). Thus, we could convert routes_igraph to a tbl_graph object.
routes_igraph_tidy <- as_tbl_graph(routes_igraph)
Now that we have created two tbl_graph objects, let’s inspect them with the class() function. This shows that routes_tidy and routes_igraph_tidy are objects of class "tbl_graph" "igraph", while routes_igraph is object class "igraph".
class(routes_tidy)
#> [1] "tbl_graph" "igraph"
class(routes_igraph_tidy)
#> [1] "tbl_graph" "igraph"
class(routes_igraph)
#> [1] "igraph"
Printing out a tbl_graph object to the console results in a drastically different output from that of an igraph object. It is an output similar to that of a normal tibble.
routes_tidy
#> # A tbl_graph: 13 nodes and 15 edges
#> #
#> # A directed acyclic simple graph with 1 component
#> #
#> # Node Data: 13 x 2 (active)
#> id label
#> <int> <chr>
#> 1 1 Antwerp
#> 2 2 Haarlem
#> 3 3 Dordrecht
#> 4 4 Venice
#> 5 5 Lisse
#> 6 6 Het Vlie
#> # ... with 7 more rows
#> #
#> # Edge Data: 15 x 3
#> from to weight
#> <int> <int> <int>
#> 1 9 13 1
#> 2 1 10 68
#> 3 1 2 5
#> # ... with 12 more rows
Printing routes_tidy shows that it is a tbl_graph object with 13 nodes and 15 edges. The command also prints the first six rows of “Node Data” and the first three of “Edge Data”. Notice too that it states that the Node Data is active. The notion of an active tibble within a tbl_graph object makes it possible to manipulate the data in one tibble at a time. The nodes tibble is activated by default, but you can change which tibble is active with the activate() function. Thus, if I wanted to rearrange the rows in the edges tibble to list those with the highest “weight” first, I could use activate() and then arrange(). Here I simply print out the result rather than saving it.
routes_tidy %>%
activate(edges) %>%
arrange(desc(weight))
#> # A tbl_graph: 13 nodes and 15 edges
#> #
#> # A directed acyclic simple graph with 1 component
#> #
#> # Edge Data: 15 x 3 (active)
#> from to weight
#> <int> <int> <int>
#> 1 1 10 68
#> 2 2 10 26
#> 3 1 2 5
#> 4 1 11 2
#> 5 2 13 2
#> 6 4 2 2
#> # ... with 9 more rows
#> #
#> # Node Data: 13 x 2
#> id label
#> <int> <chr>
#> 1 1 Antwerp
#> 2 2 Haarlem
#> 3 3 Dordrecht
#> # ... with 10 more rows
Since we do not need to further manipulate routes_tidy, we can plot the graph with ggraph. Like ggmap, ggraph is an extension of ggplot2, making it easier to carry over basic ggplot skills to the creation of network plots. As in all network graphs, there are three main aspects to a ggraph plot: nodes, edges, and layouts. The vignettes for the ggraph package helpfully cover the fundamental aspects of ggraph plots. ggraph adds special geoms to the basic set of ggplot geoms that are specifically designed for networks. Thus, there is a set of geom_node and geom_edge geoms. The basic plotting function is ggraph(), which takes the data to be used for the graph and the type of layout desired. Both of the arguments for ggraph() are built around igraph. Therefore, ggraph() can use either an igraph object or a tbl_graph object. In addition, the available layout algorithms primarily derive from igraph. Lastly, ggraph introduces a special ggplot theme that provides better defaults for network graphs than the normal ggplot defaults. The ggraph theme can be set for a series of plots with the set_graph_style() command run before the graphs are plotted or by using theme_graph() in the individual plots. Here, I will use the latter method.
Let’s see what a basic ggraph plot looks like. The plot begins with ggraph() and the data. I then add basic edge and node geoms. No arguments are necessary within the edge and node geoms, because they take the information from the data provided in ggraph().
ggraph(routes_tidy) + geom_edge_link() + geom_node_point() + theme_graph()
As you can see, the structure of the command is similar to that of ggplot with the separate layers added with the + sign. The basic ggraph plot looks similar to those of network and igraph, if not even plainer, but we can use commands similar to those of ggplot to create a more informative graph. We can show the “weight” of the edges — or the number of letters sent along each route — by using width in the geom_edge_link() function. To get the width of the line to change according to the weight variable, we place the argument within an aes() function. In order to control the maximum and minimum width of the edges, I use scale_edge_width() and set a range. I choose a relatively small width for the minimum, because there is a significant difference between the maximum and minimum number of letters sent along the routes. We can also label the nodes with the names of the locations since there are relatively few nodes. Conveniently, geom_node_text() comes with a repel argument that ensures that the labels do not overlap with the nodes in a manner similar to the ggrepel package. I add a bit of transparency to the edges with the alpha argument. I also use labs() to relabel the legend “Letters”.
ggraph(routes_tidy, layout = "graphopt") +
geom_node_point() +
geom_edge_link(aes(width = weight), alpha = 0.8) +
scale_edge_width(range = c(0.2, 2)) +
geom_node_text(aes(label = label), repel = TRUE) +
labs(edge_width = "Letters") +
theme_graph()
In addition to the layout choices provided by igraph, ggraph also implements its own layouts. For example, you can use ggraph's concept of circularity to create arc diagrams. Here, I lay out the nodes in a horizontal line and have the edges drawn as arcs. Unlike the previous plot, this graph indicates directionality of the edges.9 The edges above the horizontal line move from left to right, while the edges below the line move from right to left. Instead of adding points for the nodes, I just include the label names. I use the same width aesthetic to denote the difference in the weight of each edge. Note that in this plot I use an igraph object as the data for the graph, which makes no practical difference.
ggraph(routes_igraph, layout = "linear") +
geom_edge_arc(aes(width = weight), alpha = 0.8) +
scale_edge_width(range = c(0.2, 2)) +
geom_node_text(aes(label = label)) +
labs(edge_width = "Letters") +
theme_graph()
## Interactive network graphs with visNetwork and networkD3
The htmlwidgets set of packages makes it possible to use R to create interactive JavaScript visualizations. Here, I will show how to make graphs with the visNetwork and networkD3 packages. These two packages use different JavaScript libraries to create their graphs. visNetwork uses vis.js, while networkD3 uses the popular d3 visualization library to make its graphs. One difficulty in working with both visNetwork and networkD3 is that they expect edge lists and node lists to use specific nomenclature. The above data manipulation conforms to the basic structure for visNetwork, but some work will need to be done for networkD3. Despite this inconvenience, both packages possess a wide range of graphing capabilities and both can work with igraph objects and layouts.
library(visNetwork)
library(networkD3)
### visNetwork
The visNetwork() function uses a nodes list and edges list to create an interactive graph. The nodes list must include an “id” column, and the edge list must have “from” and “to” columns. The function also plots the labels for the nodes, using the names of the cities from the “label” column in the node list. The resulting graph is fun to play around with. You can move the nodes and the graph will use an algorithm to keep the nodes properly spaced. You can also zoom in and out on the plot and move it around to re-center it.
visNetwork(nodes, edges)
visNetwork can use igraph layouts, providing a large variety of possible layouts. In addition, you can use visIgraph() to plot an igraph object directly. Here, I will stick with the nodes and edges workflow and use an igraph layout to customize the graph. I will also add a variable to change the width of the edge as we did with ggraph. visNetwork() uses column names from the edge and node lists to plot network attributes instead of arguments within the function call. This means that it is necessary to do some data manipulation to get a “width” column in the edge list. The width attribute for visNetwork() does not scale the values, so we have to do this manually. Both of these actions can be done with the mutate() function and some simple arithmetic. Here, I create a new column in edges and scale the weight values by dividing by 5. Adding 1 to the result provides a way to create a minimum width.
edges <- mutate(edges, width = weight/5 + 1)
Once this is done, we can create a graph with variable edge widths. I also choose a layout algorithm from igraph and add arrows to the edges, placing them in the middle of the edge.
visNetwork(nodes, edges) %>%
visIgraphLayout(layout = "layout_with_fr") %>%
visEdges(arrows = "middle")
### networkD3
A little more work is necessary to prepare the data to create a networkD3 graph. To make a networkD3 graph with an edge and node list requires that the IDs be a series of numeric integers that begin with 0. Currently, the node IDs for our data begin with 1, and so we have to do a bit of data manipulation. It is possible to renumber the nodes by subtracting 1 from the ID columns in the nodes and edges data frames. Once again, this can be done with the mutate() function. The goal is to recreate the current columns, while subtracting 1 from each ID. The mutate() function works by creating a new column, but we can have it replace a column by giving the new column the same name as the old column. Here, I name the new data frames with a d3 suffix to distinguish them from the previous nodes and edges data frames.
nodes_d3 <- mutate(nodes, id = id - 1)
edges_d3 <- mutate(edges, from = from - 1, to = to - 1)
It is now possible to plot a networkD3 graph. Unlike visNetwork(), the forceNetwork() function uses a series of arguments to adjust the graph and plot network attributes. The “Links” and “Nodes” arguments provide the data for the plot in the form of edge and node lists. The function also requires “NodeID” and “Group” arguments. The data being used here does not have any groupings, and so I just have each node be its own group, which in practice means that the nodes will all be different colors. In addition, the call below tells the function that the network has “Source” and “Target” fields, and thus is directed. I include in this graph a “Value”, which scales the width of the edges according to the “weight” column in the edge list. Finally, I add some aesthetic tweaks to make the nodes opaque and increase the font size of the labels to improve legibility. The result is very similar to the first visNetwork() plot that I created but with different aesthetic stylings.
forceNetwork(Links = edges_d3, Nodes = nodes_d3, Source = "from", Target = "to",
NodeID = "label", Group = "id", Value = "weight",
opacity = 1, fontSize = 16, zoom = TRUE)
One of the main benefits of networkD3 is that it implements a d3-styled Sankey diagram. A Sankey diagram is a good fit for the letters sent to Daniel in 1585. There are not too many nodes in the data, making it easier to visualize the flow of letters. Creating a Sankey diagram uses the sankeyNetwork() function, which takes many of the same arguments as forceNetwork(). This graph does not require a group argument, and the only other change is the addition of a “unit.” This provides a label for the values that pop up in a tool tip when your cursor hovers over a diagram element.10
sankeyNetwork(Links = edges_d3, Nodes = nodes_d3, Source = "from", Target = "to",
NodeID = "label", Value = "weight", fontSize = 16, unit = "Letter(s)")
## Further reading on Network Analysis
This post has attempted to give a general introduction to creating and plotting network type objects in R using the network, igraph, tidygraph, and ggraph packages for static plots and visNetwork and networkD3 for interactive plots. I have presented this information from the position of a non-specialist in network theory. I have only covered a very small percentage of the network analysis capabilities of R. In particular, I have not discussed the statistical analysis of networks. Happily, there is a plethora of resources on network analysis in general and in R in particular.
The best introduction to networks that I have found for the uninitiated is Katya Ognyanova’s Network Visualization with R. This presents both a helpful introduction to the visual aspects of networks and a more in depth tutorial on creating network plots in R. Ognyanova primarily uses igraph, but she also introduces interactive networks.
There are two relatively recent books published on network analysis with R by Springer. Douglas A. Luke, A User’s Guide to Network Analysis in R (2015) is a very useful introduction to network analysis with R. Luke covers both the statnet suite of packages and igraph. The contents are at a very approachable level throughout. More advanced is Eric D. Kolaczyk and Gábor Csárdi’s Statistical Analysis of Network Data with R (2014). Kolaczyk and Csárdi’s book mainly uses igraph, as Csárdi is the primary maintainer of the igraph package for R. This book gets further into advanced topics on the statistical analysis of networks. Despite the use of very technical language, the first four chapters are generally approachable from a non-specialist point of view.
The list curated by François Briatte is a good overview of resources on network analysis in general. The Networks Demystified series of posts by Scott Weingart is also well worth perusal.
1. One example of the interest in network analysis within digital humanities is the newly launched Journal of Historical Network Research. ↩︎
2. For a good description of the network object class, including a discussion of its relationship to the igraph object class, see Carter Butts, “network: A Package for Managing Relational Data in R”, Journal of Statistical Software, 24 (2008): 1–36 ↩︎
3. This is the specific structure expected by visNetwork, while also conforming to the general expectations of the other packages. ↩︎
4. This is the expected order for the columns for some of the networking packages that I will be using below. ↩︎
5. ungroup() is not strictly necessary in this case. However, if you do not ungroup the data frame, it is not possible to drop the “source” and “destination” columns, as I do later in the script. ↩︎
6. Thomas M. J. Fruchterman and Edward M. Reingold, “Graph Drawing by Force-Directed Placement,” Software: Practice and Experience, 21 (1991): 1129–1164. ↩︎
7. The rm() function is useful if your working environment in R gets disorganized, but you do not want to clear the whole environment and start over again. ↩︎
8. The relationship between tbl_graph and igraph objects is similar to that between tibble and data.frame objects. ↩︎
9. It is possible to have ggraph draw arrows, but I have not shown that here. ↩︎
10. It can take a bit of time for the tool tip to appear. ↩︎
|
{}
|
# Value of a company's CAD title block
## Value of a company's CAD title block
(OP)
If this is an improper posting, please let me know. I'm new to this site, and I do not want to violate your policies.
I am a mechanical engineer and I represent a designer who copied and modified his previous employer's AutoCAD title block to use at his new company. I would like to get members' opinions as to a dollar value of the damages to the employer. The employer claims over $2 million. No drawings were taken, only the title block, and it was modified. Thank you. Again, if this is out of line, I apologize. I hope to contribute to this site in the future.
### RE: Value of a company's CAD title block
I'd bet a dollar they stole it from someone else. Is there a copyright notice on the image he took? Do look carefully. In a CAD file, it could be hidden in super tiny text that plots too small to see, or hidden in wasted space in the file that doesn't plot at all, but is present, and will show up in a dump of the file. If you don't know what that means, find someone who does. Absent a copyright notice, well ... talk to your lawyer anyway.
Mike Halloran
Pembroke Pines, FL, USA
### RE: Value of a company's CAD title block
What damage did the previous company sustain by someone copying their title block?
SW07-SP3
### RE: Value of a company's CAD title block
CorBlimeyLimey - While I don't know the specifics of this case, I can speak from experience as a consultant that your identity as a company can be damaged. Most title blocks for consulting companies are treated like a corporate identity, and over years the company's work is quickly identifiable by a client. When the client is presented with similar drawings and leaner costs, clients figure that they are getting the same quality as the former company but at less cost. So the client dumps the original company. I don't know that there is a copyright on title blocks or title borders, but I do see the perception. And as we know in society today, anyone can sue anybody for just about anything.
Regards,
Qshake
Eng-Tips Forums:Real Solutions for Real Problems Really Quick.
### RE: Value of a company's CAD title block
But unless the title block had a particularly artistic layout which in itself was a "logo" branded to the company, then a title block is a title block is a title block ... assuming no obvious company names or logos or unique fonts were left in the block. You are correct though; anyone can sue anybody for just about anything ... and win!!!
SW07-SP3
### RE: Value of a company's CAD title block
(OP)
Thanks everyone. I appreciate your opinions and your quick response. I'm acting as an expert witness. I'm not personally involved. I agree the guy shouldn't have used his previous employer's file. It is a fairly standard title block with no visible references to the previous employer. Thanks again.
### RE: Value of a company's CAD title block
The simplest thing to do is to collect up a bunch of title blocks from unrelated drawings and show that they're all pretty similar and no reasonable person could associate that title block with that company. Obviously, they didn't spend $2 million designing that gold-plated title block, did they? And if you can show that there's nothing unique about the title block itself, there really can't be any claim of adverse economic impact.
TTFN
Eng-Tips Policies FAQ731-376
### RE: Value of a company's CAD title block
There are various standard title blocks available in AutoCAD and other places. You can, or could, buy paper for hand drafting with preprinted title blocks. If you can show that some other source(s) also used very similar title blocks, it seems that would help.
### RE: Value of a company's CAD title block
Oh, a cool experiment would be to copy a bunch of different title blocks onto a single sheet and ask the jury, or the judge, if they can pick out that company's title block, strictly from the design.
TTFN
Eng-Tips Policies FAQ731-376
### RE: Value of a company's CAD title block
One other way of looking at this...
Technically, your friend has committed the crime of theft. I assume he electronically took the title block from the first firm's data. Without the express permission of that company, he has no right to have any of their data (however small and insignificant he may deem that to be). If the title block he took had the first company's logo, name, address, etc., then I think they will have a very strong case against him personally.
From the point of view of the dollar value of a claim against the second company, there are any number of ways that the first company can substantiate their claim. They can base it on loss of revenue (direct loss of projects and contracts to the second company), or on loss/degradation of corporate image due to customer confusion.
Kevin Hammond
Mechanical Design Engineer
Derbyshire, UK
### RE: Value of a company's CAD title block
I'm sorry, but a title block is not the property of a company. It is more than likely an arbitrary design someone came up with, which is most likely a spin-off of a standard ASME block. The only way this company can claim any damages is if they hold the appropriate copyright paperwork that shows they own that design. If that is the case, then I am afraid to say they should probably go after many other people and companies, as most blocks look almost the same. Without the paperwork they have nothing, and personally I would like to see a copyright office that would award that status to a company's title block...
David
### RE: Value of a company's CAD title block
You can sue anyone for anything.
Patents protect ideas. If you had a patent on the idea of putting a header on a spreadsheet, you could go after anyone who ever put a header of any kind on any spreadsheet. (First you need the time machine to establish precedence, but that's another matter.)
If you put a copyright notice on your customized spreadsheet header and someone copies it without permission, your suit against them could prevail.
Copyrights protect the _expression_ of ideas, not the idea itself. So a header in a different format, i.e. visually distinct, would not infringe your copyright. But if someone copies your work and does not make the copy visually distinct enough to qualify as a different expression of the same idea, _and_ you have applied a copyright notice, you would have a good case. I don't think it's even necessary for the plaintiff to prove that damage resulted; they can just demand an arbitrary license fee, or excision of the offending material from every copy ever made.
Mike Halloran
Pembroke Pines, FL, USA
### RE: Value of a company's CAD title block
I only studied a little law at uni, but I suspect taking the electronic document may count as theft, as others say; unless they have some really weird or wonderful format, or he forgot to remove a logo that was copyrighted, etc., I don't really see the copyright case.
Doesn't mean there isn't one though as the law can seem weird and wonderful at times.
That said if you are in the US then I think you owe it to your client to familiarize yourself with:
ASME Y14.1-1995 Decimal Inch Drawing Sheet Size and Format
If the format broadly complies with this document then I don't see anyway it can be a true copyright issue. Other countries/ISO/metric probably have equivalents.
The only way I can see $2m is if they can prove that they lost/stand to lose that much business, perhaps due to confusion of drawings as another poster said.
I'm getting worried now: I based sections of my current employer's DRM on one from a previous employer. Am I going to get sued?
### RE: Value of a company's CAD title block
(OP)
Mike,
You are right. In this case, when the issue came up the title blocks were immediately replaced. (It was a contract draftsman, not the defendant, who modified the title block.)
The plaintiff is claiming it took him thousands of man-hours to develop the title block, and that his title block design (which includes only standard data) results in a 10% savings on everything he manufactures. These claims are made by an engineer. Several other ridiculous claims are included. Again, there isn't any allegation of theft of actual drawings or part designs; only the title block is at issue.
The real problem is that in civil suits there is no such thing as a reasonable demand. Totally outlandish demands are taken as seriously as reasonable demands, and if the person making these demands (often a "professional" engineer) can lie convincingly he may well prevail. This may not be the proper venue to express this kind of opinion, but in some cases this problem is aggravated by unscrupulous and unethical members of our profession, and I think this is a concern to all of us.
Larry
### RE: Value of a company's CAD title block
I would like to see them prove that he "stole" the drawing. Since he was an engineer he constantly had access to the drawings, so there would be no way for them to "log" a specific time that he had one out and took it. For all they know he could have just remembered what it looked like and re-produced a similar image.
I swear to god that if this company wins the lawsuit, I am walking to my lawyer and generating some suits of my own!
### RE: Value of a company's CAD title block
(OP)
Kenat,
I wouldn't worry. The real issue is that the defendant (little guy) has a better design than the plaintiff (big guy) and the title block issue is just a means to make him spend money defending himself and hopefully put him out of business.
Larry
### RE: Value of a company's CAD title block
(OP)
Sbozy25,
Sorry if I got you upset, although it's sometimes hard not to be. My report is due shortly, but this thing will go on for months. If you are interested, I'll let you know how it ends.
Thanks,
Larry
### RE: Value of a company's CAD title block
Theft? Sure. $2 million in damages? Not so sure.
But now that you mention it, a cool idea would be to use micro-text of the company name as 'lines' in your title block. It'll show up on electronic files and very good printers, so that's maybe worth something. (TM) (R) (C) (patent pending) :)
### RE: Value of a company's CAD title block
I can see the company's entire image costing $2M with all the marketing and business development costs it takes to put their image on the forefront.
If the designer removed that company's digital property without permission he is a thief.
Heckler
Sr. Mechanical Engineer
SWx 2007 SP 3.0 & Pro/E 2001
XP Pro SP2.0 P4 3.6 GHz, 1GB RAM
NVIDIA Quadro FX 1400
o
_`\(,_
(_)/ (_)
(In reference to David Beckham) "He can't kick with his left foot, he can't tackle, he can't head the ball and he doesn't score many goals. Apart from that, he's all right." -- George Best
### RE: Value of a company's CAD title block
I'm curious how the previous employer came to find out that the designer took their title block and used it to create one for his new employer. Is this part of a larger lawsuit where the guy took a lot of confidential drawings/intellectual property to his new job or started his own company designing very similar components to the ones his old employer designed and the previous employer is going after him for that?
It just seems odd that a company would waste so many resources going after the guy for this.
### RE: Value of a company's CAD title block
(OP)
Ataloss,
See my reply to Kenat above. The title block isn't the real issue. The defendant came up with a better design for the actual tool and the plaintiff doesn't want the competition. If I could disclose all of the morbid details the situation would make more sense.
Thanks,
Larry
### RE: Value of a company's CAD title block
(OP)
SylvestreW,
That's a good idea, but now none of us can use it. Maybe a licensing fee? LOL
Larry
### RE: Value of a company's CAD title block
I think copyright law works in reverse, too.
I.e., if you _don't_ put "copyright <year> <holder_name>" or "(C) <year> holder_name" on the "expression" _somewhere_, then anyone is free to copy it without restriction.
That would leave the defendant free and clear, _if_ he had regenerated the title block by, say, measuring a printed copy and building a clone via artwork or a CAD program.
;--
In this case, it appears that the defendant or his agent acquired and modified a computer file, and the action is based on the laws that apply to theft, not copyright law.
I'm conjecturing (as a non-lawyer) that one might be able to build a defense using copyright law, i.e. asserting that the defendant _could_have_ produced the clone by means that are perfectly legal under copyright law ... and offering to compensate the plaintiff for any media that were involved.
At which point, if I were the judge, I'd go after the plaintiff for abuse of process. But I'm not the judge.
Mike Halloran
Pembroke Pines, FL, USA
### RE: Value of a company's CAD title block
LarryEF,
I think the plaintiff should explain in court how he spent thousands of hours designing his title block.
A 10% savings on everything he manufactures is a challenge too. A really big performance improvement usually indicates that they were pretty crummy when they started off.
Could the title block have macros embedded in it? Quite a few years ago, I created some AutoCAD titleblocks and wrote some AutoLISP programs to manipulate them. I cannot claim I improved productivity 10%. I wrote copyright notices on my software. You cannot use it without company permission.
JHG
### RE: Value of a company's CAD title block
Hi,
LarryEF, we're all speaking about "defendants" and "plaintiffs", but did the company actually sue the designer or did it "only" menace a lawsuit?
In the second case, I wouldn't even bother with a stupidity like the company's claim, and just wait for them to take any action (if they dare to...), as far as it seems they have been idiot enough to miss the only point where - perhaps - there could effectively be a claim: the "theft", or call it "copy" or "subtraction" if you prefer, of an electronic document, which is implicitly the property of the company (regardless if there is a copyright note or not, and regardless if the designer had constant access to it or not). At least, this is what would apply in Italy - in other countries, I don't know.
Regards
### RE: Value of a company's CAD title block
(OP)
DrawOH,
You're right. You would think the plaintiff's engineer would have thought about having to defend his claims under oath as you suggest. Evidently some people, even intelligent people, make claims like this casually and without thinking ahead.
On the other hand, I've seen cases where the evidence supporting one party was totally overwhelming and indisputable, but since the judge didn't understand the technical issues, he just said what-the-hell and split the difference rather than make the effort. King Solomon's approach isn't always right.
Larry
### RE: Value of a company's CAD title block
If it's of help I just spent some time creating a new format for a 'non drawing' CAD file. I also sit with the guy that created our templates based closely on ASME Y14.1-1995 "Decimal Inch Drawing Sheet Size and Format" and I help with their maintenance.
So if you want an estimate of how many hours it may take to create templates I could give some info.
I will say this: if it took him thousands of man-hours just to create the title block, then his business will fail because he's so slow/incompetent/inefficient, not because someone took his title block.
A good CAD title block/drawing format which helps make sure all relevant info is there, and that auto-fills from CAD properties, certainly can save time, but I find it hard to believe it amounts to 10% on the actual product. At a push maybe 10% per drawing, but no way on the actual product.
Did he literally just copy the 2D layout or were there linked file properties etc that auto fill the format?
### RE: Value of a company's CAD title block
(OP)
Cbrn,
### RE: Value of a company's CAD title block
Look on your paper. See that plain white border around the outside of the title block? I invented that. You all owe me big time...
Where should I send this crispy $100 bill with the plain white border around it? Hey, maybe you can just collect from Uncle Sam... <tg>
### RE: Value of a company's CAD title block
LarryEF,
No, I am not upset. I think amused is more the proper term to use here. I am very interested in how this turns out because it seems like such an outlandish claim. No title block drawing in the world, no matter how great it is, can save 10% time. They generally do not contain data that is needed for most manufacturing. They contain data that the suits generally use, i.e. material that needs to be bought, specifications for the part, etc... The only thing I can think manufacturing would need to know is English or metric. Also, as others have said, there is no way that this block took thousands of hours to develop unless the guy is just plain dumb with his CAD system. It should take no more than a few hours at most.
You need to do us all a favor and let us know how this ends, because I think this qualifies as one of the most pointless suits of the year! :)
### RE: Value of a company's CAD title block
So if I put 10 title blocks on a drawing then it draws itself. Excellent.
Cheers
Greg Locock
Please see FAQ731-376 for tips on how to make the best use of Eng-Tips.
### RE: Value of a company's CAD title block
Hi,
Greg, this is the best reply in this thread!!! I've got all my colleagues laughing... 'cause they always get bored when it comes to filling in a titleblock.
LarryEF, I'm sincerely disappointed. That's a shame. Things may go very bad in Italy, but at least a lawsuit like that would have been rejected right at the first hearing (and nobody would have spent 1 MILLION!!!!!!!! for lawyers - they are known to be rich enough even without these absurdities...). I'm also curious to know how it will end, because if this company won, then there would be a very, very, very bad time coming on...
Regards
### RE: Value of a company's CAD title block
Larry, I think you've got plenty of ammunition to describe this as a frivolous suit. There is no technical value to the title block; if there are no distinguishing features, how does anyone know from whence it came? Even if it was taken from someone, they haven't lost anything (they still have the title block, don't they?). The plaintiff should be asked to quantify how his title block saves him an astounding 10% of everything. Indeed, I think you mentioned earlier that the (possibly unconscious) point of the suit is to tie up a competitor and to possibly force them out of business... this is surely an abuse of having more money than sense. I hope you are able to collect a big fee!!
### RE: Value of a company's CAD title block
Is there anything unique about it? You might want to check with a few engineering/architectural firms and get copies of their titleblock to show the courts that they are common. You might want to find out during examination which firms, if any, the principals of the new firm worked for and obtain copies of their titleblocks for comparison... even better if you can show the courts that the same basic information is shown... The only thing that sets my titleblock off from anyone else's is the inclusion of a file number that is consecutive and unique to the drawing, as well as the inclusion of the scale factor for the drawing... and it's possible that hundreds of firms use similar methods. I hope ??? hasn't admitted that he's copied the original with modification... contrition, in particular with litigation, is not good for the soul...
Dik
### RE: Value of a company's CAD title block
Forgot to add... welcome to the site...
Dik
### RE: Value of a company's CAD title block
"they haven't lost anything (they still have the title block, don't they?)"
Mmm, that's the argument that software pirates and music copiers use. As a creative profession we should be fighting for the protection of intellectual property, except where it has been explicitly released for free use. I do both.
Cheers
Greg Locock
Please see FAQ731-376 for tips on how to make the best use of Eng-Tips.
### RE: Value of a company's CAD title block
and an earlier comment about whores... you have it, you sell it, and you still got it... <G>
Dik
### RE: Value of a company's CAD title block
Larry,
If rb's comments above are correct, then the defendant actually has a counter-suit case against the plaintiff for suing for no other reason than to damage the defendant. I forget what the legal term is.
Matt
CAD Engineer/ECN Analyst
Silicon Valley, CA
http://sw.fcsuper.com/index.php
### RE: Value of a company's CAD title block
"I forget what the legal term is."
I believe it's known as SalivatingLawyerItis
### RE: Value of a company's CAD title block
You're all assuming that the claim for damages is based on the cost of the title block itself. Something that large has to be based on economic damages, e.g., loss of sales or confusion of customers. In some cases, this is a valid argument. Harley Davidson won a case of trademark infringement against Suzuki for copying their trademarked exhaust sound.
TTFN
Eng-Tips Policies FAQ731-376
### RE: Value of a company's CAD title block
First prize goes to the self-assembling title blocks...
My own experience is that it is not uncommon for a customer to mandate the drawing format and title block design, all to the grief of the contractor, even requiring their own logos etc. This is extended, in the case of a general contractor on very large projects, to include his title block design in addition to that of the owner. It is not just to collect $2 million from all involved, but to control how the plant data books are assembled and to ensure that the drawings can be retrieved in the various file systems.
Sounds like the injured party is upset about the loss of a customer, rather than the supposed theft of the title block layout, but it was the only weapon at hand.
Title blocks are not commonly copyrighted, but if the first party can successfully show that an electronic file was stolen, as opposed to provided to the second party by the owner, he might have a valid basis for a claim of theft, but not for managing to duplicate a layout.
my 2 cents
### RE: Value of a company's CAD title block
Theft is not something you can sue for per se. You sue for damages. If someone stole your car and wrecked it, you would sue for the loss of the car. The big company can say and prove how the person got their title block, but what are their damages?
The design cost of the block is not a damage, because they still have it. But if there is a loss of future revenue because the average person thinks the small guy is really the big guy, then the big guy has been damaged.
|
{}
|
# Conditions for an additive value function
In a standard problem of fair cake-cutting, there is a real interval which is called "cake", and it has to be divided among $$n$$ partners. Each partner $$i$$ has a subjective value function $$v_i$$, which is an additive function on subsets of the cake. This means that, for every two disjoint subsets $$A$$ and $$B$$:
$$v_i(A\cup B)=v_i(A)+v_i(B)$$
Suppose that, instead of a value function, each partner has a preference relation $$\succeq_i$$.
A preference relation $$\succeq_i$$ is represented by a value function $$v_i$$ iff:
$$A\succeq_i B \iff v_i(A)\geq v_i(B)$$
What properties on the preference relation guarantee that it can be represented by an additive value function?
NOTE: The Wikipedia page Ordinal utility describes some conditions under which a preference relation can be represented by an additive value function. But, it deals with preferences on bundles of homogeneous goods. Here, the preferences are on subsets of a heterogeneous good.
EXAMPLES:
$$u_1(A) = \text{len}(A)^2$$
$$u_1$$ is not additive, but the preference relation it represents can be represented by the additive function $$v_1(A) = \text{len}(A)$$.
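To see why, note that squaring is strictly increasing on non-negative lengths, so the two functions order any pair of subsets identically:
$$u_1(A)\geq u_1(B) \iff \text{len}(A)^2\geq \text{len}(B)^2 \iff \text{len}(A)\geq \text{len}(B) \iff v_1(A)\geq v_1(B)$$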
$$u_2(A) = \min[\text{len}(A\cap[0,4]),\text{len}(A\cap[4,8])]$$
The preference relation represented by $$u_2$$ cannot be represented by an additive function. Proof: suppose by contradiction that the preference relation is represented by an additive function $$v_2$$. Then, because:
$$u_2([0,1])=u_2([4,5])=u_2(\emptyset)$$
this must also be true for $$v_2$$:
$$v_2([0,1])=v_2([4,5])=v_2(\emptyset)$$
By additivity (and since an additive function must satisfy $$v_2(\emptyset)=v_2(\emptyset\cup\emptyset)=2v_2(\emptyset)$$, i.e. $$v_2(\emptyset)=0$$):
$$v_2([0,1]\cup[4,5])=v_2([0,1])+v_2([4,5])=2v_2(\emptyset)=v_2(\emptyset)$$
This must also be true for $$u_2$$:
$$u_2([0,1]\cup[4,5])=u_2(\emptyset)$$
But $$u_2([0,1]\cup[4,5])=\min(1,1)=1$$, while $$u_2(\emptyset)=0$$, a contradiction.
• Could you give an example where it cannot be done? Sep 22 '15 at 9:24
• @denesp added an example. Sep 22 '15 at 11:36
• I think @Nick comment in his answer is on point : why not speak of "utility function" instead of "value function" (which traditionally has many other meanings in econ, e.g. in intertemporal optimization)? Sep 23 '15 at 21:30
• @MartinVanderLinden in some books, "value" means an ordinal function which represents preferences on sure outcomes, and "utility" means a cardinal function which represents preferences on lotteries. This is not consistent, though. Sep 24 '15 at 13:54
• ok I see, I don't know how I feel about that terminology, but thanks for clarifying. Sep 24 '15 at 13:57
This is only a partial answer because it does not exactly fit your framework, but I hope it will still be helpful (and it's too long for a comment).
If you are ok with discretizing your cake into (possibly arbitrarily small) pieces, then you will find an answer in
• Kraft, C. H., Pratt, J. W., & Seidenberg, A. (1959). Intuitive Probability on Finite Sets. The Annals of Mathematical Statistics, 30(2), 408–419.
the bulk of which is very well summarized in the introduction of
• Fishburn, P. C. (1996). Finite linear qualitative probability. Journal of Mathematical Psychology, 40(1), 64–77.
Although the setup of the papers is in terms of probability judgments, it can be reinterpreted from a preference point of view as follows:
• A finite set of objects $S = \{1,2,\dots,n\}$ (in your problem $S$ could contain the pieces of the cake)
• A preference relation $\succeq$ over $2^S$ the set of subsets of $S$.
• The question : when is $\succeq$ representable by an additive utility function $U$ on $2^S$.
A classical conjecture of de Finetti was that the following conditions should suffice (here I follow the presentation in Fishburn (1996)):
• (Order) : $\succeq$ on $2^S$ is a weak order,
• (Nonnegativity) : $A \succeq \emptyset$ for every $A \in 2^S$,
• (Nontriviality) : $S \succ \emptyset$,
• (Additivity) : For all $A,B,C \in 2^S$, if $(A\cup B) \cap C = \emptyset$, then $[A \succ B] \Leftrightarrow [(A\cup C) \succ (B\cup C)]$.
de Finetti observed that these were necessary but could not determine whether they were sufficient. Eventually, Kraft, Pratt & Seidenberg (1959) provided a counter-example, as well as an additional condition which, together with the other four, implies the existence of an additive representation:
• (Strong additivity) : For all $m\geq 2$ and all $A_j,B_j \in 2^S$, if $(A_1,\dots,A_m)$ and $(B_1,\dots,B_m)$ contain the same number of replicas of each element of $S$ (i.e. if $s_1$ appears three times across the $A_j$ sets, it also appears three times across the $B_j$ sets, etc.) and $A_j \succeq B_j$ for all $j<m$, then we do not have $A_m \succ B_m$.
The last condition is often referred to in the literature as the "cancellation" property. Now (Strong additivity) is not the most intuitive condition. In general, it can be hard to check and navigate, which has spurred a large literature on alternative sufficient conditions. I can send you a reading list if you are interested. Unfortunately, I don't remember any paper directly tackling preferences over subsets of infinite sets, like your real interval.
From my experience with these kinds of problems, changing the domain over which preferences are defined makes a huge difference in terms of the results that hold and the proof techniques you can use. If a result is not already out there in the literature, it is rarely easy to derive it from apparently similar results on different domains.
• Seems like I am only able to answer your questions for slightly different models than the one you are interested in cs.stackexchange.com/questions/10877/… ;) Sep 23 '15 at 19:59
The only thing I can think of which may be related to your question is Debreu's theorem, which states that preferences which are continuous can be represented by a continuous utility function. Of course, if the utility function is continuous, so is the value function. Also, I think monotonicity could play a role.
• It is better to post this as a comment. Sep 23 '15 at 9:27
• I think Debreu's theorem on additivity en.wikipedia.org/wiki/… is about an Euclidean space (e.g. the set of all bundles of a finite number of commodities). Sep 24 '15 at 13:50
• And I believe the theorem @ChinG is refering to is the one on representability of continuous preferences (Debreu (1954), not the one on additive representability of continuous and separable preferences over multidimensional Euclidean spaces (Debreu (1960)). Sep 24 '15 at 13:51
|
{}
|
Lemma 41.18.2. Let $f : X \to S$ be a morphism of schemes. Let $x_1, \ldots , x_ n \in X$ be points having the same image $s$ in $S$. Assume $f$ is separated and $f$ is étale at each $x_ i$. Then there exists an étale neighbourhood $(U, u) \to (S, s)$ and a finite disjoint union decomposition
$X_ U = W \amalg \coprod \nolimits _{i, j} V_{i, j}$
of schemes such that
1. $V_{i, j} \to U$ is an isomorphism,
2. the fibre $W_ u$ contains no point mapping to any $x_ i$.
In particular, if $f^{-1}(\{ s\} ) = \{ x_1, \ldots , x_ n\}$, then the fibre $W_ u$ is empty.
Proof. An étale morphism is unramified, hence we may apply Lemma 41.17.2. As in the proof of Lemma 41.18.1 the morphisms $V_{i, j} \to U$ are open immersions and we win after replacing $U$ by the intersection of their images. $\square$
|
{}
|
# Definition:Terminal Point of Vector
Let $\mathbf u$ be a vector quantity represented by an arrow $\vec {AB}$ embedded in the plane.
The point $B$ is the terminal point of $\mathbf u$.
|
{}
|
# 5.7: Delta-Star Transform
As we did with resistors in Section 4.12, we can make a delta-star transform with capacitors.
$$\text{FIGURE V.7}$$
I leave it to the reader to show that the capacitance between any two terminals in the left hand box is the same as the capacitance between the corresponding two terminals in the right hand box provided that
$c_1=\frac{C_2C_3+C_3C_1+C_1C_2}{C_1},\label{5.7.1}$
$c_2=\frac{C_2C_3+C_3C_1+C_1C_2}{C_2},\label{5.7.2}$
$c_3=\frac{C_2C_3+C_3C_1+C_1C_2}{C_3}.\label{5.7.3}$
The converse relations are
$C_1=\frac{c_2c_3}{c_1+c_2+c_3},\label{5.7.4}$
$C_2=\frac{c_3c_1}{c_1+c_2+c_3},\label{5.7.5}$
$C_3=\frac{c_1c_2}{c_1+c_2+c_3}.\label{5.7.6}$
For example, just for fun, what is the capacitance between points A and B in Figure $$V.8$$, in which I have marked the individual capacitances in microfarads?
$$\text{FIGURE V.8}$$
The first three capacitors are connected in delta. Replace them by their equivalent star configuration. After that it should be straightforward. I make the answer 0.402 $$\mu F$$.
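For readers who want to experiment numerically, here is a minimal Python sketch of Equations \ref{5.7.1}–\ref{5.7.6}. (The individual capacitances marked in Figure V.8 are not reproduced in this text, so the inputs below are arbitrary placeholders rather than the figure's values.)

def delta_to_star(C1, C2, C3):
    # Equations 5.7.1-5.7.3: delta capacitances to their star equivalents
    s = C2 * C3 + C3 * C1 + C1 * C2
    return s / C1, s / C2, s / C3

def star_to_delta(c1, c2, c3):
    # Equations 5.7.4-5.7.6: star capacitances back to delta
    t = c1 + c2 + c3
    return c2 * c3 / t, c3 * c1 / t, c1 * c2 / t

# Round-trip check with arbitrary values in microfarads:
star = delta_to_star(1.0, 2.0, 3.0)
print(star)                  # (11.0, 5.5, 3.666...)
print(star_to_delta(*star))  # recovers (1.0, 2.0, 3.0)

The round trip confirms that the two sets of relations are mutual inverses.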
|
{}
|
# what happens when computer
One of the biggest tragedies of computer science is how we so often forget our field has a history. It's not at all hard to read papers from a decade or two ago and find amazing ideas that never got popular, or even ideas which are currently being popularized in blog posts and tweets without any idea that there's prior art. As Ron Minnich once said:
You want to make your way in the CS field? Simple. Calculate rough time of amnesia (hell, 10 years is plenty, probably 10 months is plenty), go to the dusty archives, dig out something fun, and go for it. It's worked for many people, and it can work for you.
A good source of mostly-forgotten CS ideas is the Plan 9 from Bell Labs operating system, which is a lost utopia among computer systems. It was an operating system that took the same ideals that Unix paid lip-service to and realized them more fully than anyone else had. A lot has been written about the amazing networking portions of Plan 9, and the way that it considers the Unix adage everything is a file and takes it a hundred times further than Unix ever did. That's not what I'm going to go into here, but you should absolutely investigate it.
Instead, I'm going to talk about a minor but very interesting feature of the Plan 9 user-space utilities that has been undeservedly forgotten by many programmers: something called Structural Regular Expressions. Plan 9's user-land utilities are available for Unix-like operating systems: they are known as Plan 9 from User Space, or plan9port for short, and you can easily download them and experiment with the features I describe here. You can read about them in Rob Pike's paper describing them or by reading the documentation to the ed-inspired editor sam, but I'm going to take my own approach in describing them that's at least partly inspired by a now-deleted blog post that originally informed me.
I suspect at least part of the reason structural regular expressions aren't well-remembered is that the name is both misleading and thoroughly boring. It makes it sound like they're an alternative to regular expressions, which isn't the case: they're an alternative to the command language found in tools like ed or sed. Their implementation of regular expressions is (with a small but important exception) the same one you can find everywhere, but structural regular expressions build a more powerful set of actions on top of regular expressions.
## ed and sed
The most well-known of ed's regex commands is almost certainly regex substitution, which takes the form
(ed) s/<regex>/<replacement>/<flags>
For example, the command s/this/that/g will replace every instance of the string this with the string that. (Omitting the g results in a command that only replaces the first instance.) Another ed command is g/<regex>/<command>, which loops over every line in the file, and if a line matches the supplied <regex>, then it runs <command> over that line. The inverse command, v/<regex>/<command>, runs the <command> if the line doesn't match. This syntax is a bit obtuse: you can imagine it as piping the result from g cake into s this that, but it might be more accurate to think of the latter command as a function or continuation passed to the earlier one: it's a command that represents what to do next, which is the essence of continuations. For example, a command like
(ed) g/cake/s/this/that/g
can be read as find every line that contains cake, then only on those lines, replace every instance of this with that. A classic simple command is finding and printing lines that match a pattern: using re to stand in for an arbitrary regular expression, the command would be written g/re/p: this command is useful enough that it inspired a tool specifically for the purpose of printing lines matched by a regex, which was creatively named after that equivalent ed command: grep.
## Structural regular expressions
Structural regular expressions build on a similar but non-identical command language, but the first deficiency identified in traditional Unix regexp-ey tools was that they were necessarily line-oriented. This isn't a feature of the theory of regular languages, but rather a practical API choice for Unix programs, which often deal with newline-delimited text files. While practical for some applications, this does create a weird edge case for regular expressions where some hopefully-straightforward uses of regular expressions don't suffice: for example, I might want to write a short script to search my prose for accidentally repeated instances of common words like the: a regex like /the +the/ would suffice for most cases, but would completely fail to match the string "the\nthe".
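Here's a quick way to see the problem with the pattern itself, using Python's re module as a convenient stand-in engine (not one of the tools under discussion):

import re

text = "the\nthe"
print(re.search(r"the +the", text))      # None: a plain space never matches '\n'
print(re.search(r"the[ \n]+the", text))  # matches: the class admits the newline

A line-oriented tool has the additional problem that it never even holds both lines in the same buffer, so even the second pattern wouldn't help there.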
Structural regular expressions begin by tossing out line-orientedness: a regular expression like .* would match the entire file, newlines and all. Structural regular expressions use the escape sequence \n to represent a newline, so if I wanted to match a single line, I could use the regular expression .*\n to describe it. Consequently, I can handle the "the\nthe" case by writing /the[ \n]+the/, and replace all instances of repeated the—even across newlines—with the command below. (I'm marking these snippets with sam, which is the ed- and ex-inspired stream editor that appeared in Plan 9. There's a bit more complexity to actually using sam which I'm eliding for the sake of explanation.)
(sam) s/the[ \n]+the/the/g
That said, line-oriented commands are often very useful, and it'd be a shame if we lost out on the ability to do things on a per-line basis! Luckily, structural regular expressions have a trick up their sleeves: the x command, which can be thought of as a sort of for-each over every place where a regular expression matches the input. It takes the form x/<regex>/<command>, and will find every instance of <regex> in the input and then run <command> over only the portion of the input that matched the regex. We can trivially combine this with commands like p for printing:
(sam) x/cake/p
That command was pretty boring: all it will do is print out every instance of the word cake, and only that: none of the characters before, none of the surrounding lines, just a bunch of cakes, as many as appear in the input. If, instead, we wanted to print out every line that contained the word cake, we could write something like this:
(sam) x/.*cake.*\n/p
But there's a more elegant way: in the sam command language, the g command does something different than in ed: it looks like g/<regex>/<command> and acts like a filter, running a supplied command over the entire input (not just the part it matched) if its regex matches some part of the input. So, if we want to print every line that contains cake, we could first focus on each line of input with x, use g to filter down to just the lines that contain the string cake, and then print those lines:
(sam) x/.*\n/g/cake/p
There's a corresponding negative match command v, which runs the command if it doesn't find a match, so the following command prints every line which doesn't contain cake:
(sam) x/.*\n/v/cake/p
And there are commands for prepending, appending, modifying, or deleting text matched by previous commands. We can still use our old friend the s command to replace a regular expression, so the structural equivalent of our replace-this-with-that-on-lines-containing-cake example above could be written by focusing on lines, filtering by cake, and then running a traditional s command:
(sam) x/.*\n/g/cake/s/this/that/g
But we could also write this a slightly different way: by first focusing on lines, then filtering by cake, then focusing on instances of the string this, and using the c command to change the focused text to that:
(sam) x/.*\n/g/cake/x/this/c/that
## A High-Level View
A major advantage of structural regular expressions is that they're more compositional than the traditional ed-like command language. In ed, we might be tempted to find lines that contain both this and that by writing a command like
(ed) g/this/g/that/p # bad!
but we can't do this: g commands aren't allowed to invoke other g commands, only simpler commands like printing or deletion. But in structural regular expressions, the primitive components are designed to be used in a recursive way, composing complicated commands out of regular expressions and sets of commands, which has the wonderful side-effect that the regular expressions you actually write are much simpler. To borrow a few examples from Rob Pike's paper: with structural regular expressions, if we wanted to print every line that contained rob but not robot, we could write a command to focus on lines, keep only those that contain rob, filter out those that contain robot, and print them:
(sam) x/.*\n/g/rob/v/robot/p
The same thing in an ed-like command language would require making the regular expression much more complicated, to filter by the string rob when not followed by o, or when followed by o but not by t, and so forth:
(ed) g/rob($|[^o]|o[^t])/p
Another advantage of structural regular expressions is that they alleviate a common problem with submatches. I neglected to mention it above, but the s command also allows you to use parens within the supplied regex to select some subpart of the matched input, and use that within the output, e.g.
(sam) s/([A-Za-z]+) ([A-Za-z]+)/\2 \1/g
is a command which will match two words and swap their order: \1 referring to the item matched within the first set of parens, and \2 within the second. One problem with submatches is that they are no longer valid when you put them underneath a *:
(ed) s/words: ([A-Za-z]+ )*/got: \1/g # also bad!
What does this invocation mean? Well, nothing: \1 can't unambiguously refer to any submatch, because it may match zero or more times.
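In practice, most regex engines resolve the ambiguity by keeping only the last repetition, which is rarely what you wanted. A quick demonstration, again using Python's re as a stand-in:

import re

m = re.match(r"words: (?:([A-Za-z]+) )*", "words: alpha beta gamma ")
print(m.group(1))  # 'gamma': only the final repetition of the capture survives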
Because structural regular expressions contain for-each-like constructs, we can start to articulate commands that perform the same repetitions, but with no ambiguity about what replaces what:
(sam) x/words: [A-Za-z ]+\n/x/[A-Za-z]+ /i/got:
Structural regular expressions are a powerful and expressive tool: separating the ability to focus, filter, and edit into different but complementary commands provides a surprising amount of power while also simplifying the regular expressions needed to perform these operations. They are sadly not as popular as they could be—as far as I know, the only editors that support them are part of the Plan 9 tools—but implementing them in other tools would be a wonderful and easy way of making those tools more powerful—so, editor-writers and tool-writers, keep them in mind!
|
{}
|
# How do you find the local max and min for f(x) = 2x + 3x^{-1}?
##### 1 Answer
Nov 20, 2016
Local minimum at $x = \sqrt{\frac{3}{2}}$; local maximum at $x = -\sqrt{\frac{3}{2}}$
#### Explanation:
Think about a line when it hits its maximum and then begins to curve back down. At its very maximum, its slope will be zero, and the same holds for the minimum. Therefore, what you must do is find the derivative, set it equal to zero, and solve, because the derivative is equal to the slope at a given point.
$f'(x) = 2 - 3x^{-2} = 0$
$3x^{-2} = 2$
$3 = 2x^2$
$x = \pm\sqrt{\frac{3}{2}}$ (we test the positive root below; the negative root is handled at the end)
By making the derivative equal to zero, you are finding one or more "critical points" that can each be a maximum of the function, a minimum of the function, or neither. Because you don't know which, you can use the first derivative test.
In the first derivative test, you plug two values of x into the derivative, one less and one greater than the critical point found. Since $\sqrt{\frac{3}{2}} \approx 1.22$, let's use 1 and 2.
1
$2 - 3x^{-2}$
$2 - 3(1)^{-2}$
$2 - 3 = -1$
This solution means that there is a NEGATIVE slope to the left of the point we found.
2
$2 - 3(2)^{-2}$
$2 - \frac{3}{4} = \frac{5}{4}$
This solution means that there is a POSITIVE slope to the right of the point we found.
This means that we have found a minimum. Visualize a dip in a graph- it goes down (negative slope) and then goes up again (positive slope).
The other critical point, $x = -\sqrt{\frac{3}{2}}$, can be tested the same way (try $x = -2$ and $x = -1$): there the slope changes from positive to negative, so it is a local maximum. There are no other critical points, so there are no further maximums or minimums in this graph!
If you want to find the coordinate point, just plug $x = \sqrt{\frac{3}{2}}$ into f(x).
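For completeness, the coordinates work out nicely: $f\left(\sqrt{\frac{3}{2}}\right) = 2\sqrt{\frac{3}{2}} + 3\sqrt{\frac{2}{3}} = \sqrt{6} + \sqrt{6} = 2\sqrt{6} \approx 4.9$, and since $f$ is odd, $f\left(-\sqrt{\frac{3}{2}}\right) = -2\sqrt{6}$.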
|
{}
|
Judgment of association between potential factors and associated risk in $2\times 2$ tables: a study with psychology students. (English)
Math. Enthus. 12, No. 1-3, 347-363 (2015).
Summary: This study aimed to evaluate the accuracy of, and the strategies used in, the estimation of association between potential factors and associated risks when data are presented in $2\times 2$ tables. A sample of 414 undergraduate Psychology students from three different Spanish universities was given three different tasks (direct and inverse association and perfect independence) where they had to estimate such association. Most participants perceived an association in the task with perfect independence, where the data contradicted the students’ previous expectations. The estimation of association was consistent with the perception of association, and the accuracy of estimates increased with correct strategies. Our participants performed worse than secondary school students in a previous study, and we found no difference between the three participating universities. We classify the students’ strategies in the tasks into levels of complexity and explain the incorrect strategies using the idea of semiotic conflict.
Classification: K75 K95 D75
|
{}
|
## Monday, September 17, 2012
### What's Wrong with Huntington's Clash of Civilizations?
A better question might be "What's not wrong with Huntington's Clash of Civilizations?", but I'll try not to be too pedantic. In light of recent events, interest in Huntington's thesis has been growing. Even the NYTimes is using language that is clearly influenced by Huntington's work, though they don't specifically mention it. So let's review the merits (such as they are) and demerits of this argument.
First, we need to review what Huntington actually said. Many of those who cite his work approvingly seem to have precious little idea what it says, save that it speculates at length about a coming conflict between Islam and the West. And, sure, there is some intuitive appeal to such a gross misreading of his work. But I encourage you to read the Foreign Affairs article, or the book that followed, where you'll find some reasonable claims sharing space with concrete predictions about what was then the future—predictions that have proven to be completely wrong. This shouldn't really surprise us, since the entire argument rests upon a hugely problematic assumption, as I'll discuss below.$^1$
Huntington's argument was that cultural factors were suppressed by the super powers during the Cold War, but would come to dominate the post-Cold War era. Statistical critiques of his work (see here and here for example) have often been met with the criticism that they only demonstrate what Huntington had already acknowledged—that civilizational conflict was not common in the past (though see here). We're more than 20 years from the end of the Cold War now, and there's still no evidence for the precise claims he made, but it's worth acknowledging from the start that Huntington's argument was always a little more nuanced than it has sometimes been made out to be.
Huntington argued that cultural factors would have a very specific effect—that interstate conflict would be most likely to occur at or near civilizational fault lines. He did not simply say that identity would matter. He identified several essentially monolithic civilizations, and claimed that conflict would be most likely to occur in the future between governments of states that belonged to different civilizations, particularly those that shared a geographic border. These civilizations are: Western, Confucian, Japanese, Islamic, Hindu, Slavic-Orthodox, Latin American, "and possibly African". You'll note that some of these are defined by geography or political-legal boundaries, while others are delineated by religion.
Why are some religions civilizations while others are not? Who knows. Why are some states coterminal with cultural civilizations while others are not? Who knows. Why is there doubt about whether Africa has any civilization? Who knows. Surely there is no racist subtext there!
Note also that Huntington was not primarily talking about the possibility of religious or ethno-linguistic cleavages fueling civil wars, nor transnational terrorism. He was mostly interested in explaining interstate conflict, which he clearly said would be more likely to occur between certain types of countries in certain regions of the globe. He did discuss terrorism, and his claims there have been partially supported (see here), but this was secondary to his main argument. And insofar as he had anything to say about terrorism, his argument anticipates more incidents to cross civilizational boundaries than to occur within any given civilization—and he was dead wrong about that (see the paper in the previous link).
The big problem I, and others, have with the CoC is that conflict has historically been, and continues to be, more common within the boundaries of what he identifies as civilizations than it has been between states of different civilizations. Put differently, the key expectation of his argument is completely at odds with the available evidence. It doesn't just oversimplify—I'd argue that all theories, by necessity, do that to some degree—it fails on its own terms. Conflict, by any measure yet devised by social scientists, is not now, nor has it ever been, more likely to occur along "civilizational fault lines".$^2$ It is highly unlikely that shared civilization status is itself a cause of conflict, of course; the point is rather that "civilizational" identities appear to be less important, even in the post-Cold War era, than Huntington anticipated.
But there are other parts of Huntington's argument that are pretty reasonable. Let me acknowledge those before discussing the truly fatal flaw in his thesis.
Huntington argued that the clash of civilizations would be fueled by two factors. Neither of these was new, but the Cold War had kept a lid on the pressures created by them, according to Huntington. The first factor is globalization. The decreased cost of communicating and/or traveling across the globe puts people into contact with members of other civilizations more frequently in the post-Cold War era than was true for the decades preceding it. Among other things, this brought American popular culture to people who had previously had little exposure to it. This, in combination with the second factor, creates serious tension. Namely, at the same time that globalization was accelerating, elites around the world made a concerted effort to resist creeping Westernization and modernization, because it threatened their hold on power. Moreover, economic integration was expected to raise awareness of civilizational identities since it often occurred within the context of regional organizations, rather than on a strictly global scale. On these points, I think Huntington sounds pretty reasonable. I'm not an expert on any of the related topics, so I'll leave it to others to judge, but I'll at least say that this part of his argument passes the laugh test. When folks like me joke that Huntington's CoC has detracted from the sum total of human knowledge, or that it shouldn't even be taught in Intro IR classes because exposing students to it does more harm than good, it's the part I'm about to turn to that we have in mind, far more than his claims that economic globalization has shrunk the world, that economic integration has proceeded more rapidly at the regional level than at the global level, or that dictators in the Islamic world might be threatened by the degree to which young people in their countries embraced American pop culture.
No, the big problem here is that Huntington claims that identity is immutable (even though some aspects of his argument clearly imply the opposite). The fundamental problem with any argument linking cultural cleavages with conflict is that identity is malleable, and we have just as much evidence that conflict -> identity cleavages as we do identity cleavages -> conflict.
The notion that identity is malleable is so well-documented that, in certain circles, I wouldn't even need to back that claim up at all. But anyone who has read this deep into a post about Huntington's CoC obviously doesn't think that it goes without saying. So allow me to present a brief case, based on a very limited sample of evidence. Bear in mind, though, that I'm hardly even scraping the surface here.
Stathis Kalyvas has a nice paper on the frequency with which actors choose which side to take in a conflict based on political expedience rather than ethnic solidarity. This doesn't speak to the notion of identity change being caused by conflict, since the behavior he discusses isn't driven by identity at all, but it further casts doubt on the claim that identity is the primary driver of behavior.
In a recent paper, Doug Gibler, Marc Hutchison, and Steve Miller show that individuals living in countries that were recently the target of an interstate dispute over territory are more likely to identify along national lines than ethnic lines, while those who live in countries that recently experienced a civil conflict are more likely to identify along ethnic lines than national lines. That is, how people view themselves is influenced by who they consider the "other" to be.
Along similar lines, Maggie Penn has developed a theoretical model of identity choice that focuses specifically on the question of whether people self-identify first and foremost along ethnic lines or by nationality. Those who have little interest in formal theory should at least find her discussion of recent Iraqi politics to be of interest. Hint: the Sunni and the Shi'ia didn't see themselves as so different until after the sectarian conflict that followed Saddam's fall.
Perhaps the clearest evidence that identity choices can be manipulated by political elites comes from a study by Daniel Posner. While this study doesn't look specifically at the impact of armed conflict on identity, it is hard to look at this evidence and cling to the belief that identity is immutable.
Posner takes up the question of why Chewas and Tumbukas are allies in Zambia and adversaries in Malawi. These two ethnic groups straddle a border that was drawn by the British for purely administrative purposes. It just so happens, though, that in Zambia, neither group is large enough, relative to the rest of the population, for a political party that only represented Chewas or only represented Tumbukas to be viable. In Malawi, we have a different story. Both groups are large enough to contest the presidency on their own. Unsurprisingly, then, there are no political parties in Zambia who make appeals to Chewas in which they portray Tumbukas as "other", while there is such a party in Malawi. And, equally unsurprisingly, we see that Chewas and Tumbukas not only support different parties in Malawi while supporting the same party in Zambia, but generally dislike and distrust each other more in Malawi than Zambia. The following graph illustrates this (see the paper for details).
These are but a few examples of the many studies that find that identity is indeed highly responsive to political context. There is an equally rich body of literature demonstrating that identity is responsive to the social and economic context within which people find themselves.
Given the malleable nature of identity, if we observe a correlation between identity and conflict (and remember—there is no correlational evidence of the pattern anticipated by Huntington), we must ask ourselves whether cultural cleavages caused conflict or whether conflict exacerbated cultural cleavages. Insofar as there is conflict between some actors within the Islamic world and some actors in the Western world, we should remember to ask whether the fact that the people on one side of that conflict just so happen to be Muslim and the people on the other side largely are not is the actual cause of that conflict. There are good reasons to believe that the various and sundry political cleavages dividing these actors are no less important than the cultural ones.
1. Assumptions are not, in and of themselves, impermissible. Too often, perfectly reasonable theoretical arguments are criticized for resting upon assumptions that are clearly false. The question we should ask when evaluating the assumptions of a theoretical argument isn't whether they are true or false but whether their truth status is fundamental to the argument. If the same conclusion could be established, albeit with a lot more effort, after relaxing a given assumption, then we shouldn't care whether it is true or not. If the conclusion critically depends upon a patently false assumption, that's a different story. And that's what we're talking about here.
2. By "ever", I obviously mean "during any period for which we have systematic data available", but that doesn't sound as good, so forgive the poetic license. And it should also go without saying that I'm actually only saying that we do not yet have statistical evidence consistent with this expectation. Perhaps there are measurement issues or something that have yet to be resolved. I doubt that's the problem here, but obviously I don't know for sure, since none of us knows what the true data-generating process is.
1. Interesting post. (Haven't had a chance to do more than read it quickly.) On the identity-is-not-immutable point: yes. I would just add the minor note that "ideational cleavages" and "identity cleavages" are not synonymous, at least not to me: one refers to ideas, the other to identity. Not a big deal, though, since your intended meaning is clear.
One other point: the role of history in the conflict b/w some Muslims (a minority, no doubt) and 'the West'. Islam was a leading 'civilizational' force for centuries, then the West 'overtook' it, or so the standard narrative goes. Thus the emphasis on restoring Islam's glory and world position in the ideology of certain 'radical' Islamic groups. The narrative of a once glorious civilization, humiliated by a 'newer' one and now seeking some kind of putatively justified 'revenge' for that humiliation, would seem to play a role in the current context. Not sure exactly how this fits in w Huntington.
1. Hey LFC.
Good point about terminology. I should have been more careful with my words.
I think it's fair to say that some people in both the Islamic world *and* the Western world see themselves as being involved in "civilizational" conflict. The point is that Huntington is claiming that this is what drives aggregate behavior, while the truth seems to be that it applies to a relatively small fraction of the population and we therefore don't end up seeing the aggregate patterns he expected.
2. No disagreement there at all.
I was just pointing out that, for the minority who do see themselves as involved in 'civilizational' conflict, it has a historical dimension, esp. on one side. (But Huntington presumably notices that.)
1. Fair enough. Yes, that's part of his argument.
3. Good post, I think there are huge flaws in Huntungton's book. What do you think of his interpretation of the civil war in Yugoslavia? Reading it in civilizational terms seems absurd to me, I believe it is one of his biggest over-simplifications.
1. Thanks, GB. I agree his interpretation of the conflict in Yugoslavia is unpersuasive.
4. Awesome critiques! I've never thought about these kinds of flaws in Huntington's book. In my opinion, the conflict between cultures seems to be class conflict rather than cultural conflict.
As an Asian, I don't have much information concerning such cases in Eastern Europe, but I'm quite sure that Huntington's description of a Confucian coalition is totally wrong. Even though most East Asian countries share a similar experience of Confucianism, the cooperation is a result of economic integration rather than cultural similarities. He examined the East Asian issue only superficially, based on an old-fashioned tool.
In fact, Korea and the other Tigers in East Asia are not willing to accept the assumption that Confucianism binds the whole region's cultures together or sets them against Western power.
And the most illogical thing was the Confucian-Muslim connection. He exaggerated the connection without enough evidence to prove it. It was just an occasional decision made solely by China.
1. Thanks for the comment! I'm glad you enjoyed the post. And thanks also for sharing your perspective. I also see cooperation in East Asia as being driven by economic integration, but it's nice to hear that someone who's from there sees it that way too.
5. "People can and
do redefine their identities and, as a result, the composition and
boundaries of civilizations change."
This is a direct quote from CoC. Although Huntington mentions the fact that it's hard to share two religions, he also acknowledges that civilizations aren't stagnant, and that is in line with Kalyvas's "Ethnic Defection in Civil War".
6. What was the second factor that the CoC was fueled by? The first one being Globalization.
|
{}
|
# Ticket #15299: 15299_lseries_prec.patch
File 15299_lseries_prec.patch, 38.5 KB (added by Jeroen Demeyer, 9 years ago)
• ## sage/rings/rational_field.py
# HG changeset patch
# User Jeroen Demeyer <jdemeyer@cage.ugent.be>
# Date 1382305549 -7200
# Parent 75deaaeb130a2179081f90496e0d4a66d05d2fe7
Use arbitrary precision reals instead of Python floats in L series
diff --git a/sage/rings/rational_field.py b/sage/rings/rational_field.py
This hunk updates a doctest in the QQ documentation so the L-series value is printed to full precision (replacing the old float output 0.253841860855911)::
    sage: E = EllipticCurve('11a')
    sage: L = E.lseries().at1(300)[0]; L
    0.2538418608559106843377589233...
    sage: O = E.period_lattice().omega(); O
    1.26920930427955
    sage: t = L/O; t
• ## sage/schemes/elliptic_curves/constructor.py
diff --git a/sage/schemes/elliptic_curves/constructor.py b/sage/schemes/elliptic_curves/constructor.py
This hunk marks the EllipticCurve_from_plane_curve(C,P) doctest as # long time (3s on sage.math, 2013); the call now also emits a DeprecationWarning advising use of Jacobian(C) instead (see http://trac.sagemath.org/3416), before returning the Elliptic Curve defined by y^2 = x^3 - 27/4 over Rational Field.
• ## sage/schemes/elliptic_curves/ell_egros.py
diff --git a/sage/schemes/elliptic_curves/ell_egros.py b/sage/schemes/elliptic_curves/ell_egros.py
a Using the "proof=False" flag suppresses these warnings. EXAMPLES: We find all elliptic curves with good reduction outside 2, listing the label of each: listing the label of each:: sage: [e.label() for e in EllipticCurves_with_good_reduction_outside_S([2])] sage: [e.label() for e in EllipticCurves_with_good_reduction_outside_S([2])] # long time (5s on sage.math, 2013) ['32a1', '32a2', '32a3', [] sage: [e.label() for e in egros_from_j_0([3])] ['27a1', '27a3', '243a1', '243a2', '243b1', '243b2'] sage: len(egros_from_j_0([2,3,5])) sage: len(egros_from_j_0([2,3,5])) # long time (8s on sage.math, 2013) 432 """ Elist=[] Elist=[] if not 3 in S: return Elist no2 = not 2 in S for ei in xmrange([2] + [6]*len(S)): for ei in xmrange([2] + [6]*len(S)): u = prod([p**e for p,e in zip([-1]+S,ei)],QQ(1)) if no2: u*=16 ## make sure 12|val(D,2) sage: from sage.schemes.elliptic_curves.ell_egros import egros_get_j sage: egros_get_j([]) [1728] sage: egros_get_j([2]) sage: egros_get_j([2]) # long time (3s on sage.math, 2013) [128, 432, -864, 1728, 3375/2, -3456, 6912, 8000, 10976, -35937/4, 287496, -784446336, -189613868625/128] sage: egros_get_j([3]) sage: egros_get_j([3]) # long time (3s on sage.math, 2013) [0, -576, 1536, 1728, -5184, -13824, 21952/9, -41472, 140608/3, -12288000] sage: jlist=egros_get_j([2,3]); len(jlist) # long time (30s) 83
• ## sage/schemes/elliptic_curves/ell_number_field.py
diff --git a/sage/schemes/elliptic_curves/ell_number_field.py b/sage/schemes/elliptic_curves/ell_number_field.py
This hunk cleans up docstrings and marks slow number-field doctests as # long time: simon_two_descent() for a curve with 2-torsion (3s on sage.math, 2013) and for the PARI/GP ell.gp failure case (4s), torsion_subgroup() and torsion_points() for 11a1 over a quartic field (3s), and rho.is_surjective(5) for a Galois representation over Q(i) (9s).
• ## sage/schemes/elliptic_curves/gal_reps_number_field.py
diff --git a/sage/schemes/elliptic_curves/gal_reps_number_field.py b/sage/schemes/elliptic_curves/gal_reps_number_field.py
This hunk reindents the galois_representation() examples over K = Q(sqrt(29)) and the CM example over Q(sqrt(-3)), and marks the rho.non_surjective() doctests as # long time (8s on sage.math, 2013).
• ## sage/schemes/elliptic_curves/heegner.py
diff --git a/sage/schemes/elliptic_curves/heegner.py b/sage/schemes/elliptic_curves/heegner.py
This hunk marks slow Heegner/Kolyvagin doctests as # long time: P.mod(3,70) for the Kolyvagin point on 11a1 (4s on sage.math, 2013), rational_kolyvagin_divisor(D, c) for N=389 (5s), and the dot products against the basis of modp_dual_elliptic_curve_factor.
• ## sage/schemes/elliptic_curves/lseries_ell.py
diff --git a/sage/schemes/elliptic_curves/lseries_ell.py b/sage/schemes/elliptic_curves/lseries_ell.py
a """ Complex Elliptic Curve L-series AUTHORS: - Jeroen Demeyer (2013-10-17): compute L series with arbitrary precision instead of floats. - William Stein et al. (2005 and later) """ #***************************************************************************** # Copyright (C) 2005 William Stein # Copyright (C) 2013 Jeroen Demeyer # # Distributed under the terms of the GNU General Public License (GPL) # as published by the Free Software Foundation; either version 2 of # the License, or (at your option) any later version. # http://www.gnu.org/licenses/ #***************************************************************************** from sage.structure.sage_object import SageObject from sage.rings.all import ( RealField, RationalField, ComplexField) from math import sqrt, exp, ceil from sage.rings.all import RealField, RationalField from math import sqrt, exp, log, ceil import sage.functions.exp_integral as exp_integral R = RealField() Q = RationalField() C = ComplexField() import sage.misc.all as misc class Lseries_ell(SageObject): """ An elliptic curve $L$-series. EXAMPLES: """ def __init__(self, E): """ Create an elliptic curve $L$-series. EXAMPLES: EXAMPLES:: sage: EllipticCurve([1..5]).lseries() Complex L-series of the Elliptic Curve defined by y^2 + x*y + 3*y = x^3 + 2*x^2 + 4*x + 5 over Rational Field """ def elliptic_curve(self): """ Return the elliptic curve that this L-series is attached to. EXAMPLES: EXAMPLES:: sage: E = EllipticCurve('389a') sage: L = E.lseries() sage: L.elliptic_curve () The output is a series in var, where you should view var as equal to s-a. Thus this function returns the formal power series whose coefficients are L^{(n)}(a)/n!. EXAMPLES: EXAMPLES:: sage: E = EllipticCurve('389a') sage: L = E.lseries() sage: L.taylor_series(series_prec=3) # random nearly 0 constant and linear terms -2.69129566562797e-23 + (1.52514901968783e-23)*z + 0.759316500288427*z^2 + O(z^3) sage: L.taylor_series(series_prec=3) -1.28158145675273e-23 + (7.26268290541182e-24)*z + 0.759316500288427*z^2 + O(z^3) # 32-bit -2.69129566562797e-23 + (1.52514901968783e-23)*z + 0.759316500288427*z^2 + O(z^3) # 64-bit sage: L.taylor_series(series_prec=3)[2:] 0.000000000000000 + 0.000000000000000*z + 0.759316500288427*z^2 + O(z^3) """ def _repr_(self): """ Return string representation of this L-series. EXAMPLES: EXAMPLES:: sage: E = EllipticCurve('37a') sage: L = E.lseries() sage: L._repr_() than bits and the object returned is a Magma L-series, which has different functionality from the Sage L-series.} EXAMPLES: EXAMPLES:: sage: E = EllipticCurve('37a') sage: L = E.lseries().dokchitser() sage: L(2) If the curve has too large a conductor, it isn't possible to compute with the L-series using this command. Instead a RuntimeError is raised: RuntimeError is raised:: sage: e = EllipticCurve([1,1,0,-63900,-1964465932632]) sage: L = e.lseries().dokchitser(15) Traceback (most recent call last): where \code{} is replaced by your value of $n$. This command takes a long time to run.} EXAMPLES: EXAMPLES:: sage: E = EllipticCurve('37a') sage: a = E.lseries().sympow(2,16) # not tested - requires precomputing "sympow('-new_data 2')" sage: a # not tested minutes. 
If this function fails it will indicate what commands have to be run.} EXAMPLES: EXAMPLES:: sage: E = EllipticCurve('37a') sage: print E.lseries().sympow_derivs(1,16,2) # not tested -- requires precomputing "sympow('-new_data 2')" sympow 1.018 RELEASE (c) Mark Watkins --- see README and COPYING for details Return the imaginary parts of the first $n$ nontrivial zeros on the critical line of the L-function in the upper half plane, as 32-bit reals. EXAMPLES: EXAMPLES:: sage: E = EllipticCurve('37a') sage: E.lseries().zeros(2) *** Warning:...new stack size = ... zeros, is missed). Higher up the critical strip you should use a smaller stepsize so as not to miss zeros. EXAMPLES: EXAMPLES:: sage: E = EllipticCurve('37a') sage: E.lseries().zeros_in_interval(6, 10, 0.1) # long time *** Warning:...new stack size = ... equally spaced sampled points on the line from s0 to s1. EXAMPLES: sage: I = CC.0 EXAMPLES:: sage: E = EllipticCurve('37a') sage: E.lseries().values_along_line(1, 0.5+20*I, 5) # long time and slightly random output [(0.500000000, 0), (0.400000000 + 4.00000000*I, 3.31920245 - 2.60028054*I), (0.300000000 + 8.00000000*I, -0.886341185 - 0.422640337*I), (0.200000000 + 12.0000000*I, -3.50558936 - 0.108531690*I), (0.100000000 + 16.0000000*I, -3.87043288 - 1.88049411*I)] sage: E.lseries().values_along_line(1, 0.5 + 20*I, 5) *** Warning:...new stack size = ... [(0.500000000, ...), (0.400000000 + 4.00000000*I, 3.31920245 - 2.60028054*I), (0.300000000 + 8.00000000*I, -0.886341185 - 0.422640337*I), (0.200000000 + 12.0000000*I, -3.50558936 - 0.108531690*I), (0.100000000 + 16.0000000*I, -3.87043288 - 1.88049411*I)] """ from sage.lfunctions.lcalc import lcalc return lcalc.values_along_line(s0-RationalField()('1/2'), critical strip is 1.} INPUT: s -- complex numbers dmin -- integer dmax -- integer - s -- complex numbers - dmin -- integer - dmax -- integer OUTPUT: list -- list of pairs (d, L(E, s,chi_d)) EXAMPLES: list of pairs (d, L(E, s,chi_d)) EXAMPLES:: sage: E = EllipticCurve('37a') sage: E.lseries().twist_values(1, -12, -4) # slightly random output depending on architecture [(-11, 1.4782434171), (-8, 0), (-7, 1.8530761916), (-4, 2.4513893817)] sage: vals = E.lseries().twist_values(1, -12, -4) *** Warning:...new stack size = ... sage: vals # abs tol 1e-17 [(-11, 1.47824342), (-8, 8.9590946e-18), (-7, 1.85307619), (-4, 2.45138938)] sage: F = E.quadratic_twist(-8) sage: F.rank() 1 dict -- keys are the discriminants $d$, and values are list of corresponding zeros. EXAMPLES: EXAMPLES:: sage: E = EllipticCurve('37a') sage: E.lseries().twist_zeros(3, -4, -3) # long time *** Warning:...new stack size = ... from sage.lfunctions.lcalc import lcalc return lcalc.twist_zeros(n, dmin, dmax, L=self.__E) def at1(self, k=0): def at1(self, k=None, prec=None): r""" Compute $L(E,1)$ using $k$ terms of the series for $L(E,1)$ as explained on page 406 of Henri Cohen's book"A Course in Computational Algebraic Number Theory". If the argument $k$ is not specified, then it defaults to $\sqrt(N)$, where $N$ is the conductor. The real precision used in each step of the computation is the precision of machine floats. Compute L(E,1) using k terms of the series for L(E,1) as explained on page 406 of Henri Cohen's book "A Course in Computational Algebraic Number Theory". If the argument k is not specified, then it defaults to \sqrt(N), where N is the conductor. INPUT: k -- (optional) an integer, defaults to sqrt(N). - k -- number of terms of the series. If zero or None, use k = \sqrt(N), where N is the conductor. 
- prec -- numerical precision in bits. If zero or None, use a reasonable automatic default. OUTPUT: float -- L(E,1) float -- a bound on the error in the approximation; this is a provably correct upper bound on the sum of the tail end of the series used to compute L(E,1). This function is disjoint from the PARI \code{elllseries} A tuple of real numbers (L, err) where L is an approximation for L(E,1) and err is a bound on the error in the approximation. This function is disjoint from the PARI elllseries command, which is for a similar purpose. To use that command (via the PARI C library), simply type \code{E.pari_mincurve().elllseries(1)} E.pari_mincurve().elllseries(1). ALGORITHM: \begin{enumerate} \item Compute the root number eps. If it is -1, return 0. \item Compute the Fourier coefficients a_n, for n up to and including k. \item Compute the sum $$2 * sum_{n=1}^{k} (a_n / n) * exp(-2*pi*n/Sqrt(N)),$$ where N is the conductor of E. \item Compute a bound on the tail end of the series, which is $$2 * e^(-2 * pi * (k+1) / sqrt(N)) / (1 - e^(-2*pi/sqrt(N))).$$ For a proof see [Grigov-Jorza-Patrascu-Patrikis-Stein]. \end{enumerate} EXAMPLES: - Compute the root number eps. If it is -1, return 0. - Compute the Fourier coefficients a_n, for n up to and including k. - Compute the sum .. MATH:: 2 * sum_{n=1}^{k} (a_n / n) * exp(-2*pi*n/Sqrt(N)), where N is the conductor of E. - Compute a bound on the tail end of the series, which is .. MATH:: 2 e^{-2 \pi (k+1) / \sqrt{N}} / (1 - e^{-2 \pi/\sqrt{N}}). For a proof see [Grigov-Jorza-Patrascu-Patrikis-Stein]. EXAMPLES:: sage: L, err = EllipticCurve('11a1').lseries().at1() sage: L, err (0.253804, 0.000181444) sage: parent(L) Real Field with 24 bits of precision sage: E = EllipticCurve('37b') sage: E.lseries().at1() (0.7257177, 0.000800697) sage: E.lseries().at1(100) (0.725681061936153, 1.52437502288743e-45) (0.7256810619361527823362055410263965487367603361763, 1.52469e-45) sage: L,err = E.lseries().at1(100, prec=128) sage: L 0.72568106193615278233620554102639654873 sage: parent(L) Real Field with 128 bits of precision sage: err 1.70693e-37 sage: parent(err) Real Field with 24 bits of precision and rounding RNDU Rank 1 through 3 elliptic curves:: sage: E = EllipticCurve('37a1') sage: E.lseries().at1() (0.0000000, 0.000000) sage: E = EllipticCurve('389a1') sage: E.lseries().at1() (-0.001769566, 0.00911776) sage: E = EllipticCurve('5077a1') sage: E.lseries().at1() (0.0000000, 0.000000) """ sqrtN = sqrt(self.__E.conductor()) if k: k = int(k) else: k = int(ceil(sqrtN)) if prec: prec = int(prec) else: # Use the same precision as deriv_at1() below for # consistency prec = int(9.065*k/sqrtN + 1.443*log(k)) + 12 R = RealField(prec) # Compute error term with bounded precision of 24 bits and # round towards +infinity Rerror = RealField(24, rnd='RNDU') if self.__E.root_number() == -1: return 0 sqrtN = float(self.__E.conductor().sqrt()) k = int(k) if k == 0: k = int(ceil(sqrtN)) an = self.__E.anlist(k) # list of Sage ints # Compute z = e^(-2pi/sqrt(N)) pi = 3.14159265358979323846 z = exp(-2*pi/sqrtN) return (R.zero(), Rerror.zero()) an = self.__E.anlist(k) # list of Sage Integers pi = R.pi() sqrtN = R(self.__E.conductor()).sqrt() z = (-2*pi/sqrtN).exp() zpow = z s = 0.0 # Compute series sum and accumulate floating point errors L = R.zero() error = Rerror.zero() for n in xrange(1,k+1): s += (zpow * float(an[n]))/n term = (zpow * an[n])/n zpow *= z L += term # We express relative error in units of epsilon, where # epsilon is a number divided by 2^precision. 
# Instead of multiplying the error by 2 after the loop # (to account for L *= 2), we already multiply it now. # # For multiplication and division, the relative error # in epsilons is bounded by (1+e)^n - 1, where n is the # number of operations (assuming exact inputs). # exp(x) additionally multiplies this error by abs(x) and # adds one epsilon. The inputs pi and sqrtN each contribute # another epsilon. # Assuming that 2*pi/sqrtN <= 2, the relative error for z is # 7 epsilon. This implies a relative error of (8n-1) epsilon # for zpow. We add 2 for the computation of term and 1/2 to # compensate for the approximation (1+e)^n = 1+ne. # # The error of the addition is at most half an ulp of the # result. # # Multiplying everything by two gives: error += term.epsilon(Rerror)*(16*n + 3) + L.ulp(Rerror) L *= 2 error = 2*zpow / (1 - z) return R(2*s), R(error) # Add series error (we use (-2)/(z-1) instead of 2/(1-z) # because this causes 1/(1-z) to be rounded up) error += ((-2)*Rerror(zpow)) / Rerror(z - 1) return (L, error) def deriv_at1(self, k=0): def deriv_at1(self, k=None, prec=None): r""" Compute $L'(E,1)$ using$k$ terms of the series for $L'(E,1)$. Compute L'(E,1) using k terms of the series for L'(E,1), under the assumption that L(E,1) = 0. The algorithm used is from page 406 of Henri Cohen's book A Course in Computational Algebraic Number Theory.'' The real precision of the computation is the precision of Python floats. INPUT: INPUT: k -- int; number of terms of the series - k -- number of terms of the series. If zero or None, use k = \sqrt(N), where N is the conductor. - prec -- numerical precision in bits. If zero or None, use a reasonable automatic default. OUTPUT: real number -- an approximation for L'(E,1) real number -- a bound on the error in the approximation A tuple of real numbers (L1, err) where L1 is an approximation for L'(E,1) and err is a bound on the error in the approximation. .. WARNING:: This function only makes sense if L(E) has positive order of vanishing at 1, or equivalently if L(E,1) = 0. ALGORITHM: \begin{enumerate} \item Compute the root number eps. If it is 1, return 0. \item Compute the Fourier coefficients $a_n$, for $n$ up to and including $k$. \item Compute the sum $$- Compute the root number eps. If it is 1, return 0. - Compute the Fourier coefficients a_n, for n up to and including k. - Compute the sum .. MATH:: 2 * \sum_{n=1}^{k} (a_n / n) * E_1(2 \pi n/\sqrt{N}),$$ where $N$ is the conductor of $E$, and $E_1$ is the exponential integral function. \item Compute a bound on the tail end of the series, which is $$2 * e^{-2 \pi (k+1) / \sqrt{N}} / (1 - e^{-2 \ pi/\sqrt{N}}).$$ For a proof see [Grigorov-Jorza-Patrascu-Patrikis-Stein]. This is exactly the same as the bound for the approximation to $L(E,1)$ produced by \code{E.lseries().at1}. \end{enumerate} where N is the conductor of E, and E_1 is the exponential integral function. - Compute a bound on the tail end of the series, which is .. MATH:: 2 e^{-2 \pi (k+1) / \sqrt{N}} / (1 - e^{-2 \pi/\sqrt{N}}). For a proof see [Grigorov-Jorza-Patrascu-Patrikis-Stein]. This is exactly the same as the bound for the approximation to L(E,1) produced by :meth:at1. 
EXAMPLES:: sage: E = EllipticCurve('37a') sage: E.lseries().deriv_at1() (0.305986660898516, 0.000800351433106958) (0.3059866, 0.000801045) sage: E.lseries().deriv_at1(100) (0.305999773834052, 1.52437502288740e-45) (0.3059997738340523018204836833216764744526377745903, 1.52493e-45) sage: E.lseries().deriv_at1(1000) (0.305999773834052, 0.000000000000000) (0.305999773834052301820483683321676474452637774590771998..., 2.75031e-449) With less numerical precision, the error is bounded by numerical accuracy:: sage: L,err = E.lseries().deriv_at1(100, prec=64) sage: L,err (0.305999773834052302, 5.55318e-18) sage: parent(L) Real Field with 64 bits of precision sage: parent(err) Real Field with 24 bits of precision and rounding RNDU Rank 2 and rank 3 elliptic curves:: sage: E = EllipticCurve('389a1') sage: E.lseries().deriv_at1() (0.0000000, 0.000000) sage: E = EllipticCurve((1, 0, 1, -131, 558)) # curve 59450i1 sage: E.lseries().deriv_at1() (-0.00010911444, 0.142428) sage: E.lseries().deriv_at1(4000) (6.9902290...e-50, 1.31318e-43) """ if self.__E.root_number() == 1: return 0 k = int(k) sqrtN = float(self.__E.conductor().sqrt()) if k == 0: k = int(ceil(sqrtN)) an = self.__E.anlist(k) # list of Sage Integers # Compute z = e^(-2pi/sqrt(N)) pi = 3.14159265358979323846 sqrtN = sqrt(self.__E.conductor()) if k: k = int(k) else: k = int(ceil(sqrtN)) if prec: prec = int(prec) else: # Estimate number of bits for the computation, based on error # estimate below (the denominator of that error is close enough # to 1 that we can ignore it). # 9.065 = 2*Pi/log(2) # 1.443 = 1/log(2) # 12 is an arbitrary extra number of bits (it is chosen # such that the precision is 24 bits when the conductor # equals 11 and k is the default value 4) prec = int(9.065*k/sqrtN + 1.443*log(k)) + 12 R = RealField(prec) # Compute error term with bounded precision of 24 bits and # round towards +infinity Rerror = RealField(24, rnd='RNDU') if self.__E.root_number() == 1: # Order of vanishing at 1 of L(E) is even and assumed to be # positive, so L'(E,1) = 0. return (R.zero(), Rerror.zero()) an = self.__E.anlist(k) # list of Sage Integers pi = R.pi() sqrtN = R(self.__E.conductor()).sqrt() v = exp_integral.exponential_integral_1(2*pi/sqrtN, k) L = 2*float(sum([ (v[n-1] * an[n])/n for n in xrange(1,k+1)])) error = 2*exp(-2*pi*(k+1)/sqrtN)/(1-exp(-2*pi/sqrtN)) return R(L), R(error) # Compute series sum and accumulate floating point errors L = R.zero() error = Rerror.zero() # Sum of |an[n]|/n sumann = Rerror.zero() for n in xrange(1,k+1): term = (v[n-1] * an[n])/n L += term error += term.epsilon(Rerror)*5 + L.ulp(Rerror) sumann += Rerror(an[n].abs())/n L *= 2 # Add error term for exponential_integral_1() errors. # Absolute error for 2*v[i] is 4*max(1, v[0])*2^-prec if v[0] > 1.0: sumann *= Rerror(v[0]) error += (sumann >> (prec - 2)) # Add series error (we use (-2)/(z-1) instead of 2/(1-z) # because this causes 1/(1-z) to be rounded up) z = (-2*pi/sqrtN).exp() zpow = ((-2*(k+1))*pi/sqrtN).exp() error += ((-2)*Rerror(zpow)) / Rerror(z - 1) return (L, error) def __call__(self, s): r""" Returns the value of the L-series of the elliptic curve E at s, where s must be a real number. Use self.extended for s complex. .. NOTE:: \note{If the conductor of the curve is large, say $>10^{12}$, then this function will take a very long time, since it uses an $O(\sqrt{N})$ algorithm.} If the conductor of the curve is large, say >10^{12}, then this function will take a very long time, since it uses an O(\sqrt{N}) algorithm. 
EXAMPLES: EXAMPLES:: sage: E = EllipticCurve([1,2,3,4,5]) sage: L = E.lseries() sage: L(1) """ return self.dokchitser()(s) #def extended(self, s, prec): # r""" # Returns the value of the L-series of the elliptic curve E at s # can be any complex number using prec terms of the power series # expansion. # # # WARNING: This may be slow. Consider using \code{dokchitser()} # instead. # # INPUT: # s -- complex number # prec -- integer # # EXAMPLES: # sage: E = EllipticCurve('389a') # sage: E.lseries().extended(1 + I, 50) # -0.638409959099589 + 0.715495262192901*I # sage: E.lseries().extended(1 + 0.1*I, 50) # -0.00761216538818315 + 0.000434885704670107*I # # NOTE: You might also want to use Tim Dokchitser's # L-function calculator, which is available by typing # L = E.lseries().dokchitser(), then evaluating L. It # gives the same information but is sometimes much faster. # # """ # try: # s = C(s) # except TypeError: # raise TypeError, "Input argument %s must be coercible to a complex number"%s # prec = int(prec) # if abs(s.imag()) < R(0.0000000000001): # return self(s.real()) # N = self.__E.conductor() # from sage.symbolic.constants import pi # pi = R(pi) # Gamma = transcendental.gamma # Gamma_inc = transcendental.gamma_inc # a = self.__E.anlist(prec) # eps = self.__E.root_number() # sqrtN = float(N.sqrt()) # def F(n, t): # return Gamma_inc(t+1, 2*pi*n/sqrtN) * C(sqrtN/(2*pi*n))**(t+1) # return C(N)**(-s/2) * C(2*pi)**s * Gamma(s)**(-1)\ # * sum([a[n]*(F(n,s-1) + eps*F(n,1-s)) for n in xrange(1,prec+1)]) def L1_vanishes(self): """ Returns whether or not L(E,1) = 0. The result is provably Returns whether or not L(E,1) = 0. The result is provably correct if the Manin constant of the associated optimal quotient is <= 2. This hypothesis on the Manin constant is true for all curves of conductor <= 40000 (by Cremona) and all semistable curves (i.e., squarefree conductor). EXAMPLES: ALGORITHM: see :meth:L_ratio. EXAMPLES:: sage: E = EllipticCurve([0, -1, 1, -10, -20]) # 11A = X_0(11) sage: E.lseries().L1_vanishes() False sage: E.lseries().L1_vanishes() False WARNING: It's conceivable that machine floats are not large enough precision for the computation; if this could be the case a RuntimeError is raised. The curve's real period would have to be very small for this to occur. ALGORITHM: Compute the root number. If it is -1 then L(E,s) vanishes to odd order at 1, hence vanishes. If it is +1, use a result about modular symbols and Mazur's "Rational Isogenies" paper to determine a provably correct bound (assuming Manin constant is <= 2) so that we can determine whether L(E,1) = 0. AUTHOR: William Stein, 2005-04-20. """ return self.L_ratio() == 0 def L_ratio(self): r""" Returns the ratio $L(E,1)/\Omega$ as an exact rational Returns the ratio L(E,1)/\Omega as an exact rational number. The result is \emph{provably} correct if the Manin constant of the associated optimal quotient is $\leq 2$. This constant of the associated optimal quotient is \leq 2. This hypothesis on the Manin constant is true for all semistable curves (i.e., squarefree conductor), by a theorem of Mazur from his \emph{Rational Isogenies of Prime Degree} paper. 
EXAMPLES: EXAMPLES:: sage: E = EllipticCurve([0, -1, 1, -10, -20]) # 11A = X_0(11) sage: E.lseries().L_ratio() 1/5 sage: E.lseries().L_ratio() 2 # See trac #3651: See :trac:3651 and :trac:15299:: sage: EllipticCurve([0,0,0,-193^2,0]).sha().an() 4 WARNING: It's conceivable that machine floats are not large enough precision for the computation; if this could be the case a RuntimeError is raised. The curve's real period would have to be very small for this to occur. sage: EllipticCurve([1, 0, 1, -131, 558]).sha().an() # long time 1.00000000000000 ALGORITHM: Compute the root number. If it is -1 then L(E,s) vanishes to odd order at 1, hence vanishes. If it is +1, use self.__lratio = self.__E.minimal_model().lseries().L_ratio() return self.__lratio QQ = RationalField() if self.__E.root_number() == -1: self.__lratio = Q(0) self.__lratio = QQ.zero() return self.__lratio # Even root number. Decide if L(E,1) = 0. If E is a modular d = self.__E._multiple_of_degree_of_isogeny_to_optimal_curve() C = 8*d*t eps = omega / C # coercion of 10**(-15) to our real field is needed to # make unambiguous comparison if eps < R(10**(-15)): # liberal bound on precision of float raise RuntimeError, "Insufficient machine precision (=%s) for computation."%eps sqrtN = 2*int(sqrt(self.__E.conductor())) k = sqrtN + 10 while True: L1, error_bound = self.at1(k) if error_bound < eps: n = int(round(L1*C/omega)) quo = Q(n) / Q(C) quo = QQ((n,C)) self.__lratio = quo / self.__E.real_components() return self.__lratio k += sqrtN
• ## sage/schemes/elliptic_curves/sha_tate.py
diff --git a/sage/schemes/elliptic_curves/sha_tate.py b/sage/schemes/elliptic_curves/sha_tate.py
This hunk marks the sha.an_numerical() precision sweep from :trac:1115 as # long time (3s on sage.math, 2013).
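To make the at1() series concrete, here is a minimal sketch of the same computation in plain Python using mpmath in place of Sage's RealField. This is an illustration, not the patch's code: anlist is a hypothetical stand-in for Sage's E.anlist(k) (returning [0, a_1, ..., a_k]), and the patch's rigorous epsilon/ulp rounding-error accounting is reduced here to the series tail bound alone.

from mpmath import mp, mpf, exp, pi, sqrt, ceil

def lseries_at1(anlist, N, k=None, prec=53):
    # Approximate L(E,1) = 2 * sum_{n=1}^{k} (a_n / n) * exp(-2*pi*n/sqrt(N)),
    # where N is the conductor; return (value, tail_bound).
    mp.prec = prec                    # working precision in bits
    sqrtN = sqrt(mpf(N))
    if k is None:
        k = int(ceil(sqrtN))          # default: about sqrt(N) terms
    a = anlist(k)                     # a[0] unused, a[n] = n-th coefficient
    z = exp(-2 * pi / sqrtN)          # ratio between consecutive damping factors
    zpow, L = z, mpf(0)
    for n in range(1, k + 1):
        L += zpow * a[n] / n
        zpow *= z                     # zpow == z**(n+1) after this line
    # Tail bound: 2*e^(-2*pi*(k+1)/sqrt(N)) / (1 - e^(-2*pi/sqrt(N)))
    tail = 2 * zpow / (1 - z)
    return 2 * L, tail

With a coefficient source for the curve 11a and N = 11, this should reproduce the leading digits 0.2538... shown in the updated doctest above, though without the patch's provable rounding guarantees.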
|
{}
|
# How does the empty set work in arithmetic of surreal numbers?
I'm working my way through Surreal Numbers by Knuth, and am finding myself a little hung up on the explanation of how addition works. The rule for addition is given as:
$$x + y = ((X_L+y)\cup(Y_L+x), (X_R+y)\cup(Y_R+x))$$
On the next page, one of the protagonists works out $1+1$ to be:
$$(\{0+1,0+1\},\emptyset)$$ and follows the reasoning on through to prove that $1+1=2$. Where I'm getting a bit lost is that it seems there should be some additions involving $\emptyset$ that are omitted without explanation. If $1$ is defined as $(0,\emptyset)$, then a true following of the addition rule seems like it should be:
$$1+1=((0+1)\cup(0+1), (\emptyset+1)\cup(\emptyset+1))$$ $$1+1=(1,1)$$
All of this is obviously wrong, but it's not clear to me why $\emptyset+1$ isn't $1$. I guess the most basic version of my question is: why is $0$ distinct from $\emptyset$ in surreal numbers, and what is the nature of that distinction?
• $\varnothing$, here, is not used to denote a surreal number; it is used to denote a set of surreal numbers.
– user14972
Jan 8 '17 at 2:02
• $0$ in the surreal numbers is $(\emptyset,\emptyset)$. Jan 8 '17 at 6:25
A surreal number is, in particular, an ordered pair of sets of surreal numbers, so $\emptyset$ isn't a surreal number at all. In the formula for addition, the expressions like $X_R + y$ do not represent one value but rather the set of all numbers of the form $z + y$, where $z \in X_R$. But the definition $1 = (\{0\}, \emptyset)$ implies that $1$ has no right options at all. So if $x=1$, then $X_R$ is empty, and there aren't any such numbers. Similar reasoning applies to $Y_R + x$. The conclusion is that $1+1$ has no right options whatsoever.
• You contradict yourself a bit: You're correct that $X_R+y$ does mean the set of all such sums of the form $x_R+y$ for $x_R\in X_R$. But then in the case where $X_R=\emptyset$, we have $\emptyset+y$ is meaningful (it represents a set of surreals), and it means $\emptyset$. Jan 8 '17 at 6:24
• @MarkS Actually, come to think of it, if it works the same as in ONAG, then $X_R$ is not a set, but a variable that ranges over all right options of $x$. But perhaps Knuth's book (which I haven't read) uses the notation differently. Jan 8 '17 at 6:39
• In notation like $\left\{G^L+H,G+H^L\mid\cdots\right\}$ (a common shorthand found in ONAG and Siegel) then of course $G^L$ is ranging over the left options so that we retain the idea of listing out the options of the sum. But Pat's question explicitly uses $\cup$ in the definition of the sum of two surreals, so we are forced to interpret $X_R+y$ as a set since it has to be something that we can take a union with. Jan 8 '17 at 6:45
$1$ is not really defined as $(0,\emptyset)$, but as $(\{0\},\emptyset)$. Then $$1+1=\left(\{0+1\}\cup\{0+1\},(\emptyset+1)\cup(\emptyset+1)\right)$$ However, $\emptyset+1$ is empty since there are no surreals in $\emptyset$ to add to $1$ ($\emptyset+1$ is the set of $x+1$ for all $x\in\emptyset$), so $1+1$ simplifies to $$\left(\{0+1\}\cup\{0+1\},\emptyset\cup\emptyset\right)=\left(\{0+1\},\emptyset\right)\text{.}$$
If you know that $0+1$ is $1$, we have $1+1=(\{1\},\emptyset)$.
To answer your other question, $0$ is distinct from $\emptyset$ in the surreals since every surreal is an ordered pair of sets. $0$ is defined to be the ordered pair $(\emptyset,\emptyset)$.
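Putting the pieces together (a recap of the answers above): since $0+1=1$ and, by definition, $0=(\emptyset,\emptyset)$, $1=(\{0\},\emptyset)$ and $2=(\{1\},\emptyset)$, the computation reads $1+1=(\{0+1\},\emptyset)=(\{1\},\emptyset)=2$.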
|
{}
|
Stumbled Upon
Average Salaries in the Chaebols
Here is a chart from the Korea Herald showing the average salary of the workers in the chaebols [l] in millions of won per year:
Note: 1 million won is about US$850 or CA$1,100.
09.08.15
Mapping Tree Density at a Global Scale
An article entitled “Mapping Tree Density at a Global Scale” appeared in the journal “Nature” this month discussing the tree density worldwide [l]. One figure showing tree density is striking:
In the latter, the white dots represent areas with year-round ice cover, the brown dots represent areas with no trees, while the dark green dots represent areas with the highest tree concentration. The proportion of land composed of areas with little or no trees is alarmingly large: about half of the world's forests have been destroyed or irremediably damaged since the advent of agriculture 10000 years ago. But what is more worrisome is that half of this destruction took place in the last 50 years. Still today, it seems we haven't learned our lesson: we continue to cut down more trees than we plant, as we have been doing for generations. If we do not change our living habits soon and start caring for our environment, the entire planet will be treeless within a couple of centuries.
Map of Flights in Real Time
Here is an interesting website for the aerospace aficionados: flightradar24.com [l] provides real time display of flights on a map everywhere around the world.
09.25.15
The website worldbusinessculture.com [l] gives information on how local culture can affect meetings, communication styles, management styles, etc. This can be useful in understanding foreign cultures obviously but also in not making a faux pas when visiting foreign firms/universities or when collaborating with them. The information seems spot on, at least regarding the countries I have experienced firsthand (Japan, South Korea, Canada, U.S.).
10.26.15
Steve Jobs Commencement Speech: Linking the Dots
Yesterday was graduation day at Pusan National University, and this reminded me of an inspiring commencement speech I had seen on YouTube a while back. One part that I particularly liked is the one about "linking the dots", with the dots referring to ideas or concepts. Many ideas we get, many papers we publish as researchers, may seem useless at first. But often, the breakthroughs that are useful to society come from linking together these seemingly useless concepts. Sadly, linking the dots is becoming rarer these days in university environments. Less and less focus is now given to long-term basic research (i.e. the seemingly "useless" research) and more and more emphasis is given to short-term research with immediate applications. Ironically, putting emphasis on research that is immediately useful prevents more useful breakthroughs from occurring.
02.27.16
https://www.weforum.org/agenda/2015/...ancial-centre/
http://www.longfinance.net/images/GFCI18_23Sep2015.pdf
10.24.16
Inequality in Hiring Process
A recent study shows that graduates from the creme de la creme universities have a much easier time finding a job in academia than those from slightly lower-ranked institutions: http://doi.org/10.1126/sciadv.1400005 The authors argue that this cannot be due simply to the students at the top universities being more talented and/or more productive. Rather, they demonstrate that the hiring process in universities is not meritocratic but largely based on social class: the "class" to which we belong (as determined by our present and past affiliations), rather than our accomplishments, is often the dominant factor that leads to job offers in academia.
03.11.17
“Blind hiring” causes stir among Korea’s top ranks
A month has passed since the government announced a plan to implement a so-called blind hiring system that removes job applicants’ personal information, including the name of the university they attended, their place of birth and other credentials unrelated to the job from their applications, but groups who believe they’ll be losers in the process, such as graduates of the country’s prestigious colleges, are crying foul over the plan... Read more here: http://koreajoongangdaily.joins.com/new ... id=3036954
08.12.17
The Lewis Model — Cultural Differences Worldwide
The Lewis model was proposed by Richard Lewis in “When Cultures Collide” (1996) as a means to clarify cultural differences between countries. The Lewis model identifies three dominant cultural traits: reactive, linear-active, and multi-active:
Linear-active | Multi-active | Reactive
Talks half the time | Talks most of the time | Listens most of the time
Does one thing at a time | Does several things at once | Reacts to partner's action
Plans ahead step by step | Plans grand outline only | Looks at general principles
Polite but direct | Emotional | Polite, indirect
Partly conceals feelings | Displays feelings | Conceals feelings
Confronts with logic | Confronts emotionally | Never confronts
Dislikes losing face | Has good excuses | Must not lose face
Rarely interrupts | Often interrupts | Doesn't interrupt
Job-oriented | People-oriented | Very people-oriented
Sticks to facts | Feelings before facts | Statements over promises
Truth before diplomacy | Flexible truth | Diplomacy over truth
Sometimes impatient | Impatient | Patient
Limited body language | Unlimited body language | Subtle body language
Respects officialdom | Seeks out key person | Uses connection
Separates the social and professional | Mixes the social and professional | Connects the social and professional
The culture of each country is then represented as a mix of the latter three traits:
The latter reflects the dominant culture within each country. However, independently of the country, engineers tend to be linear-active, sales people multi-active, and lawyers & medical doctors reactive. Read more here: https://www.crossculture.com/latest-new ... behaviour/
10.16.17
Components of Drag on a Transonic Aircraft
02.17.18
|
{}
|
# Undernetmath’s Weblog
### Solutions_2008
This page contains solutions of former potds. You are explicitly encouraged to post your own solutions to old or new potds as comments to this site. An administrator will incorporate them into this page later on (assuming they are correct and stated in a reasonably accessible manner). Please note that in the case of essentially identical solutions only one will be incorporated into the page (usually the first one).
2008-11-19 by kmh
Compute $\displaystyle{\lim_{x\rightarrow 0} \frac{\int_0^x \ln(1+t^2)\,dt}{x^3}}$
Solution by bor0
$v = t$
$dv = dt$
$\ln(1 + t^2) = u$
$\dfrac{2t}{t^2+1}dt = du$
Use integration by parts.
$\int u dv = uv - \int v du$
$\int \ln(t^2 + 1) dt = t\ln(t^2 + 1) - 2\int\dfrac{t^2}{t^2+1} dt$
$\int\dfrac{t^2}{t^2+1} dt = \int\dfrac{t^2+1-1}{t^2+1} dt = \int\dfrac{t^2+1}{t^2+1} dt - \int\dfrac{dt}{t^2+1}$
$\int\dfrac{t^2+1}{t^2+1} dt - \int\dfrac{dt}{t^2+1} = t - \arctan(t)$
$\int \ln(t^2 + 1)\, dt = t\ln(t^2 + 1) - 2t + 2\arctan(t)$
Evaluating from $0$ to $x$: $\int_0^x \ln(t^2 + 1)\, dt = x(\ln(x^2+1) - 2) + 2\arctan(x)$
$\lim_{x\to0} \dfrac{x(\ln(x^2+1) - 2) + 2\arctan(x)}{x^3}$ (Indeterminate form 0/0, apply l’Hospital)
$\lim_{x\to0} \dfrac{\ln(x^2+1)}{3x^2}$ (Indeterminate form 0/0, apply l’Hospital)
$\lim_{x\to0} \dfrac{\dfrac{2x}{x^2+1}}{6x} = \lim_{x\to0} \dfrac{1}{3x^2+3} = \dfrac{1}{3}$
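As a sanity check (an editorial addition, not part of bor0's post), the limit can also be confirmed symbolically with SymPy:

```python
import sympy as sp

x, t = sp.symbols('x t')
# Numerator: the integral of ln(1 + t^2) from 0 to x
numerator = sp.integrate(sp.log(1 + t**2), (t, 0, x))
print(sp.limit(numerator / x**3, x, 0))  # prints 1/3
```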
2008-10-30 by kmh
Show that if $100m+n$ is divisible by $7$, then $m+4n$ is divisible by $7$ as well.
Solution by Magnus_RM and rafno
Since $100 \equiv 2 \pmod 7$, we have $100m+n \equiv 2m+n \pmod 7$. Hence:
$(100m+n) \equiv 0 \pmod 7$
$\Leftrightarrow (2m+n) \equiv 0 \pmod 7$
$\Leftrightarrow 4(2m+n) \equiv 0 \pmod 7$ (multiplying by 4 is invertible mod 7, since $\gcd(4,7)=1$)
$\Leftrightarrow 8m+4n \equiv 0 \pmod 7$
$\Leftrightarrow m+4n \equiv 0 \pmod 7$
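A brute-force confirmation (an addition, not part of the posted solution): since divisibility by 7 depends only on the residues of $m$ and $n$ modulo 7, checking all 49 residue pairs covers every case.

```python
# Exhaustive check over residues mod 7.
assert all((m + 4 * n) % 7 == 0
           for m in range(7) for n in range(7)
           if (100 * m + n) % 7 == 0)
print("verified for all residues mod 7")
```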
2008-7-17 by kmh
Show that for every power of 3 the second-to-last digit is even. For instance we have $3^6=729$ and 2 is even.
(original source: de.rec.denksport)
Solution by rafno
We prove the claim by induction on $k$.

For $k=1$ the second-to-last digit of $3^1=3=03$ is 0, which is even.

Suppose the claim is true for $k-1$, so the second-to-last digit of $3^{k-1}$ is even, i.e. $3^{k-1}= \ldots+2n\cdot 10+a$ written in base 10. Then $3^k=3\cdot 3^{k-1} = \ldots +(2\cdot(3n)+b)\cdot 10+c$, where $3a=b\cdot 10+c$. So we must verify that $b$ is even.

But the last digit $a$ of any power of 3 is 1, 3, 9 or 7, and 3 times any of these gives 3, 9, 27 or 21, each with an even tens digit. So $b=2r$ and then $3^k=\ldots+2(3n+r)\cdot 10+c$.

Therefore every power of 3 has an even second-to-last digit.
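A quick empirical check of the claim (an editorial addition, not part of rafno's proof):

```python
# The tens digit of 3^k should be even for every k >= 1.
assert all(((3 ** k // 10) % 10) % 2 == 0 for k in range(1, 500))
print("tens digit of 3^k is even for k = 1..499")
```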
2008-6-10 by CaosTheory
Compute $\displaystyle{ \int_0^\infty \frac{\sin(x)^2}{x^2} dx}$.
$\displaystyle{\int_0^\infty\frac{\sin(x)^2}{x^2}dx =\lim_{a\rightarrow 0} \lim_{b\rightarrow\infty} \int_a^b\frac{\sin(x)^2}{x^2}dx}$

$\displaystyle{\stackrel{(1)}{=}\lim_{a\rightarrow 0} \lim_{b\rightarrow\infty} \left[-\frac{\sin(x)^2}{x}\right]_a^b+\int_a^b\frac{2\sin(x)\cos(x)}{x} dx}$ (integration by parts)

$\displaystyle{\stackrel{(2)}{=}\lim_{a\rightarrow 0} \lim_{b\rightarrow\infty} \left[-\frac{\sin(x)^2}{x}\right]_a^b+\int_a^b\frac{\sin(2x)}{x} dx}$ (double-angle identity)

$\displaystyle{\stackrel{(3)}{=}\lim_{a\rightarrow 0} \lim_{b\rightarrow\infty} \left[-\frac{\sin(x)^2}{x}\right]_a^b+\int_{2a}^{2b}\frac{\sin(z)}{z} dz}$ (substituting $z=2x$)

$\displaystyle{=\lim_{a\rightarrow 0} \underbrace{\frac{\sin(a)}{a}}_{\rightarrow 1}\cdot\sin(a)-0+\int_{2a}^\infty\frac{\sin(z)}{z} dz}$

$\displaystyle{=0-0+\int_0^\infty\frac{\sin(z)}{z} dz=\frac{\pi}{2}}$
2008-5-30 by kmh
Determine the size of the dark area.
Solution by pisagor
2008-5-19 by kmh
Show that $\frac{\binom{2n}{n}}{n+1} \in \mathbb{N}$ for all $n\in\mathbb{N}$.
(Original source: The Red Book of Mathematical Problems)
First note that $\frac{1}{n+1}=1-\frac{n}{n+1}$ and $\binom{2n}{n}=\frac{(2n)!}{n!n!}$
Now we have
$\frac{\binom{2n}{n}}{n+1} = \frac{(2n)!}{n!n!}\cdot\left(1-\frac{n}{n+1}\right)=\frac{(2n)!}{n!n!}-\frac{(2n)!}{(n-1)!(n+1)!} = \binom{2n}{n}-\binom{2n}{n-1}$, and since $\binom{2n}{n}\geq\binom{2n}{n-1}$ (the central binomial coefficient is the largest in its row), this difference of integers is non-negative, hence $\in \mathbb{N}$.
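For the skeptical reader, a short computational check (an addition, not part of the original solution) using Python's standard library:

```python
from math import comb

# C(2n, n)/(n + 1) (the n-th Catalan number) should be an integer.
assert all(comb(2 * n, n) % (n + 1) == 0 for n in range(1, 1000))
print("binom(2n, n) is divisible by n + 1 for n = 1..999")
```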
2008-4-29 by kmh
Find all values of $\sin(x^8-x^6-x^4+x^2)$ for $x\in\mathbb{N}$ and $x^8-x^6-x^4+x^2$ denoting an angle in degrees.
Because sin(u) = sin(u mod 360)
for u expressed in degrees, this problem is suitable for brute force.
For:
f(x) = x^8 - x^6 - x^4 + x^2
and x natural, one may easily see that
f(x) mod 360 = f(x mod 360) mod 360
and, subsequently
sin(f(x)) = sin(f(x) mod 360) = sin(f(x mod 360) mod 360)
Thus it suffices to find by means of computer/code the set
{f(x mod 360) mod 360 | x in N} =
= {f(x) mod 360 | x in {0, 1,…, 359}} =
= {0, 180}
which gives:
{sin(f(x)) | x in N} = {0}
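Here is one possible version of the computer search mentioned above (a sketch, not the original poster's code; the function name `f` matches the text):

```python
from math import sin, radians

def f(x):
    return x**8 - x**6 - x**4 + x**2

# Since f(x) mod 360 = f(x mod 360) mod 360, one period of x suffices.
residues = {f(x) % 360 for x in range(360)}
print(residues)  # -> {0, 180}

# sin of 0 and 180 degrees is 0 (up to floating-point rounding)
print({round(sin(radians(r)), 12) for r in residues})  # -> {0.0}
```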
2008-4-29 by kmh
Compute $\displaystyle \lim_{n\rightarrow\infty} \sum_{k=n}^{2n} \frac{1}{k}$
( original source: Internet Math Olympiad Israel 2008 )
For f(x) = 1/x and each x in [k, k+1] and k>1 one has:
$f(x) \leq f(k) \leq f(x-1)$
Subsequently:
$\displaystyle \int_k^{k+1} f(x)dx \leq \int_k^{k+1} f(k)dx$
$\ln(k+1) - \ln(k) \leq (k+1 - k)/k$
$\ln(k+1) - \ln(k) \leq 1/k$
Since the left sum is telescoping we have:
$\displaystyle \sum_{k=n}^{2n} (\ln(k+1) - \ln(k)) \leq \sum_{k=n}^{2n} 1/k$
$\displaystyle \ln((2n+1)/n) \leq \sum_{k=n}^{2n} 1/k$
So with $n\rightarrow\infty$ and an analogous argument for $f(k-1)$ (which gives the matching upper bound) you'll get:
$\displaystyle \ln(2) \leq \lim_{n\rightarrow\infty} \sum_{k=n}^{2n} 1/k \leq \ln(2),$
so the limit equals $\ln 2$.
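A quick numerical illustration (an editorial addition) that the tail sums indeed approach $\ln 2$:

```python
from math import log

# Tail sums 1/n + 1/(n+1) + ... + 1/(2n) versus ln 2
for n in (10, 100, 10_000):
    print(n, sum(1 / k for k in range(n, 2 * n + 1)), log(2))
```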
2008-3-31 by Karlo
Construct the foci of arbitrary ellipse given as a graph using ruler and compass only.
Solution by kmh
Construct 2 non-collinear pairs of parallel chords. For each pair of parallel chords construct the midpoints of the chords and the line through the 2 midpoints. Due to the conjugate-diameter property of the ellipse those 2 lines will intersect in the center C. Now draw a circle around C such that it has 4 intersection points with the ellipse. Those 4 points form a rectangle; the perpendicular bisectors of its sides are the major and minor axes of the ellipse. The major axis intersects the ellipse in Q and the minor axis in R. Draw a circle with radius |CQ| around R; this circle will intersect the major axis in the foci.
2008-2-5 by karlo
Three friends are taking me out for my birthday. The product of their ages is 2450. The sum of their ages is my cousin’s age. I could tell you my cousin’s age, but to find the ages of my friends, you’d also need to know that each of the three is younger than I am. How old am I?
Solution by …
Let $a_1$, $a_2$ and $a_3$ be the ages of the 3 friends. "The product of their ages is 2450" gives $a_1 a_2 a_3 = 2450 = 2 \cdot 5^2 \cdot 7^2$. It is fairly simple (by hand, or via a computer; see the sketch after this solution) to find all distinct triplets that satisfy the product rule. (Note: throughout this solution, I call $(t_1, t_2, t_3)$ and $(q_1, q_2, q_3)$ distinct triplets if and only if $\{t_1, t_2, t_3\} \neq \{q_1, q_2, q_3\}$. This means that, for example, $(t_1, t_2, t_3)$ and $(t_2, t_3, t_1)$ are not distinct triplets.)

"I could tell you my cousin's age, but to find the ages of my friends, you'd also..." means that knowing the sum $a_1+a_2+a_3$ does not suffice to determine $a_1, a_2, a_3$. Picture the triplets determined above; each triplet generates a sum of its elements. Assume one of these triplets, Triplet X, generates Sum X and that no other triplet generates Sum X. If the sum of the ages (the cousin's age) were equal to Sum X, we'd know for sure that the friends' age triplet is Triplet X, since it is the only one satisfying the condition friends_ages_sum = cousins_age. Since the sum must not be sufficient, the sum, whichever it is, must satisfy the following condition: at least two distinct triplets must exist whose elements sum to the cousin's age.

Go back to the distinct triplets determined at the first step and compute the sum of each. If a sum appears only once, the triplet that generated it is invalid. Luckily, there is exactly one sum that appears at least twice, namely 64, generated by the distinct triplets $(7, 7, 50)$ and $(5, 10, 49)$. Therefore the friends' age triplet $(a_1, a_2, a_3)$ is one of these two.

The final condition holds the key: "you'd also need to know that each of the three is younger than I am." This means that knowing that all 3 friends are younger than Karlo must invalidate exactly one of the two triplets. If Karlo is older than 50 ($>50$), both triplets are valid. If Karlo is 49 or younger ($\leq 49$), both triplets are invalid. One possibility remains: Karlo is 50, which makes the first triplet $(7, 7, 50)$ invalid and the second triplet $(5, 10, 49)$ valid.

To conclude: the friends are aged 5, 10 and 49; the cousin is 64; and Karlo is 50. Happy birthday to Karlo!
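Here is one way to carry out the computer search mentioned in the solution (a sketch in Python; the variable names are my own):

```python
from collections import defaultdict

# Group all ordered triplets a <= b <= c with a*b*c = 2450 by their sum.
by_sum = defaultdict(list)
for a in range(1, 2451):
    if 2450 % a:
        continue
    for b in range(a, 2451):
        if (2450 // a) % b:
            continue
        c = 2450 // (a * b)
        if c >= b:
            by_sum[a + b + c].append((a, b, c))

# Keep only the ambiguous sums (at least two triplets with the same sum).
print({s: ts for s, ts in by_sum.items() if len(ts) > 1})
# -> {64: [(5, 10, 49), (7, 7, 50)]}
```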
2008-1-17 by yfk
Let $x_1, x_2,\ldots, x_n \in \{0,1\}$ and $\overline{x}= \frac{1}{n} \displaystyle \sum_{i=1}^n x_i$. Show that $\frac{1}{n} \displaystyle \sum_{i=1}^n (x_i-\overline{x})^2 \leq\frac{1}{4}$.
Solution by kmh
First note that due to $x_i \in \{0,1\}$ we have $x_i=x_i^2$ for $i=1,\ldots,n$. This yields

$\frac{1}{n} \displaystyle \sum_{i=1}^n (x_i-\overline{x})^2=\frac{1}{n} \displaystyle \sum_{i=1}^n (x_i^2 -2x_i\overline{x}+\overline{x}^2)=\frac{1}{n} \displaystyle \sum_{i=1}^n x_i -2\overline{x}\frac{1}{n} \displaystyle \sum_{i=1}^n x_i+\frac{1}{n} \displaystyle \sum_{i=1}^n \overline{x}^2 =\overline{x}-2\overline{x}^2+\overline{x}^2=\overline{x}-\overline{x}^2\stackrel{(\star)}{\leq}\frac{1}{4}$

$(\star)$: If we consider the last expression as a function of $\overline{x}$, then we have an upside-down parabola with a maximum of $\frac{1}{4}$. This can be determined by completing the square or finding the root of the derivative. Completing the square:

$\overline{x}-\overline{x}^2=-(\overline{x}^2-\overline{x})=-((\overline{x}-\frac{1}{2})^2-\frac{1}{4} )=-(\overline{x}-\frac{1}{2})^2+\frac{1}{4}$
2008-1-8 by lhrrwcc
Let $A$ be a noetherian ring. Prove that if $f:A\rightarrow A$ is a surjective ring homomorphism then $f$ is bijective.
Solution by lhrrwcc
$A$ being noetherian means that every increasing chain of ideals $I_1 \subset I_2 \subset I_3 \subset \ldots$ eventually becomes stationary, i.e. $I_k = I_{k+1} = \ldots$ for some $k$. In order to prove that the surjection $f$ is an injection we can prove that the kernel is trivial. The chain $\operatorname{Ker} f \subset \operatorname{Ker} f^2 \subset \operatorname{Ker} f^3 \subset \ldots$ is increasing, so since $A$ is noetherian there is an $n$ with $\operatorname{Ker} f^n = \operatorname{Ker} f^{n+1}$. Now let $m \in \operatorname{Ker} f$. Because $f$ is surjective, so is $f^n$, hence there is an element $s$ with $f^n(s) = m$. Then $f^{n+1}(s) = f(m) = 0$, so $s \in \operatorname{Ker} f^{n+1} = \operatorname{Ker} f^n$, which gives $m = f^n(s) = 0$. Therefore the kernel is trivial, and $f$ is an injection.
|
{}
|
# Infrared modification of General relativity
General relativity can be derived as the only consistent effective field theory of interacting spin-two particles. If this statement is correct, it makes the study of modified gravity with torsion, like the Sciama–Kibble theory, and also teleparallel gravity on Weitzenböck manifolds, irrelevant and a waste of time. So why do people study these theories?
If somebody is interested in studying those theories and he gets funding for doing his research, then nothing prevents him from such an activity.
Beware of consensus. Actually, students learn at least one theory based on flawed experiments, and it is forbidden to say the contrary at the risk of being banned from grants. More trivially, in the '70s and '80s, students were learning about 'fundamental' limitations on data bandwidth while engineers were joking about them. Add to these considerations the intrinsic beauty of maths ...
''the only consistent effective field theory for interacting spin two particles''. - you should give a reliable source for such an unlikely claim!
It is very well known that in four dimensions one can only have a field theory with particles of spin up to 2 that is Lorentz invariant, unitary, etc., such that it has a non-trivial $S$-matrix. This is directly related to the Coleman-Mandula theorem and I disagree that this claim needs justification on PO. As for the actual question.. good question.
@conformal_gk: A non-trivial S-matrix implies some interaction. I am afraid this is the key to understanding what the Coleman-Mandula theorem says. I just want to underline that the "interaction term" may be quite different from what is currently implied, and then other possibilities open.
|
{}
|
Vladikavkaz Mat. Zh., table of contents:

- M. K. Aouf, T. M. Seoudy, Some subordination results for certain class with complex order defined by Salagean type $q$-difference operator, p. 7
- S. N. Askhabov, A convolution type nonlinear integro-differential equation with a variable coefficient and an inhomogeneity in the linear part, p. 16
- P. V. Babich, V. B. Levenshtam, Recovery of rapidly oscillated right-hand side of the wave equation by the partial asymptotics of the solution, p. 28
- M. Kh. Beshtokov, Z. V. Beshtokova, M. Z. Khudalov, Finite-difference method for solving of a nonlocal boundary value problem for a loaded thermal conductivity equation of the fractional order, p. 45
- S. A. Dukhnovskii, Solutions of the Carleman system via the Painlevé expansion, p. 58
- R. Zeghdane, New numerical method for solving nonlinear stochastic integral equations, p. 68
- V. A. Koibaev, On the structure of elementary nets over quadratic fields, p. 87
- Z. A. Kusraeva, S. N. Siukaev, Some properties of orthogonally additive homogeneous polynomials on Banach lattices, p. 92
- S. G. Samko, S. M. Umarkhadzhiev, Grand Morrey type spaces, p. 104
- M. U. Yakhshiboev, On Hadamard and Hadamard-type directional fractional integro-differentiation in weighted Lebesgue spaces with mixed norm, p. 119
|
{}
|
# 6.4: Percent of a Number
### Let’s Think About It
Credit: Jessie Pearl
Source: https://www.flickr.com/photos/terwilliger911/2765544421
License: CC BY-NC 3.0
Taylor wants to buy a new pair of shoes. She goes to the shoe store and discovers a shelf on which all of the shoes are 60% off. She finds a pair of shoes on the shelf that are ticketed at $50. "I wonder how much money I will get back if I pay for the shoes with a $50 bill," Taylor muses. At the checkout register, how much change will Taylor get back?
In this concept, you will learn to find the percent of a number using fraction multiplication.
### Guidance
To work with percents you have to understand how they relate to parts and fractions.
The table below shows the fractional equivalents for common percents.
| Percent | 5% | 10% | 20% | 25% | 30% | 40% | 50% | 60% | 70% | 75% | 80% | 90% |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Fraction | $\frac{1}{20}$ | $\frac{1}{10}$ | $\frac{1}{5}$ | $\frac{1}{4}$ | $\frac{3}{10}$ | $\frac{2}{5}$ | $\frac{1}{2}$ | $\frac{3}{5}$ | $\frac{7}{10}$ | $\frac{3}{4}$ | $\frac{4}{5}$ | $\frac{9}{10}$ |
The word “of” in a percent problem means to multiply. If you know the fractional equivalents for common percents, you can use this information to find the percent of a number by multiplying the fraction by that number. If you want to find a part of a whole using a percent, you use multiplication to solve.
Let’s look at an example.
Find 40% of 45.
First, 40% of 45 means $40\% \times 45$.
Next, look at the chart. The fraction $\frac{2}{5}$ is equivalent to 40%.
Then perform the multiplication, simplifying as you go.
$40\% \times 45 = \frac{2}{5} \times \frac{45}{1} = \frac{90}{5} = 18$
The answer is that 40% of 45 is 18.
Now, let’s look at another example using an alternate way to solve.
What is 18% of 50?
First, to figure this out, you can change the percent to a fraction and then create a proportion.
$\frac{18}{100}=\frac{x}{50}$
Next, you can cross multiply and solve for $x$.
$100x = 900$, so $x = 9$.
The answer is 18% of 50 is 9.
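For readers who like to double-check with a computer, here is a small sketch (not part of the original lesson; the function name `percent_of` is mine) that does the same fraction multiplication exactly, using Python's built-in fractions module:

```python
from fractions import Fraction

def percent_of(p, n):
    """p% of n, computed exactly by fraction multiplication."""
    return Fraction(p, 100) * n

print(percent_of(40, 45))  # 18
print(percent_of(18, 50))  # 9
print(percent_of(85, 20))  # 17
print(percent_of(60, 50))  # 30
```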
### Guided Practice
Change the percent to a fraction in simplest form.
Find 85% of 20.
First, 85% of 20 means $85\% \times 20$.
Next, change the percent to a fraction in simplest form.
$85\% = \frac{85}{100} = \frac{85 \div 5}{100 \div 5} = \frac{17}{20}$
Then perform the multiplication, simplifying as you go.
$85\% \times 20 = \frac{17}{20} \times \frac{20}{1} = \frac{17}{1} = 17$
The answer is that 85% of 20 is 17.
### Examples
#### Example 1
What is 10% of 50?
First, 10% of 50 means $10\% \times 50$.
Next, look at the chart. The fraction $\frac{1}{10}$ is equivalent to 10%.
Then perform the multiplication, simplifying as you go.
$10\% \times 50 = \frac{1}{10} \times \frac{50}{1} = \frac{50}{10} = 5$
The answer is that 10% of 50 is 5.
#### Example 2
What is 25% of 80?
First, 25% of 80 means $25\% \times 80$.
Next, look at the chart. The fraction $\frac{1}{4}$ is equivalent to 25%.
Then perform the multiplication, simplifying as you go.
$25\% \times 80 = \frac{1}{4} \times \frac{80}{1} = \frac{80}{4} = 20$
The answer is that 25% of 80 is 20.
#### Example 3
What is 22% of 100?
First, 22% of 100 means $22\% \times 100$.
Next, change the percent to a fraction in simplest form: $22\% = \frac{22}{100} = \frac{11}{50}$.
Then perform the multiplication, simplifying as you go: $\frac{11}{50} \times \frac{100}{1} = \frac{1100}{50} = 22$.
The answer is that 22% of 100 is 22.
### Follow Up
Credit: Raymond Bryson
Source: https://www.flickr.com/photos/f-oxymoron/9652160760/in/photolist-fGVR7Q-94ZzgS-4hyDo1-5aXq3R-7hdZvJ-819gq-sK254-94DqWT-v3mufT-ADz3U-5RVRfu-gtGGbf-gtGG8u-eX2mEn-5Tpb6M-fLmQgu-r7yrJc-6vPeU2-5Nuxh8-bgaeUR-gwo5rX-gtFT8q-ADzVr-fJfPBp-gtGPGY-fJaXQZ-fJqWEG-fJaWar-dPiUay-fJc8Xx-33WWF8-eW7Jwo-eVVjf8-5Pfi7a-eVVj7i-2Z7koi-61Ziyn-eXdKK5-eX2my4-eXdKFJ-eX2mxc-eXdKSm-eX2mka-eX2meB-eX2mB4-eXdKf7-nGfcuf-rTcE2-6w4apD-nr5WYC
License: CC BY-NC 3.0
Remember Taylor and her shoes? They were ticketed at $50 but there was a 60% off sale on them. Taylor wanted to know how much money she would get back if she paid for the shoes with a $50 bill.
First, 60% of 50 means $60\% \times 50$.
Next, change the percent to a fraction in simplest form: $60\% = \frac{60}{100} = \frac{3}{5}$.
Then perform the multiplication, simplifying as you go: $\frac{3}{5} \times \frac{50}{1} = \frac{150}{5} = 30$.
The answer is that the shoes are marked down by $30, so the sale price is $50 - $30 = $20. If Taylor pays with a $50 bill she will get $30 back.
### Video Review
https://www.youtube.com/watch?v=yl0Rb6T09VM
### Explore More
Use fraction multiplication to find each percent of the number.
1. 10% of 25
2. 20% of 30
3. 25% of 80
4. 30% of 90
5. 75% of 200
6. 8% of 10
7. 10% of 100
8. 19% of 20
9. 15% of 30
10. 12% of 30
11. 15% of 45
12. 25% of 85
13. 45% of 60
14. 50% of 200
15. 55% of 300
### Vocabulary Language: English
improper fraction
An improper fraction is a fraction in which the absolute value of the numerator is greater than the absolute value of the denominator.
|
{}
|
## anonymous one year ago If sine of x equals square root of 2 over 2, what is cos(x) and tan(x)? Explain your steps in complete sentences.
1. anonymous
@jamesr @Xaze @MoonMoonWolf @miszzkeriee @mathway @Michele_Laino @mickey1513
2. anonymous
Ok, we have $$\huge \sin x = \frac{\sqrt{2}}{2}$$ So we know that $$\huge \sin \theta = \frac{y}{r}$$ $$\huge y = \sqrt{2}$$ r = radius $$\huge r = 2$$ $$\huge \cos x = \frac{x}{r}$$ Since we already know what y equals and r, we can use the following formula to find x $$\huge x^2 + y^2 = r^2$$ $$\huge x^2 + \sqrt{2}^2 = 2^2$$ $$\huge x^2 = 2^2 - \sqrt{2}^2$$ $$\huge \sqrt{x^2} = \sqrt{2^2 - \sqrt{2}^2}$$ Can you finish solving for x ?
3. anonymous
Note, we are working on cos right now. We will do tan after cos.
4. anonymous
I tried but couldn't get it.
5. anonymous
@Nixy
6. anonymous
Ok, what did you get for x?
7. anonymous
4
8. anonymous
Ok, one sec
9. anonymous
By the way, using complete sentences how would I explain the key features of the graph of the tangent function?
10. anonymous
$$\huge \sqrt{x^2} = \sqrt{2^2 - \sqrt{2}^2}$$ $$\huge \sqrt{x^2} = x$$ $$\huge x = \sqrt{2^2 - \sqrt{2}^2}$$ $$\huge 2^2 = 4$$ $$\huge \sqrt{2}^2 = \sqrt{4} = 2$$ So now we have $$\huge x = \sqrt{4 - 2}$$ What is the value of x ?
11. anonymous
x = pie2
12. anonymous
Once you know how to solve the problem you should be able to explain.
13. anonymous
$$\huge x = \sqrt{2}$$
14. anonymous
Yeah
15. anonymous
So, $$\huge \cos x = \frac{x}{r}$$ since we know what x is and r, we have cos x = $$\huge \frac{\sqrt{2}}{2}$$
16. anonymous
Since we have found cos, what do you think tan x =? $$\huge \tan x = \frac{y}{x}$$
17. anonymous
tan x would be pie2/4?
18. anonymous
Tan x = $$\huge \frac{\frac{\sqrt{2}}{2}}{\frac{\sqrt{2}}{2}}$$ but you need to divide this
19. anonymous
Can you divide that?
20. anonymous
Wouldn't it equal 1?
21. anonymous
You are correct!!!!
22. anonymous
tan x = 1
23. anonymous
AYYYY
24. anonymous
so for the final answer how would i explain
25. anonymous
Just go over the steps that we went through here
26. anonymous
ok
27. anonymous
thank u
28. anonymous
YW
29. anonymous
but for this how would i explain
30. anonymous
Using complete sentences, explain the key features of the graph of the tangent function.
31. anonymous
Is that a separate question on your homework?
32. anonymous
nah separate
33. anonymous
It is separate ?
34. anonymous
yes
35. anonymous
Ok, you just need to explain the key feature of the tan when it comes to graphing.
36. anonymous
Look in your book. It should tell you step by step. For instance you will have asymptotes at odd multiples of $$\huge \frac{\pi}{2}$$
37. anonymous
Here are the properties of the tan: The domain is the set of all real numbers except odd multiples of $$\huge \frac{\pi}{2}$$ The range is the set of all real numbers. The tangent function is an odd function, as the symmetry of the graph with respect to the origin indicates. The tangent function is periodic with period $\pi$. The x-intercepts are $\ldots, -2\pi, -\pi, 0, \pi, 2\pi, 3\pi, \ldots$ Vertical asymptotes occur at x = odd multiples of $$\huge \frac{\pi}{2}$$
38. anonymous
Got it?
39. anonymous
Yes
40. anonymous
Good job. It is a lot to take in but keep practicing and you will get it.
41. anonymous
Thanks Nixy
|
{}
|
# Smooth approximation to a continuous curve
Let $\gamma: [0,1] \rightarrow M$ be a continuous curve in a smooth manifold $M$. Is there a standard way to approximate $\gamma$ by a smooth curve? My thought was to look at every point $p$ where $\gamma$ is not smooth, consider a coordinate chart $(U, \phi)$ containing $p$ and smoothen $\gamma \cap U$. Can this be made precise?
• There are recent questions asking about smoothing a piecewise linear curve; or piecewise smooth; for these you can make modifications at the finite set of points in which the curve is not smooth. But in general, there's no reason you should be able to make "small, local modifications": what if the curve is big and gross, like a space-filling curve? Your best bet then is probably to do a convolution with a (very small) smooth bump function. This will smooth your curve to a curve $\eta$, and will not be "far" in the sense that you can make $\text{max }d(\gamma(t),\eta(t))$ as small as you want – user98602 Sep 2 '15 at 3:45
• Do you have an example or reference on that? – user265669 Sep 2 '15 at 3:51
• I suggest you look at, say, Bredon's Topology and Geometry, theorem II.11.8: Suppose $M^m$ and $N^n$ are smooth manifolds with $N^n$ compact metric (by this we just mean we have fixed a metric i.e. a distance function, not a Riemannian metric). Let $A \subset M^m$ be closed. Let $f : M^m \to N^n$ be a map with $f|_{A}$ smooth. Then for any given $\epsilon > 0$, there exists a map $h : M^m \to N^n$ such that 1) $h$ is smooth 2) $d(h(x), f(x)) < \epsilon$ for all $x \in M^m$ 3) $h|_{A} = f|_{A}$ 4) $h \simeq f$ by an $\epsilon$-small homotopy (rel $A$) – Pedro Sep 3 '15 at 0:52
• So basically you can homotope your continuous curve to a smooth one, and you can do so without changing it on a subset where it's already smooth. So, for example, trivially $\gamma$ is smooth on $\{0, 1\}$, hence you may homotope it to a smooth curve without changing the endpoints. – Pedro Sep 3 '15 at 0:55
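To make the convolution suggestion from the comments concrete, here is a minimal numerical sketch (an editorial addition; the curve, kernel width, and sampling grid are arbitrary illustrative choices): a piecewise-linear curve in $\mathbb{R}^2$ with a corner is mollified coordinate-wise with a narrow Gaussian bump, and the uniform distance $\max_t d(\gamma(t),\eta(t))$ stays small.

```python
import numpy as np

# A non-smooth curve gamma: [0, 1] -> R^2 with a corner at t = 0.5.
t = np.linspace(0.0, 1.0, 1001)
gamma = np.stack([t, np.abs(t - 0.5)], axis=1)

# Narrow Gaussian bump on a grid, normalized to sum to 1.
h = t[1] - t[0]
s = np.arange(-50, 51) * h
kernel = np.exp(-(s / 0.01) ** 2)
kernel /= kernel.sum()

# Mollify each coordinate; edge-padding avoids boundary artifacts.
pad = len(kernel) // 2
eta = np.column_stack([
    np.convolve(np.pad(gamma[:, i], pad, mode='edge'), kernel, mode='valid')
    for i in range(2)
])

# Uniform distance between gamma and the smoothed eta (roughly 6e-3 here).
print(np.max(np.linalg.norm(gamma - eta, axis=1)))
```

Shrinking the kernel width makes the sup-distance as small as desired, which is the point of Mike's comment; smoothness of the true mollified curve is a property of convolution with a smooth bump, while this script only illustrates it at the sample points.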
|
{}
|
## Wednesday, February 06, 2019
### "End of high energy physics" is silly
The newest anti-collider tirade at Backreaction, Why a larger particle collider is not currently a good investment, begins by saying that the negative statement is an uncontroversial position.
Well, as Ms Hossenfelder could have learned at Twitter where she has debated these issues with real particle physicists, her remarks are controversial, to say the least. It's much less controversial to say that she doesn't have a clue what she is talking about. Let me elaborate on this statement in some detail.
The Livingston Plot, via K. Yokoya.
High energy physics was a new name given to particle (or subnuclear) physics because the plan has been from the beginning to indefinitely raise the collision energy – and therefore the ability of the experiments to probe ever shorter distances (short distances are tied to high momenta/energies by the uncertainty principle). The rate of progress may slow down but it has always been clear that the progress could continue basically indefinitely.
In the first part of her new text, she makes it clear that she was looking for some "allies" who have questioned the future of particle accelerators just like she does. So she found a 2001 text in Physics Today by Maury Tigner, Does Accelerator-Based Particle Physics Have a Future?
Now, Tigner had been a big "design group" boss of the cancelled collider in Texas, the SSC. So what do you think was his answer to the question in his own article? Pretend that your IQ is above 70 if it is not and try to answer this question: Was Tigner, a collider boss, an anti-collider activist similar to Ms Hossenfelder?
Just to be sure, because there may be readers with the IQ below 70, I have to give a short answer to this "difficult" question: No, he wasn't.
In the very first paragraph, Ms Hossenfelder makes an extraordinary statement:
That the costs of larger particle colliders would at some point become economically prohibitive has been known for a long time. Even particle physicists could predict this.
As I have said, this statement is completely ludicrous. No physicist – and no person with the technical thinking at least at the high school level – has ever stated that "larger particle colliders would become economically prohibitive" at some point of time. The economy is generally growing, the technologies are generally improving, so of course that we may keep on building ever stronger particle colliders and that has always been the plan – that's why the field is called "high energy physics".
Maybe Sabine Hossenfelder, John Horgan, Uncle Al, and a bunch of similar "physicists" were saying something else to each other but actual physicists haven't. Of course there is no "end of physics".
She may have misunderstood the statement that some collision energy chosen on the log scale in between the LHC and Planck scale would be impossible to realize on Earth. Some energies such as $$10^{10}\GeV$$ could be economically prohibitive and almost impossible on Earth. There's some order-of-magnitude estimate of the collision energy where we can't realistically get on Earth. But if you translate this "cutoff" to the moment of time or the year when particle physics should "hit the wall", you will surely not get a moment in the next 100 or 1,000 years. There is no reason for high energy physics to stop in the next millennium.
Even at the sociological level, such an "end of experimental particle physics" is as silly as the "end of sports" or "end of Olympic Games" or "end of Formula One" or "end of Miss USA" (OK, the latter has mostly occurred when the exhibitions in bikinis were replaced with contestants' left-wing political monologues). Athletes' performance is improving at a slower rate than it did in the past but it doesn't mean that we must abolish sports, does it? What's the fudging difference? Even if the rate of improvements slowed down incredibly, it would make sense to build new colliders. Although sports have been pointless for a long time, or always, some people still do similar stupid things. ;-)
And other, smarter people want to do particle physics. You can dislike baseball (I don't even like it enough to hate it LOL) but you won't prevent other people from playing or watching it. Similarly, Ms Hossenfelder may dislike particle physics but she's just a petty woman who makes Germany suck again and who can't prevent others, especially people from different nations and in a few years, from doing experimental particle physics.
Hasn't Maury Tigner, Hossenfelder's "source number one", written his own 2001 reaction to her 2019 statement about the "end of particle physicists at some point" that is "well known" and "predicted even by particle physicists"? Well, he has. This paragraph was fully focused on that claim:
The falloff in the energy frontier’s rate of advance might inspire the reader to ask whether we are approaching some inherent physical limit to the capability of accelerators, or perhaps some other limit. The answer is complex, but one thing is clear: We are not approaching a technical limit to the energies that can be achieved in the laboratory.
I added the bold face because Tigner, like any competent particle physicist, knows that there is no nearby limit. Larger tunnels and/or stronger magnets translate to higher energy collisions and the current colliders are extremely far from a limit, at least in the length of the tunnels – let's say that Earth radius could be such a limit, assuming that people won't build the colliders in outer space which they should.
Instead, Tigner – who clearly felt some responsibility for their failure to convince the U.S. Congress (and suggested that he would have been capable of "selling" a $1 billion experiment) – offers a detailed discussion about the rates at which various prices change. Some parts of the gadgets were getting cheaper extremely quickly, e.g. the superconducting wires; others were not. But let me post the Livingston Plot again: Hossenfelder seems to use this plot as some kind of an argument in favor of her and Horgan's "end of science" delusions and she even wrote:

You can clearly see that the golden years of particle accelerators ended around 1990.

But only people with a severe enough eye disorder or with a brain disease may "clearly see" such a non-existent thing in the plot. Others see that the golden years are always in the future because the collision energy keeps on increasing. What she probably wanted to say is that the rate at which the collision energy was increasing per decade decreased after 1990 or so. But does it mean that the "golden years of particle accelerators ended in 1990"? This statement is exactly as true – or as false – as the statement that the golden years of the European and U.S. economy ended at the end of the 19th century. Why? Because the average annual GDP growth was around 10% a year in the final decades of the 19th century, and we only expect some 3% today. Does it mean that the golden years of the economy stopped over a century ago? Well, if you define "golden years" as those with the highest annual growth, then yes. But no sane people do. The economy continued to grow after 1900, which is why it's just plain silly to say that the golden years of the economy occurred before 1900. (By the way, we can debate what is behind the "disappointing" slowdown after 1900 or so. The low-hanging fruits of industrialization had been picked by 1900 or so – but I still think that the overregulation and overtaxation of the 20th and 21st century was more harmful. But I digress.) It is even much more silly to say that the economy should have stopped producing things in 1900. And this is the actual perfect analogy of Hossenfelder's plan to give up on particle colliders.

It's utterly uncontroversial that she has no idea what she is talking about. The decadal rate of the increasing accelerator energy dropped around 1990 but the energy kept on rising and indeed, you can see that the Livingston Plot also includes a projection to the future colliders where the energy keeps on growing. The collision energy jumped by one order of magnitude each 10 years before 1990 – and the time needed for the 10-fold increase is closer to 20-30 years after 1990 (and it may be 200 years around 2300 AD). It's totally analogous to the slowed-down GDP growth from 10% in the 19th century to 3% today. But the GDP and the collision energy have no reason to stop growing.

In the following ten paragraphs, she repeats the mostly untrue statement that "colliders are damn expensive" several times while she adds some irrelevant details that have nothing to do with her basic wrong claims. At some moment, she gets to a comparison to LIGO:

Compare the expenses for CERN's FCC plans to that of the gravitational wave interferometer LIGO. LIGO's price tag was well below a billion US$. Still, in 1991, physicists hotly debated whether it was worth the money.
I love LIGO, I have rediscovered my gravitational waves from the raw LIGO data as well, and did lots of analyses, recommended the Nobel prize for the exact 3 men who really got it later, and so on. But it was still sensible to debate whether the gadget was worth almost one billion dollars because
the LIGO didn't and basically couldn't discover any new fundamental physics.
The LIGO detected something that is absolutely unavoidable given the general theory of relativity – even at the level at which the theory was almost perfectly understood (by the competent theorists – I don't mean by the general public). So LIGO gave us the ability to "hear" particular astrophysical events – black hole mergers and neutron star mergers so far – which means that it is giving us some new data about astrophysics and perhaps "cosmology close to astrophysics". But it is not producing new data about fundamental physics – and the chance that LIGO could have done so was virtually zero.
In that sense, it dramatically differs from the LHC (or the next colliders) that was (or will be) probing so far untested energy regime of particle physics. Every physicist understands that her suggestion that the colliders are worse than LIGO is absolutely irrational. Here is a CERN response:
Right. In her stupidity that she has enthusiastically exposed in The New York Times, she basically explicitly wrote that LIGO was nice because there was a firm prediction, the gravitational waves, and LIGO got it. On the other hand, the LHC was bad because it discovered the firmly predicted Higgs boson.
What she writes doesn't make any sense. It's nice to confirm firm predictions but if we are really certain about a prediction, then the experiment is pointless. In the case of the LHC and the Higgs boson, we got more information about fundamental physics than in the case of LIGO and gravitational waves: We have learned that the Higgs mass was about $$125\GeV$$. The mass was previously unknown. We haven't learned any parameter of fundamental physics from LIGO.
Maybe her obsession with "firm predictions confirmed by experiments" is enough at school, where schoolkids learn lots of things that had been known for a very long time and where schoolgirls are more likely to be praised by their teacher for being "right" and obedient. But the scientific research is something else than the elementary school and the repetition and confirmation of scientific findings that have been known for a long time isn't enough in research!
After numerous additional boring paragraphs full of arrogance, stupidity, and irrelevant technicalities, she wraps up with the final paragraph which starts as follows:
Of course, particle physicists do have a large number of predictions for new particles within the reach of the next larger collider, but these are really fabricated for no other purpose than to rule them out. You cannot trust them. [...]
The only problem with this dumb attack against particle physicists or their work is that it logically cannot influence the benefits of a new collider. The reason is that the scientific benefits of a new collider don't depend on the trustworthiness of the predictions at all.
In fact, the very purpose of the experiment – and basically any experiment in science – is to empirically evaluate the validity of all relevant predictions. The fundamental point about science that this lady still completely misunderstands is that
experiments are not being built in order to confirm firm and guaranteed predictions, to show how trustworthy theorists or their celebrated theories are. Instead, experiments are being built to give us previously unknown or uncertain information and decide which expectations were true and which were not.
The incomplete trustworthiness of predictions not only isn't "fatal" for a meaningful experiment. It is a necessary condition for a meaningful experiment!
Because the $$100\TeV$$ collider is going to tell the physicists what happens in that new energy range, whatever it is, we may even say that the scientific benefits of the collider are completely time-independent. So as far as the benefits go, the word "currently" in her title (the collider isn't a good investment) is completely irrational because the benefits for the mankind of probing that energy regime won't change if we delay the experiment by a century (well, unless all people will really turn into stupid apes, in which case the perceived benefits may drop). However, the benefits of a $$100\TeV$$ collider built in 2150 AD will be zero for the currently living physicists because at that time, they will be dead. We may include this preference for an earlier collider to a discount rate. It's competing against the dropping expenses. If the expenses drop less quickly than the discount rate, it means that we should build as soon as possible! The previous sentence is an example of an actual rational argument affecting the cost-and-benefit analysis, something that Hossenfelder pretends to do but she never does.
I really find it amazing that an adult woman who has pretended to be a scientist for very many years simply doesn't get this elementary universal point about all of science – that experiments are only meaningful if and because they reduce ignorance or uncertainty.
The very last sentences say:
[...] You cannot trust them. When they tell you that a next larger collider may see supersymmetry or extra dimensions or dark matter, keep in mind they told you the same thing 20 years ago.
And that's very correct that particle physicists are making qualitatively identical statements about supersymmetry as they did 20 years ago – because nothing qualitative has changed about our knowledge about the supersymmetry in the real world around us since that time! There are good reasons to think that supersymmetry exists in Nature – and almost certainty that the superpartners don't have masses in the range of energies that have already been measured.
So indeed, instead, what should raise red flags would be if the physicists were saying something completely and qualitatively different than 20 years ago because that qualitative change would be indefensible!
The broad situation of particle physics hasn't changed – and there are certain truly universal principles about high energy physics that haven't changed in the recent 80 years and that won't change in the next 80 years, either. In particular, a more advanced civilization is capable of building ever stronger colliders that are capable of seeing increasingly massive new particles, resolve ever shorter distances, and most of the general hypotheses that have been neither proven nor falsified yet remain in the state of uncertainty. The fewer new discoveries are made each decade (the Higgs boson was discovered less than 7 years ago, just to be sure), the less quickly the wisdom in physics – and the physicists' commentaries – are changing.
It's a sad testimony to our politically correct epoch that a person who is incapable of understanding these "almost tautologies" is allowed to share her delusions in the New York Times and similar "publications".
P.S.: I realized I forgot to discuss her comment about "alternatives" like the precise electron/muon magnetic moment measurements etc.
Those are indeed cheaper and great but they don't replace the high-energy frontier. They are complementary. An obvious limitation of an anomaly in the magnetic moment that may be found (and that was already found, in the muon case) is that there is no way to attribute the discrepancy to a physical effect. It's just a number – either right or wrong number – but it can't tell us any interesting details about the causes.
More generally, she and others sometimes say "it's right to divide the FCC money to hundreds of [unnamed] experiments". To spread billions of dollars to unnamed experiments means not to care where the money goes – it's a recipe to waste the money. At the end, I think that some people's tendency to "redistribute" or "decentralize" the money is just another example of their Marxist egalitarianism.
Egalitarianism of the communist type is devastating for economies – and its analog may be equally devastating for science. Hundreds of such small experiments could be guaranteed to be worthless and their "principal investigators" could easily hide rubbish and unoriginal repetitiveness behind the shortage of scrutiny – because when the money is spread to lots of places, the scrutiny of each goes down considerably.
Small experiments may do interesting things but there's a rather good reason to think that unless there are some overlooked light axions or something weakly coupled in the available energy range, we may be nearly certain that none of these cheap experiments may find anything really and qualitatively new because, if I oversimplify just a little bit, we simply do know all the physics beneath $$1\TeV$$. Those are good reasons to think that the money for smaller experiments is much likely to be wasted than the money for an experiment that actually pushes the energy frontier further.
Competent physicists in these fields have simply thought about the question and they can explain why they consider the investment into a higher-energy collider to be a better investment than the investment to the known named alternatives. You can be pretty sure that it's better than unnamed random projects that someone proposes (and that haven't been scrutinized at all), too.
|
{}
|
Show that $G=\{1,-1,i,-i\}$ is a group under the usual multiplication of complex numbers.
The adjoining table shows the result of multiplying the elements of $G$:

| $\cdot$ | $1$ | $-1$ | $i$ | $-i$ |
| --- | --- | --- | --- | --- |
| $1$ | $1$ | $-1$ | $i$ | $-i$ |
| $-1$ | $-1$ | $1$ | $-i$ | $i$ |
| $i$ | $i$ | $-i$ | $-1$ | $1$ |
| $-i$ | $-i$ | $i$ | $1$ | $-1$ |

Closure: for every pair $a, b \in G$ the table shows $ab \in G$.

$G_1 \Rightarrow$ Since multiplication of complex numbers is associative, multiplication is associative in $G$.

$G_2 \Rightarrow$ From the first column (or row), we see that $1$ is an identity element. Hence $1 \in G$ is the identity.

$G_3 \Rightarrow$ Since $1 \cdot 1 = 1$, $(-1)(-1) = 1$, $i \cdot (-i) = 1$ and $(-i) \cdot i = 1$, an inverse exists for every element in $G$, and we have

$1^{-1} = 1, \quad (-1)^{-1} = -1, \quad i^{-1} = -i, \quad (-i)^{-1} = i.$

Hence $G$ is a group under multiplication.
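As an informal cross-check (an addition, not part of the posted solution), the closure, identity, and inverse axioms can be verified numerically with Python's built-in complex numbers; associativity is inherited from $\mathbb{C}$:

```python
# Numerical check of the group axioms for G = {1, -1, i, -i},
# written with Python's complex literal 1j for i.
G = [1, -1, 1j, -1j]

# Closure: every product of two elements is again in G.
assert all(a * b in G for a in G for b in G)

# Identity: 1 fixes every element.
assert all(1 * a == a for a in G)

# Inverses: every element has some inverse in G.
assert all(any(a * b == 1 for b in G) for a in G)

print("closure, identity and inverses all hold")
```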
|
{}
|
# 7.8: Expression Evaluation with Products of Mixed Numbers
### Vocabulary Language: English
Algebraic Equation
An algebraic equation contains numbers, variables, operations, and an equals sign.
improper fraction
An improper fraction is a fraction in which the absolute value of the numerator is greater than the absolute value of the denominator.
Mixed Number
A mixed number is a number made up of a whole number and a fraction, such as $4\frac{3}{5}$.
Numerical expression
A numerical expression is a group of numbers and operations used to represent a quantity.
|
{}
|
• ### Precision Measurement of the Radiative $\beta$ Decay of the Free Neutron(1603.00243)
May 26, 2016 nucl-ex, physics.ins-det
The standard model predicts that, in addition to a proton, an electron, and an antineutrino, a continuous spectrum of photons is emitted in the $\beta$ decay of the free neutron. We report on the RDK II experiment which measured the photon spectrum using two different detector arrays. An annular array of bismuth germanium oxide scintillators detected photons from 14 to 782~keV. The spectral shape was consistent with theory, and we determined a branching ratio of 0.00335 $\pm$ 0.00005 [stat] $\pm$ 0.00015 [syst]. A second detector array of large area avalanche photodiodes directly detected photons from 0.4 to 14~keV. For this array, the spectral shape was consistent with theory, and the branching ratio was determined to be 0.00582 $\pm$ 0.00023 [stat] $\pm$ 0.00062 [syst]. We report the first precision test of the shape of the photon energy spectrum from neutron radiative decay and a substantially improved determination of the branching ratio over a broad range of photon energies.
• ### High-Sensitivity Measurement of 3He-4He Isotopic Ratios for Ultracold Neutron Experiments(1512.09351)
Dec. 31, 2015 nucl-ex, physics.ins-det
Research efforts ranging from studies of solid helium to searches for a neutron electric dipole moment require isotopically purified helium with a ratio of 3He to 4He at levels below that which can be measured using traditional mass spectroscopy techniques. We demonstrate an approach to such a measurement using accelerator mass spectroscopy, reaching the $10^{-14}$ level of sensitivity, several orders of magnitude more sensitive than other techniques. Measurements of 3He/4He in samples relevant to the measurement of the neutron lifetime indicate the need for substantial corrections. We also argue that there is a clear path forward to sensitivity increases of at least another order of magnitude.
• ### The PROSPECT Physics Program(1512.02202)
Dec. 7, 2015 hep-ex, nucl-ex, physics.ins-det
The Precision Reactor Oscillation and Spectrum Experiment, PROSPECT, is designed to make a precise measurement of the antineutrino spectrum from a highly-enriched uranium reactor and probe eV-scale sterile neutrinos by searching for neutrino oscillations over meter-long distances. PROSPECT is conceived as a 2-phase experiment utilizing segmented $^6$Li-doped liquid scintillator detectors for both efficient detection of reactor antineutrinos through the inverse beta decay reaction and excellent background discrimination. PROSPECT Phase I consists of a movable 3-ton antineutrino detector at distances of 7 - 12 m from the reactor core. It will probe the best-fit point of the $\nu_e$ disappearance experiments at 4$\sigma$ in 1 year and the favored region of the sterile neutrino parameter space at $>$3$\sigma$ in 3 years. With a second antineutrino detector at 15 - 19 m from the reactor, Phase II of PROSPECT can probe the entire allowed parameter space below 10 eV$^{2}$ at 5$\sigma$ in 3 additional years. The measurement of the reactor antineutrino spectrum and the search for short-baseline oscillations with PROSPECT will test the origin of the spectral deviations observed in recent $\theta_{13}$ experiments, search for sterile neutrinos, and conclusively address the hypothesis of sterile neutrinos as an explanation of the reactor anomaly.
• ### Background Radiation Measurements at High Power Research Reactors(1506.03547)
Nov. 11, 2015 hep-ex, nucl-ex, physics.ins-det
Research reactors host a wide range of activities that make use of the intense neutron fluxes generated at these facilities. Recent interest in performing measurements with relatively low event rates, e.g. reactor antineutrino detection, at these facilities necessitates a detailed understanding of background radiation fields. Both reactor-correlated and naturally occurring background sources are potentially important, even at levels well below those of importance for typical activities. Here we describe a comprehensive series of background assessments at three high-power research reactors, including $\gamma$-ray, neutron, and muon measurements. For each facility we describe the characteristics and identify the sources of the background fields encountered. The general understanding gained of background production mechanisms and their relationship to facility features will prove valuable for the planning of any sensitive measurement conducted therein.
• ### Neutron-Antineutron Oscillations: Theoretical Status and Experimental Prospects(1410.1100)
This paper summarizes the relevant theoretical developments, outlines some ideas to improve experimental searches for free neutron-antineutron oscillations, and suggests avenues for future improvement in the experimental sensitivity.
• ### Light Collection and Pulse-Shape Discrimination in Elongated Scintillator Cells for the PROSPECT Reactor Antineutrino Experiment(1508.06575)
Aug. 26, 2015 hep-ex, physics.ins-det
A meter-long, 23-liter EJ-309 liquid scintillator detector has been constructed to study the light collection and pulse-shape discrimination performance of elongated scintillator cells for the PROSPECT reactor antineutrino experiment. The magnitude and uniformity of light collection and neutron/gamma discrimination power in the energy range of antineutrino inverse beta decay products have been studied using gamma and spontaneous fission calibration sources deployed along the cell long axis. We also study neutron-gamma discrimination and light collection abilities for differing PMT and reflector configurations. Key design features for optimizing MeV-scale response and background rejection capabilities are identified.
• ### Multiple Detectors for a Short-Baseline Neutrino Oscillation Search Near Reactors(1307.2859)
July 10, 2013 hep-ex, physics.ins-det
Reactor antineutrino experiments have the ability to search for neutrino oscillations independent of reactor flux predictions using a relative measurement of the neutrino flux and spectrum across a range of baselines. The range of accessible oscillation parameters are determined by the baselines of the detector arrangement. We examine the sensitivity of short-baseline experiments with more than one detector and discuss the optimization of a second, far detector. The extended reach in baselines of a 2-detector experiment will improve sensitivity to short-baseline neutrino oscillations while also increasing the ability to distinguish between 3+1 mixing and other non-standard models.
• ### Experimental Parameters for a Reactor Antineutrino Experiment at Very Short Baselines(1212.2182)
Dec. 10, 2012 hep-ex, nucl-ex
Reactor antineutrinos are used to study neutrino oscillation, search for signatures of non-standard neutrino interactions, and to monitor reactor operation for safeguard applications. The flux and energy spectrum of reactor antineutrinos can be predicted from the decays of the nuclear fission products. A comparison of recent reactor calculations with past measurements at baselines of 10-100 m suggests a 5.7% deficit. Precision measurements of reactor antineutrinos at very short baselines O(1-10 m) can be used to probe this anomaly and search for possible oscillations into sterile neutrino species. This paper studies the experimental requirements for a new reactor antineutrino measurement at very short baselines and calculates the sensitivity of various scenarios. We conclude that an experiment at a typical research reactor provides $5\sigma$ discovery potential for the favored oscillation parameter space with 3 years of data collection.
• ### A gamma- and X-ray detector for cryogenic, high magnetic field applications(1207.4505)
July 18, 2012 nucl-ex, physics.ins-det
As part of an experiment to measure the spectrum of photons emitted in beta-decay of the free neutron, we developed and operated a detector consisting of 12 bismuth germanate (BGO) crystals coupled to avalanche photodiodes (APDs). The detector was operated near liquid nitrogen temperature in the bore of a superconducting magnet and registered photons with energies from 5 keV to 1000 keV. To enlarge the detection range, we also directly detected soft X-rays with energies between 0.2 keV and 20 keV with three large area APDs. The construction and operation of the detector is presented, as well as information on operation of APDs at cryogenic temperatures.
• ### Search for a T-odd, P-even Triple Correlation in Neutron Decay(1205.6588)
May 31, 2012 nucl-ex
Background: Time-reversal-invariance violation, or equivalently CP violation, may explain the observed cosmological baryon asymmetry as well as signal physics beyond the Standard Model. In the decay of polarized neutrons, the triple correlation $D\,\langle J_{n}\rangle\cdot(p_{e}\times p_{\nu})$ is a parity-even, time-reversal-odd observable that is uniquely sensitive to the relative phase of the axial-vector amplitude with respect to the vector amplitude. The triple correlation is also sensitive to possible contributions from scalar and tensor amplitudes. Final-state effects also contribute to $D$ at the level of $10^{-5}$ and can be calculated with a precision of 1% or better. Purpose: We have improved the sensitivity to T-odd, P-even interactions in nuclear beta decay. Methods: We measured proton-electron coincidences from decays of longitudinally polarized neutrons with a highly symmetric detector array designed to cancel the time-reversal-even, parity-odd Standard-Model contributions to polarized neutron decay. Over 300 million proton-electron coincidence events were used to extract $D$ and study systematic effects in a blind analysis. Results: We find $D = [-0.94 \pm 1.89\,\text{(stat)} \pm 0.97\,\text{(sys)}]\times 10^{-4}$. Conclusions: This is the most sensitive measurement of $D$ in nuclear beta decay. Our result can be interpreted as a measurement of the phase of the ratio of the axial-vector and vector coupling constants ($C_A/C_V = |\lambda| e^{i\phi_{AV}}$) with $\phi_{AV} = 180.012^\circ \pm 0.028^\circ$ (68% confidence level) or to constrain time-reversal-violating scalar and tensor interactions that arise in certain extensions to the Standard Model such as leptoquarks. This paper presents details of the experiment, analysis, and systematic-error corrections.
• This white paper addresses the hypothesis of light sterile neutrinos based on recent anomalies observed in neutrino experiments and the latest astrophysical data.
• ### A New Limit on Time-Reversal Violation in Beta Decay(1104.2778)
April 26, 2011 nucl-ex
We report the results of an improved determination of the triple correlation $D\,P\cdot(p_{e}\times p_{\nu})$ that can be used to limit possible time-reversal-invariance violation in the beta decay of polarized neutrons and constrain extensions to the Standard Model. Our result is $D=(-0.96\pm 1.89 (stat)\pm 1.01 (sys))\times 10^{-4}$. The corresponding phase between $g_A$ and $g_V$ is $\phi_{AV} = 180.013^\circ\pm0.028^\circ$ (68 % confidence level). This result represents the most sensitive measurement of $D$ in beta decay.
• ### emiT: an apparatus to test time reversal invariance in polarized neutron decay(nucl-ex/0402010)
Feb. 9, 2004 nucl-ex
We describe an apparatus used to measure the triple-correlation term $D\,\hat{\sigma}_n\cdot (p_e\times p_\nu)$ in the beta-decay of polarized neutrons. The D-coefficient is sensitive to possible violations of time reversal invariance. The detector has an octagonal symmetry that optimizes electron-proton coincidence rates and reduces systematic effects. A beam of longitudinally polarized cold neutrons passes through the detector chamber, where a small fraction beta-decays. The final-state protons are accelerated and focused onto arrays of cooled semiconductor diodes, while the coincident electrons are detected using panels of plastic scintillator. Details regarding the design and performance of the proton detectors, beta detectors, and the electronics used in the data collection system are presented. The neutron beam characteristics, the spin-transport magnetic fields, and polarization measurements are also described.
• ### New Limit on the D Coefficient in Polarized Neutron Decay(nucl-ex/0006001)
July 19, 2000 nucl-ex
We describe an experiment that has set new limits on the time reversal invariance violating D coefficient in neutron beta-decay. The emiT experiment measured the angular correlation $J \cdot (p_e \times p_p)$ using an octagonal symmetry that optimizes electron-proton coincidence rates. The result is $D = [-0.6 \pm 1.2(\mathrm{stat}) \pm 0.5(\mathrm{syst})] \times 10^{-3}$. This improves constraints on the phase of $g_A/g_V$ and limits contributions to T violation due to leptoquarks. This paper presents details of the experiment, data analysis, and the investigation of systematic effects.
|
{}
|
# Intersection of lines from linear systems defines a conic section
I have two pencils of hyperplanes $$\Sigma_1$$ and $$\Sigma_2$$ in $$P^2(\mathbb{R})$$ and a projective map $$\phi \colon \Sigma_1 \to \Sigma_2$$. Now I need to show that the points defined by $$l_1 \cap \phi(l_1)$$, with $$l_1 \in \Sigma_1$$, form a conic section in $$P^2(\mathbb{R})$$.
I first thought that I could show it by using the eigenvectors of the matrix associated with the projective map, but any point on the line $$l_1$$ could map to itself, so now I have no idea how to prove this.
• What is a "lineair system of hyperplanes"? Do you mean a pencil of hyperplanes? – user10354138 Jun 9 at 2:46
• Yeah I mean a pencil of hyperplanes! This is a question that was originally in Dutch and I didn’t find a translation but now I know! – Mee98 Jun 9 at 6:52
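(Added note for future readers, since the thread has no answer: a sketch of the standard elimination argument, assuming representatives of the pencils can be chosen so that the projectivity acts as the identity on the parameter $(\lambda : \mu)$.) Let $A$ and $B$ be the base points of the two pencils, pick independent lines $l_1, l_2$ through $A$, and set $m_i = \phi(l_i)$. A point $x$ lies on a line of $\Sigma_1$ and on its image under $\phi$ iff the system

$$\lambda\, l_1(x) + \mu\, l_2(x) = 0, \qquad \lambda\, m_1(x) + \mu\, m_2(x) = 0$$

has a nonzero solution $(\lambda : \mu)$, which happens exactly when

$$l_1(x)\, m_2(x) - l_2(x)\, m_1(x) = 0.$$

Each $l_i, m_i$ is linear in the coordinates of $x$, so this is a quadratic equation: the locus is a conic (essentially Steiner's projective generation of conics).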
|
{}
|
CMFD, 2018, Volume 64, Issue 4, Pages 650–681 (Mi cmfd365)
On boundedness of maximal operators associated with hypersurfaces
I. A. Ikromov, S. E. Usmanov
Samarkand State University, Samarkand, Uzbekistan
Abstract: In this paper, we obtain the criterion of boundedness of maximal operators associated with smooth hypersurfaces. Also we compute the exact value of the boundedness index of such operators associated with arbitrary convex analytic hypersurfaces in the case where the height of a hypersurface in the sense of A. N. Varchenko is greater than 2. Moreover, we obtain the exact value of the boundedness index for degenerated smooth hypersurfaces, i.e., for hypersurfaces satisfying conditions of the classical Hartman–Nirenberg theorem. The obtained results justify the Stein–Iosevich–Sawyer hypothesis for arbitrary convex analytic hypersurfaces as well as for smooth degenerated hypersurfaces. Also we discuss some related problems of the theory of oscillatory integrals.
Funding: Committee for Coordination of Science and Technology under the Cabinet of Ministers of the Republic of Uzbekistan, grant ОТ-Ф-4-69.
DOI: https://doi.org/10.22363/2413-3639-2018-64-4-650-681
UDC: 517.982.42
Citation: I. A. Ikromov, S. E. Usmanov, “On boundedness of maximal operators associated with hypersurfaces”, Contemporary problems in mathematics and physics, CMFD, 64, no. 4, Peoples' Friendship University of Russia, M., 2018, 650–681
Citation in format AMSBIB
\Bibitem{IkrUsm18}
\by I.~A.~Ikromov, S.~E.~Usmanov
\paper On boundedness of maximal operators associated with hypersurfaces
\inbook Contemporary problems in mathematics and physics
\serial CMFD
\yr 2018
\vol 64
\issue 4
\pages 650--681
\publ Peoples' Friendship University of Russia
\publaddr M.
\mathnet{http://mi.mathnet.ru/cmfd365}
\crossref{https://doi.org/10.22363/2413-3639-2018-64-4-650-681}
|
{}
|
# Retirement
To purchase a gift for a retiring co-worker, everyone had to pay EUR 15. Because 4 workers were ill on the day the money was collected, everyone present had to pay an additional 1 euro, except the retiring one. How many workers did the workshop have, and how much did the gift cost?
n = 65
d = 960 Eur
### Step-by-step explanation:
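(The explanation body is missing from the scraped page; the following reconstruction assumes the retiring worker never pays, which is the only reading consistent with whole-euro payments.) Let $n$ be the total number of workers, including the retiree. Both collections must cover the same gift price $d$:

$$15(n-1) = 16(n-1-4) \implies 15n - 15 = 16n - 80 \implies n = 65,$$

$$d = 15 \cdot 64 = 960 \text{ Eur} \quad (\text{check: on the collection day, } 60 \cdot 16 = 960).$$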
## Related math problems and questions:
• Workers
Two workers, A and B, can finish a job together in 15 days. They worked together for 13.5 days; then worker A became ill, and worker B finished the job alone in 7.5 days. If each worker worked alone, how many days would the whole job take worker A and worker B?
• Retirement annuity
How much will it cost to purchase a two-level retirement annuity that will pay $2000 at the end of every month for the first 10 years, and $3000 per month for the next 15 years? Assume that the payment represents a rate of return to the person receiving th
• Purchase
Three buyers pay €468. The first paid 3 times more than the second, and the third paid half more than the second. How many euros did each of them pay?
• Present value
A bank loans a family $90,000 at 4.5% annual interest rate to purchase a house. The family agrees to pay the loan off by making monthly payments over a 15-year period. How much should the monthly payment be in order to pay off the debt in 15 years?
• Three workers
The three workers received € 2,850 together for the work done. They divided them according to the time worked so that the first received 20% less than the second and the third € 50 more than the second. How much EUR did each worker receive?
• Three workers
The company rewarded three workers with CZK 9200, and the money was divided according to the work they had done. The first worker got twice as much as the second, and the second three times more than the third. How much money did each worker receive?
How much wrapping paper is needed to wrap a cube-shaped gift with edge measuring 15 cm?
• Two workers
The first worker completed the task by himself in 9 hours, the second in 15 hours. After two hours of joint work, the first worker left to see a doctor, and the second finished the job himself. How many hours did the second worker work by himself?
• Saving
The boy has saved 50 coins of €5 and €2. He saved €190 in total. How many were €5, and how many €2?
Students on a canoe trip traveled 102 km in three days. On the second day they traveled 15% more than on the first day, and on the third day 9 km more than on the second day. How many kilometers did they travel each day?
• Sales off
After a 40% discount, the goods cost €15. How much did the goods cost before the discount?
• Land cost
The land is worth 33295 CZK. The 1st owner has 5/15, the 2nd owner has 4/15, the 3rd owner has 1/5, and the 4th owner has 1/5. How many CZK will each of them receive?
• Pension or fraud
Imagine that for your entire working life you work honestly and pay taxes and Social Insurance contributions (in Slovakia). You have a gross wage of 900 euros; you and your employer together pay €288 of social insurance monthly for 41 years, and your retirement is 405 Eur
• Mushrooms
Over five days, we collected 410 mushrooms. Interestingly, every day we collected 10 mushrooms more than on the preceding day. How many mushrooms did we collect on the 4th day?
• 15 number
What number is smaller (greater) by 15 than its half?
• Hiker
A hiker went half of the trip on the first day and a third of the trip on the second day, and 15 km remain. How long is the trip he planned?
• How much
How much money will we pay for 20 planks 4 m long, 15 cm wide, and 26 mm thick, when 1 m³ of wood costs 4500 Kč?
|
{}
|
Inverse Trigonometric Functions
If sin y = x, then y = sin–1x (We read it as sine inverse x)
Here, sin–1x is an inverse trigonometric function. Similarly, the other inverse trigonometric functions are as follows:
If cos y = x, then y = cos–1x
If tan y = x, then y = tan–1x
If cot y = x, then y = cot–1x
If sec y = x, then y = sec–1x
If cosec y = x, then y = cosec–1x
The domains and ranges (principal value branches) of inverse trigonometric functions can be shown in a table as below.
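The table itself is cut off in the source; the standard principal value branches (supplied here from the usual convention, not recovered from the page) are:

sin–1x : domain [–1, 1], range [–π/2, π/2]
cos–1x : domain [–1, 1], range [0, π]
tan–1x : domain R, range (–π/2, π/2)
cot–1x : domain R, range (0, π)
sec–1x : domain R – (–1, 1), range [0, π] – {π/2}
cosec–1x : domain R – (–1, 1), range [–π/2, π/2] – {0}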
|
{}
|
## The Annals of Probability
### A Unified Approach to a Class of Optimal Selection Problems with an Unknown Number of Options
#### Abstract
In the so-called secretary problem, if an unknown number, $N$, of options arrive at i.i.d. times with a known continuous distribution, then ignorance of how many options there are becomes almost irrelevant: The optimal rule for infinitely many options is shown to be minimax with respect to all possible distributions of $N$, nearly optimal whenever $N$ is likely to be large, and formal Bayes against a noninformative prior. These results hold whatever the loss function.
#### Article information
Source
Ann. Probab., Volume 15, Number 2 (1987), 824-830.
Dates
First available in Project Euclid: 19 April 2007
https://projecteuclid.org/euclid.aop/1176992175
Digital Object Identifier
doi:10.1214/aop/1176992175
Mathematical Reviews number (MathSciNet)
MR885147
Zentralblatt MATH identifier
0592.60034
|
{}
|
(there is only one feature in the convex hull GeoDataFrame, so we can access the first member directly with [0]. The hand detection technique utilizes Convex Hull algorithm to calculate the largest polygon that encapsulates the hand point cloud. Each row represents a facet of the triangulation. Tangents between two disjoint convex polygons. Geometric Manipulations¶. Mahotas has a simple one, called convexhull. Generate an Alpha Shape (Alpha=0. # The first and last points points must be the same, making a closed polygon. Topcoder is a crowdsourcing marketplace that connects businesses with hard-to-find expertise. Repeat steps 2 to 4, until you end up adding the same point to the convex_hull list as you started with (the left most point). Compute the intersection of two convex polygons without relying on complicated third-party Python libraries. The convex hull of a data set is similar to the bounding box but instead of a square it is the smallest possible polygon which can contain a data set. For 2-D convex hulls, the vertices are in counterclockwise order. Also, if the non-empty Voronoi region of polygon P is bounded, then the convex hull of P contains another polygon in its interior and the Voronoi region of P is simply connected. The Convex Hull of a given point P in the plane is the unique convex polygon whose vertices are points from P and contains all points of P. Voronoi instance. points in the interior of the convex hull). To generate convex hull polygons in PostGIS: CREATE TABLE convexhull AS SELECT gid, geoid, ST_ConvexHull(district_table. the sides meet at vertices but otherwise do not intersect each other, then there is a general formula for the area. Its vertices are some points of A. - High availability and fault tolerant system, serving billions of documents data responding over 900 QPS. Implements Andrew's monotone chain algorithm. Then you can create a MultiPoint geometry and get the convex hull polygon. Shapely has convex hull as a built in function so let's try that out on our points. In 2019, a C++ port has been created, allowing for efficient usage from C/C++, Python (via cffi) and other languages featuring an FFI and/or plug-in mechanism for C (e. Voronoi diagrams only implemented for the 2D plane and 3D sphere. I want to quantify how much I have to pay to add B into set A. Abstract: This paper describes an algorithm to compute the envelope of a set of points in a plane, which generates convex or non-convex hulls that represent the area occupied by the given points. approxPolyDP. With this definition, a cube, rectangle, regular polygon and the like are convex in nature. The output is the convex hull of this set of points. I was given some points to calculate the convex hull. Saryk, there is no problem with the minimum area bounding rectangle: it's easy to prove (rigorously) that it must include a side of the convex hull. This is the default. • Language and Libraries: Python 3. Hi, I have a bunch of 2D XY points I would like to derive the concave hull, polygon (key here is concave and not convex). A two-dimensional polygon. The Delaunay triangulation of a given set of points is a triangulation of the convex hull of such that no point of is inside the circumcircle of any triangle of. 1 or above, tested in 10. Polygons¶ class sympy. Indices of points forming the vertices of the convex hull. It the arithmetic mean position of all the points that make up the polygon. 
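Several snippets above name Andrew's monotone chain algorithm without showing it. Here is a minimal self-contained sketch in Python; the function names and example points are mine, not from any of the quoted sources, but the sort-then-two-scans structure is the standard O(n log n) formulation:

```python
def cross(o, a, b):
    # z-component of (a - o) x (b - o); > 0 for a counter-clockwise turn
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counter-clockwise order."""
    pts = sorted(set(points))          # sort lexicographically, drop duplicates
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # endpoints are shared, so drop them

print(convex_hull([(0, 0), (1, 1), (2, 2), (2, 0), (0, 2), (1, 0)]))
# [(0, 0), (2, 0), (2, 2), (0, 2)]
```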
The convex hull is a convex closure of a set of points or polygon verticies and can be may be conceptualized as the shape enclosed by a rubber band stretched around the point set. The bounding box of a data set always contains its convex hull. Input is an array of points specified by their x and y coordinates. As with everything ggplot wise hat tip to the incredible Hadley Wickham. For this intro I'm going to focus on one of the most fundamental concepts in CG- the convex hull. apply_hull (bool) – Set to True to apply a convex hull algorithm to vertices. Contribute to bikemule/convex-hull-python development by creating an account on GitHub. Ordering points in a clockwise manner is straightforward when it is a convex shape. One of their applications is building of common convex hull for two convex polygons (repeat convex hull merging for more and more polygons) (chapter 5, chapter 2. If there aren't any clip or erase polygons used to define the TIN, the domain is equivalent to the convex hull. I also know the plane intersections points which form polygons on each face. Convex hulls are convex, so you can use a convex polygon collision detection algorithm. OpenCV also offers a cv2. contourArea (cnt) hull = cv2. In addition to the convex hull itself, I need to know what are the contact points, i. I have not checked (I guess it can be proved by contradiction), but it seems obvious that a minimal convex hull is required to be convex. SCI relates a polygon's shape to that of an encompassing convex hull. In the above picture first you see the convex hull in black lines. Pen Plotter Art & Algorithms, Part 2 — This post is a continuation of Pen Plotter Art & Algorithms, Part 1. GEOSGeometry. So, the test point is outside the polygon, as indicated by the even number of nodes (two and two) on either side of it. when Formally, the convex hull may be defined as the intersection of all convex sets containing X or as the set of all convex combinations of points in X. I'm interested in working in Python and would be open to any preprogrammed solutions. The failure of the convex hull. You may use floating-point numbers in intermediate computations, but only if you can guarantee that the final result will be always correct. These points make up a concave polygon. Polygon [source] ¶. Similar ideas were used in the paper "Approximating Center Points with Iterated Radon Point". If you have never seen a barcode or a QR code, please send me the address of your cave so I can send you a sample by mail. The vertices will be listed clockwise starting from an arbitrary vertex. The convex hull is one of the first. within(polygon). r – 带有ggbiplot的凸壳 2019-06-01 algorithm geometry convex-hull convex-polygon. In computational geometry, Chan's algorithm, named after Timothy M. If none of these two segments has an intersection with convex-hull, it means you can see that edge completely (as you consider a convex polygon). Classes and Objects with Python - Part 1 (Python Tutorial #9 Types of polygon II convex polygon II concave polygon II regular II Equilateral Convex Hull Jarvis March(Gift. The Shapely library for Python has built-in support for. We propose. An algorithm for the MaxMin area triangulation of a convex polygon. Chan's algorithm, another convex hull algorithm, combines the logarithmic dependence of Graham scan with the output sensitivity of the gift wrapping algorithm, achieving an asymptotic running time () that improves on both Graham scan and gift wrapping. 
In this article by Erik Westra, the author of the book Python GeoSpatial Development – Third Edition, examines a number of libraries and other tools, which can be used for geospatial development in Python. py Calculate the convex hull of a set of n 2D-points in O(n log n) time. A small number will result in a concave hull that follows the points very closely, while a high number will make the polygon look more like the convex hull (if the number is equal to or larger than the number of features, the result will be the convex hull). If there aren't any clip or erase polygons used to define the TIN, the domain is equivalent to the convex hull. Qhull is a general dimension code for computing convex hulls, Delaunay triangulations, Voronoi vertices, furthest-site Voronoi vertices, and halfspace intersections. The indices of the points specifying the convex hull of a set of points in two dimensions is given by the command ConvexHull[pts] in the Wolfram Language package ComputationalGeometry. CONVEX_HULL — The minimum bounding geometry of the mosaic dataset will be used to simplify the boundary. Given X, a set of points in 2-D, the convex hull is the minimum set of points that define a polygon containing all the points of X. I add B into A and get a bigger point set. Creates a convex polygon hull around the maya plugins for polygon work in maya, polygon maya plugins, polygon or maya plugins, and anything else having to do with polygon in maya. vtkPolyData hull. OpenCV Documentation 3. points in the interior of the convex hull). A convex hull is the smallest convex polygon containing all the given points. And then, draw the trade area polygon with 70% percentile. Ask Question 5. the sides meet at vertices but otherwise do not intersect each other, then there is a general formula for the area. Convex Hull¶ The convex hull of a binary image is the set of pixels included in the smallest convex polygon that surround all white pixels in the input. neighbors. Prints output as EPS file. The larger the threshold, the closer the resulting polygon will be to the Convex Hull. There are many useful functionalities that you can do with Shapely such as: Create a Line or Polygon from a Collection of Point geometries Calculate areas/length/bounds etc. Convex Hull Background. Questions tagged [convex-hull] Ask Question The convex hull of a point set is the outer boundary of the smallest convex set that encloses the point set entirely. convex_hull Point-in-Polygon. convex_hull. Create a convex hull for a given set of points. Input is an array of points specified by their x and y coordinates. The convex hull of a data set is similar to the bounding box but instead of a square it is the smallest possible polygon which can contain a data set. convex hull algorithms C++ implementation. The convex hull of a concave set of points. This works in any constant dimension. We will briefly explain the algorithm and then follow up with C++ and Python code implementation using OpenCV. All geometry objects can contain a reference to a coordinate reference system, and metadata, with multi-part collections additionally containing table data. I am thinking about using the additional area to quantify this cost. Any deviation of the object from this hull can be considered as convexity defect. points (ndarray of double, shape (npoints, ndim)) Coordinates of input points. # To generate the convex hull we supply a vtkPolyData object and a bounding box. 
I tried to replicate this workflow but quickly realized certain classes and functions weren’t available in GeoScript Python. Returns: inter_points – List of intersection points between the line segment and the polygon. Shapely has convex hull as a built in function so let's try that out on our points. The bounding box of a data set always contains its convex hull. Using a very simple Python code, you can mimic the Minimum Bounding Geometry tool operation while having only Basic/Standard license. Browse other questions tagged python scipy spatial convex-hull or ask your own question. convex_hull_polygon =point_collection. We have to make a polygon by taking less amount of points, that will cover all given points. Flat and Gouraud shading. Since you asked only for the points which represent the vertices of the convex hull, I gave you the indices which reference that set. One can compute the convex hull of a set of points in three dimensions in two ways: using a static algorithm or using a triangulation to get a fully dynamic computation. Let's consider a 2D plane, where we plug pegs at the points mentioned. This project computes the convex hull by using the Graham Scan. Given a boolean image (or anything that will get interpreted as a boolean image), it finds the convex hull of all its on points. Convex hull is defined by a set of planes (point on plane, plane normal). This guide is no longer being maintained - more up-to-date and complete information is in the Python Packaging User Guide. In this algorithm, at first the lowest point is chosen. (It may be found between more than one pair of vertices, but the first found will be used. (avg latency: 0. So when I try to find that on internet I also saw that a question which has asked what is the difference of convex hull and convex polygon. It provides a set of common mesh processing functionalities and interfaces with a number of state-of-the-art open source packages to combine their power seamlessly under a single developing environment. Mace: An example of multiple inputs and outputs. simplices (ndarray of ints, shape (nfacet, ndim)) Indices of points forming the simplical facets of the convex hull. I just can't seem to understand what data it could possibly be failing. Then, an outward-pointing normal vector for e i is given by: , where "" is the 2D perp-operator described in Math Vector. Converted the java version first and it kept giving strange results, then took the sub() from python and it worked rightaway. Use: Bounding containers only make sense when projected data are used. Given the set of points for which we have to find the convex hull. those that do not contain concavities or holes, have a value of 0. Convex hulls are polygons drawn around points too - as if you took a pencil and connected the dots on the outer-most points. Convex Hull Background. A Concave hull describes better the shape of the point cloud than the convex hull; Convex Hul. Aboli has 4 jobs listed on their profile. All convex polygons are simple. this one is converted from those Java & Python versions. Ken Clarkson describes some implementation details of algorithms for convex hulls, alpha shapes, Voronoi diagrams, and natural neighbor interpolation. The web site is a project at GitHub and served by Github Pages. To generate convex hull polygons in PostGIS: CREATE TABLE convexhull AS SELECT gid, geoid, ST_ConvexHull(district_table. Let's have two convex polygons as shown, For finding the upper tangent, we start by taking two points. Convexity Defects. 
Hong, "Convex Hulls of Finite Sets of Points in Two and Three Dimensions", Comm. voronoi_plot_2d (vor, ax=None, **kw) ¶ Plot the given Voronoi diagram in 2-D. This is the default. This makes this function suitable if you have only two points (of the diagonally opposing. Then, I have extra points, point set B. This guide is no longer being maintained - more up-to-date and complete information is in the Python Packaging User Guide. Graham’s Scan algorithm will find the corner points of the convex hull. MBG_Width—The shortest distance between any two vertices of the convex hull. It was originally designed for bioimage informatics, but is useful in other areas as well. The algorithm for convex hull algorithm. within(polygon). geometry as geometry 'をインポートし、' geometry. convex_hull_xy_numpy. You may use floating-point numbers in intermediate computations, but only if you can guarantee that the final result will be always correct. With this definition, a cube, rectangle, regular polygon and the like are convex in nature. Created with matplotlib and NumPy. And this convex hull should be a polygon (as it's created from a collection of polygons) which means that you should be able to use it as input for your output dataset. vertices] as an argument to Delaunay, so the integers in tri. computes the convex hull of random given points. Then, I have extra points, point set B. Source Data. Its vertices are some points of A. The asymptotic behavior of the hull algorithm depends on m, where potentially mis much larger than n. A good overview of the algorithm is given on Steve Eddin's blog. Please see this page to learn how to setup your environment to use VTK in Python. Parameters. We won’t cover them in any comprehensive way, but will only present examples to illustrate the capabilities of the Python GeoPandas package and. For other dimensions, they are in input order. Might I suggest Qhull. I have continued development because the only existing major polygon library for python, Polygon. Use the slider to set the number of points and drag the resulting locators around to visualize their convex hull. The convex hull of 30 random points in 3D. We discuss two cases: Tangents from a point to a convex polygon. Convex combinations have the additional property that the result in is in the convex hull. However, for the purposes of this book, a black-box library simply does not meet our instructional needs. Description: The Graham scan is a fundamental backtracking technique in computational geometry which was originally designed to compute the convex hull of a set of points in the plane and has since found application in several different contexts. nnNext n lines contains three integers x, y, z. Basically, if a point is inside a polygon, the sum of the angles between the point and each pair of vertices should be $2\pi$, otherwise it's outside the polygon. Input is an array of points specified by their x and y coordinates. A project on 3D Curvature and the Convex Hull of a 3D Model Date: 26 January 2018 Author: iasonmanolas 5 Comments In this project I wrote the code for computing and visualizing a 3D model’s both mean and Gaussian curvature as well as it’s convex hull. Pen Plotter Art & Algorithms, Part 2 — This post is a continuation of Pen Plotter Art & Algorithms, Part 1. I was given some points to calculate the convex hull. If they're polygonal, the polygon will not have perfectly flat faces due to finite precision and may have false convexity. 
The polygon has touched just 5 points (see map of Convex hull below) but normally it should pass on 20 points (all points of the contour). If you would like the CONVEX hull for a plane model, just replace concave with convex at EVERY point in this tutorial, including the source file, file names and the CMakeLists. Arbitrary polygon meshes aren't supported for dynamic actors, because they are very difficult to simulate in realtime. It is not an aggregate function. This tool can be used to generate a convex hull (the minimum bounding polygon) around a set of points. Convex Optimization - Hull - The convex hull of a set of points in S is the boundary of the smallest convex region that contain all the points of S inside it or on its boundary. The algorithm is described in the published paper "Concave Hull: A k-nearest neighbours approach for the computation of the region occupied by a set of points" by A. The gist is that you combine the two previous algorithms that I discussed. Indices of points forming the vertices of the convex hull. Convex Hull¶ The convex hull of a binary image is the set of pixels included in the smallest convex polygon that surround all white pixels in the input. The result depends on the user defined distance threshold. If there aren't any clip or erase polygons used to define the TIN, the domain is equivalent to the convex hull. The convex hull must be computed rst, and the output is a set of npoints. find Please, share your knowledge and links on the topic. hull = vtk. The result is a convex hull with a mesh of all the polygons: Linux Commands in Python C# Toy Robot Simulator. convex_hull» sur un «geometry. Combine or Merge: We combine the left and right convex hull into one convex hull. RECTANGLE_BY_AREA —The rectangle of the smallest area enclosing an input feature. Schmidt Hans Raj Tiwary z Abstract We consider approximation algorithms for the prob-lem of computing an inscribed rectangle having largest area in a convex polygon on n vertices. 10 months ago. Qhull computes the convex hull, Delaunay triangulation, Voronoi diagram, halfspace intersection about a point, furthest-site Delaunay triangulation, and furthest-site Voronoi diagram. # Make it a generous fit as it is only used to create the initial # polygons that are eventually clipped. For point data: For line data: For polygon data: The Python code snippet:. The best way to approach computationally hard problems is to find a library that can solve the problem you're trying to cover. In mathematics, a Voronoi diagram is a partitioning of a plane into regions based on distance to points in a specific subset of the plane. locus) of points. The Shapely library for Python has built-in support for. MultiPolygon» – Alex 12 janv. The algorithm was originally proposed by Preparata and Hong: Franco Preparata & S. This algorithm first sorts the set of points according to their polar angle and scans the points to find. You can manipulate the alphaShape object to tighten or loosen the fit around the points to create a nonconvex region. 0 represents a polygon that approaches a straight line. EXACT_SIMPLIFIED —A generalized polygon representing the exact shape of the symbolized feature. A good overview of the algorithm is given on Steve Eddin's blog. Finding the centre of of a polygon can be useful for many geomtrical analysis and processing techniques. What do you mean by ‘consider’ the inside pixels. The failure of the convex hull. As with everything ggplot wise hat tip to the incredible Hadley Wickham. 
Then, I have extra points, point set B. This is the default. To be rigorous, a polygon is a piecewise-linear, closed curve in the plane. Computing the convex hull is a problem in computational geometry. Check if a LatLong Point from a List of points "Example Latlongs points to check and output column. Then, an outward-pointing normal vector for e i is given by: , where "" is the 2D perp-operator described in Math Vector. If any vertex points 'inward' to towards the center of the polygon, it ceases to be a convex polygon. The Concave hull option ( geometry_type="CONCAVE_HULL" in Python) provides the greatest amount of detail about the shape of the bounding volume but is computationally heavy and should not be used with large collections of input data. I can find which points construct the convex hull but calculating the area is a little bit difficult for me. Convex hulls of point sets are an important building block in many computational-geometry applications. Convex Hull of a set of points, in 2D plane, is a convex polygon with minimum area such that each point lies either on the boundary of polygon or inside it. In addition to the convex hull itself, I need to know what are the contact points, i. – The curve passes through the endpoints of the control polygon – The curve lies within the convex hull of the control polygon – Affine invariance – The first and last line of the control polygon are tangents to the curve Cubic Bézier curves are used very often: Four control points, the curve passes through the first and the last. Source Data. Ask Question 5. But as long as the polygon is “simple,” i. In Groeneboom (1988) a central limit theorem for the number of vertices of the convex hull of a uniform sample from the interior of convex polygon was derived. This is the default. Mathematica and Python code: CHDVArchive. In any case, I know that qhull (a convex hull code library) has the ability to tell you the volume. Not going to work; Concave hull looks suitable. Convex Hull. Perhaps there is a similar approach in 3d. the convex hull of the set is the smallest convex polygon that contains all the points of it. The program itself is based on a simple discrete version of the Jordan curve theorem: if a point is inside of a polygon, then a ray emanating from it in a direction that is not parallel to any of the edges of the polygon will cross the polygon boundary an odd number of times. If I just plot the polygon with the vertices directly, I don't get the vertices in the right order to make up a convex polygon (plots edges joining the wrong points). In this tutorial, we have practiced filtering a dataframe by player or team, then using SciPy's convex hull tool to create the data for plotting the smallest area that contains our datapoints. Convex and Concave hull “Convex and concave hulls are useful concepts for a wide variety of application areas, such as pattern recognition, image processing, statistics, and classification tasks. You must compute the number of regions into which the polygon is divided by the straight lines. The indices of the points specifying the convex hull of a set of points in two dimensions is given by the command ConvexHull[pts] in the Wolfram Language package ComputationalGeometry. As a visual analogy, consider a set of points as nails in a board. Santos, 2007, University of Minho, Portugal. — Patchwork, printed with AxiDraw, December 2017. convex_hull'を使用することも可能です。. 
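One snippet above describes the ray-crossing (discrete Jordan curve) test only in words. A small sketch of that even-odd rule in Python; the function name and the sample square are illustrative assumptions, not code from the quoted program:

```python
def point_in_polygon(x, y, poly):
    """Even-odd rule: cast a ray to the right and count edge crossings."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # does edge (x1,y1)-(x2,y2) straddle the horizontal line through y?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that horizontal line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside    # one more crossing: toggle parity
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon(2, 2, square), point_in_polygon(5, 2, square))
# True False
```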
Using a very simple Python code, you can mimic the Minimum Bounding Geometry tool operation while having only Basic/Standard license. I just can't seem to understand what data it could possibly be failing. The Computational Geometry Algorithms Library. Here we will see one example on convex hull. The convex hull is the smallest convex polygon that encloses all 4 points. Graham Scan. A two-dimensional polygon. Generate an Alpha Shape (Alpha=0. geopandas makes available all the tools for geometric manipulations in the *shapely* library. The following code block demonstrates … - Selection from Hands-On Image Processing with Python [Book]. For instance, when X is a bounded subset of the plane, the convex hull may be visualized as the shape formed by a rubber band stretched around X. To facilitate this, the Polygon class provides an alternate constructor method, convex_hull(). Currently the generation of the service area polygon output is a mystery, it appears to be a merged line buffer yet each side of the line are different widths for no apparent reason. In any case, I know that qhull (a convex hull code library) has the ability to tell you the volume. Smallest convex set containing all the points. - hull_plot. Computing the convex hull is a problem in computational geometry. convexityDefects(). The Delaunay triangulation of a given set of points is a triangulation of the convex hull of such that no point of is inside the circumcircle of any triangle of. approxPolyDP. A convex hull for a set of points is the smallest convex polygon that contains all the points. Some nice extensions to this that you may want to play with include adding some annotations for player names, or changing colours for each player. Heads up! Contrary to the normal convention of "latitude, longitude" ordering in the coordinates property, GeoJSON and Well Known Text order the coordinates as "longitude, latitude" (X coordinate, Y coordinate), as other GIS coordiate systems are encoded. CONVEX_HULL — The minimum bounding geometry of the mosaic dataset will be used to simplify the boundary. For 3-D points, k is a 3-column matrix representing a triangulation that makes up the convex hull. We will briefly explain the algorithm and then follow up with C++ and Python code implementation using OpenCV. It is not too hard. Convex Hull¶ The convex hull of a binary image is the set of pixels included in the smallest convex polygon that surround all white pixels in the input. Voronoi diagrams only implemented for the 2D plane and 3D sphere. Solving Geometric Problems with the Rotating Calipers * Godfried Toussaint School of Computer Science McGill University Montreal, Quebec, Canada ABSTRACT Shamos [1] recently showed that the diameter of a convex n-sided polygon could be computed in O(n) time using a very elegant and simple procedure which resembles. Use: Bounding containers only make sense when projected data are used. Especially, an n-dimensional. Abstract: This paper describes an algorithm to compute the envelope of a set of points in a plane, which generates convex or non-convex hulls that represent the area occupied by the given points. (It may be found between more than one pair of vertices, but the first found will be used. This can be used to generate a convex hull polygon from an input Geometry object which can be a point, polyline, or a polygon. Convex-hull of a set of points is the smallest convex polygon containing the set. With a concave thing, I really don't know what to do. 
Convex hull functions are a more typical image processing feature. How can I find the center of a circle using least squares in Python? c# - An algorithm for finding a point inside an irregular polygon; How do I compute the centroid of a polygon using sf::st_centroid? python - Hough line transform for finding polygons in an image. CONVEX_HULL —The convex hull of the symbolized geometry of the feature. ax matplotlib. These outer segments of the elastic band form the convex hull. All geometry objects can contain a reference to a coordinate reference system, and metadata, with multi-part collections additionally containing table data. It turns out that the vertices of the polygon are represented by a unique sublattice of L, and that the sublattices representing vertices form a chain. within(polygon). If you imagine the points as pegs sticking up in a board, think of a convex hull as a rubber band wrapped around them. If you've never used these libraries before, or are looking for a refresher on how they work, this page is for you! Is there an efficient algorithm to determine if a polygon (defined by a series of coordinates) is convex, non-convex or complex? The Attempt at a Solution Hi there, I am trying to draw the convex hull of the 5 points x1,x2,x3,x4,x5 below. The Convex Hull of a set of points P is the smallest convex polygon CH(P) for which each point in P is either on the boundary of CH(P) or in its interior. Qhull implements the Quickhull algorithm for computing the convex hull. I'm trying to write a program that calculates the area of the convex hull of a set of points in a plane. Ask Question Asked 1 year, 8 months ago. The convex hull of a set X of points in the Euclidean plane is the smallest convex set that contains X. In the first case, the hull elements are 0-based indices of the convex hull points in the original array (since the set of convex hull points is a subset of the original point set). convex_hull Point-in-Polygon. convex_hull. Create a convex hull for a given set of points.
The Computational Geometry Algorithms Library (cgal.org) offers data structures and algorithms like triangulations, Voronoi diagrams, Boolean operations on polygons and polyhedra, point set processing, arrangements of curves, surface and volume mesh generation, geometry processing, alpha shapes, convex hull algorithms, shape reconstruction, AABB and KD trees.
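For completeness, here is how the two Python libraries cited throughout these snippets are typically invoked. scipy.spatial.ConvexHull and shapely's convex_hull are real APIs; the random test points are just an illustration:

```python
import numpy as np
from scipy.spatial import ConvexHull
from shapely.geometry import MultiPoint

pts = np.random.rand(30, 2)

# SciPy: hull.vertices are indices into pts, in counter-clockwise order for 2-D
hull = ConvexHull(pts)
print(hull.vertices)   # indices of the hull vertices
print(hull.volume)     # in 2-D, "volume" is the enclosed area

# Shapely: a geometry object you can buffer, intersect, or test containment on
poly = MultiPoint([tuple(p) for p in pts]).convex_hull
print(poly.area, poly.contains(poly.centroid))
```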
|
{}
|
# Converting from trace reversed to standard metric perturbation
In linearised gravity, when dealing with gravitational waves it is common to consider just the spatial part of the trace-reversed metric perturbation $$\bar{h}_{ij}$$.
I know that in general the trace-reversed perturbation is defined as
$$\bar{h}_{\mu \nu} = h_{\mu \nu} - \frac{1}{2} \eta_{\mu \nu} h$$
My question is: how explicitly do I convert, e.g., $$\bar{h}_{xx} \rightarrow h_{xx}$$? I have a feeling that it is just a factor of $$2$$, e.g. Eq. 6.26 of Carroll?
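(A note on the algebra, added here rather than taken from the thread.) Taking the trace of the definition with $$\eta^{\mu\nu}$$ in four dimensions, where $$\eta^{\mu\nu}\eta_{\mu\nu} = 4$$, gives

$$\bar{h} = h - 2h = -h,$$

so the inversion is

$$h_{\mu\nu} = \bar{h}_{\mu\nu} - \frac{1}{2}\eta_{\mu\nu}\bar{h},$$

i.e. the same trace-reverse operation applied once more. In the transverse-traceless gauge usually adopted for gravitational waves the trace vanishes, so $$\bar{h}_{ij} = h_{ij}$$ with no factor of 2 at all.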
|
{}
|
# Limit of Function in Interval
## Theorem
Let $f$ be a real function which is defined on the open interval $\left({a \,.\,.\, b}\right)$.
Let $\xi \in \left({a \,.\,.\, b}\right)$.
Suppose that, $\forall x \in \left({a \,.\,.\, b}\right)$, either:
$\xi \le f \left({x}\right) \le x$
or:
$x \le f \left({x}\right) \le \xi$
Then $f \left({x}\right) \to \xi$ as $x \to \xi$.
## Proof
Note that, in either case, $f \left({x}\right)$ lies between $\xi$ and $x$, so $\left|{f \left({x}\right) - \xi}\right| \le \left|{\xi - x}\right|$.
From Limit of Absolute Value we have that $\left|{x - \xi}\right| \to 0$ as $x \to \xi$.
Since $0 \le \left|{f \left({x}\right) - \xi}\right| \le \left|{x - \xi}\right|$, the result follows from the Squeeze Theorem.
$\blacksquare$
|
{}
|
# How to create a plot in base R with tick marks of larger size?
R ProgrammingServer Side ProgrammingProgramming
To create a plot in base R with tick marks of larger size, we can make use of the tck argument of the axis function. The value of tck sets the length of the tick marks as a fraction of the plotting region; since the ticks lie below (outside) the plot area, the value is negative, e.g., -0.05. Check out the examples below to understand how it works.
## Example
plot(1:10,axes=FALSE,frame=TRUE)
axis(1,1:10,tck=-0.02)
axis(2,1:10,tck=-0.02)
## Example
plot(1:10,axes=FALSE,frame=TRUE)
axis(1,1:10,tck=-0.05)
axis(2,1:10,tck=-0.05)
## Output
(The outputs are plots; the tick marks drawn with tck = -0.05 are visibly longer than those drawn with tck = -0.02.)
|
{}
|
Karl and Nell made donations to the same charity last year. : GMAT Data Sufficiency (DS)
# Karl and Nell made donations to the same charity last year.
Karl and Nell made donations to the same charity last year. [#permalink]
Karl and Nell made donations to the same charity last year. Did Karl donate more money to the charity than Nell did?
(1) The amount of money that Nell donated is $200 less than twice the amount that Karl donated.
(2) Karl donated more than $250 to the charity.
Re: Karl and Nell made donations to the same charity last year. [#permalink]
avohden wrote:
Karl and Nell made donations to the same charity last year. Did Karl donate more money to the charity than Nell did?
(1) The amount of money that Nell donated is $200 less than twice the amount that Karl donated.
(2) Karl donated more than $250 to the charity.
OA: C
Source: GMAT Hacks
Statement 1 says:
N = 2K - 200. Not sufficient on its own: if K is high, N is also high, and if K is low, N is also low, so either donor could be the larger one.
Statement 2:
K > 250. Not sufficient as is, since it says nothing about Nell.
Together: we know K > 250, and statement 1 then gives N = 2K - 200 > K (because K > 200), so we can determine the relationship between Karl's and Nell's donations.
Re: Karl and Nell made donations to the same charity last year. [#permalink]
Official Explanation
The question is: Is k > n ?
Statement (1) is insufficient. It tells us n = 2k - 200. If k is greater than 200, then n is greater. For instance, if k = 300, then n = 2(300) - 200 = 400. However, if k is smaller than 200, the opposite is true. If k = 150, then n = 2(150) - 200 = 100. We don't know the range of values for either k or n, so we can't answer the question.
Statement (2) is also insufficient. This doesn't tell us anything about the relationship between the donations, or about Nell's amount.
Taken together, the statements are sufficient. Since we know k is greater than 250, it must be greater than 200, meaning that Nell's contribution was greater, according to the equation in (1). Choice (C) is correct.
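A compact restatement of the combined sufficiency (my own note, not part of the original explanation): from statement (1),

$$n - k = (2k - 200) - k = k - 200,$$

so $n > k$ exactly when $k > 200$. Statement (2) gives $k > 250 > 200$, hence $n > k$: Nell donated more, and the answer to the question is a definite "No," which is sufficient.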
|
{}
|
## anonymous 5 years ago e^.07(10-t)[10,1) find the definite integral
1. amistre64
[10,1) is a little backwards... (1,10] is that e^.07 times (10-t)? or is the last part included in the exponent?
2. anonymous
$\int\limits_{1}^{10} e^{0.07}(10-t)\,dt$
3. amistre64
so it aint included as the exponent..makes it alittle easier :)
4. amistre64
e^.07 is a constant, so pull it out of the way
5. amistre64
(S) (10-t) dt -> 10t - (1/2)t^2
F(t) = [10t - (t^2)/2] e^.07
6. amistre64
F(10) - F(1) will be the answer
7. amistre64
(100 - 50)e^.07 - (10 - .5)e^.07
= (50 - 9.5) e^.07
= (40.5) e^.07
if I did it right in me head :)
8. amistre64
that might be alittle off, did you mean to exclude 1 as an option?
9. anonymous
no 1 is included. now I'm understanding
10. amistre64
whew!!.... cause in the top {10,1) means everything from 10 to 1 but not including 1 :)
11. anonymous
no i just want it for 10 and 1
12. amistre64
then were good with it :) I got some idea for how to find it if it approaches 1, but nothing ti be sure about :)
13. anonymous
Ive got another one its its kinda hard what you think
14. amistre64
I can take a stab at it..... im ok with failure :)
15. amistre64
is it one thats already posted? or you need to write it up?
16. anonymous
im going to write it now
17. anonymous
$\int\limits_{0}^{15}$e^0.05e^.06(15-t)
18. amistre64
to clean it up... is that: e (e) (15-t) ??
19. amistre64
or is that: e^e^(15-t)?
20. amistre64
i assume the first one :)
21. amistre64
recall that like bases when multiplies add exponents, for example: 5^3 5^5 = 5^(3+5) = 5^8 so, e^.06 e^.06 = e^.11 which is still a constant and can be pulled out...
22. amistre64
you can see the typo right....
23. amistre64
that leaves us with: (S) (15-t) dt -> 15t - (t^2)/2 | [15,0]
F(t) = [15t - (t^2)/2] e^.11
since F(0) = 0 the only important one is F(15)
F(15) = [15(15) - 15(15)/2] e^.11 = [225 - 112.5] e^.11
F(15) = 112.5 e^.11
24. anonymous
$\int\limits_{0}^{15}e^.05e.06(15-t)$
25. anonymous
this is how it looks exactly will u get the same answer or does this makes a difference
26. amistre64
is that .06 an exponent?
27. amistre64
(e^.05) (e^.06) (15-t) is what I integrated :)
28. amistre64
that equation option down there is useful for some things, but I can never get it to do what I want....
29. amistre64
$\int\limits_{0}^{15} e^{.05} e^{.06} \left( 15-t \right)$
30. amistre64
$= (112.5) e ^{.11}$
31. anonymous
it its e^0.5.e^.06 the 15-t is beside the .06 in exponential form
32. anonymous
ok so what will happen to the ^(15-t)
33. amistre64
$\int\limits_{0}^{15} e ^{.05} e^{.06\left( 15-t \right)}$
34. anonymous
yes that's exactly what it is
35. amistre64
ok....we can pull out that first e, its nothing but a constant
36. amistre64
go ahead and distribute the .06 thru the (...)
37. amistre64
e^(.8 - .06t) right?
38. amistre64
(.9 - .06t)
39. amistre64
If we can get this into the form: (S) Du e^u , then it suits up to e^u....does that make sense?
40. amistre64
think back to derivatives... e^2x goes down to 2 e^2x right? does that ring a bell?
41. amistre64
so... (S) 2 e^2x would equal just e^2x does that help out?
42. amistre64
our exponent here (.9 - .06t) derives to: -.06 right? so we need to modify this set up to include a (-.06) without actually changing the value of the set up... what number can we multiply ANY number by to get the same value back? x times ? equals x ??
43. amistre64
d(e^x)/dx = (dx/dx) e^x. Or to write it another way: Dx(e^x) -> Dx e^x; Dx(e^7x) -> Dx(7x) e^7x -> 7 e^7x right>
44. anonymous
not sure im getting the last part
45. amistre64
ok.... tell me what your having difficulites with and I can help iron out the wrinkles :)
46. anonymous
ok the (.8-.06t) not getting the dirivitive of this
47. amistre64
im thinking it involves the chain rule...which I think should be called the "gear"rule...
48. amistre64
I want you to think of a box of gears that are all meshed together so that when you turn the very first one, it has an effect on all the rest of them. each gear in turn is turning another correct?
49. anonymous
yes i understand how you get the -.06 is this where we use the [15,0]
50. amistre64
not yet, we havent gotten to our initial function yet, we still need to find a way to get there first.
51. amistre64
we have a function: e^u that depends on "u" for its value right? u = .8 -.06t and so the value of "u" depends on the value of "t" right?
52. anonymous
why did we substititute?
53. amistre64
we substitued values to see how the original function behaves. What we need in order to integrate this original function is to modify the way it looks without changing its "value". we can easily integrate the function Du e^u to get e^u so this is our goal. We need to modify the "shape" of our original function so that it matches the "easy to integrate" function without changing the "value" of our original function.
54. amistre64
what number do we know of that is used to obtain the same "value" but a different "shape" of a function? x times ? = x ??
55. amistre64
for example: say you only have a 20 dollar bill, and you want to buy some bubble gum for 1.00 but the cashier has no change to split your $20. How can you purchase the bubble gum? by changing your $20 into a bunch of $1 bills... that is all we are wanting to do here.... so that we can make life easier for us
56. anonymous
ok
57. amistre64
We have e^u we WANT Du e^u what value does Du have to be?
58. amistre64
Du is probably a bad name for that.... let me ask this.. what do we have to multiply (e^u) by in order to keep the same "value" e^u times ____ = e^u ?
59. amistre64
think 1....think 1.... 1 times e^u = e^u right?
60. anonymous
right
61. amistre64
does Du/Du = 1?
62. amistre64
does (-.06)/(-.06) = 1 ??
63. amistre64
e^.05 (S) [(-.06)/(-.06)] e^(.9 - .06t) dt
look close to what we want it to be?
64. amistre64
if your unsure, tell me what your doubts are.... cause I can be rather stupid at times and mess alot of simple things up :)
65. amistre64
in essence, we want to slide that bottom (-.06) out of the way; since it is a constant we can do that and put it under the left side. that leaves the integrand to be -(e^.05)/.06 [S] -.06 e^(.9-.06t) dt -> e^(.9-.06t), so F(t) = - [(e^.05)/.06] [e^(.9-.06t)]
66. amistre64
or put another way: F(t) = - e^(.95 - .06t) / .06
67. amistre64
(Constants) [S] Du e^u du becomes (Constants) (e^u) is all we did in a nutshell
68. amistre64
I gotta head to class for the next few hours... if your still lost, go ahead and post it for everyone to see :) Ciao
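(Editorial summary, not part of the transcript; the second line uses the exponent reading the asker confirmed above.)

$$\int_1^{10} e^{0.07}(10-t)\,dt = e^{0.07}\left[10t - \tfrac{t^2}{2}\right]_1^{10} = 40.5\, e^{0.07} \approx 43.4$$

$$\int_0^{15} e^{0.05}\, e^{0.06(15-t)}\,dt = e^{0.05}\left[\frac{-e^{0.06(15-t)}}{0.06}\right]_0^{15} = \frac{e^{0.05}\left(e^{0.9}-1\right)}{0.06} \approx 25.6$$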
|
{}
|
Math Notation Help
This glossary will help you build complex mathematical equations using the TeX markup language. This will involve using @@ or $$ before and after the expression to display the desired results.

P

Psi (upper case greek letter)
• $$\Psi$$ gives $\Psi$

R

relativity
• $E=mc^2$
• Keyword(s): relativity

rho (lower case greek letter)
• $$\rho$$ gives $\rho$

right only brace
• Syntax: \left.{...\right} (note the dot!)
• Ex.: $$\left.{{\rm~term1\atop\rm~term2}\right}=y$$ gives $\left.{{\rm~term1\atop\rm~term2}\right}=y$ (\rm~something switches to roman style)

root
• Syntax: \sqrt[n]{arg}, or simply \sqrt{arg} for \sqrt[2]{arg}
• Ex.: $$\sqrt[3]{8}$$ gives $\sqrt[3]{8}$
• Ex.: $$\sqrt{-1}$$ gives $\sqrt{-1}$
• Nesting of roots (and combining with fractions, etc.) is possible.
• Ex.: $$\sqrt[n]{\frac{x^n-y^n}{1+u^{2n}}}$$ gives $\sqrt[n]{\frac{x^n-y^n}{1+u^{2n}}}$
• Ex.: $$\sqrt[3]{-q+\sqrt{q^2+p^3}}$$ gives $\sqrt[3]{-q+\sqrt{q^2+p^3}}$
• Keyword(s): square root

S

s.u.m
• $$\sum_{n+2}^x$$ is $\sum_{n+2}^x$
• Keyword(s): sum

sigma (lower case greek letter)
• $$\sigma$$ gives $\sigma$

Sigma (upper case greek letter)
• $$\Sigma$$ gives $\Sigma$

smiley
• $$~\unitlength{.6}~\picture(100){~~(50,50){\circle(99)}~~(20,55;50,0;2){+1$\hat\bullet}~~(50,40){\bullet}~~(50,35){\circle(50,25;34)}~~(50,35){\circle(50,45;34)}}$$
• Keyword(s): smiley

square bracket
• Syntax: \left[...\right]
• Ex.: $$\left[a,b\right]$$ gives $\left[a,b\right]$
|
{}
|
Hi, I wonder whether the expectation operator in the term E[R(w_t)] in equations (11.4.12) and (11.4.13) is unnecessary. And the "E" in (11.4.15) and (11.4.16) seems like it should be "R". Thanks a lot.
In inequality 11.4.12, I guess we imply that
E_wt[l(xt, wt)] >= E_wt[E_xt[l(xt, wt)]] = E_wt[R(wt)]
If this is the case, I would appreciate a more thorough explanation.
In 11.4.15 and 16, it should be E[R(\bar{wt})] instead of E[\bar{wt}] . After all, we seek an upper bound for the deviation of the expected value of the risk from the minimum risk, which we obtain in 11.4.16.
Hi @wwwu and @sanjaradylov, thanks for the discussions. We’ve just revised the proof and it can be previewed at http://preview.d2l.ai.s3-website-us-west-2.amazonaws.com/d2l-en/master/chapter_optimization/sgd.html
Just let me know if you have any further questions on it.
|
{}
|
# Linear Transformation and spanning set
I have the following question below:
Let $V$ and $W$ be real vector spaces and $T:V\to W$ be a linear transformation such that $\ker(T) = \{0\} \subset V$. Let vectors $v_1,v_2,v_3,v_4$ belong to $V$, and $\{T(v_1), T(v_2)\}$ is a generating set for $W$. Is the set of vectors $v_1,v_2,v_3,v_4$ belonging to $V$ a generating set for $V$?
I found this question from an old book. The more current books I think use the word spanning set instead of generating set. I am not sure how to start this proof off. My initial guess is to assume
$w_1 = T(v_1)$ and $w_2 = T(v_2)$, so $\{w_1,w_2\}$ is the spanning set in $W$, such that any vector $w_i$ in $W$ can be represented as $w_i = aw_1 + bw_2$. Then by linearity I guess that we can write $w_i = T(av_1 + bv_2)$.
From this point I am stuck on how to show that these 2 vectors are a spanning set for $V$. If I can do that, I guess I can then add $v_3$ and $v_4$ to the first two and claim this is a spanning set for $V$, because a spanning set can have redundant vectors; this is not asking for a basis, but a spanning set. Hope someone can help here.
Palu
First of all, mentioning the vectors $v_3$ and $v_4$ is pure smoke and mirrors, since we know nothing about them (they could even be zero). What is important is the fact that $\ker(T)=\{0\}$. This shows that $T$ is injective, so that the image $T(V)$ is isomorphic to $V$. Moreover, the fact that $T(v_1)$ and $T(v_2)$ generate $W$ implies that $T$ is also surjective, so $T$ is an isomorphism, $V$ and $W$ are isomorphic, and $T(V)=W$. To show that $v_1$ and $v_2$ span $V$, take any vector $v \in V$, take its image $T(v)$ in $W$, decompose $T(v)=aw_1+bw_2$, and then take inverse images: $v=av_1+bv_2$. This works precisely because $T$ is an isomorphism. An example where $T$ is not injective is given when $T$ is the map $T:\mathbb{R}^2 \rightarrow \mathbb{R}^2$ given by the matrix $\left(\begin{array}{cc} 1&-2\\ 2&-4\\ \end{array}\right)$. The vectors $v_1=\left(\begin{array}{c} 1\\ 0\\ \end{array}\right)$ and $v_2=\left(\begin{array}{c} 0\\ 1\\ \end{array}\right)$ span the vector space $V=\mathbb{R}^2$, but their images by $T$ do not.
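A quick numerical illustration of the counterexample at the end of the answer (my own sketch, not part of the original post):

import numpy as np

T = np.array([[1, -2],
              [2, -4]])            # the non-injective map from the answer

v1, v2 = np.array([1, 0]), np.array([0, 1])

print(np.linalg.matrix_rank(np.column_stack([v1, v2])))          # 2: v1, v2 span R^2
print(np.linalg.matrix_rank(np.column_stack([T @ v1, T @ v2])))  # 1: their images do not span
print(np.linalg.matrix_rank(T))                                  # 1: ker(T) is nontrivial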
• I just want to add a note. We don't necessarily know that $v_{1}$ and $v_{2}$ are linearly independent from each other, unless I'm mistaken. I don't think $\{ T(v_{1}), T(v_{2}) \}$ being a "generating set" necessarily means it is a basis. It doesn't affect your answer, but it is just an extra note. – layman Aug 30 '14 at 17:08
• Thanks Nimda. So you have shown that only $v_1$ and $v_2$ span $V$, and hence all of $v_1,v_2,v_3,v_4$ do not span $V$. Am I concluding correctly? – Palu Aug 30 '14 at 17:34
• Hi user46944, yes you are correct, w1,w2 is a spanning set, not a basis and it is also correct that we don't know if these vectors, either vi or wi are independent. – Palu Aug 30 '14 at 17:36
• Hi Nimda, yes since one takes inverse images, and gets that a general vector $v=av_1+bv_2$, then we have a vector spanned by $v_1,v_2$, and then I guess one can by extension add in $v_3, v_4$, and hence have all of them become the spanning set (we can do this since we are looking for a spanning set and not a basis, so we can allow these redundant vectors). So hence we can say all 4 vectors span the space $V$. Please let me know your thoughts on this. – Palu Aug 31 '14 at 14:13
Hints: Since $W$ has a spanning set with two elements, this means $\dim(W)\leq2$
Also note that $T$ is $1-1$ since it has a trivial kernel; this means $\dim(V)\leq\dim(W)\leq2$
Now - what in general can we say about $\dim(T(V))$ ?
Look at the set of inequalities we got: $$\dim(W)=\dim(T(V))\leq\dim(V)\leq\dim(W)$$ and thus $\dim(V)=\dim(W)$, and since $T$ is $1-1$ it is also onto. Now, since $\{T(v_{1}),T(v_{2})\}$ spans $W$, it has a subset that is a basis of $W$ (if $\dim(W)=2$ then it is simply $\{T(v_{1}),T(v_{2})\}$, and if $\dim(W)=1$ it is $\{T(v_{1})\}$ or $\{T(v_{2})\}$)
and $T^{-1}$ is also an isomorphism since $T$ is, and it will map a basis to a basis, so $\{v_{1},v_{2}\}$ or $\{v_{1}\}$ or $\{v_{2}\}$ is a basis for $V$
• @Palu $\dim(T(V))\leq\dim(V)$ and yes – Belgi Aug 30 '14 at 16:57
• @Palu - In general no, but look at the set of inequalities we got: $\dim(W)=\dim(T(V))\leq\dim(V)\leq\dim(W)$ and thus $\dim(V)=\dim(W)$, and since $T$ is $1-1$ it is also onto. Now, since $\{T(v_{1}),T(v_{2})\}$ spans $W$ it has a subset that is a basis of $W$ (if $\dim(W)=2$ then it is simply $\{T(v_{1}),T(v_{2})\}$ and if $\dim(W)=1$ it is $\{T(v_{1})\}$ or $\{T(v_{2})\}$) and $T^{-1}$ is also an isomorphism since $T$ is and it will map a basis to a basis so $\{v_{1},v_{2}\}$ or $\{v_{1}\}$ or $\{v_{2}\}$ is a basis for $V$ – Belgi Aug 30 '14 at 17:14
|
{}
|
# [Tuglist] subfigure
Mohit Agarwal tuglist@gnu.org.in
Thu, 28 Jun 2001 15:11:09 +0530 (IST)
*** S. Nagesh Kini wrote in the TUGIndia list today at 10:10 +0530:
: In chapter 13 of tutorial chapters,
: title: fun with floats, subfigure environment
: It is not mentioned where to put the name of the file containing the
: figure. Can I get some help?
\subfigure[Something]{\label{fig:something}\includegraphics{myfigure}}
would give you a subfigure with the caption `Something' and `myfigure'
would be the included figure with the label `fig:something'. You'll need
to put the above line in the figure environment as in the tutorial. The
three dots (after the label) in the tutorial seem to suggest that it is
the place for placing your graphic.
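For concreteness, a minimal complete figure environment might look like
this (the file names and labels here are placeholders, and the subfigure
package must of course be loaded):

\begin{figure}
  \centering
  \subfigure[Something]{\label{fig:something}\includegraphics{myfigure}}
  \subfigure[Something else]{\label{fig:other}\includegraphics{myotherfigure}}
  \caption{A figure holding two subfigures.}
  \label{fig:whole}
\end{figure}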
mohit
p.s. I suppose that the tutorial sources are also available.
|
{}
|
# What is the force, in terms of Coulomb's constant, between two electrical charges of 8 C and -15 C that are 9 m apart?
##### 1 Answer
Jun 19, 2018
$-1.48\ \mathrm{C^2/m^2} \cdot k$
#### Explanation:
Coulomb's law states that:
$F = k \frac{q_1 q_2}{r^2}$
where:
• $k$ is Coulomb's constant
• ${q}_{1} , {q}_{2}$ are the two charges
• $r$ is the distance between the two charges
So, we get:
F=k(8 \ "C"*-15 \ "C")/((9 \ "m")^2)
$= \left(- 120 \setminus {\text{C"^2*k)/(81 \ "m}}^{2}\right)$
$= - 1.48 \setminus {\text{C"^2"/m}}^{2} \setminus k$
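A quick numerical restatement of the arithmetic (my own sketch; the answer deliberately leaves $k$ symbolic):

q1, q2, r = 8.0, -15.0, 9.0    # charges in coulombs, separation in metres
coeff = q1 * q2 / r**2         # the factor multiplying Coulomb's constant k
print(round(coeff, 2))         # -1.48, so F = -1.48 C^2/m^2 * k
# Plugging in k = 8.99e9 N m^2/C^2 would give F of about -1.33e10 N (attractive).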
|
{}
|
# [icq-devel] Re: Contents of icq-devel digest Vol 23, Issue 1
Trevor gemaspecial at yahoo.com.hk
Thu Jan 13 02:55:41 CET 2005
--- icq-devel-request at blipp.com wrote:
> Send icq-devel mailing list submissions to
> icq-devel at blipp.com
>
> To subscribe or unsubscribe via the World Wide Web,
> visit
> http://vic20.blipp.com/mailman/listinfo/icq-devel
> or, via email, send a message with subject or body
> 'help' to
> icq-devel-request at blipp.com
>
> You can reach the person managing the list at
> icq-devel-owner at blipp.com
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of icq-devel digest..."
>
>
> Today's Topics:
>
> 1. Encryption of direct-connection message on
> ICQ2003 (Chen Kidheart)
> 2. David Kren/UK/Symbian is out of the office.
> (David.Kren at symbian.com)
>
>
>
----------------------------------------------------------------------
>
> Message: 1
> Date: Wed, 12 Jan 2005 15:01:32 +0800
> From: "Chen Kidheart" <kidheartchen at msn.com>
> Subject: [icq-devel] Encryption of direct-connection
> message on
> ICQ2003
> To: icq-devel at blipp.com
> Message-ID:
> <BAY11-F24D19F3F1098C30751DFCDC9890 at phx.gbl>
> Content-Type: text/plain; charset=big5;
> format=flowed
>
> Excuse me!
>
> I have some questions about direct connections of
> ICQ PRO 2003b client.
>
> ( Product of ICQ (TM).2003 b.5.56.1.3916.85 )
>
> There is a DC message packet below:
>
> 0000005D 02 a5 53 97 b7 a8 84 e0 0a aa 7c ee 0a 4d
> 83 ee ..S.....
> ..|..M..
> 0000006D 0a 58 83 ee 0a 5c 83 f4 0a 05 83 cf 0a 47
> 83 ee .X...\..
> .....G..
> 0000007D 26 57 3d 85 79 56 8c 2c 1a 18 25 30 47 e0
> 60 b8 &W=.yV.,
> ..%0G..
> 0000008D 41 40 83 ee 0d 57 83 ee 47 2b f0 9d 6b 21
> e6 ee A at ...W..
> G+..k!..
> 0000009D 0a 52 83 ee 0b 49 83 ee 0a 48 83 ee 0a 05
> b4 ef .R...I..
> .H......
> 000000AD 0a 59 82 ef 0a 2e f8 b2 78 27 e5 df 56 35
> ed 9d .Y......
> x'..V5..
> 000000BD 63 59 e2 80 79 69 63 70 67 39 35 30 5c 64
> 65 66 cY..yicp
> g950\def
> 000000CD 66 30 5c 64 65 66 6c 61 6e 67 31 30 33 33
> 5c 64 f0\defla
> ng1033\d
> 000000DD 65 66 6c 61 6e 67 66 65 31 30 32 38 7b 5c
> 66 6f eflangfe
> 1028{\fo
> 000000ED 6e 74 74 62 6c 7b 5c 66 30 5c 66 73 77 69
> 73 73 nttbl{\f
> 0\fswiss
> 000000FD 5c 66 63 68 61 72 73 65 74 30 20 41 72 69
> 61 6c \fcharse t0
> Arial
> 0000010D 3b 7d 7b 5c 66 31 5c 66 6e 69 6c 5c 66 63
> 68 61 ;}{\f1\f
> nil\fcha
> 0000011D 72 73 65 74 31 33 36 20 5c 27 62 37 5c 27
> 37 33 rset136
> \'b7\'73
> 0000012D 5c 27 62 32 5c 27 64 33 5c 27 61 39 5c 27
> 66 61 \'b2\'d3
> \'a9\'fa
> 0000013D 5c 27 63 35 5c 27 65 39 3b 7d 7d 0d 0a 7b
> 5c 63 \'c5\'e9
> ;}}..{\c
> 0000014D 6f 6c 6f 72 74 62 6c 20 3b 5c 72 65 64 32
> 31 5c olortbl
> ;\red21\
> 0000015D 67 72 65 65 6e 34 33 5c 62 6c 75 65 37 37
> 3b 7d green43\
> blue77;}
> 0000016D 0d 0a 5c 76 69 65 77 6b 69 6e 64 34 5c 75
> 63 31 ..\viewk
> ind4\uc1
> 0000017D 5c 70 61 72 64 5c 63 66 31 5c 6c 61 6e 67
> 31 30 \pard\cf
> 1\lang10
> 0000018D 32 38 5c 66 30 5c 66 73 32 32 20 54 65 73
> 74 20 28\f0\fs 22 Test
>
> 0000019D 4d 65 73 73 61 67 65 20 23 32 5c 66 31 5c
> 70 61 Message
> #2\f1\pa
> 000001AD 72 0d 0a 7d 0d 0a 00 00 00 00 00 ff ff ff
> 00 26 r..}....
> .......&
> 000001BD 00 00 00 7b 39 37 42 31 32 37 35 31 2d 32
> 34 33 ...{97B1
> 2751-243
> 000001CD 43 2d 34 33 33 34 2d 41 44 32 32 2d 44 36
> 41 42 C-4334-A
> D22-D6AB
> 000001DD 46 37 33 46 31 34 39 32 7d
> F73F1492 }
>
>
> In this packet, the encrypted section from 0x02 to
> 0x79 before the RTF
> message :
>
> 0000005D 02 a5 53 97 b7 a8 84 e0 0a aa 7c ee 0a 4d
> 83 ee ..S.....
> ..|..M..
> 0000006D 0a 58 83 ee 0a 5c 83 f4 0a 05 83 cf 0a 47
> 83 ee .X...\..
> .....G..
> 0000007D 26 57 3d 85 79 56 8c 2c 1a 18 25 30 47 e0
> 60 b8 &W=.yV.,
> ..%0G..
> 0000008D 41 40 83 ee 0d 57 83 ee 47 2b f0 9d 6b 21
> e6 ee A at ...W..
> G+..k!..
> 0000009D 0a 52 83 ee 0b 49 83 ee 0a 48 83 ee 0a 05
> b4 ef .R...I..
> .H......
> 000000AD 0a 59 82 ef 0a 2e f8 b2 78 27 e5 df 56 35
> ed 9d .Y......
> x'..V5..
> 000000BD 63 59 e2 80 79
> cY..y
>
>
> How do I analyze/decrypt this section ?
>
> I only know this kind of DC message packet always comes from the
> ICQ PRO 2003 client.
>
> Can i get the UIN/ICQ# from the encrypted section ?
> or get any concrete
> information ?
>
> Is this encrypted section just provided for the
> receiver to check that the
> sender is legal or not ?
>
> Thx in advance. Have a nice day!
>
>
>
> -Chun Hang Chen
>
>
_________________________________________________________________
> MSN Messenger 7.0
> http://messenger.msn.com.tw/beta
>
>
>
> ------------------------------
>
> Message: 2
> Date: Wed, 12 Jan 2005 10:00:33 +0000
> From: David.Kren at symbian.com
> Subject: [icq-devel] David Kren/UK/Symbian is out of
> the office.
> To: icq-devel at blipp.com
> Message-ID:
>
>
<OFE9484A24.33AB643F-ON80256F87.0036FB7D-80256F87.0036FB7D at Symbian.com>
>
> Content-Type: text/plain; charset="US-ASCII"
>
>
>
>
>
> I will be out of the office starting 11/01/2005 and
> will not return until
> 14/01/2005.
>
> On training till Friday.
>
=== message truncated ===
|
{}
|
# Category Archives: Science and Math
## Exact solution of the 2D Ising model via Grassmann variables
1. Introduction
In 1980 Stuart Samuel gave what I consider to be one of the most elegant exact solutions of the 2D Ising model. He used Grassmann variables to formulate the problem in terms of a free-fermion model, via the fermionic path integral approach. The relevant Grassmann action is quadratic, so that the solution can be found via well known formulas for Gaussian integrals.
In previous articles, I derived Onsager’s celebrated expression for the partition function of the 2D Ising model, using two different methods. First, I used a combinatorial method of graph counting that exploits the theory of random walks (see here). In the second article (see here), I reviewed the method of Schultz, Mattis and Lieb of treating the 2D Ising model as a problem of many fermions. Here I further explore the connection between the Ising model and fermions.
This article is based on my study notes. I have more or less followed Samuel's solution of the 2D Ising model using Grassmann variables, but I have expanded the calculations for my own convenience.
Whereas on the one hand the behavior of bosons can be described using standard path integrals, known as bosonic path integrals, on the other hand the path integral formulation for fermions requires the use of Grassmann variables. Grassmann numbers can be integrated, but not using the standard approach based on Riemann sums and Lebesgue integrals, etc. Instead, they must be integrated using Berezin integration. I have previously written about Grassmann variables and Berezin integrals here.
In what follows, I assume familiarity with the 2D Ising model. Readers who find it difficult to understand the details are referred to the two previously mentioned articles, which are introductory and easier to understand.
The partition function can be approached as a power series either in a high temperature variable or a low temperature one. Consider first the high temperature expansion. It is well known that the partition function of the 2D Ising model is proportional to the generating function of connected or disconnected graphs in which all nodes have even degree, and edges link only nearest neighbors on the infinite square lattice. Graph nodes correspond to sites and links correspond to bonds. In such graphs, every incoming bond at a site has at least one corresponding outgoing bond, because each site has an even number of bonds. To each such “closed graph,” there are corresponding directed graphs, where the links are directed. Since each bond appears at most once in any such directed graph, but never twice or more, we can enumerate such closed directed graphs by assigning pairs of Grassmann variables to each bond. In this case, the generating function, when evaluated in a suitable high temperature variable (such as ${u=\tanh \beta J}$), gives the Ising model partition function up to a known prefactor.
Here, I do not use the above high temperature expansion, in terms of loops or multipolygons. Instead, I use a low temperature expansion and enumerate non-overlapping magnetic domains, following Samuel’s original work. Specifically, each configuration of the Ising model corresponds to a specific domain wall configuration. So summing over all domain wall configurations is equivalent to a sum over all spin states, except for a 2-fold degeneracy, since each domain wall configuration corresponds to 2 possible spin configurations.
Since the Ising model on the square lattice is self-dual, the high temperature approach using overlapping multipolygons and the low temperature approach using non-overlapping multipolygons on the dual lattice are equivalent, of course. Either way, explicit evaluation of a Berezin integral gives the exact solution. Specifically, we assign 4 Grassmann variables to each site of the dual lattice. Equivalently, one can instead think as follows: there are two types of Bloch walls, “vertical” domain walls and “horizontal” walls, and each wall segment has two ends. A Bloch wall can thus be represented by Grassmann variables at 2 different sites. A corner of a Bloch domain can be represented by matching a vertical and horizontal variable at a single site. Finally, so-called “monomer” terms consist of 2 horizontal or 2 vertical variables. They represent the absence of a corresponding bond, and also of corners. A single monomer term at a site represents a domain wall that goes straight through a site, but perpendicularly to the monomer and without going around a corner. Two monomer terms at the same site represent a site interior to domain walls, i.e., not on a domain wall. Using the terms corresponding to Bloch walls, corners and monomers, we then construct an action.
2. Overview of the strategy
The solution strategy is as follows. We will exploit 2 properties of Grassmann variables. The first property is anticommutativity, so that the square of a Grassmann variable is zero. This nilpotency property can be exploited to count every domain wall unit (or bond in the high temperature method) no more than once. The second property we will exploit is a feature that is specific to the Berezin integral. Recall that a Berezin multiple integral is nonzero only if the Grassmann variables being integrated exactly match or correspond to the integration variables. If there is a single integration variable that is not matched, the whole multiple integral vanishes, because ${\int d\psi=0}$ for the Berezin integral of any Grassmann variable ${\psi}$. From now onwards, we will say that the integrand “saturates” the Berezin integral if all integration variables are matched by integrand variables. So the second property is the ability of the multiple Berezin integral to select only terms that saturate the integral.
The essence of the trick is to exponentiate the suitably chosen action. The Taylor series for a function of a finite number of Grassmann variables is a polynomial (because of the nilpotency). This polynomial can code all kinds of graphs and other strange groupings of parts of graphs, such as isolated corners or monomers. By Berezin integration, we can then select only the terms that saturate the integral. If the action is chosen appropriately, the saturating terms are precisely those that correspond to the non-overlapping Bloch domains. (In the high temperature variant, the action instead generates the desired multipolygons.)
It will turn out that for the 2D Ising model this action is a quadratic form, so that we essentially have a free-fermion model. Specifically, the quadratic action is partially “diagonalized” by the Fourier transform, immediately leading to the exact solution originally found by Onsager. Of all the methods of solution of the Ising model, I find this method the simplest, most beautiful and most powerful.
What I also found quite fascinating is that for the cubic lattice one obtains a quartic action instead, as is well known to experts in the field. So, in this sense, the cubic Ising model is not directly equivalent to a free-fermion model, but rather to a model of interacting fermions.
3. The Grassmann action

We assume the reader is familiar with the Ising model on the square lattice. Let the Boltzmann weight ${t= e^{-\beta J}}$ be our chosen low temperature variable. Then we can write the partition function as
$\displaystyle \Xi= \sum_{\sigma} t^{H(\sigma)} \ \ \ \ \ (1)$
where ${H}$ is the Ising model Hamiltonian and the sum is over all spin states. The choice of symbol ${\Xi}$ for the partition function was made so that the letter ${Z}$ can be reserved for the partition function per site in the thermodynamic limit, to be defined further below.
For a given spin configuration, let us consider the set of all bonds in the excited state, i.e. with antiparallel spins. These excited bonds form the Bloch walls separating domains of opposite magnetization. On the dual lattice, these Bloch walls form non-overlapping loops or polygons. Moreover, every Bloch wall configuration corresponds to exactly 2 spin configurations, so that we can rewrite the partition function as
$\displaystyle \Xi= 2\sum _{\mbox{\tiny loops}} t^{\gamma} t^{-(2N-\gamma)} = 2t^{-2N}\sum _{\mbox{\tiny loops}} t^{2\gamma} ~. \ \ \ \ \ (2)$
Here, the factor ${t^\gamma}$ is due to the Bloch walls and ${2N-\gamma}$ is the number of bonds inside the Bloch domains. We thus see the partition function can be calculated by enumerating non-overlapping loops and summing them with proper Boltzmann weights.
Let us define a modified partition function for these loops by
$\displaystyle \Xi'= \sum _{\mbox{\tiny loops}} t^{2\gamma} \ \ \ \ \ (3)$
so that ${\Xi=2 t^{-2N} \Xi'}$. Our goal henceforth is thus to calculate ${\Xi'}$. To do so, we will use Grassmann numbers and Berezin integration.
Let us define at each site ${x}$ of the 2D lattice a pair of Grassmann variables in the vertical direction, ${\eta_{\pm 1}(x)}$ and another pair for the horizontal direction ${\eta_{\pm 2}(x)}$.
We can now define an action for our fermionic path integral as follows. Each configuration of nonoverlapping loops consists of (i) individual segments of loops that link neighboring sites on the dual lattice (Bloch wall units), (ii) sites where the domain wall goes through the site and (iii) sites where the Bloch domain cuts a corner. We will thus write the total action as a sum of 3 terms: (i) the Bloch wall or “line” term ${S_L}$ , (ii) the “monomer” term ${S_M}$ and (iii) the corner term ${S_C}$:
$\displaystyle S= S_L + S_M + S_C ~. \ \ \ \ \ (4)$
We will then exponentiate this action and use a Berezin integral to obtain ${\Xi'}$:
$\displaystyle \Xi'= (-1)^N \int e^{S} \prod_x d\eta_{-1}(x) d\eta_{+1}(x) d\eta_{-2}(x) d\eta_{+2}(x) ~ \ \ \ \ \ (5)$
We will use the same convention used by Samuel, so
$\displaystyle \int d\eta_{-1} d\eta_{+1} \eta_{-1} \eta_{+1} =1~. \ \ \ \ \ (6)$
Let us define the Bloch wall terms by
$\displaystyle S_L= t^2 \sum_x[ \eta_{+1}(x) \eta_{-1}(x +\hat 1) + \eta_{+2}(x) \eta_{-2}(x +\hat 2) ] ~. \ \ \ \ \ (7)$
To remain consistent with this definition the corner term must be defined as
$\displaystyle S_C = \sum_x[ \eta_{+1}(x) \eta_{-2}(x) + \eta_{+2}(x) \eta_{-1}(x) + \eta_{+2}(x) \eta_{+1}(x) + \eta_{-2}(x) \eta_{-1}(x) ] ~. \ \ \ \ \ (8)$
To see why, consider a corner formed along a path going horizontally forward followed by vertically forward. You thus have 2 Bloch wall segments meeting at the corner. We want to saturate first the horizontal wall, then the vertical. The horizontal wall contributes with ${\eta_{-1}(x)}$ and the vertical wall with ${\eta_{+2}(x)}$. So we want to saturate the Berezin integral at the site ${x}$ with the corner factor ${\eta_{+1}\eta_{-2}}$. This is the first term in the corner action. The 3 other terms are similarly deduced.
Meanwhile, the monomer terms are even simpler. We want the monomer terms to “do nothing”, i.e. to contribute with a factor of 1 when (and only when) needed. From the sign convention (6) we thus obtain simply
$\displaystyle S_M= \sum_x[ \eta_{-1}(x) \eta_{+1}(x) + \eta_{-2}(x) \eta_{+2}(x) ] ~ . \ \ \ \ \ (9)$
Note that corner and monomer terms have an even number of Grassmann variables per site, while the line term has only one Grassmann variable on each of two neighboring sites. So to saturate the Berezin integral, an even number of line terms (so 0,2, or 4) must come together at a given site.
The Berezin integral for a fixed site ${x}$ can only saturate in the following ways:
1. Two monomer terms, one horizontal and one vertical.
2. Two lines and a monomer.
3. Two lines and a corner.
4. Four lines.
The following are prohibited at any site:
1. An odd number of lines (because there is no way to saturate missing Grassmann variables).
2. One corner and one monomer (because one Grassmann variable will necessarily be repeated, so that the nilpotency kills the term, and similarly one variable will be missing, also leading to zero).
3. A double corner of 4 domain walls and 2 corners (because then you have 6 Grassmann variables at the site).
There is one other interesting case: 2 corner terms with no lines. In this case, each Grassmann variable at the site appears exactly once, so such terms do in fact contribute. Moreover, every such term is matched by another term with the two other kinds of corners. For example the two-corner term
$\displaystyle [\eta_{+1}(x) \eta_{-2}(x) ][ \eta_{+2}(x) \eta_{-1}(x) ]$
is matched by the term
$\displaystyle [\eta_{-2}(x) \eta_{-1}(x) ][ \eta_{+2}(x) \eta_{+1}(x) ] ~.$
But
$\displaystyle \begin{array}{rcl} [\eta_{-2}(x) \eta_{-1}(x) ][ \eta_{+2}(x) \eta_{+1}(x) ] &=& -\eta_{-2}(x) \eta_{+2}(x) \eta_{-1}(x) \eta_{+1}(x) \\ &=& +\eta_{-2}(x) \eta_{+2}(x) \eta_{+1}(x) \eta_{-1}(x) \\ &=& -\eta_{-2}(x) \eta_{+1}(x) \eta_{+2}(x) \eta_{-1}(x) \\ &=& +\eta_{+1}(x) \eta_{-2}(x) \eta_{+2}(x) \eta_{-1}(x) ~. \end{array}$
So the two ways of combining 2 corners lead to double the contribution. But the first double corner is actually the negative of the term with two monomers:
$\displaystyle \begin{array}{rcl} [\eta_{+1}(x) \eta_{-2}(x) ][ \eta_{+2}(x) \eta_{-1}(x) ] &=& \eta_{+2}(x) \eta_{-1}(x) \eta_{+1}(x) \eta_{-2}(x) \\ &=& - \eta_{-1}(x) \eta_{+2}(x) \eta_{+1}(x) \eta_{-2}(x) \\ &=& + \eta_{-1}(x) \eta_{+1}(x) \eta_{+2}(x) \eta_{-2}(x) \\ &=& - \eta_{-1}(x) \eta_{+1}(x) \eta_{-2}(x) \eta_{+2}(x) \end{array}$
so that at each site the double monomer term plus the two double corner terms produce a net contribution of ${+1-2=-1}$. If there are ${N}$ sites, then the number of sites on the lines is necessarily even, so that the number ${N'}$ of sites not on the walls satisfies ${(-1)^{N'}=(-1)^N}$. So there is an overall factor of ${(-1)^N}$, as seen in (5).
Meanwhile, every domain wall segment has weight ${t^2}$, so that a graph of non-overlapping loops of total loop length ${\gamma}$ will have a weight of ${t^{2\gamma}}$. There are many different ways to permute the lines, corners and monomers, but this is cancelled by the factorial in the denominator of the Taylor expansion of the exponential function. The final result is that the right hand sides of (5) and (3) are equal.
4. Diagonalization and exact solution
The quadratic action is translationally invariant, so the Fourier transform will diagonalize it, i.e. in the new Fourier conjugate variable the action is diagonal. By this we mean that the action does not mix different Fourier frequencies, up to sign.
So let us define the (unitary) Fourier transform ${\hat \eta}$ of the Grassmann variables ${\eta}$:
$\displaystyle \eta_i(x) = \frac 1 {\sqrt{N}} \sum_k e^{ik\cdot x} \hat \eta_i(k) ~. \ \ \ \ \ (10)$
Here ${x}$ and ${k}$ are 2-dimensional vectors. It will be more convenient for us to write
$\displaystyle k = k_x \hat 1 + k_y \hat 2\ \ \ \ \ (11)$
where
$\displaystyle \begin{array}{rcl} k_x&=& \frac {2\pi n_x} {N_x} \\ k_y&=& \frac {2\pi n_y} {N_y} ~, \end{array}$
where ${N_x}$ and ${N_y}$ are the number of rows and columns of the lattice and ${N_x N_y=N}$.
We could take ${n_x}$ ranging as ${0\ldots (N_x-1)}$, but we will instead take ${N_x}$ and ${N_y}$ as odd and use negative wavenumbers as follows. Let ${N_x=2 L_x+1}$ and ${N_y=2 L_y+1}$. Then,
$\displaystyle \begin{array}{rcl} n_x&=& -L_x, -(L_x-1), \ldots, L_x-1, L_x\\ n_y&=& -L_y, -(L_y-1), \ldots, L_y-1, L_y ~. \end{array}$
It is easy to check that the ${\hat \eta}$ are Grassmann numbers, a fact that follows from the unitarity of the Fourier transform. Unitarity also guarantees that
$\displaystyle \prod_x d\eta_{-1}(x) d\eta_{+1}(x) d\eta_{-2}(x) d\eta_{+2}(x) = \prod_k d\hat\eta_{-1}(k) d\hat\eta_{+1}(k) d\hat\eta_{-2}(k) d\hat\eta_{+2}(k) \ \ \ \ \ (12)$
so that we can explicitly evaluate the Berezin integral provided we can rewrite the Grassmann action in simple enough form in the Fourier transformed Grassmann variables. This we do next.
Notice that the action is quadratic in the ${\eta}$. Specifically, the action is a quadratic form in the Grassmann variables that we can write as
$\displaystyle S= \sum_{xyij} \eta_i(x) A_{ij}(x,y) \eta_j(y) ~. \ \ \ \ \ (13)$
We will explicitly define ${A_{ij}(x,y)}$ below, but for the moment let us not worry about it, except to note that it is simply a matrix or linear operator kernel. Note that the “matrix” ${A}$ is actually a tensor of rank 4, because there are 4 indices: ${i,~j, ~x,~y}$. However, for a fixed pair of not-necessarily-distinct sites ${x}$ and ${y}$, ${A}$ is a genuine (4${\times}$4) matrix with only two indices. Similarly, for fixed ${i, ~j}$, ${A}$ is an ${N\times N}$ matrix. We will apply the Fourier transform on the site variables, ${x,~y}$, noting that the action has the form of an inner product, and by the unitarity of the Fourier transform, it is possible to rewrite the action as
$\displaystyle S= \sum_{kk'ij} \hat\eta^*_i(k) \hat A_{ij}(k,k') \hat\eta_j(k') ~. \ \ \ \ \ (14)$
All we have done here is apply Plancherel’s theorem, sometimes also known as Parseval’s theorem. Hatted symbols represent Fourier transformed quantities.
I have previously written about the magical powers of the Fourier transform. For example: (i) Convolutions become products and vice versa under Fourier transformation; (ii) differential operators transform to simple algebraic multipliers; (iii) translations become phase shifts. It is this last property which is the most important for our purposes here.
The translationally invariant quadratic action for the 2D Ising model only connects neighboring sites. The only non-diagonal part of the action is due to this shift to neighboring sites. But upon Fourier transformation, we get rid of the shifts and obtain an action which is diagonal in the Fourier transform variable ${k}$. In other words, ${\hat A_{ij}(k,k') = \hat A_{ij}(k,k) \delta(k,k')}$ where ${\delta(k,k')}$ is the Kronecker delta function.
Let us introduce more compact notation to make the calculations easier. Let us define the vector
$\displaystyle \eta= \left[\begin{array}{c} \eta_{-1} \\ \eta_{+1} \\ \eta_{-2} \\ \eta_{+2} \end{array} \right] \ \ \ \ \ (15)$
and similarly for ${\hat \eta}$. Then we can write
$\displaystyle \begin{array}{rcl} S_L&=& \sum_{xy} \eta^T(x) A_L(x,y) \eta(y) \\ S_C&=& \sum_{xy} \eta^T(x) A_C(x,y) \eta(y) \\ S_M&=& \sum_{xy} \eta^T(x) A_M(x,y) \eta(y) ~. \end{array}$
The superscript ${\cdot^T}$ denotes transpose. The matrices ${A_C}$ and ${A_M}$ are given by (8) and (9) and are diagonal in the site indices, i.e. in ${x,~y}$.
$\displaystyle \begin{array}{rcl} A_C(x,y)&=& \delta(x,y) \left[ \begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \end{array} \right] \\ A_M(x,y)&=& \delta(x,y) \left[ \begin{array}{cccc} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{array} \right] ~, \end{array}$
where ${\delta(x,y)}$ is a Kronecker delta function, not to be confused with the Dirac delta. The matrix ${A_L}$ is not diagonal in the site indices because it connects neighboring sites:
$\displaystyle \begin{array}{rcl} A_L(x,y)&=& t^2\delta(x+\hat 1,y) \left[ \begin{array}{cccc} 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right] +t^2 \delta(x+\hat 2,y) \left[ \begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{array} \right] ~. \end{array}$
In terms of ${\hat \eta}$, we can thus write the action as
$\displaystyle S= \sum_{k}\hat \eta^\dagger(k) B(k,k) \hat \eta(k) ~, \ \ \ \ \ (16)$
Here ${B=\hat A}$. The reason the sum is now over a single index ${k}$ is that ${A}$ is translationally invariant, and application of Parseval’s (or Plancherel’s) allows us to convert the site shift to a phase, so that ${\hat A(k_1,k_2)=\delta(k_1,k_2) \hat A(k_1,k_1)=B(k_1,k_1)}$. In other words, ${\hat A=B}$ is diagonal in the Fourier variables such as ${k}$, which is conjugate to the site index ${x}$.
As an illustrative example, let us show this last point explicitly. Consider the Fourier transform of ${\sum_y C(x,y) \eta(y)}$, where ${C(x,y)=C_0 \delta(x+\hat 1,y)}$ is translationaly invariant:
$\displaystyle \begin{array}{rcl} & & \frac 1 {\sqrt{N}} \sum_{x=1}^N e^{-ik\cdot x} \sum_y C_0 \delta(x+\hat 1,y) \eta(y) \\ & & = \frac {C_0} {\sqrt{N}} \sum_{x=1}^N e^{-ik \cdot x} \eta(x+\hat 1) \\ & & = \frac {C_0} {\sqrt{N}} \sum_1^N e^{-ik \cdot (x-\hat 1)} \eta(x) \\ & & = \frac {C_0 e^{i k\cdot \hat 1}} {\sqrt{N}} \sum_1^N e^{-ik \cdot x} \eta(x) \\ & & = e^{i k\cdot \hat 1}\hat \eta \end{array}$
Hence ${B}$ is diagonal in the sense previously explained. It is easily calculated to be
$\displaystyle \begin{array}{rcl} B(k,k) &=& \left[ \begin{array}{cccc} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 1 \\ 1 & 1 & 0 & 0 \end{array} \right] + t^2 e^{ik\cdot \hat 1} \left[ \begin{array}{cccc} 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right] + t^2 e^{ik\cdot \hat 2} \left[ \begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{array} \right] \\ &=& \left[ \begin{array}{cccc} 0 & 1 & 0 & 0 \\ t^2 e^{ik_x} & 0 & 1 & 0 \\ 1 & 0 & 0 & 1 \\ 1 & 1 & t^2 e^{ik_y} & 0 \end{array} \right] ~. \end{array}$
Notice that a daggered variable ${\hat \eta^\dagger(k)}$ (i.e., a conjugate transpose) appears in the action. We need to take care of this. Since ${\eta(x)}$ is, in principle, a “real” Grassmann variable, its Fourier transform satisfies ${[\hat \eta(k)]^* = \hat \eta(-k)}$. This “Hermitian” property of the Fourier transform of real functions allows us to write the action as
$\displaystyle S= \sum_{k}\hat \eta^T(-k) B(k,k) \hat \eta(k) ~. \ \ \ \ \ (17)$
We can explicitly evaluate the needed Berezin integral. But some care is necessary. Observe that the action “mixes” frequencies ${k}$ and ${-k}$. When we rewrite the exponential of the sum in the above action as a product of exponentials of the summands, these exponentials will contain not only the 4 Grassmann variables that make up the vector ${\hat \eta(k)}$, but also the other 4 variables in ${\hat \eta(-k)}$. So the full Berezin integral will factor not into Berezin integrals over 4 variables, but over 8 variables. So we will regroup the Grassmann differentials in groups of 8 rather than in groups of four, as follows:
$\displaystyle \begin{array}{rcl} & & \prod_k d\hat\eta_{-1}(k) d\hat\eta_{+1}(k) d\hat\eta_{-2}(k) d\hat\eta_{+2}(k)\\ & & = \prod_{k\geq 0} d\hat\eta_{-1}(k) d\hat\eta_{+1}(k) d\hat\eta_{-2}(k) d\hat\eta_{+2}(k) ~ d\hat\eta_{-1}(-k) d\hat\eta_{+1}(-k) d\hat\eta_{-2}(-k) d\hat\eta_{+2}(-k) ~. \end{array}$
In order to correctly factor the full Berezin integral, we need to collect all terms with ${k}$ and ${-k}$, which we can do by rewriting the sum in the action over half the values of ${k}$. We will abuse the notation and write ${k\geq 0}$ to mean that ${k>0}$ and ${-k>0}$ are not both included in the summation. For instance, we can accomplish this by taking ${k_x \geq 0}$. With this notation, we can write,
$\displaystyle \begin{array}{rcl} S &=& \sum_{k\geq0}\left( \hat \eta^T(-k) B(k,k) \hat \eta(k) + \hat \eta^T(k) B(-k,-k) \hat \eta(-k) \right) \\ &=& \sum_{k\geq0}\left( \hat \eta^T(-k) B(k,k) \hat \eta(k) - \hat \eta^T(-k) B^T(-k,-k) \hat \eta(k) \right) ~. \end{array}$
If we define
$\displaystyle B'(k) = B(k,k)-B^T(-k,-k) \ \ \ \ \ (18)$
then we can write the action as
$\displaystyle S= \sum_{k\geq0} \hat \eta^T(-k) B'(k) \hat \eta (k) ~. \ \ \ \ \ (19)$
Let us write
$\displaystyle d\hat \eta(k) d\hat \eta(-k) = d\hat\eta_{-1}(k) d\hat\eta_{+1}(k) d\hat\eta_{-2}(k) d\hat\eta_{+2}(k) ~ d\hat\eta_{-1}(-k) d\hat\eta_{+1}(-k) d\hat\eta_{-2}(-k) d\hat\eta_{+2}(-k)~. \ \ \ \ \ (20)$
The full Berezin integral now factors and can be written
$\displaystyle \begin{array}{rcl} & & \Xi'= \prod_{k\geq 0} \int { d\hat\eta(k) d\hat\eta(-k) } e^{\hat \eta(-k) B'(k) \hat \eta (k)} ~. \end{array}$
Each of the Berezin integral factors is a Gaussian integral. Recall that
$\displaystyle \int e^{-\sum_{ij}x_i A_{ij} y_j} dx dy = \det(A) \ \ \ \ \ (21)$
for Grassmann numbers ${x_i}$ and ${y_i}$ and a complex matrix ${A}$ (see here for an explanation). So
$\displaystyle \Xi'= \prod_{k\geq 0} \det B' \ \ \ \ \ (22)$
Removing the restriction to ${k \geq 0}$ we have
$\displaystyle \Xi'= \prod_{k} \sqrt {\det B'} ~. \ \ \ \ \ (23)$
Taking logarithms and dividing by the number of sites ${N}$ we have
$\displaystyle \frac 1 N \log \Xi' = \frac 1 2 \frac 1 N \sum_k \log \det B' ~. \ \ \ \ \ (24)$
In the thermodynamic limit, the sum becomes an integral and the partition function per site
$\displaystyle Z=\lim_{N\rightarrow \infty} \Xi^{1/N} \ \ \ \ \ (25)$
is thus given by
$\displaystyle \log Z = \log t^{-2}~ + \frac 1 2 \int_{-\pi}^{\pi}\int_{-\pi}^{\pi} \log \det B'~ \frac{dk_x}{2\pi} \frac{dk_y}{2\pi} \ \ \ \ \ (26)$
The anti-hermitian matrix ${B'}$ is easily found:
$\displaystyle B'(k)= \left[ \begin{array}{cccc} ~~0 & 1- t^2 e^{-ik_x} & -1 & -1 \\ -(1-t^2 e^{ik_x}) & ~~0 & 1 & -1 \\ 1 & -1 & ~~0 & (1-t^2e^{-ik_y}) \\ 1 & 1 & -(1-t^2e^{ik_y}) & ~~0 \end{array} \right] ~. \ \ \ \ \ (27)$
Its determinant is
$\displaystyle \det B'= (1 + t^4)^2 - 2 t^2 (-1 + t^4) (\cos k_x + \cos k_y) ~. \ \ \ \ \ (28)$
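The determinant (28) can be checked mechanically; here is a minimal SymPy sketch of my own (not part of the original derivation):

import sympy as sp

t, kx, ky = sp.symbols('t k_x k_y', real=True)
e = lambda z: sp.exp(sp.I * z)

# B'(k) as given in (27)
Bp = sp.Matrix([
    [0,                 1 - t**2*e(-kx),                -1,              -1],
    [-(1 - t**2*e(kx)),               0,                 1,              -1],
    [1,                              -1,                 0, 1 - t**2*e(-ky)],
    [1,                               1, -(1 - t**2*e(ky)),               0]])

target = (1 + t**4)**2 - 2*t**2*(-1 + t**4)*(sp.cos(kx) + sp.cos(ky))
# rewrite the cosines as exponentials so the two expressions cancel termwise
print(sp.simplify(sp.expand((Bp.det() - target).rewrite(sp.exp))))   # -> 0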
Substituting into the integral, we arrive at the expression for the partition function per site:
$\displaystyle \begin{array}{rcl} \log Z &=& \log t^{-2}~ + \frac 1 2 \iint_{-\pi}^{\pi} \log \big[(1 + t^4)^2 - 2 t^2 (-1 + t^4) (\cos k_x + \cos k_y)\big] ~ \frac{dk_x}{2\pi} \frac{dk_y}{2\pi} \\ &=& \frac 1 2 \iint_{-\pi}^{\pi} \log \big[(t^{-2} + t^2)^2 - 2 (-t^{-2} + t^2) (\cos k_x + \cos k_y)\big] ~ \frac{dk_x}{2\pi} \frac{dk_y}{2\pi} \\ &=& \frac 1 2 \iint_{-\pi}^{\pi} \log \big[4 \cosh^2 2\beta J - 4 \sinh 2\beta J ~(\cos k_x + \cos k_y)\big] ~ \frac{dk_x}{2\pi} \frac{dk_y}{2\pi} \end{array}$
We are done! This is Onsager’s famous result, specialized to the case of equal couplings ${J=J_x=J_y}$.
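As a numerical illustration (my own script, with my own variable names), the double integral is easy to evaluate away from criticality, and the singularity of the integrand at ${k=0}$ reproduces the known critical coupling ${\sinh 2\beta_c J = 1}$:

import numpy as np
from scipy.integrate import dblquad

def log_Z(K):
    """Onsager's log partition function per site, equal couplings K = beta*J."""
    c2, s = np.cosh(2*K)**2, np.sinh(2*K)
    f = lambda ky, kx: np.log(4*c2 - 4*s*(np.cos(kx) + np.cos(ky)))
    val, _ = dblquad(f, -np.pi, np.pi, -np.pi, np.pi)
    return 0.5 * val / (2*np.pi)**2

print(log_Z(0.4))          # smooth free energy away from criticality
print(np.arcsinh(1) / 2)   # K_c = 0.44068..., where the log's argument first touches 0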
Exercise: Repeat the calculation using the high temperature variable ${u}$ instead of the low temperature variable ${t}$. (The final answer is of course the same.)
5. The cubic Ising model
Ever since 1944 when Onsager published his seminal paper, tentative “exact solutions” have been proposed over the years for the 3D or cubic Ising model. As mentioned earlier, the Grassmann action is quartic for the cubic Ising model. In quantum field theory, quartic Grassmann actions are associated with models of interacting fermions, whereas quadratic actions are associated with free-fermion models. The latter are easily solved via Pfaffian and determinant formulas, as we have done above, but at the present time there are no methods known to be able to give exact solutions (in the thermodynamic limit) of lattice models with quartic Grassmann actions. Hence, anybody claiming an exact solution to the cubic Ising model must explain how they overcame the mathematical difficulty of dealing with quartic actions, or at least how the new method bypasses this mathematical obstruction.
Barry Cipra, in an article in Science, referred to the Ising problem as the “Holy Grail of statistical mechanics.” The article lists a number of other reasons why we may never attain the goal of finding an explicit exact solution of the cubic Ising model.
Exact solution of the cubic Ising model may be an impossible problem!
I thank Francisco “Xico” Alexandre da Costa and Sílvio Salinas for calling my attention to the Grassmann variables approach to solving the Ising model.
## Berezin integration of Grassmann variables
1. Introduction
When I first came across the presentation of linear algebra in terms of Grassmann’s exterior products, I was struck by its elegance. An introduction to linear algebra in terms of Grassmann’s ideas can be found here. The Grassmann approach is so much more intuitive that, once learned, there is no going back to the old way of thinking. For example, although in college I found determinants and permanents relatively easy to understand using the conventional approach, I only really came to understand the meaning of Pfaffians after learning exterior algebra (a.k.a Grassmann algebra).
Curiously, Grassmann did not have a university education in mathematics. Rather, he was actually trained as a linguist. Yet, his contributions to mathematics are widely recognized today. Terms such as Grassmann numbers, Grassmann algebra and Grassmannian manifold are all named in his honor.
Also fascinating is how Grassmann numbers make their appearance very naturally in quantum field theory. Specifically, they are used in the path integral formulation for fermionic fields. Readers interested in finding out more about the connection with fermionic path integrals should refer to standard textbooks, for instance the books by Weinberg or the one by Ryder.
Some years ago I derived, for my own convenience, a few of the basic identities satisfied by Grassmann variables, in the context of differentiation and integration. Integration of Grassmann variables is known as Berezin integration. This article is based on my old study notes.
2. Grassmann numbers
Consider the algebra generated by ${N}$ Grassmann numbers ${x_i}$, ${i=1,2,\ldots N}$ that anticommute according to
$\displaystyle \{x_i, x_j\} := x_i x_j + x_j x_i =0 ~. \ \ \ \ \ (1)$
Such elements are nilpotent with degree 2 due to the antisymmetry property: ${x_i^2=0}$.
What is the dimension of this algebra (as a vector space)? Consider the set of all monomials. There are ${N}$ generators ${x_1\ldots x_N}$ which are each nilpotent with degree 2. Hence, a general nonzero monomial can have a generator as factor only once. Hence, there are exactly ${2^N}$ monomials. It is then easy to check that the most general function of the generators can be expressed as a linear combination of these monomials. Indeed, all power series (e.g., Taylor series) terminate, i.e. the most general function is a polynomial in the generators. The dimension of the Grassmann algebra is thus equal to the number of linearly independent monomials, ${2^N}$.
It is worth calling attention to an antisymmetry property of the coefficients of monomials. Consider the monomial term ${x_1 x_2}$. By definition, it equals ${-x_2 x_1}$. So the representation of a function as a linear combination of monomials is not unique. However, if we define the coefficients to be antisymmetric, then we recover uniqueness. For instance, ${a_{12} x_1 x_2 = a_{21} x_2 x_1}$ if the coefficients satisfy ${a_{12}=-a_{21}}$.
3. Differentiation
Because Grassmann variables do not commute, we can define derivatives acting from the right and from the left. Here, I consider only derivatives acting to the right. We define the derivative as follows for a single generator:
$\displaystyle {\partial x_i \over \partial x_i} =1 ~. \ \ \ \ \ (2)$
To extend the derivative to a monomial, we must first bring the matching generator ${x_i}$ all the way to the left, multiplying by ${(-1)^k}$ where ${k}$ is the number of generators to the left of ${x_i}$ in the original monomial. Then, the derivative is obtained by dropping ${x_i}$. For example,
$\displaystyle {\partial \over \partial x_2} x_1 x_2 x_3 = {\partial \over \partial x_2} \left( - x_2 x_1 x_3 \right) = - x_1 x_3 ~. \ \ \ \ \ (3)$
The derivative then extends to all functions via linearity, i.e. differentiation is a linear operator.
The chain rule holds in the usual manner.
The product rule holds as usual if the first factor ${f_1}$ is of even degree in the generators:
$\displaystyle {\partial \over \partial x_p} f_1 f_2 = \left ( {\partial \over \partial x_p} f_1 \right) f_2 + f_1 {\partial \over \partial x_p} f_2 ~. \ \ \ \ \ (4)$
However, if the first factor has odd degree, then there is a sign change in the second term. Since a general function need not be homogeneous and may have terms of both odd and even degree, I consider it safer to assume that the product rule does not hold in general, and instead to calculate term by term explicitly, unless you know what you are doing.
4. Berezin integration
Now we come to the most interesting part of this article! Is it possible to define an integral for Grassmann numbers? The usual antiderivative of a variable ${x}$
$\displaystyle \int x dx = \frac 1 2 x^2$
would be zero if ${x}$ were a Grassmann variable, so it does not make sense to define integration in this manner. However, one can define the equivalent of a definite Riemann integral. The definite integral
$\displaystyle \int_{-\infty} ^\infty f(x) dx \ \ \ \ \ (5)$
has the following properties:
(i) Translation invariance:
$\displaystyle \int_{-\infty} ^\infty f(x) dx = \int_{-\infty} ^\infty f(x+y) dx \ \ \ \ \ (6)$
(ii) Linearity:
$\displaystyle \int (a+ b f(x)) dx = a \int dx + b \int f(x) dx ~. \ \ \ \ \ (7)$
Hence, we will require that the integral of a Grassmann number also have these two properties. Let ${x}$ and ${y}$ now denote Grassmann numbers.
Then first we require translation invariance.
$\displaystyle \int f(x) dx = \int f(x+y) dx . \ \ \ \ \ (8)$
Given that ${f(x)}$ is at most a linear function of ${x}$, let us write
$\displaystyle f(x) = a + b x \ \ \ \ \ (9)$
where ${a}$ and ${b}$ are complex numbers. Substituting, and invoking linearity, we get
$\displaystyle \begin{array}{rcl} & & \int f(x+y) dx \\ & & =\int (a + b(x+y)) dx \\ & & =(by)\int dx + \int (a+bx) dx\\ & & = (by)\int dx + \int f(x) dx~. \end{array}$
Hence
$\displaystyle \int f(x+y) dx - \int f(x) dx = b y \int dx = 0 ~. \ \ \ \ \ (10)$
so that we are forced to assume
$\displaystyle \int dx = 0 ~. \ \ \ \ \ (11)$
Hence, we get
$\displaystyle \int (a+ bx) dx= b \int x dx~. \ \ \ \ \ (12)$
Berezin chose the convention that
$\displaystyle \int x dx = 1 \ \ \ \ \ (13)$
although other conventions are possible for the constant. Below I use Berezin’s convention.
From these observations, we define for Grassmann numbers,
$\displaystyle \int dx_i =0 ~. \ \ \ \ \ (14)$
(If this is too difficult or “weird” to accept, try to imagine that the supposedly integrated quantity ${x_i}$ vanishes at the boundary.)
Next we define
$\displaystyle \int x_i dx_i = 1 ~. \ \ \ \ \ (15)$
Note that the integral is independent of quantities such as ${x_i^2/2}$ which is what you would expect for a Riemann integral for normal (i.e., non-Grassmann) variables, since that would be zero due to the nilpotent property. Instead, Berezin integration is similar to standard differentiation: the usual derivative of ${x_i}$ is 1 and the usual derivative of ${1}$ is 0.
Moreover, the differential ${dx_i}$ anticommutes with ${x_i}$. In fact, the anticommutation property holds generally:
$\displaystyle \{dx_i, dx_j\} = \{dx_i, x_j\} =0 ~. \ \ \ \ \ (16)$
Multiple integrals are defined, in analogy with Fubini’s theorem, as iterated integrals:
$\displaystyle \iint x_j x_i dx_i dx_j \equiv \int x_j \left( \int x_i dx_i \right) dx_j ~. \ \ \ \ \ (17)$
Note that there are other sign conventions for Berezin integrals. Physicists usually use the convention
$\displaystyle \iint x_i x_j dx_i dx_j =1 ~. \ \ \ \ \ (18)$
In what follows I use the former sign convention.
Next, we come to one of the most interesting and unexpected properties of the Berezin integral. Let ${f(x)}$ represent a function of all the generators. Recall that the most general function of the Grassmann algebra generators is a polynomial. Hence, the most general function can be written
$\displaystyle f(x) = f_0 + \sum_k f_1(k)\, x_{k} + \sum_{k_1 < k_2} f_2(k_1,k_2)\, x_{k_1} x_{k_2} + \ldots + f_N(1,2,\ldots,N)\, x_1 x_2 \cdots x_N ~. \ \ \ \ \ (19)$
Indeed, every element of the Grassmann algebra can be written as such.
Now consider the multiple Berezin integral
$\displaystyle \int f(x) ~ dx_1 dx_2 \ldots dx_N ~. \ \ \ \ \ (20)$
Note that all monomial terms of degree ${k}$ with ${k < N}$ will vanish, because each of the ${N-k}$ iterated integrals for the variables not appearing in the monomial vanishes separately. Only monomials of degree ${N}$ survive:
$\displaystyle \int f(x) ~ dx_1 dx_2 \ldots dx_N =f_N(1,2,\ldots,N) ~. \ \ \ \ \ (21)$
5. Change of variables
Riemann integrals satisfy
$\displaystyle \int f(ax) dx = \frac 1 a \int f(x) dx ~. \ \ \ \ \ (22)$
We will show that Berezin integrals satisfy instead
$\displaystyle \int f(ax) dx = a \int f(x) dx ~. \ \ \ \ \ (23)$
The reason for this opposite behavior is that Berezin integration is actually similar to (standard) differentiation.
Let ${y=ax}$ for Grassmann variables ${x}$ and ${y}$ and consider that by definition
$\displaystyle \int y \,dy = \int x \,dx =1 ~. \ \ \ \ \ (24)$
So
$\displaystyle \int y \,dy = \int a x \,dy = \int x \,dx ~. \ \ \ \ \ (25)$
Hence
$\displaystyle a \,dy = dx ~, \ \ \ \ \ (26)$
which means that
$\displaystyle dy = \frac {dx} a ~. \ \ \ \ \ (27)$
In standard calculus, we would instead have ${dy= a\,dx}$, so Berezin differentials scale opposite to what one would expect from standard calculus.
Now let us generalize to ${N}$ generators. Let ${y_i= \sum_{j} a_{ij} x_j}$. Those with some familiarity with exterior products will recognize that the product
$\displaystyle y_1 y_2 \ldots y_N \ \ \ \ \ (28)$
corresponds to the exterior product of maximal grade, so that we naturally expect the determinant to make an appearance:
$\displaystyle y_1 y_2 \ldots y_N = \det(a) ~x_1 x_2 \ldots x_N ~ . \ \ \ \ \ (29)$
Moreover, because the differentials scale inversely to the generators, we have
$\displaystyle \det(a)~dy_1 dy_2 \ldots dy_N = dx_1 dx_2 \ldots dx_N ~ . \ \ \ \ \ (30)$
6. Gaussian integrals
Consider the following Riemann integrals:
$\displaystyle \begin{array}{rcl} \int_{\Bbb R} e^{-ax^2} ~dx & = & \sqrt {\frac \pi a} \\ \iint_{\Bbb R^2} e^{-a(x^2 + y^2)} ~dxdy & =& \frac \pi a ~. \end{array}$
We will next derive an analog of the above for the Berezin integral. The analog of the first integral is zero, due to the nilpotent property. We thus look at the Berezin analog of the second:
$\displaystyle \begin{array}{rcl} \iint e^{-axy} ~dx dy &=& \iint (1 -axy ) ~dx dy \\ &=& -\iint axy ~dx dy \\ &=& \iint ay x ~dx dy \\ & =& a ~. \end{array}$
Note that the ${a}$ is in the numerator rather than the denominator. Moreover, there is no more factor of ${\pi}$. (In fact, there are conventions that I do not discuss here, as previously mentioned.)
Let ${x}$ and ${y}$ be generators, so that there are ${2N}$ generators total. Moreover, let
$\displaystyle dx dy = dx_1 dy_1 dx_2 dy_2\ldots dx_N dy_N~.$
Now consider the multiple Gaussian Berezin integral:
$\displaystyle \int e^{-\sum_{ij}x_i A_{ij} y_j} dx dy ~ . \ \ \ \ \ (31)$
Let us change basis in order to diagonalize the matrix ${A_{ij}}$, via a unitary transformation. In the new variables ${x'}$ and ${y' }$, the transformed matrix ${A'}$ is diagonal, so that the exponential factors into a product of terms such as ${\exp[- x'_i A'_{ii} y'_i]}$. Hence, the full integral can be written as a product of simple Gaussian integrals and the value of the full Gaussian integral will simply be
$\displaystyle \int e^{-\sum_{ij}x_i A_{ij} y_j} dx dy = \prod _i A'_{ii} = \det A' = \det A~. \ \ \ \ \ (32)$
Here we have used the fact that unitary transformations leave the determinant invariant.
In contrast, for a Riemann integral the correct expression is
$\displaystyle \int_{\Bbb R^N} e^{-x^T Ax} ~d^Nx = \sqrt {\frac {\pi^N} {\det(A)}} ~. \ \ \ \ \ (33)$
So, besides the factors of ${\pi}$, the determinant of ${A}$ appears in the numerator instead of in the denominator for the Berezin integral.
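Because everything here reduces to finite polynomial algebra, identity (32) can be checked by brute force. Below is a toy Python implementation of my own (using the integration convention (17)) verifying that the Gaussian Berezin integral equals det A for ${N=2}$:

import sympy as sp
from itertools import product

# An algebra element is a dict {sorted tuple of generators: coefficient}.
GENS = ['x1', 'x2', 'y1', 'y2']
POS = {g: i for i, g in enumerate(GENS)}

def sort_sign(word):
    """Sort generators into canonical order, tracking the permutation sign;
    a repeated generator makes the monomial vanish (nilpotency)."""
    word, sign = list(word), 1
    for i in range(len(word)):
        for j in range(len(word) - 1):
            if POS[word[j]] > POS[word[j + 1]]:
                word[j], word[j + 1] = word[j + 1], word[j]
                sign = -sign
    return (tuple(word), 0) if len(set(word)) < len(word) else (tuple(word), sign)

def mul(f, g):
    """Product of two algebra elements."""
    out = {}
    for (m1, c1), (m2, c2) in product(f.items(), g.items()):
        m, s = sort_sign(m1 + m2)
        if s:
            out[m] = out.get(m, 0) + s * c1 * c2
    return out

def berezin(f, var):
    """Right Berezin integral over var: move it to the rightmost slot, drop it."""
    out = {}
    for m, c in f.items():
        if var in m:
            k = m.index(var)
            s = (-1) ** (len(m) - 1 - k)   # hops past the generators to its right
            rest = m[:k] + m[k + 1:]
            out[rest] = out.get(rest, 0) + s * c
    return out

A = sp.Matrix(2, 2, sp.symbols('a11 a12 a21 a22'))

# exp(-sum_ij x_i A_ij y_j) factorizes exactly into prod_ij (1 - A_ij x_i y_j),
# because the summands are even, mutually commuting and nilpotent.
expS = {(): sp.Integer(1)}
for i, j in product((1, 2), repeat=2):
    expS = mul(expS, {(): sp.Integer(1), ('x%d' % i, 'y%d' % j): -A[i - 1, j - 1]})

# Iterated integral with measure dx1 dy1 dx2 dy2, innermost differential first,
# following convention (17).
res = expS
for var in ['x1', 'y1', 'x2', 'y2']:
    res = berezin(res, var)

print(sp.expand(res.get((), 0) - A.det()))   # -> 0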
7. A Gaussian integral in terms of a Pfaffian
Let us use a change of variable and define
$\displaystyle \begin{array}{rcl} x_j &=& {\frac 1 {\sqrt 2}}~ (z_j^{(1)} + i z_j^{(2)}) \\ y_j &=& {\frac 1 {\sqrt 2}} ~(z_j^{(1)} - i z_j^{(2)}) ~. \end{array}$
$\displaystyle \begin{array}{rcl} dx_j dy_j &=& \det \left[ \begin{array}{cc} \frac 1{\sqrt 2} & \frac i {\sqrt 2} \\ \frac 1{\sqrt 2} & -\frac i {\sqrt 2} \end{array} \right]^{-1} dz_j^{(1)} dz_j^{(2)} \\ &=& i dz_j^{(1)} dz_j^{(2)} ~. \end{array}$
Note that the reason the Jacobian matrix is inverted is due to the strange way that Grassmann variables behave under change of variables, as explained above.
Let ${A}$ be an antisymmetric matrix of dimension ${N\times N}$ with ${N}$ even. Consider
$\displaystyle \sum_{ij} x_i A_{ij} y_j = \sum_{ij} \frac {A_{ij}} 2 [ z_i^{(1)} z_j^{(1)} - i z_i^{(1)} z_j^{(2)} + i z_i^{(2)} z_j^{(1)} + z_i^{(2)} z_j^{(2)} ] ~. \ \ \ \ \ (34)$
Since ${A}$ is antisymmetric, the cross terms will cancel, so that
$\displaystyle \sum_{ij} x_i A_{ij} y_j = \frac 1 2 \sum_{ij} ( z_i^{(1)} A_{ij} z_j^{(1)} + z_i^{(2)} A_{ij} z_j^{(2)} )~ . \ \ \ \ \ (35)$
Integrating the exponential of the above, substituting into (32), and remembering that ${N}$ is even, we get
$\displaystyle \begin{array}{rcl} \int e^{-\sum_{ij}x_i A_{ij} y_j} dx dy &=& i^{N} \int e^{\sum_{ij} \frac 1 2 ( z_i^{(1)} A_{ij }z_j^{(1)} + z_i^{(2)} A_{ij }z_j^{(2)} ) } dz_1^{(1)} dz_1^{(2)} \ldots dz_N^{(1)} dz_N^{(2)} \\ &=& (-1)^{N/2} \int e^{\sum_{ij} \frac 1 2 ( z_i^{(1)} A_{ij }z_j^{(1)} + z_i^{(2)} A_{ij }z_j^{(2)} ) } (-1)^{\frac 1 2 N (N-1)} dz_1^{(1)} \ldots dz_N^{(1)} dz_1^{(2)} \ldots dz_N^{(2)} \nonumber \\ &=& \left[ \int e^{\sum_{ij} \frac 1 2 ( z_i A_{ij }z_j ) } dz \right]^2 ~. \end{array}$
Recall the following identity for the Pfaffian of an antisymmetric even dimensional matrix:
$\displaystyle {\rm Pf}(A) = \sqrt {\det(A)} ~. \ \ \ \ \ (36)$
We thus obtain
$\displaystyle \int e^{\sum_{ij} \frac 1 2 ( z_i A_{ij }z_j ) } dz = {\rm Pf}(A) ~ \ \ \ \ \ (37)$
for any even dimensional antisymmetric matrix ${A}$.
8. Berezin integration as a contraction
My favorite way of thinking about Berezin integration is in terms of interior products in Grassmann algebras. (Note: interior products are not the same as inner products.) In fact, interior products are how I explicitly calculate more difficult Berezin integrals in practice. If time permits, I may write something up in the future on this topic. This idea is of course not new. It is known that Berezin integrals are a type of contraction, see here.
## Writing a paper in E-prime
Many top scientists communicate clearly, sometimes seemingly effortlessly. The papers by Einstein flow elegantly in clear and logical steps, almost as if choreographed, from one idea to the next. Some articles even have qualities more commonly seen in great works of art, for example, Dirac’s seminal book on quantum mechanics or Shannon’s paper introducing his celebrated entropy. What a pleasure to read! Most physicists similarly recognize Feynman as a master of clear communication.
Before I became a grad student, I had underestimated the importance of good and effective communication. My former PhD advisor, an excellent communicator, taught me the crucial role played by communication in scientific discourse and debate.
Let me explain this point in greater detail. As an illustrative example, imagine if Einstein had not written clearly. Then it may very well have taken much longer for his ideas to percolate and gain acceptance throughout the scientific community. Indeed, Boltzmann, in contrast to Einstein, wrote lengthy and admittedly difficult-to-read texts. Some of his critics perhaps failed to grasp his seminal ideas. Disappointed and possibly depressed, he eventually committed suicide while still in his prime. Today, the top prize in the field of statistical physics honors his name— the Boltzmann Medal. Nevertheless, it took many years and the efforts of other scientists (e.g. Gibbs) for the physics community to recognize the full extent of Boltzmann’s contributions. Clear exposition can make a big difference.
In this blog post, I do not give tips or advice about how to write clearly. Good tips on how to write clearly abound. Instead, I want to draw your attention to how this article does not contain a single instance of the verb “to be” or any of its conjugations or derived words, such as “being,” “was,” “is,” and so forth — excepting this sentence, obviously. The subset of the English language that remains after the removal of these words goes by the name E-prime, often written E’. In other words, E’ equals English minus all words derived from the above-mentioned verb.
Writing in E’ usually forces a person to think more carefully. Scientists need to communicate not only clearly, but with a slightly higher degree of precision than your typical non-scientist. I have found that fluency in E’ helps me to spot certain kinds of errors of reasoning. The key error of reasoning attenuated by the use of E’ relates to identification. Too often, the referents of the grammatical subject and object become identified in standard English, where in fact no such identification exists in the real world. E’ helps to reduce this improper identification, or at least to call attention to it. The topic of E’, and of related subjects, such as its ultimate historical origins in general semantics, the study of errors of reasoning, the nature of beliefs, cognitive biases, etc., would require too broad a digression for me to discuss here, so I recommend that interested readers research such topics on their own.
In my early 30s, soon after I obtained tenure in my first faculty position, I decided to write a full article entirely in E’. What a wonderful and interesting exercise! Of course, I did not find it easy to write in E’, but with few exceptions, the finished paper contained only E’ sentences. Forcing myself to think and write in E’ helped me to give a better description of what we, as scientists, really did. I would cautiously claim that writing in E’ benefited our paper, at least as far as concerns clarity and precision. No longer do I publish papers in E’, but I learned a lot about how to write (and think) a little bit more clearly.
That paper, about an empirical approach to music, appeared in print in 2004 in the statistical physics journal Physica A. It eventually ended up well cited: 33 citations according to Thomson Reuters' Web of Science and 60 citations on Google Scholar, as of May 2016. Most remarkably, it even briefly shot up to the top headline at Nature.com (click here to see)! We had never expected this.
In that paper, my co-authors and I proposed a method for studying rhythmic complexity. The collaboration team included as first author Heather Jennings, a professor of music (and also my spouse). We took an empirical approach to comparing the rhythmic structures of Javanese Gamelan, Jazz, Hindustani music, Western European classical music, Brazilian popular music (MPB), techno (dance), New Age music, the northeastern Brazilian folk music known as Forró and, last but not least, Rock'n Roll. Excepting a few sentences, the paper consists entirely of E' sentences.
You can read the paper by clicking here for the PDF. A fun exercise: as you read the paper, (1) try to imagine how you would normally rephrase the E’ sentences in ordinary English; (2) try to spot the subtle difference in meaning between the English and E’ sentences.
## Colloquium at USP on animal movement
Here is the link to the video of a colloquium I gave at USP on 09/04/2015. The talk is in Portuguese, although the title is in English. This subject represents the "bread and butter" of my research in applied statistical physics.

It is also worth pointing out that the professor who introduces me at the beginning of the video is Professor Mario de Oliveira, author of the thermodynamics textbook that has become a standard reference in Brazil. His book is frequently used as the main text in undergraduate thermodynamics courses in physics.
## Scale invariance, random walks and complex networks
Here is the link to a youtube video of a talk I gave at the International Institute of Physics (IIP) at UFRN, in Natal, Brazil. It is one of many talks given by invited lecturers at the school on Physics and Neuroscience, held at the IIP from 11 to 17 August 2014.
This talk touches on the bread and butter of my research activities. It should be completely or almost completely understandable to anyone at least midway through an undergraduate degree in the sciences. Since the participants in the school came from diverse backgrounds, I made a special effort to avoid jargon and to speak in as clear a language as I could. (It is probably the longest talk I have given about my research.)
An explanation of the initial statement regarding elves and hobbits: these comments refer to a running "inside joke" at the school, contrasting the distinct scientific cultures of the participants, for example biologists vs. applied mathematicians and physicists.
## Fermionization of the 2-D Ising model: The method of Schultz, Mattis and Lieb
F. A da Costa, R. T. G. de Oliveira, G. M. Viswanathan
This blog post was written in co-authorship with my physics department colleague Professor Francisco “Xico” Alexandre da Costa and Professor Roberto Teodoro Gurgel de Oliveira, of the UFRN mathematics department. Xico obtained his doctorate under Professor Sílvio Salinas at the University of São Paulo. Roberto was a student of Xico many years ago, but left physics to study mathematics at IMPA in Rio de Janeiro in 2010. During 2006–2007, Roberto and Xico had written up a short text in Portuguese that included the exact solution of the Ising model on the infinite square lattice using the method of fermion operators developed by Schultz, Mattis and Lieb. With the aim of learning this method, I adapted their text and expanded many of the calculations for my own convenience. I decided to post it on this blog since others might also find it interesting. I have previously written an introduction to the 2-D Ising model here, where I review a combinatorial method of solution.
1. Introduction
The spins in the Ising model can only take on two values, ${\pm 1}$. This behavior is not unlike how the occupation number ${n}$ for some single particle state for fermions can only take on two values, ${n=0,1}$. It thus makes sense to try to solve the Ising model via fermionization. This is what Schultz, Mattis and Lieb accomplished in their well-known paper of 1964. In turn, their method of solution is a simplified version of Bruria Kaufman’s spinor analysis method, which is in turn a simplification of Onsager’s original method.
We will proceed as follows. First we will set up the transfer matrix. Next we will reformulate it in terms of Pauli's spin matrices for spin-${\tfrac 1 2}$ particles. Recall that in quantum field theory boson creation and annihilation operators satisfy the well-known commutation relations of the quantum harmonic oscillator, whereas fermion operators satisfy analogous anticommutation relations. The spin annihilation and creation operators ${\sigma_j^\pm}$ do not anticommute at distinct sites ${j}$ but instead commute, whereas fermion operators must anticommute at different sites. This problem of mixed commutation and anticommutation relations can be solved using a method known as the Jordan-Wigner transformation. This step completes the fermionic reformulation of the 2-D Ising model. To obtain the partition function in the thermodynamic limit, which is governed by the largest eigenvalue of the transfer matrix, one diagonalizes the fermionized transfer matrix using appropriate canonical transformations.
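As a sketch of the key step just described (I follow the standard textbook conventions for the Jordan-Wigner transformation, which may differ in minor details from the notation of Schultz, Mattis and Lieb), one defines

$\displaystyle c_j = \exp\Big( i\pi \sum_{k<j} \sigma_k^+ \sigma_k^- \Big)\, \sigma_j^-, \qquad c_j^\dagger = \sigma_j^+ \exp\Big( -i\pi \sum_{k<j} \sigma_k^+ \sigma_k^- \Big).$

The non-local string of phase factors repairs the commutation relations at distinct sites: a direct check shows that

$\displaystyle \{ c_j, c_k^\dagger \} = \delta_{jk}, \qquad \{ c_j, c_k \} = \{ c_j^\dagger, c_k^\dagger \} = 0,$

so the ${c_j}$ behave as genuine fermion operators, while on a single site ${c_j^\dagger c_j = \sigma_j^+ \sigma_j^-}$ still counts the occupation number ${n_j \in \{0,1\}}$ mentioned in the introduction.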
## Are science and religion compatible?
This blog post explores whether or not science and religion are compatible. I use the term religion in the usual sense, to mean a system of faith, worship and sacred rituals or duties. Religions typically consist of an organized code or collection of beliefs related to the origins and purpose of humanity (or subgroups thereof), together with a set of practices based on those beliefs. Can such belief systems be compatible with science?
Since this topic is controversial, I was reluctant to write about it. But as a physics professor and a research scientist, I decided not to flee debate on this issue (which is something like the third rail of science). Instead, here I set out my thoughts in writing.

I spent decades trying to reconcile science and (organized) religion, but made little progress. Eventually, after much hesitation and discomfort, I was forced to conclude that full reconciliation between science and organized religion may not be possible, even in principle. Although this realization was initially surprising (and unpleasant) to me, I soon discovered new and more fulfilling ways of approaching issues such as ethics, morals and the purpose or meaning of life, which religion has traditionally monopolized.
1. Short answer: science and religion are incompatible
'Religion is a culture of faith and science is a culture of doubt.' This statement is usually attributed to Richard Feynman. Faith and doubt are indeed antagonistic, like water and fire. How can it be possible to fully reconcile religious views, which are based on faith, with the systematic doubt and the skeptical questioning that are intrinsic to the scientific method? Like many scientists, I too have concluded that full reconciliation of science and religion is not possible.
One caveat: obviously, if one removes the element of dogma and faith from religion, then reconciliation might be possible. But religion without dogma is more like a social club than a traditional religion. What would become of Christianity without faith in Jesus Christ? Can you imagine Islam without faith in the Koran? So, by religion I always mean organized religion, with a set of teachings or dogmas.
Below I explore these issues in some detail.
2. Dirac and Feynman on religion
In the list of the all-time greatest physicists, Newton and Einstein invariably take the top positions. Paul A. M. Dirac, of Dirac equation fame, is considered an intellectual giant, ranking just a few notches below Einstein or Newton. And Feynman, who usually ranks at or just below Dirac's level, has rock star status in the physics community.
Feynman put it this way:

It doesn't seem to me that this fantastically marvelous universe, this tremendous range of time and space and different kinds of animals, and all the different planets, and all these atoms with all their motions, and so on, all this complicated thing can merely be a stage so that God can watch human beings struggle for good and evil — which is the view that religion has. The stage is too big for the drama.
And Dirac went even further:

If we are honest — and scientists have to be — we must admit that religion is a jumble of false assertions, with no basis in reality. The very idea of God is a product of the human imagination. It is quite understandable why primitive people, who were so much more exposed to the overpowering forces of nature than we are today, should have personified these forces in fear and trembling. But nowadays, when we understand so many natural processes, we have no need for such solutions. I can't for the life of me see how the postulate of an Almighty God helps us in any way. What I do see is that this assumption leads to such unproductive questions as why God allows so much misery and injustice, the exploitation of the poor by the rich and all the other horrors He might have prevented. If religion is still being taught, it is by no means because its ideas still convince us, but simply because some of us want to keep the lower classes quiet. Quiet people are much easier to govern than clamorous and dissatisfied ones. They are also much easier to exploit. Religion is a kind of opium that allows a nation to lull itself into wishful dreams and so forget the injustices that are being perpetrated against the people. Hence the close alliance between those two great political forces, the State and the Church. Both need the illusion that a kindly God rewards — in heaven if not on earth — all those who have not risen up against injustice, who have done their duty quietly and uncomplainingly. That is precisely why the honest assertion that God is a mere product of the human imagination is branded as the worst of all mortal sins.
I do not accept arguments from authority, but it is nevertheless interesting to read what these eminent physicists had to say.
3. Scientists abandon God and religion
Most scientists are non-religious. Many are atheists. A Pew survey from 2009 found that while over 80% of Americans believed in God, only 33% of scientists did. These numbers are typical. For instance, among the members of the US National Academy of Sciences, more than 60% of biological scientists expressed disbelief in God (i.e., were what most people call 'atheists'), according to a study from 1998. In the physical sciences, 79% expressed disbelief in God.
This issue is relevant in society because most politicians and people in leadership positions are, at least outwardly, sympathetic to religion if not actively religious. So there is at least this one important difference between the majority of scientists and the rest of society. Whereas most people are religious, most scientists are non-believers.
More worrying is that many politicians actively campaign against science and science education. We have all heard about attempts by the religious to eliminate (or water down) the teaching of Darwinian evolution in schools. At least in the West, these attempts have largely failed.
Fortunately, the voting population does not particularly crave a return to the dark ages. It is easy to understand why. The experience of the last few centuries has shown that social and economic development is only possible when there is political support and commitment to science research and education. Science is responsible for the invention of the Internet, cell phones, radio, TV, cars, trains, airplanes, X-rays, MRIs, the eradication of smallpox, etc. Rich and socially developed countries are precisely those in which science education and research are well funded. Economic pressures have thus led to investment in science and in science education.
At the same time, science has led to unintended consequences. The more a person is exposed to science, the less religious they are likely to become. (Possibly as a consequence, wealth is also negatively correlated with religiosity. In other words, on average the richer you are, the less religious you are likely to be.)
Especially among those with less science education, there is a fear that exposure to science and to "subversive" ideas such as Darwinian evolution will infect the minds of young people and turn them into "Godless infidels." In fact, fear is a constant theme in religion: fear of God, fear of divine punishment, fear of hell, fear of forbidden knowledge, etc. Science education dispels such fears and replaces them with the cultivation of curiosity, wonder, questioning, doubt and awe. Since fear is often used as an instrument of control and power, the loss of fear can be a setback for the power structures of organized religion. In this sense, science and science education sometimes directly threaten some religious movements.
Consider, as an example, suicide bombing as a form of jihad by Islamic militant organizations. It is perfectly fair for us to ask: is it even remotely plausible that these hapless suicide bombers correctly understood the scientific method? This is a rhetorical question, of course. A genuinely curious and scientifically literate candidate for suicide bombing would immediately ask questions, especially when faced with death by suicide. Is life after death a sure thing? Will Allah really reward a suicide bomber? How is it possible for the big-breasted and hot Houri girls and women to recover their sexual virginity every morning? The young man may then go on to ask: is there a possibility, perhaps, that such ridiculous claims amount to pulling the wool over the eyes of naïve young men in their sexual prime, who crave sex and intimacy with women but are forbidden by religion to engage in casual sex? And why is recreational and free sex allowed in Paradise but not on earth? A scientifically literate young man would probably say `Thanks but no thanks, I'll let you go first to set the example!' Hence the fear and loathing of doubt, curiosity and questioning. Indeed, scientific illiteracy makes people gullible and easier to manipulate.
There is no denying the statistics: exposure to science is correlated with loss of religious faith. This raises two questions: (i) why does this happen and (ii) is this good or bad? I am mostly concerned here with question (i) and only briefly touch upon point (ii).
|
{}
|
## Characteristic Cycle and the Euler Number of a Constructible Sheaf on a Surface
J. Math. Sci. Univ. Tokyo
Vol. 22 (2015), No. 1, Page 387–441.
Saito, Takeshi
|
{}
|
Thread: HW 4 question3
#13
05-02-2012, 05:10 AM
IamMrBB (Invited Guest; Join Date: Apr 2012; Posts: 107)
Re: HW 4 question3
Quote:
Originally Posted by elkka
I first thought the same thing about 1. But then, where do we see $\epsilon$? It is the measure of difference between $E_{in}$ and $E_{out}$, which can be small, and can be big depending on the experiment. Suppose you are talking about an experiment with very large numbers, like the number of minutes people use in a month on a cell phone (which, say, average 200). Then it is totally meaningful to consider a prediction that assures you that $|E_{in} - E_{out}| \le 1$ (or 5, or 10) with probability 0.95. So, it totally makes sense to rate the bounds even if they all are > 1.
I don't think you are right on this. E_in and E_out in the Vapnik-Chervonenkis Inequality (lecture 6), which is the basis for the VC bound, are fractions and not absolute numbers. I know that elsewhere in the course the professor has used E_out also for quantities that can be bigger than 1 (e.g. squared error, lecture 8); however, when you look up the Vapnik-Chervonenkis Inequality, you'll see that E_in and E_out are probabilities/probability measures (i.e. the fraction incorrectly classified).
To see that your example probably doesn't make sense (IMHO): replace the minutes in your example with nanoseconds or, on the other hand, with ages, and you would get very different numbers on the left side of the inequality (i.e. epsilon), while it wouldn't make any difference to the right side. This can't be right (it would, e.g., be unlikely that E_in and E_out are 60 seconds apart but likely that they are a minute apart?!): it would make the inequalities meaningless.
Also on the slides of lecture 6, it is fractions (in)correctly classified that are used for the Vapnik-Chervonenkis Inequality.
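For reference, the bound I have in mind has the form (quoting from memory, so the exact constants may differ slightly from the lecture slides):

$P\big[\, |E_{in}(g) - E_{out}(g)| > \epsilon \,\big] \;\le\; 4\, m_{\mathcal{H}}(2N)\, e^{-\epsilon^2 N / 8},$

where $m_{\mathcal{H}}$ is the growth function and $N$ the sample size. Since $E_{in}$ and $E_{out}$ are fractions in $[0,1]$, their difference can never exceed 1, so for any $\epsilon > 1$ the left-hand side is zero and the statement holds trivially, which is why rating bounds larger than 1 doesn't tell us much.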
Disclaimer: I'm not an expert on the matter, and perhaps I miss a/the point somewhere, so I hope we'll get a verdict from the course staff.
|
{}
|