## Description

Given a binary tree, compute the length of its diameter. The diameter of a binary tree is the length of the longest path between any two nodes in the tree. This path may or may not pass through the root.

Example: Given the binary tree

```
    1
   / \
  2   3
 / \
4   5
```

return 3, which is the length of the path [4,2,1,3] or [5,2,1,3].

Note: the length of a path between two nodes is measured by the number of edges between them.

## Solutions

The task is to find the longest distance between any two nodes in the tree, which can be done recursively: the longest path through a node splits into two parts, the longest downward chain in its left subtree and the longest downward chain in its right subtree, and a shared variable tracks the maximum seen so far.

### 1. Recursion

Worth noting are the choice of return value and the update of `max_path`: first find the larger of the left and right subtree heights, then add 1 to that height for the current node.

```python
# Time: O(n)
# Space: O(n)
# Definition for a binary tree node.
# class TreeNode:
#     def __init__(self, x):
#         self.val = x
#         self.left = None
#         self.right = None
class Solution:
    def diameterOfBinaryTree(self, root: TreeNode) -> int:
        max_path = [0]
        self.find_max_path(root, max_path)
        return max_path[0]

    def find_max_path(self, root, max_path):
        if not root:
            return 0
        left = self.find_max_path(root.left, max_path)
        right = self.find_max_path(root.right, max_path)
        max_path[0] = max(max_path[0], left + right)
        return max(left, right) + 1

# 106/106 cases passed (44 ms)
# Your runtime beats 62.34 % of python3 submissions
# Your memory usage beats 100 % of python3 submissions (14.4 MB)
```
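As a quick sanity check, the same recursion can be exercised on the example tree above (a self-contained sketch: `TreeNode` is written out and the recursion is repackaged as a plain `diameter` function so it runs outside the LeetCode harness):

```python
class TreeNode:
    def __init__(self, x):
        self.val = x
        self.left = None
        self.right = None

def diameter(root):
    """Return the diameter; best[0] tracks the longest edge count seen so far."""
    best = [0]

    def height(node):
        if not node:
            return 0
        left, right = height(node.left), height(node.right)
        best[0] = max(best[0], left + right)  # longest path passing through this node
        return max(left, right) + 1           # longest single chain this node offers upward

    height(root)
    return best[0]

# The example tree: 1 with children (2, 3); 2 with children (4, 5).
root = TreeNode(1)
root.left, root.right = TreeNode(2), TreeNode(3)
root.left.left, root.left.right = TreeNode(4), TreeNode(5)
print(diameter(root))  # 3, the path 4-2-1-3 (or 5-2-1-3)
```

Note that the maximum is updated at every node, not just the root, so the result is correct even when the longest path avoids the root entirely.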
very small win32 programs with vc++
(Archived: this topic is now archived and is closed to further replies.)

trigger:
hi all! Just some days ago I compiled a program with an empty WinMain function. *shock* A 24 KB exe file for an empty WinMain program!? After asking on IRC I found out there are some helpful linker settings that don't link in the unneeded stuff, so the exe file shrank from 24 KB to 16 KB. When I use UPX on the 16 KB version I can get the file size down to ~4 KB! The only problem is that I can't really understand how these linker options work. Is there anybody out there who can explain them to me? I also gathered that when you use those linker options you have to specify the wanted libs or DLLs manually. Can anybody tell me which libs are important and why? So you see that I am really interested in how it works, not only in using some linker options somebody told me to use. best regards, trigger

Silent:
Look up a function on MSDN. At the end of every documented function, it tells you which header contains the prototype and which lib is required.

trigger:
Well, that wasn't my question! I need information about the following libs and why I have to link them: kernel32.lib, user32.lib, gdi32.lib, opengl32.lib, msvcrt.lib. Take kernel32 or user32, for example: I think they aren't needed for any particular function but have to be linked for a win32 program. And saying "search MSDN" is a very bad answer to this question; searching the internet would be faster. MSDN is only a reference, but that's another topic. You guys should know that if MSDN really were that good, nobody would need books on programming! Hope you got that. best regards, trigger

NuffSaid:
Under the linker options, specify /OPT:REF and /OPT:NOWIN98. That'll shave off some of the size.
Also, make sure you compile in Release mode. Now if the size isn't small enough for you, you might want to link the C runtime as a DLL: go to Project->Settings->C/C++ and select "Multithreaded DLL". That'll bring your basic Windows app down to 4 KB. Now for some advice: don't ever use UPX (or any executable packer). Never EVER. It's bad. It makes your exe smaller (by up to 75%), but it can make the program's memory requirement jump up to 10 times (about 1000%)! That's not worth it; I've seen it happen to my programs. Just look at your program's memory usage under a program like the Win2K Task Manager or Norton Utilities System Info.

trigger:
Well, thanks for the first real reply!

> Now if the size isn't small enough for you, you might want to link the C runtime using a DLL. Go to Project->Settings->Compiler->C/C++ and select the Multithreaded DLL.

Linking the C runtime as a DLL: doesn't that mean the DLL has to be in the system folder? I heard that only Windows 98 and higher ships this runtime, and Win95 won't work. Is that right?

S1CA:
Go here: http://msdn.microsoft.com/msdnmag/issues/01/01/hood/hood0101.asp
Read. Enjoy.
-- Simon O'Connor, Creative Asylum Ltd, www.creative-asylum.com

Anonymous Poster:
Thanks for all! I really enjoyed reading the text :D

Anonymous Poster:
Add this to your main .c file; it keeps windows.h from pulling in all the useless stuff it doesn't need, but it does require you to add some Windows includes explicitly:

    #define WIN32_LEAN_AND_MEAN

NuffSaid:
That just speeds up the compile time (if at all). It doesn't reduce the exe size, unless of course my copy of MSVC is broken.
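For readers following along, the size-trimming switches mentioned so far can be collected into a single command line (a sketch only, assuming the VC6 toolchain discussed in this thread; tiny.c, the /O1 size-optimization flag, and the exact lib list are placeholders, not settings quoted from the posts):

```
rem Release-style build of a minimal WinMain program with the options from this thread:
rem /MD      = Multithreaded DLL runtime, /OPT:REF = drop unreferenced functions/data,
rem /OPT:NOWIN98 = use the smaller 512-byte section alignment.
cl /O1 /MD tiny.c /link /OPT:REF /OPT:NOWIN98 kernel32.lib user32.lib
```

The libs listed at the end are the ones trigger asked about; which of them you actually need depends on which Win32 functions the program calls.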
Nytegard:
Actually, it will shrink the executable size a little bit, but since the exe size is a multiple of 4 KB you usually won't notice any change (and the compiler optimizes most of it out of the final exe anyway), unless you include functions that are part of the non-lean-and-mean versions. If you do define WIN32_LEAN_AND_MEAN, certain header files are not built in, such as windowsx.h.

gph-gw:
> (quoting Nytegard's post above)

Nope, windowsx.h has nothing to do with WIN32_LEAN_AND_MEAN. That define only means that you won't be using any MFC in your program.

Nytegard:
From MSDN:

    To speed the build process, Visual C++ provides the following defines that
    reduce the size of the Win32 header files: VC_EXTRALEAN and WIN32_LEAN_AND_MEAN.
    Newly generated Visual C++ AppWizard applications automatically benefit from
    VC_EXTRALEAN. You can also manually define VC_EXTRALEAN to speed the build
    process of many legacy MFC applications. Non-MFC C++ and C applications can
    define WIN32_LEAN_AND_MEAN and any applicable NOservice defines, such as
    NOSOUND (see ProgramFiles\Microsoft Visual Studio\VC98\include\Windows.h and
    ProgramFiles\Microsoft Visual Studio\VC98\MFC\Include\afxv_w32.h), to reduce
    their build times.
It doesn't only have to do with MFC. Certain functions live in headers such as mmsystem.h that you now have to include yourself when WIN32_LEAN_AND_MEAN is defined. Sorry, my mistake earlier: it was MMSYSTEM.H, not windowsx.h. Granted, it's defined for MFC, but according to the definition of WIN32_LEAN_AND_MEAN, the following would not be included in the build:

    #ifndef WIN32_LEAN_AND_MEAN
    #include <cderr.h>
    #include <dde.h>
    #include <ddeml.h>
    #include <dlgs.h>
    #ifndef _MAC
    #include <lzexpand.h>
    #include <mmsystem.h>
    #include <nb30.h>
    #include <rpc.h>
    #endif
    #include <shellapi.h>
    #ifndef _MAC
    #include <winperf.h>
    #if(_WIN32_WINNT >= 0x0400)
    #include <winsock2.h>
    #include <mswsock.h>
    #else
    #include <winsock.h>
    #endif /* _WIN32_WINNT >= 0x0400 */
    #endif
    #ifndef NOCRYPT
    #include <wincrypt.h>
    #endif
    #ifndef NOGDI
    #include <commdlg.h>
    #ifndef _MAC
    #include <winspool.h>
    #ifdef INC_OLE1
    #include <ole.h>
    #else
    #include <ole2.h>
    #endif /* !INC_OLE1 */
    #endif /* !MAC */
    #endif /* !NOGDI */
    #endif /* WIN32_LEAN_AND_MEAN */

If you define it, those header files are simply excluded. And one last thing: if you want to see #define WIN32_LEAN_AND_MEAN in action in a non-MFC program, go to my homepage http://www.geocities.com/nytegard and download the sample MIDI player. Try taking out the WIN32_LEAN_AND_MEAN along with commdlg.h and mmsystem.h, then put #define WIN32_LEAN_AND_MEAN back, and you will see how it's used. (Plus, it also demonstrates an open-file dialog box and mciSendCommand, which seem to be common questions.) Bah, my HTML skills are nowhere near up to par.
Edited by - Nytegard on June 30, 2001 6:25:34 PM

SkyRat:
I like to use WIN32_SKINNY_AND_PISSED. It's the same as lean and mean, but it just sounds better.
(Humanity's first sin was faith; the first virtue was doubt.)

Beer Hunter:
NuffSaid: are you sure about UPX's memory usage? The homepage claims otherwise: "UPX is a free, portable, extendable, high-performance executable packer for several different executable formats. It achieves an excellent compression ratio and offers very fast decompression. Your executables suffer no memory overhead or other drawbacks."

NuffSaid:
And cigarette companies don't say much about cancer. Do a simple experiment yourself: run a compressed and an uncompressed program and look at the run-time image under Task Manager, or some other program if you don't have Win2K (I'd suggest Norton System Info; it comes with Norton Utilities). From experience, the memory footprint can increase anywhere between 30% and 1000%. For my programs it is usually somewhere in the region of 300-500%. To me, that's HUGE.
Edited by - NuffSaid on July 1, 2001 12:31:34 PM

Kylotan:
Just a quick point, but excluding header files will rarely have an effect on the final executable's size. Header files are almost entirely composed of function prototypes and external variable declarations, which are just provided so that the compiler knows what types to use when a function or variable appears in your code. If you never use those globals or functions in your source file, they won't get linked in anyway.

Beer Hunter:
Microsoft Excel 97's executable is about 5.5 MB. After UPX, just under 3 MB. Let's take a look using the System Information tool...
System resources beforehand: 50% free. Running excel.exe: 48% free. Running the UPX'd excel.exe: 45% free. You're absolutely right.

Cyberdrek:
> (quoting trigger's earlier post asking about kernel32.lib, user32.lib, gdi32.lib, opengl32.lib and msvcrt.lib)

You can find all the answers to the questions you were asking in (you guessed it) MSDN. Now, I don't think anybody will want to write a 15-page message explaining all those libs when you can easily look it up in MSDN, which comes with Visual Studio. (Which I take for granted you have, since you were talking about its compiler options.)
"And that's the bottom line, 'cause I said so!" Cyberdrek, Headhunter Soft, a division of DLC Multimedia

avianRR:
You can also find out which libs your program NEEDS by running the Depends program that comes with MSVC; it's under the Tools menu. It looks at the executable and tells you which DLLs the program is actually using, so you can remove all the rest from the libs and shave the extra size off the executable. Depends also lists which functions are being used from which DLL, and any DLLs a DLL loads show up in the tree along with the functions used from them, etc. It's actually quite useful.

JDog:
Add /FILEALIGN:512 to your build options when linking. It will shrink your file size greatly.
Beer Hunter:
avianRR: anything in the libraries that is not used by your program will simply not be placed in the executable when it is generated. Removing unused libraries will speed up linking, but it has no effect on file size.

alexmoura:
OK, exactly why this obsession with reducing the size of a 24 KB program anyway?
(You know, I never wanted to be a programmer... Alexandre Moura)

trigger:
lo all! "Why this obsession with reducing the size of a 24 KB program?" Simple: it's needed for a little 64k intro. Every free byte is needed, so I have to get all the bytes I possibly can! best regards, trigger

NuffSaid:
/FILEALIGN:512 is the same as /OPT:NOWIN98; at least, that's what I got from the docs. The effects of UPX look even scarier when you measure the run-time memory in kilobytes (or megabytes) instead of percentages of system resources. Most of us developers have hundreds of megs of RAM installed, and a 2% increase on our machines may actually be a very big jump on an end user's machine.

Beer Hunter:
NuffSaid: I agree. 2% of my memory is about 4.5 MB. Sorry if I sounded sarcastic or anything in my last post; that wasn't my intention. Makes me wonder what the decompressor is doing.
## Communications in Applied Analysis

### An International Journal for Theory and Applications

Short Title: Commun. Appl. Anal.
Publisher: Dynamic Publishers, Atlanta, GA
ISSN: 1083-2564
Online: https://acadsol.eu/caa/contents and http://www.dynamicpublishers.com/CAA/caacontent.htm
Comments: No longer indexed; this journal is available open access.
Documents Indexed: 767 Publications (1997–2016)

### Latest Issues

20, No. 4 (2016); 20, No. 2-3 (2016); 20, No. 1 (2016); 18, No. 3-4 (2014); 18, No. 1-2 (2014); 17, No. 3-4 (2013); 17, No. 2 (2013); 17, No. 1 (2013); 16, No. 4 (2012); 16, No. 3 (2012); 16, No. 2 (2012); 16, No. 1 (2012); 15, No. 2-4 (2011); 15, No. 1 (2011); 14, No. 4 (2010); 14, No. 3 (2010); 14, No. 2 (2010); 14, No. 1 (2010); 13, No. 4 (2009); 13, No. 3 (2009); 13, No. 2 (2009); 13, No. 1 (2009); 12, No. 4 (2008); 12, No. 3 (2008); 12, No. 2 (2008); 12, No. 1 (2008); 11, No. 3-4 (2007); 11, No. 2 (2007); 11, No. 1 (2007); 10, No. 4 (2006); 10, No. 2-3 (2006); 10, No. 1 (2006); 9, No. 4 (2005); 9, No. 3 (2005); 9, No. 2 (2005); 9, No. 1 (2005); 8, No. 4 (2004); 8, No. 3 (2004); 8, No. 2 (2004); 8, No. 1 (2004); 7, No. 4 (2003); 7, No. 3 (2003); 7, No. 2 (2003); 7, No. 1 (2003); 6, No. 4 (2002); 6, No. 3 (2002); 6, No. 2 (2002); 6, No. 1 (2002); 5, No. 4 (2001); 5, No. 3 (2001); 5, No. 2 (2001); 5, No. 1 (2001); 4, No. 4 (2000); 4, No. 3 (2000); 4, No. 2 (2000); 4, No. 1 (2000); 3, No. 4 (1999); 3, No. 3 (1999); 3, No. 2 (1999); 3, No. 1 (1999); 2, No. 4 (1998); 2, No. 3 (1998); 2, No. 2 (1998); 2, No. 1 (1998); 1, No. 4 (1997); 1, No. 3 (1997); 1, No. 2 (1997); 1, No. 1 (1997)

### Authors

22 O’Regan, Donal
17 Ahmad, Bashir
16 Anastassiou, George Angelos
16 Benchohra, Mouffak
15 Bainov, Drumi Dimitrov
15 Henderson, Johnny Lee
13 Ntouyas, Sotiris K.
13 Sivasundaram, Seenith
11 Agarwal, Ravi P.
10 Simeonov, Pavel Sergeev
9 Zaslavski, Alexander Yakovlevich
8 Graef, John R.
8 Medhin, Negash G.
8 Vasundhara Devi, Jonnalagadda
7 Argyros, Ioannis Konstantinos
7 Choi, Q-Heung
7 Markova, N. T.
7 Minchev, Emil 6 Ahmed, Nasir Uddin 6 Grace, Said Rezk 6 Lakshmikantham, Vangipuram 6 Vatsala, Aghalaya S. 5 Dhage, Bapurao C. 5 Furi, Massimo 5 Infante, Gennaro 5 Khristova, Snezhana G. 5 Kong, Lingju 5 Ladde, Gangaram S. 5 Motreanu, Dumitru 5 Sambandham, Masilamani 5 Yang, Bo 4 Eloe, Paul W. 4 Erbe, Lynn Harry 4 Islam, Muhammad Nazmul 4 Kong, Qingkai 4 Lar’kin, Nikolaj Andreevich 4 Nieto Roig, Juan Jose 4 Padhi, Seshadev 4 Qian, Chuanxi 4 Verma, Ram U. 4 Watson, G. Alistair 3 Ahrendt, Chris R. 3 Aiki, Toyohiko 3 Almira, Jose María 3 Anderson, Douglas Robert 3 Avery, Richard I. 3 Baxley, John V. 3 Botelho, Fernanda 3 Calamai, Alessandro 3 Carvalho, Luiz A. V. 3 Chu, Jifeng 3 De Pascale, Luigi 3 Deekshitulu, Gunturu Venkata Sita Rama 3 Del Toro, Naira 3 Dhaigude, Dnyanoba Bhaurao 3 Djebali, Smail 3 Hart Murdock, Julie Angela 3 Hassan, Taher S. 3 Jamison, James Edward 3 Jonnalagadda, Jagan Mohan 3 Jung, Tacksun 3 Kaufmann, Eric R. 3 Lan, Kunquan 3 Leela, Srinivasa G. 3 Martynyuk, Anatoliĭ Andriĭovych 3 Myshkis, Anatoliĭ Dmitrievich 3 Nakagawa, Kiyokazu 3 Pati, Smita 3 Pera, Maria Patrizia 3 Pinelas, Sandra 3 Reich, Simeon 3 Saker, Samir H. 3 Seifert, George 3 Simão, Isabel 3 Sun, Jianping 3 Tsamatos, Panagiotis Ch. 3 Uthayakumar, Ramasamy 3 Varga, Csaba György 3 Vu Van Khuong 3 Wang, Peiguang 3 Wang, Zhi-Cheng 3 Wong, Patricia J. Y. 3 Wu, Jianhong 2 Adivar, Murat 2 Agrafiotou, Xanthipi 2 Ahrendt, Kevin 2 Angelov, Vasil Georgiev 2 Appell, Jürgen 2 Apreutesei, Narcisa C. 
2 Axelsson, Axel Owe Holger 2 Baoguo, Jia 2 Benabidallah, Rachid 2 Benevieri, Pierluigi 2 Benmezaï, Abdelhamid 2 Borges, Manoel Ferreira 2 Boussayoud, Ali 2 Cavalcanti, Marcelo Moreira 2 Cerrai, Sandra 2 Chen, Zhangxin 2 Cheng, Sui Sun ...and 805 more Authors all top 5 ### Fields 300 Ordinary differential equations (34-XX) 219 Partial differential equations (35-XX) 141 Operator theory (47-XX) 62 Calculus of variations and optimal control; optimization (49-XX) 59 Numerical analysis (65-XX) 51 Real functions (26-XX) 43 Difference and functional equations (39-XX) 40 Systems theory; control (93-XX) 39 Probability theory and stochastic processes (60-XX) 37 Integral equations (45-XX) 33 Approximations and expansions (41-XX) 27 Mechanics of deformable solids (74-XX) 27 Biology and other natural sciences (92-XX) 24 Fluid mechanics (76-XX) 23 Functional analysis (46-XX) 23 Operations research, mathematical programming (90-XX) 22 Global analysis, analysis on manifolds (58-XX) 13 Dynamical systems and ergodic theory (37-XX) 11 Special functions (33-XX) 11 Harmonic analysis on Euclidean spaces (42-XX) 11 Classical thermodynamics, heat transfer (80-XX) 11 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 10 General topology (54-XX) 8 Differential geometry (53-XX) 8 Quantum theory (81-XX) 8 Information and communication theory, circuits (94-XX) 7 Statistical mechanics, structure of matter (82-XX) 6 Computer science (68-XX) 5 Number theory (11-XX) 5 Functions of a complex variable (30-XX) 5 Statistics (62-XX) 5 Optics, electromagnetic theory (78-XX) 4 History and biography (01-XX) 4 Measure and integration (28-XX) 4 Sequences, series, summability (40-XX) 4 Abstract harmonic analysis (43-XX) 4 Integral transforms, operational calculus (44-XX) 3 General and overarching topics; collections (00-XX) 3 Combinatorics (05-XX) 3 Linear and multilinear algebra; matrix theory (15-XX) 3 Potential theory (31-XX) 3 Mechanics of particles and systems (70-XX) 3 
Relativity and gravitational theory (83-XX) 2 Order, lattices, ordered algebraic structures (06-XX) 2 Astronomy and astrophysics (85-XX) 1 Mathematical logic and foundations (03-XX) 1 Several complex variables and analytic spaces (32-XX) 1 Convex and discrete geometry (52-XX) 1 Algebraic topology (55-XX) 1 Geophysics (86-XX) ### Citations contained in zbMATH Open 406 Publications have been cited 2,074 times in 1,947 Documents Cited by Year Eigenvalue intervals and double positive solutions of certain discrete boundary value problems. Zbl 0923.39002 Wong, P. J. Y.; Agarwal, R. P. 1999 Theory of fractional differential inequalities and applications. Zbl 1159.34006 Lakshmikantham, V.; Vatsala, A. S. 2007 A multiplicity result for the nonlinear Schrödinger-Maxwell equations. Zbl 1085.81510 Coclite, Giuseppe Maria 2003 Nonlocal boundary value problems with two nonlinear boundary conditions. Zbl 1198.34025 Infante, Gennaro 2008 Exponential stability in linear heat conduction with memory: a semigroup approach. Zbl 1084.35547 Giorgi, Claudio; Naso, Maria Grazia; Pata, Vittorino 2001 Lyapunov theory for fractional differential equations. Zbl 1191.34007 Lakshmikantham, V.; Leela, S.; Sambandham, M. 2008 Regular solutions for Landau-Lifschitz equation in $$\mathbb R^ 3$$. Zbl 1084.35519 Carbou, Gilles; Fabrie, Pierre 2001 Nonlinear fractional implicit differential equations. Zbl 1300.34014 Benchohra, Mouffak; Lazreg, Jamal E. 2013 Some existence results for fractional integro-differential equations with nonlinear conditions. Zbl 1179.45009 2008 Measure of noncompactness and fractional differential equations in Banach spaces. Zbl 1182.26007 Benchohra, Mouffak; Henderson, Johnny; Seba, Djamila 2008 Positive solutions of systems of Caputo fractional differential equations. Zbl 1298.34014 Lan, K. Q.; Lin, W. 2013 Remarks on general infinite dimensional duality with cone and equality constraints. 
Zbl 1209.90289 Daniele, Patrizia; Giuffré, Sofia; Maugeri, Antonino 2009 Sharp weighted Rellich and uncertainty principle inequalities on Carnot groups. Zbl 1202.26031 Kombe, Ismail 2010 Existence and uniqueness results for nonlinear boundary value problems of fractional differential equations with separated boundary conditions. Zbl 1180.34003 2009 Controlled McKean-Vlasov equations. Zbl 1084.49506 Ahmed, N. U.; Ding, X. 2001 Approximation of signals using measured sampled values and error analysis. Zbl 1089.94503 Butzer, Paul L.; Lei, Junjiang 2000 Recession bifunction and solvability of noncoercive equilibrium problems. Zbl 1085.49501 Mansour, M. Ait; Chbani, Z.; Riahi, H. 2003 Impulsive fractional differential equations with state-dependent delay. Zbl 1203.26007 Benchohra, Mouffak; Berhoun, Farida 2010 Variational analysis of quasistatic viscoplastic contact problems with friction. Zbl 1084.74541 Sofonea, Mircea; Shillor, Meir 2001 On the abstract model of the Kirchhoff-Carrier equation. Zbl 0894.35069 Cousin, A. T.; Frota, C. L.; Larkin, N. A.; Medeiros, L. A. 1997 Laplace transforms for the nabla-difference operator and a fractional variation of parameters formula. Zbl 1277.39007 Ahrendt, K.; Castle, L.; Holm, M.; Yochman, K. 2012 Functional differential equations for cell-growth models with dispersion. Zbl 1084.34545 Wake, Graeme C.; Cooper, Shaun; Kim, Hee-Kyung; van Brunt, Bruce 2000 Boundary value problems with vanishing Green’s function. Zbl 1192.34031 Webb, J. R. L. 2009 Exponential stability of some scalar impulsive delay differential equations. Zbl 0901.34068 Berezansky, L.; Idels, L. 1998 On pinching of curves moved by surface diffusion. Zbl 0894.35049 Giga, Y.; Ito, K. 1998 Blow up in the Cauchy problem for a nonlinearly damped wave equation. Zbl 1085.35108 Messaoudi, Salim A. 2003 Impulsive integral inequalities with delay. Zbl 1136.26003 Bainov, D. D.; Simeonov, P. S. 2006 The Dykstra algorithm with Bregman projections. 
Zbl 0897.90155 Censor, Y.; Reich, S. 1998 Perspectives of fuzzy initial value problems. Zbl 1152.34041 Bede, Barnabás; Bhaskar, T. Gnana; Lakshmikantham, V. 2007 Locating Cerami sequences in a mountain pass geometry. Zbl 1232.58007 Stuart, C. A. 2011 Quasilinearization for fractional differential equations. Zbl 1184.34015 Vasundhara Devi, J.; Suseela, Ch. 2008 The $$\infty$$-eigenvalue problem and a problem of optimal transportation. Zbl 1189.35214 Champion, Thierry; De Pascale, Luigi; Jimenez, Chloé 2009 Existence theory for $$(\phi(y'))'=qf(t,y,y'),\quad 0<t<1$$. Zbl 0887.34019 O’Regan, Donal 1997 Existence of solutions for fractional semilinear evolution boundary value problem. Zbl 1218.34004 Anguraj, A.; Karthikeyan, P. 2010 Existence results for hemivariational inequalities involving relaxed $$\eta-\alpha$$ monotone mappings. Zbl 1214.47060 Costea, Nicuşor; Rădulescu, Vicenţiu 2009 Multiple solutions for Dirichlet problems which are superlinear at $$+\infty$$ and (sub)linear at $$-\infty$$. Zbl 1181.35053 Motreanu, D.; Motreanu, V. V.; Papageorgiou, N. S. 2009 Qualitative and quantitative estimates for large solutions to semilinear equations. Zbl 1090.35526 Berhanu, S.; Porru, G. 2000 Homogenization of the Ginzburg-Landau equation in a domain with oscillating boundary. Zbl 1085.37500 Gaudiello, Antonio; Hadiji, Rejeb; Picard, Colette 2003 Numerical schemes for random ODEs via stochastic differential equations. Zbl 1294.60091 Asai, Y.; Kloeden, P. E. 2013 Monotone method for periodic boundary value problems of Caputo fractional differential equations. Zbl 1208.34009 McRae, F. A. 2010 Positive solutions for regular and singular fourth-order boundary value problems. Zbl 1123.34015 Chu, Jifeng; O’Regan, Donal 2006 On solvability of some quadratic functional-integral equation in Banach algebra. Zbl 1137.45004 2007 Periodic boundary value problems for impulsive hyperbolic systems. Zbl 0961.35083 Bainov, D.; Minchev, E.; Myshkis, A. 
1997 Nonoscillatory solutions to second-order neutral functional dynamic equations on time scales. Zbl 1343.34206 Deng, Xun-Huan; Wang, Qi-Ru 2014 Generalized monotone method for periodic boundary value problems of Caputo fractional differential equations. Zbl 1184.34014 Vasundhara Devi, J. 2008 New discrete Halanay inequalities: stability of difference equations. Zbl 1155.26013 Agarwal, Ravi P.; Kim, Young-Ho; Sen, S. K. 2008 On asymptotic stability of solutions to third order nonlinear differential equations with retarded argument. Zbl 1139.34054 Tunç, Cemil 2007 Nonlinear boundary value problems for shallow membrane caps. Zbl 0933.74041 Baxley, J. V.; Gu, Y. 1999 Existence of positive solutions for integral equations with vanishing kernels. Zbl 1236.45004 Ma, Ruyun; Zhong, Chengkui 2011 Oscillation of superlinear and sublinear neutral delay dynamic equations. Zbl 1192.34077 Saker, S. H. 2008 Positive solutions for systems of second order four-point nonlinear boundary value problems. Zbl 1166.34006 Henderson, J.; Ntouyas, S. K.; Purnaras, I. K. 2008 A Galerkin method for time-dependent MHD flow with nonideal boundaries. Zbl 0931.76099 Schmidt, P. G. 1999 Boundedness and continuity properties of nonlinear composition operators: a survey. Zbl 1255.47059 Appell, J.; Guanda, N.; Merentes, N.; Sanchez, J. L. 2011 A nested class of analytic functions defined by fractional calculus. Zbl 0897.30003 Srivastava, H. M.; Mishra, A. K.; Das, M. K. 1998 A 2$$n$$th order linear difference equation. Zbl 0903.39001 Anderson, D. 1998 Local regularity of solutions to quasilinear elliptic equations with general structure. Zbl 0922.35050 Ragusa, M. A.; Zamboni, P. 1999 Multiple positive solutions for focal boundary value problems. Zbl 0887.34018 Henderson, Johnny; Kaufmann, Eric R. 1997 Abstract stochastic problems with generators of regularized semigroups. Zbl 1176.60046 2009 Existence results for general critical growth semilinear elliptic equations.
Zbl 1090.35529 Gazzola, Filippo; Lazzarino, Marco 2000 Global stability in a well known delayed chemostat model. Zbl 1089.34546 Beretta, Edoardo; Kuang, Yang 2000 The Cauchy problem for a class of Markov-type semigroups. Zbl 1084.47517 Priola, Enrico 2001 A degenerate parabolic equation arising in image processing. Zbl 1099.35050 Citti, G.; Manfredini, M. 2004 Global existence for a class of quasilinear reaction-diffusion systems. Zbl 1122.35058 Morgan, Jeff; Waggonner, Sheila 2004 Asymptotic behavior of forced delay equations with periodic coefficients. Zbl 0903.34061 Graef, J. R.; Qian, C. 1998 On the qualitative behaviors of solutions to a kind of nonlinear third-order differential equation with delay. Zbl 1348.34128 Remili, Moussadek; Oudjedi, Lynda D.; Beldjerd, Djamila 2016 Multi-point boundary value problems of fractional order. Zbl 1184.34012 Allison, John; Kosmatov, Nickolai 2008 Monotone methods and fourth order Lidstone boundary value problems with impulse effects. Zbl 1084.34507 Eloe, P. W.; Islam, M. N. 2001 Boundary value problems for second order nonlinear ordinary differential equations. Zbl 1085.34514 Graef, John R.; Yang, Bo 2002 Basic and $$s$$-convexity Ostrowski and Grüss type inequalities involving several functions. Zbl 1296.26063 Anastassiou, George A. 2013 Representation and stability of solutions of systems of difference equations with multiple delays and linear parts defined by pairwise permutable matrices. Zbl 1304.39019 Medveď, Milan; Pospíšil, Michal 2013 Existence results for nonautonomous evolution equations with nonlocal initial conditions. Zbl 1147.34043 Aizicovici, Sergiu; Lee, Haewon 2007 Periodic solutions of Lagrangian systems of relativistic oscillators. Zbl 1235.34130 Brezis, Haïm; Mawhin, Jean 2011 Stability of Caputo fractional differential equations with non-instantaneous impulses. Zbl 1353.34005 Agarwal, Ravi; O’Regan, Donal; Hristova, S. 
2016 Approximation of solutions of the forced duffing equation with $$m$$-point boundary conditions. Zbl 1182.34011 2009 Boundary value problems for fractional functional differential equations of mixed type. Zbl 1179.26020 Darwish, Mohamed Abdalla; Ntouyas, Sotiris K. 2009 Fractional difference inequalities of Bihari type. Zbl 1225.39009 Deekshitulu, G. V. S. R.; Mohan, J. Jagan 2010 Eventual practical stability and cone valued Lyapunov functions for differential equations with “maxima”. Zbl 1219.34093 Henderson, Johnny; Hristova, Snezhana 2010 Absolute extrema of invariant optimal control problems. Zbl 1177.49011 Silva, Cristiana J.; Torres, Delfim F. M. 2006 Existence of a global solution for an impulsive semilinear parabolic equation and its asymptotic behaviour. Zbl 1084.35549 Nakagawa, Kiyokazu 2000 Special solutions of a new class of water wave equations. Zbl 1084.76511 Marinakis, V.; Bountis, T. C. 2000 On the quasistatic flexure of a thermoelastic rod. Zbl 1084.74526 Bouziani, Abdelfatah 2002 Blow-up versus quenching. Zbl 1085.35069 Deng, Keng; Zhao, Cheng-Lin 2003 Strongly damped wave equations on $$\mathbb R^3$$ with critical nonlinearities. Zbl 1096.35090 Conti, Monica; Pata, Vittorino; Squassina, Marco 2005 Approximation of triple stochastic integrals through region subdivision. Zbl 1294.60077 Allen, Edward 2013 Fractional difference inequalities. Zbl 1200.26027 Deekshitulu, G. V. S. R.; Mohan, J. Jagan 2010 Criteria for the stability of second-order difference equations with periodic coefficients. Zbl 0933.39013 Atici, F.; Guseinov, G. Sh. 1999 Source-type solutions to porous medium equations with convection. Zbl 0894.35057 Laurençot, Ph.; Simondon, F. 1997 Some unified presentations of the generalized Voigt functions. Zbl 0894.33009 Srivastava, H. M.; Pathan, M. A.; Kamarujjama, M. 1998 Cores for second-order differential operators on real intervals. 
Zbl 1198.47061 Altomare, Francesco; Leonessa, Vita; Milella, Sabina 2009 A nonsmooth equivariant minimax principle. Zbl 0922.49003 Motreanu, Dumitru; Varga, Csaba 1999 Complete symmetric functions and $$k$$-Fibonacci numbers. Zbl 1367.05205 Boussayoud, Ali; Harrouche, Nesrine 2016 New stochastic integrals, oscillation theorems and energy identities. Zbl 1189.60120 Schurz, Henri 2009 Eigenvalue comparisons for boundary value problems of the discrete elliptic equation. Zbl 1179.39008 Ji, Jun; Yang, Bo 2008 A topological approach for generalized nonlocal models for a confined plasma in a tokamak. Zbl 1084.35512 Ferone, A.; Jalal, M.; Rakotoson, J. M.; Volpicelli, R. 2001 Self-similar blow-up patterns in supercritical semilinear heat equations. Zbl 1085.35517 Matos, Júlia 2001 Blow-up property of the solutions of an impulsive reaction-diffusion model. Zbl 1085.35074 Bainov, Drumi D.; Kolev, Dimitar A.; Nakagawa, Kiyokazu 2003 Local bifurcation analysis and stability of steady-state solutions for diffusive logistic equations with nonlinear boundary conditions. Zbl 1139.35043 Umezu, Kenichiro 2004 Fuzzy solutions for impulsive differential equations. Zbl 1149.34002 Benchohra, Mouffak; Nieto, Juan J.; Ouahab, Abdelghani 2007 The Wentzell telegraph equation: asymptotics and continuous dependence on the boundary conditions. Zbl 1239.35099 Clarke, Ted; Goldstein, Gisèle Ruiz; Goldstein, Jerome A.; Romanelli, Silvia 2011 Periodic solutions of retarded functional perturbations of autonomous differential equations on manifolds. Zbl 1235.34188 Furi, Massimo; Pera, Maria Patrizia; Spadini, Marco 2011 On the qualitative behaviors of solutions to a kind of nonlinear third-order differential equation with delay. Zbl 1348.34128 Remili, Moussadek; Oudjedi, Lynda D.; Beldjerd, Djamila 2016 Stability of Caputo fractional differential equations with non-instantaneous impulses. Zbl 1353.34005 Agarwal, Ravi; O’Regan, Donal; Hristova, S.
2016 Complete symmetric functions and $$k$$-Fibonacci numbers. Zbl 1367.05205 Boussayoud, Ali; Harrouche, Nesrine 2016 Pulsatile constant and characterisation of first order neutral impulsive differential equations. Zbl 1359.34067 Tripathy, A. K.; Santra, S. S. 2016 About stability conditions for retarded fractional differential systems with distributed delays. Zbl 1365.34136 Veselinova, Magdalena; Kiskinov, Hristo; Zahariev, Andrey 2016 Periodic solutions of fractional nabla difference equations. Zbl 1376.39006 2016 New oscillation criteria for fourth order neutral dynamic equations. Zbl 1348.34145 Tripathy, A. K. 2016 A relook at queueing-inventory system with reservation, cancellation and common life time. Zbl 1364.60123 Shajin, Dhanya; Benny, Binitha; Deepak, T. G.; Krishnamoorthy, A. 2016 Systems-disconjugacy of a fourth-order differential equation with a middle term. Zbl 1477.47032 Amara, Jamel Ben 2016 On $${\varphi}_h$$-preinvex functions. Zbl 1364.26015 2016 Some approximation results for the Stancu type $$q$$-Bernstein-Schurer-Kantorovich operators. Zbl 1364.41015 Mursaleen, M.; Khan, Taqseer 2016 Existence of minimal and maximal solutions for a quasilinear differential equation with nonlocal boundary condition on the half-line. Zbl 1379.34027 Derhab, Mohammed; Mekni, Hayat 2016 Generalization of some Hadamard product. Zbl 1365.05289 Abderrezzak, Abdelhamid; Kerada, Mohamed; Boussayoud, Ali 2016 Bifurcation analysis of an $$S I R$$ model. Zbl 1362.92078 Karimi Amaleh, M.; Dasi, A. 2016 Bounded orbits and $$G$$-contractive fixed points. Zbl 1365.54033 Phaneendra, T.; Saravanan, S. 2016 Nonoscillatory solutions to second-order neutral functional dynamic equations on time scales. Zbl 1343.34206 Deng, Xun-Huan; Wang, Qi-Ru 2014 Eigenvalues of regular self-adjoint Sturm-Liouville problems. Zbl 1339.05209 Zettl, Anton 2014 Optimal control on manifolds: optimality conditions via nonsmooth analysis. Zbl 1297.49082 Kipka, Robert K.; Ledyaev, Yuri S. 
2014 On solvability of neutral stochastic functional differential equations with infinite delay. Zbl 1343.34176 Teng, Lingying; Long, Shujun; Xu, Daoyi 2014 A Wong-type necessary and sufficient condition for nonoscillation of second order linear dynamic equations on time scales. Zbl 1343.34208 Erbe, Lynn; Mert, Raziye 2014 Nonlinear differential equations with discontinuous right-hand sides: Filippov solutions, nonsmooth stability and dissipativity theory, and optimal discontinuous feedback control. Zbl 1295.93036 2014 Comparison criterion for even order forced nonlinear functional dynamic equations. Zbl 1343.34211 Hassan, Taher S. 2014 Convex solutions of systems of Monge-Ampère equations. Zbl 1343.34062 Wang, Haiyan 2014 A Hopf bifurcation analysis for a Kaldor-Kalecki model of business cycles with two different delays. Zbl 1343.34188 Wu, Xiaoqin P.; Wang, Liancheng 2014 Oscillation of certain even order nonlinear functional differential equations. Zbl 1343.34155 He, Hai-Jin; Wang, Qi-Ru 2014 Forced oscillation of nonlinear impulsive functional hyperbolic differential system. Zbl 1297.35263 Harikrishnan, S.; Prakash, P.; Nieto, J. J. 2014 Stability analysis on an economic epidemiology model of syphilis. Zbl 1346.92066 Avusuglo, Wisdom S.; Abdella, Kenzu; Feng, Wenying 2014 Oscillation criteria for third-order nonlinear neutral dynamic equations with several terms. Zbl 1343.34207 Elabbasy, E. M.; Hassan, T. S.; Elmatary, B. M. 2014 Positive solutions of a doubly nonlocal boundary value problem. Zbl 1343.34066 Infante, Gennaro 2014 Existence of solutions for a nonlinear second-order equation with periodic boundary conditions at resonance. Zbl 1343.34059 Kaufmann, Eric R. 2014 On differentiation of solutions of boundary value problems for second order dynamic equations on a time scale. Zbl 1343.34214 Lyons, Jeffrey W. 2014 Existence of positive solutions of a right focal fractional boundary value problem. Zbl 1343.34020 Neugebauer, Jeffrey T. 
2014 Global attractivity of periodic solutions in a delay differential equation. Zbl 1343.34156 Qian, Chuanxi 2014 Nonlinear fractional implicit differential equations. Zbl 1300.34014 Benchohra, Mouffak; Lazreg, Jamal E. 2013 Positive solutions of systems of Caputo fractional differential equations. Zbl 1298.34014 Lan, K. Q.; Lin, W. 2013 Numerical schemes for random ODEs via stochastic differential equations. Zbl 1294.60091 Asai, Y.; Kloeden, P. E. 2013 Basic and $$s$$-convexity Ostrowski and Grüss type inequalities involving several functions. Zbl 1296.26063 Anastassiou, George A. 2013 Representation and stability of solutions of systems of difference equations with multiple delays and linear parts defined by pairwise permutable matrices. Zbl 1304.39019 Medveď, Milan; Pospíšil, Michal 2013 Approximation of triple stochastic integrals through region subdivision. Zbl 1294.60077 Allen, Edward 2013 Global existence results for functional differential equations with delay. Zbl 1319.34131 2013 Direct Lyapunov method on time scales. Zbl 1307.34137 Martynyuk, A. A. 2013 Boundedness results for impulsive set differential equations involving causal operators with memory. Zbl 1298.34127 Devi, J. Vasundhara; Naidu, Ch. Appala 2013 Measure valued solutions for stochastic neutral differential equations on Hilbert spaces and their optimal control. Zbl 1292.49023 Ahmed, N. U. 2013 Multiple positive periodic solutions of first order ordinary differential equations with unbounded Green’s kernel. Zbl 1303.34030 2013 Existence results for a fully nonlinear nonlocal fractional boundary value problem. Zbl 1293.34006 2013 Existence results for higher-order fractional differential inclusions with Riemann-Stieltjes type integral boundary conditions. Zbl 1298.34004 2013 Properties of the solutions of a system of differential equations with maxima, via weakly Picard operator theory. 
Zbl 1298.34113 Otrocol, Diana 2013 Laplace transforms for the nabla-difference operator and a fractional variation of parameters formula. Zbl 1277.39007 Ahrendt, K.; Castle, L.; Holm, M.; Yochman, K. 2012 On a generalized discrete beam equation via variational methods. Zbl 1278.39005 Graef, John R.; Kong, Lingju; Kong, Qingkai 2012 Positive solution for a third-order three-point boundary value problem with sign-changing Green’s function. Zbl 1272.34032 Sun, Jian-Ping; Zhao, Juan 2012 Hyers-Ulam stability of second-order linear dynamic equations on time scales. Zbl 1273.34094 Anderson, Douglas R.; Gates, Ben; Heuer, Dylan 2012 Pseudo almost automorphic solutions to fractional differential and integro-differential equations. Zbl 1264.43005 Cuevas, Claudio; N&rsquo;Guérékata, G. M.; Sepulveda, A. 2012 A Leggett-Williams type theorem applied to a fourth order problem. Zbl 1273.34024 Avery, Richard; Eloe, Paul; Henderson, Johnny 2012 Oscillation of odd-order half-linear advanced differential equations. Zbl 1280.34068 Tang, Shuhong; Li, Tongxing; Agarwal, Ravi P.; Bohner, Martin 2012 Classification of positive solutions of nonlinear systems of Volterra integro-dynamic equations on time scales. Zbl 1277.45017 Adivar, Murat; Koyuncuoğlu, H. Can; Raffoul, Youssef N. 2012 Some results on the convergence of the generalized exponential function on time scales. Zbl 1280.34092 Ahrendt, Chris; Ahrendt, Kevin 2012 Existence and uniqueness of solutions of a conjugate fractional boundary value problem. Zbl 1281.39004 Awasthi, Pushp 2012 Impulsive partial integro-differential equations of fractional order. Zbl 1273.26006 Abbas, Saïd; Benchohra, Mouffak 2012 Nonlocal four-point integral boundary value problem of nonlinear fractional differential equations and existence results. Zbl 1263.26012 2012 On a Lotka-Volterra predator-prey reaction diffusion system with density-dependent diffusion. Zbl 1288.35297 Pao, C. V. 2012 A note on inexact infinite products. 
Zbl 1287.54044 Reich, Simeon; Zaslavski, Alexander J. 2012 Enrichment effects in a simple stoichiometric producer-consumer population growth model. Zbl 1329.92111 Stech, Harlan; Peckham, Bruce; Pastor, John 2012 Multiple fixed point theorems utilizing operators and functionals. Zbl 1294.47074 Anderson, Douglas; Avery, Richard; Henderson, Johnny; Liu, Xueyan 2012 Existence of positive solutions of a nonlinear singular semipositone dynamic equation system. Zbl 1280.34093 Dahal, Rajendra 2012 On discrete fractional boundary value problems with nonlocal, nonlinear boundary conditions. Zbl 1273.26010 Goodrich, Christopher S. 2012 Fixed point theorems for some operator equations and inclusions in Banach spaces relative to the weak topology. Zbl 1319.47043 Djebali, Smaïl; O&rsquo;Regan, Donal; Sahnoun, Zahira 2012 The existence of positive solutions to neutral delay impulsive differential equations. Zbl 1264.34155 Isaac, I. O.; Lipscey, Z. 2012 Impulsive functional differential inclusions with state-dependent delay and variable times. Zbl 1257.34047 Benchohra, Moufeak; Hedia, Benaouda 2012 A study on $$(N,p_n) (E,q)$$ product summability. Zbl 1254.42007 Nigam, H. K.; Sharma, Kusum 2012 Oscillation of nonlinear difference equations with delayed argument. Zbl 1258.39004 2012 Oscillation and nonoscillation in neutral delay dynamic equations with positive and negative coefficients. Zbl 1258.34182 Karpuz, Bagak; Öcalan, Özkan 2012 Locating Cerami sequences in a mountain pass geometry. Zbl 1232.58007 Stuart, C. A. 2011 Existence of positive solutions for integral equations with vanishing kernels. Zbl 1236.45004 Ma, Ruyun; Zhong, Chengkui 2011 Boundedness and continuity properties of nonlinear composition operators: a survey. Zbl 1255.47059 Appell, J.; Guanda, N.; Merentes, N.; Sanchez, J. L. 2011 Periodic solutions of Lagrangian systems of relativistic oscillators. 
Zbl 1235.34130 Brezis, Haïm; Mawhin, Jean 2011 The Wentzell telegraph equation: asymptotics and continuous dependence on the boundary conditions. Zbl 1239.35099 Clarke, Ted; Goldstein, Gisèle Ruiz; Goldstein, Jerome A.; Romanelli, Silvia 2011 Periodic solutions of retarded functional perturbations of autonomous differential equations on manifolds. Zbl 1235.34188 Furi, Massimo; Pera, Maria Patrizia; Spadini, Marco 2011 Positive solutions of systems of Hammerstein integral equations. Zbl 1235.45005 Lan, K. Q. 2011 A new theme in nonlinear analysis: continuation and bifurcation of the unit eigenvectors of a perturbed linear operator. Zbl 1243.47083 Chiappinelli, Raffaele; Furi, Massimo; Pera, Maria Patrizia 2011 A boundary value problem on a half-line for differential equations with indefinite weight. Zbl 1244.34045 Došlá, Zuzana; Marini, Mauro; Matucci, Serena 2011 On the uniqueness of the degree for nonlinear Fredholm maps of index zero between Banach manifolds. Zbl 1239.47048 Benevieri, Pierluigi; Furi, Massimo 2011 On a bistable quasilinear parabolic equation: well-posedness and stationary solutions. Zbl 1234.35139 Burns, Martin; Grinfeld, Michael 2011 Positive and nondecreasing solutions to a singular boundary value problem for nonlinear fractional differential equations. Zbl 1235.34009 Caballero, J.; Harjani, J.; Sadarangani, K. 2011 Some recent results on the spectrum of multi-point eigenvalue problems for the $$p$$-Laplacian. Zbl 1257.34069 Genoud, François; Rynne, Bryan P. 2011 Practical stability in terms of two measures for impulsive differential equations with “supremum”. Zbl 1232.34100 Bainov, Drumi; Hristova, Snezhana 2011 Asymptotic behavior of $$n$$-th order sublinear dynamic equations on time scales. Zbl 1235.34239 Baoguo, Jia; Erbe, Lynn; Peterson, Allan 2011 Branches of harmonic solutions for a class of periodic differential-algebraic equations. Zbl 1235.34029 Calamai, Alessandro 2011 Some remarks on Mather’s theorem and Aubry-Mather sets. 
Zbl 1237.37045 Capietto, Anna; Soave, Nicola 2011 Nonlinear boundary value problems with $$p$$-Laplacian. Zbl 1242.34032 Kong, Qingkai; Wang, Xiaofei 2011 Periodic solutions of Volterra type integral equations with finite delay. Zbl 1232.45003 2011 Extremal solutions and continuous dependence for set differential equations involving causal operators with memory. Zbl 1232.34090 Vasundhara Devi, J. 2011 On the solvability of some operator equations and inclusions in Banach spaces with the weak topology. Zbl 1241.47050 Djebali, Smaïl; O&rsquo;Regan, Donal; Sahnoun, Zahira 2011 Nonlinear Schrödinger equations on $$\mathbb{R}$$: global bifurcation, orbital stability and nonlinear waveguides. Zbl 1231.35228 Genoud, François 2011 Higher order boundary value problems with two point separated nonhomogeneous boundary conditions. Zbl 1242.34037 Graef, John R.; Kong, Lingju; Kong, Qingkai; Yang, Bo 2011 Vortex filaments and 1D cubic Schrödinger equations: singularity formation. Zbl 1235.35262 Gutiérrez, Susana 2011 Optimal interval lengths for nonlocal boundary value problems for second order Lipschitz equations. Zbl 1238.34032 Henderson, Johnny 2011 Sharp weighted Rellich and uncertainty principle inequalities on Carnot groups. Zbl 1202.26031 Kombe, Ismail 2010 Impulsive fractional differential equations with state-dependent delay. Zbl 1203.26007 Benchohra, Mouffak; Berhoun, Farida 2010 Existence of solutions for fractional semilinear evolution boundary value problem. Zbl 1218.34004 Anguraj, A.; Karthikeyan, P. 2010 Monotone method for periodic boundary value problems of Caputo fractional differential equations. Zbl 1208.34009 McRae, F. A. 2010 ...and 306 more Documents all top 5 ### Cited by 2,386 Authors 70 Agarwal, Ravi P. 46 O’Regan, Donal 40 Benchohra, Mouffak 38 Wong, Patricia J. Y. 28 Goodrich, Christopher S. 27 Henderson, Johnny Lee 19 Ahmad, Bashir 19 Graef, John R. 18 Ge, Weigao 18 Khristova, Snezhana G. 18 Nieto Roig, Juan Jose 17 Ntouyas, Sotiris K. 
16 Abbas, Said 12 Bai, Zhanbing 12 Băleanu, Dumitru I. 12 Cabada, Alberto 12 Pata, Vittorino 12 Remili, Moussadek 11 Sadarangani, Kishin B. 10 Banaś, Józef 10 Dell’Oro, Filippo 10 Jonnalagadda, Jagan Mohan 10 Liu, Yuji 10 Maugeri, Antonino 10 Qiu, Yangcong 9 Bohner, Martin J. 9 Giuffrè, Sofia 9 Kafini, Mohammad Mustafa 9 Kong, Lingju 9 N’Guérékata, Gaston Mandata 9 Stamova, Ivanka Milkova 9 Tang, Xianhua 8 Al-saedi, Ahmed Eid Salem 8 Anderson, Douglas Robert 8 Barbagallo, Annamaria 8 Darwish, Mohamed Abdalla 8 Di Fazio, Giuseppe 8 Eloe, Paul W. 8 Grace, Said Rezk 8 Infante, Gennaro 8 Li, Tongxing 8 Ma, Ruyun 8 Philos, Christos G. 8 Sun, Wenchang 8 Zhou, Zhan 7 Bouriah, Soufyane 7 Caraballo Garrido, Tomás 7 Daniele, Patrizia 7 Davis, John M. 7 Fanciullo, Maria Stella 7 Jankowski, Tadeusz 7 Kloeden, Peter Eris 7 Malinowski, Marek T. 7 Ouahab, Abdelghani 7 Papageorgiou, Nikolaos S. 7 Purnaras, Ioannis K. 7 Rossi, Julio Daniel 7 Torres, Delfim Fernando Marado 7 Wang, Peiguang 7 Zhou, Xingwei 6 Agarwal, Praveen 6 Ardjouni, Abdelouaheb 6 Avery, Richard I. 6 Boussayoud, Ali 6 Calamai, Alessandro 6 Cao, Junfei 6 Costea, Nicuşor 6 Cuevas, Claudio 6 Dabas, Jaydev 6 Gautam, Ganga Ram 6 Goldstein, Jerome Arthur 6 Han, Zhenlai 6 Kuznetsov, Dmitriĭ Feliksovich 6 Liu, Xinzhi 6 Ma, To Fu 6 Matei, Andaluzia Cristina 6 Pietramala, Paolamaria 6 Shi, Haiping 6 Spadini, Marco 6 Sun, Ji Tao 6 Wang, Qiru 6 Yang, Liu 6 Zhang, Yuanbiao 6 Zhao, Leiga 5 Alves, Claudianor Oliveira 5 Anastassiou, George Angelos 5 Ao, Jijun 5 Assanova, Anar Turmaganbetkyzy 5 Averboukh, Yuriĭ Vladimirovich 5 Chen, Jianqing 5 Chen, Shaowei 5 Choi, Q-Heung 5 Chyan, Chuan Jen 5 Ding, Xiaoli 5 Došlá, Zuzana 5 Huang, Lirong 5 Jung, Tacksun 5 Kosmatov, Nickolai 5 Lakshmikantham, Vangipuram 5 Lara, Felipe ...and 2,286 more Authors all top 5 ### Cited in 357 Journals 161 Journal of Mathematical Analysis and Applications 143 Nonlinear Analysis. Theory, Methods & Applications. 
Series A: Theory and Methods 91 Computers & Mathematics with Applications 73 Applied Mathematics and Computation 68 Advances in Difference Equations 51 Journal of Computational and Applied Mathematics 46 Applied Mathematics Letters 43 Journal of Differential Equations 39 Abstract and Applied Analysis 38 Boundary Value Problems 31 Mathematical and Computer Modelling 25 Nonlinear Analysis. Real World Applications 22 Journal of Applied Mathematics and Computing 19 Journal of Optimization Theory and Applications 19 Fractional Calculus & Applied Analysis 19 Journal of Function Spaces 16 Mathematical Methods in the Applied Sciences 16 Journal of Inequalities and Applications 15 Mediterranean Journal of Mathematics 14 Rocky Mountain Journal of Mathematics 14 Chaos, Solitons and Fractals 13 Communications in Nonlinear Science and Numerical Simulation 13 Communications on Pure and Applied Analysis 13 Journal of Fixed Point Theory and Applications 12 Ukrainian Mathematical Journal 12 NoDEA. Nonlinear Differential Equations and Applications 12 Differential Equations and Dynamical Systems 11 Applicable Analysis 11 ZAMP. Zeitschrift für angewandte Mathematik und Physik 11 Dynamics of Continuous, Discrete & Impulsive Systems. Series A. Mathematical Analysis 11 Fractional Differential Calculus 10 Annali di Matematica Pura ed Applicata. Serie Quarta 10 Mathematische Nachrichten 10 Journal of Difference Equations and Applications 10 Journal of Nonlinear Science and Applications 9 Proceedings of the American Mathematical Society 9 Rendiconti del Circolo Matemàtico di Palermo. 
Serie II 9 Calculus of Variations and Partial Differential Equations 9 Advanced Nonlinear Studies 8 Results in Mathematics 8 Journal of Integral Equations and Applications 8 Discrete and Continuous Dynamical Systems 8 Discrete Dynamics in Nature and Society 7 Archive for Rational Mechanics and Analysis 7 Journal of Mathematical Physics 7 Applied Mathematics and Optimization 7 Journal of Functional Analysis 7 Applied Numerical Mathematics 7 Journal of Global Optimization 7 Journal of Contemporary Mathematical Analysis. Armenian Academy of Sciences 7 SIAM Journal on Mathematical Analysis 7 Journal of Mathematical Sciences (New York) 7 Filomat 7 Mathematical Problems in Engineering 7 European Series in Applied and Industrial Mathematics (ESAIM): Control, Optimization and Calculus of Variations 7 Nonlinear Dynamics 7 Discrete and Continuous Dynamical Systems. Series S 7 Differential Equations and Applications 7 International Journal of Differential Equations 7 Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A: Matemáticas. RACSAM 6 Quarterly of Applied Mathematics 6 Zeitschrift für Analysis und ihre Anwendungen 6 Annales de l’Institut Henri Poincaré. Analyse Non Linéaire 6 Optimization 6 Numerical Algorithms 6 Electronic Journal of Differential Equations (EJDE) 6 Turkish Journal of Mathematics 6 Opuscula Mathematica 6 Bulletin of the Malaysian Mathematical Sciences Society. Second Series 6 Communications in Mathematical Analysis 6 AIMS Mathematics 5 Journal of the Franklin Institute 5 Fuzzy Sets and Systems 5 SIAM Journal on Control and Optimization 5 Transactions of the American Mathematical Society 5 Acta Applicandae Mathematicae 5 Automation and Remote Control 5 The Journal of Analysis 5 Differential Equations 5 Journal of Evolution Equations 5 Dynamics of Continuous, Discrete & Impulsive Systems. Series B. Applications & Algorithms 5 Discrete and Continuous Dynamical Systems. Series B 5 Nonlinear Analysis. 
Hybrid Systems 5 Asian-European Journal of Mathematics 5 Set-Valued and Variational Analysis 5 Afrika Matematika 5 Journal of Applied Analysis and Computation 5 Open Mathematics 4 Archiv der Mathematik 4 Mathematics and Computers in Simulation 4 Numerical Functional Analysis and Optimization 4 Applied Mathematical Modelling 4 Computational and Applied Mathematics 4 Georgian Mathematical Journal 4 Positivity 4 Acta Mathematica Sinica. English Series 4 Qualitative Theory of Dynamical Systems 4 Differentsial’nye Uravneniya i Protsessy Upravleniya 4 Acta Mathematica Scientia. Series B. (English Edition) 4 Advances in Differential Equations and Control Processes ...and 257 more Journals all top 5 ### Cited in 52 Fields 841 Ordinary differential equations (34-XX) 586 Partial differential equations (35-XX) 372 Operator theory (47-XX) 216 Real functions (26-XX) 170 Difference and functional equations (39-XX) 146 Integral equations (45-XX) 144 Calculus of variations and optimal control; optimization (49-XX) 140 Numerical analysis (65-XX) 91 Systems theory; control (93-XX) 82 Mechanics of deformable solids (74-XX) 71 Probability theory and stochastic processes (60-XX) 57 Dynamical systems and ergodic theory (37-XX) 56 Biology and other natural sciences (92-XX) 49 Operations research, mathematical programming (90-XX) 44 Approximations and expansions (41-XX) 44 Global analysis, analysis on manifolds (58-XX) 42 Fluid mechanics (76-XX) 36 Functional analysis (46-XX) 35 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 32 Special functions (33-XX) 27 Statistical mechanics, structure of matter (82-XX) 20 Harmonic analysis on Euclidean spaces (42-XX) 20 Differential geometry (53-XX) 19 Information and communication theory, circuits (94-XX) 18 Optics, electromagnetic theory (78-XX) 16 Classical thermodynamics, heat transfer (80-XX) 16 Quantum theory (81-XX) 14 Abstract harmonic analysis (43-XX) 11 Topological groups, Lie groups (22-XX) 11 Functions of 
a complex variable (30-XX) 10 Measure and integration (28-XX) 10 Integral transforms, operational calculus (44-XX) 10 General topology (54-XX) 8 Combinatorics (05-XX) 8 Number theory (11-XX) 8 Sequences, series, summability (40-XX) 8 Mechanics of particles and systems (70-XX) 6 History and biography (01-XX) 5 Several complex variables and analytic spaces (32-XX) 5 Computer science (68-XX) 4 Algebraic topology (55-XX) 4 Statistics (62-XX) 3 General and overarching topics; collections (00-XX) 3 Linear and multilinear algebra; matrix theory (15-XX) 3 Astronomy and astrophysics (85-XX) 2 Convex and discrete geometry (52-XX) 2 Relativity and gravitational theory (83-XX) 1 Mathematical logic and foundations (03-XX) 1 Order, lattices, ordered algebraic structures (06-XX) 1 Commutative algebra (13-XX) 1 $$K$$-theory (19-XX) 1 Potential theory (31-XX)
# Continuity And Differentiability NCERT Solutions : Class 12 Maths

## NCERT Solutions for Class 12 Maths : Continuity And Differentiability

### NCERT Class 12 | CONTINUITY AND DIFFERENTIABILITY | Solved Examples | Question No. 29

Differentiate the following w.r.t. x: (i) e^(-x) (ii) sin(log x), x > 0 (iii) cos^(-1)(e^x) (iv) e^(cos x)

### NCERT Class 12 | CONTINUITY AND DIFFERENTIABILITY | Solved Examples | Question No. 20

Show that the function f defined by f(x) = |1 - x + |x||, where x is any real number, is a continuous function.

### NCERT Class 12 | CONTINUITY AND DIFFERENTIABILITY | Solved Examples | Question No. 21

Find the derivative of the function given by f(x) = sin(x^2).

Doubtnut, one of the best online education platforms, provides free, accurate and comprehensive NCERT Solutions of Maths for Class 12 Continuity And Differentiability, solved and reviewed by our maths experts as per the latest edition of the NCERT (CBSE) guidelines. We provide the solutions as video solutions in which the concepts are explained step by step along with the answers, which will help students while learning, preparing for exams and doing homework. These solutions will help students revise the complete syllabus and score more marks in examinations.

## The Topics of the Chapter Continuity And Differentiability are :

- EVALUATION OF EXPONENTIAL AND LOGARITHMIC LIMITS
- METHOD OF DIFFERENTIATION
- CONTINUITY
- DERIVATIVE OF SOME STANDARD FUNCTIONS
- DIFFERENTIATION OF ONE FUNCTION W.R.T. OTHER FUNCTION
- INTRODUCTION
- FUNCTION GIVEN IN PARAMETER
- PROPERTIES OF DETERMINANTS
- DIFFERENTIABILITY
- CONTINUITY OF FUNCTION
- TYPES OF DISCONTINUITY
- HIGHER ORDER DERIVATIVES
- DIFFERENTIATION OF DETERMINANTS
- MEAN VALUE THEOREMS
- ALGEBRA OF DIFFERENTIATION
- ALGEBRA OF CONTINUOUS FUNCTIONS
- PROPERTIES OF CONTINUOUS FUNCTIONS

The chapter contains these exercises along with solved examples, and all the exercises are solved in the videos. Select an exercise to view its solutions for the Chapter Continuity And Differentiability:

## NCERT Solutions Class 12 Maths Chapter Continuity And Differentiability Exercises:

We have covered all the exercises and also the solved examples in the videos. Along with the practice exercises, students should also work through the solved examples to clear the concepts of Continuity And Differentiability. If you have any doubt, you can watch the video solutions for the given questions, which explain the steps along with the answers.

# NCERT Class 12 Solutions Chapter 5 Continuity and Differentiability

To understand this chapter, students must have basic knowledge of calculus, i.e. of differentiation and integration, and must know the standard conversions and formulae. This chapter deals with the continuity of a function in different types of situations, the discontinuity of a function, theorems based on continuity, differentiability, and everything related to these ideas. Everything is organised as topics and subtopics in the NCERT mathematics textbook, which gives a student just the right amount to learn for academics and helps them score their best in examinations. There are a number of reasons why students should prefer NCERT over any other book for explanations of the topics in this chapter, such as detailed and thorough explanations of the subject matter and easy-to-comprehend language.
Students should always refer to the NCERT content and make it a point to be thorough with it before moving on to the questions. The best way to revise a number of concepts at once is to go through its questions, as NCERT questions are designed so that several concepts are covered in a single question, in the same way as they are asked in higher-level examinations like IIT-JEE Mains and Advanced. There are many books by different authors and publishers marketed with various claims and promises, but too much ambiguous content leads to an ineffective learning experience and confusion about the subject. So, for the solutions, one should know what to study and how much, and what better guide than the book recommended by teachers, who know the pressure a child goes through to perform well in academics. They too ask students to stick to NCERT till the end, as it is the best yet cheapest way to master the subject if one keeps practising with the NCERT book. It also helps a lot in the national-level examinations, which eventually decide a child's future. The textbook is filled with colours and graphics to draw the attention of students even in the higher standards, unlike other textbooks, which are all black and white. This book is really good for enhancing a child's skills, as it demonstrates how to solve the problems correctly while saving time, which is beneficial from the examination point of view and increases the child's confidence to keep going in their academics. Doubtnut, an online learning platform, provides detailed explanations of the questions related to any chapter or topic in a comprehensible manner. Our learning platform also provides video lectures by experienced teachers, who have complete and deep knowledge of the subject and the skills to impart it to students in the best way.
At Doubtnut, we make sure that every topic is covered and explained in the form of videos to make learning interactive and fun. In addition, we attend to every comment in the comment section to clear students' doubts and to learn what students find hard to understand. The next video is then made keeping all of this in mind, so that the maximum number of students are satisfied with the content, and the same is done on our app, which can be downloaded with just a click for delivery of top-notch content. So, to experience learning in a fun and easy way, students should visit our website or reach out to our app for the best content.

## Topics and Subtopics of NCERT Maths Class 12 Chapter 5 Continuity and Differentiability

The chapter on Continuity and Differentiability is divided into topics and subtopics on the basis of concepts: each concept is listed as a topic, and the details of that concept are listed as subtopics.

### Continuity of a Function

Intuitively, a function is continuous on its domain if its graph is a curve without breaks or jumps throughout the domain. Likewise, a function is continuous at a point of its domain if its graph has no break or jump in the immediate neighbourhood of that point.
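This intuition can be checked numerically. The sketch below is illustrative only (the helper names are my own, not from NCERT): it approximates the left- and right-hand limits by evaluating the function very close to the point, and compares them with the function value there. NCERT's Example 20 function f(x) = |1 - x + |x|| passes the check everywhere, while the floor function fails it at the integers.

```python
import math

def left_limit(f, a, h=1e-8):
    """Crude estimate of lim_{x -> a-} f(x): evaluate just to the left of a."""
    return f(a - h)

def right_limit(f, a, h=1e-8):
    """Crude estimate of lim_{x -> a+} f(x): evaluate just to the right of a."""
    return f(a + h)

def looks_continuous_at(f, a, tol=1e-6):
    """Numeric version of: left limit = right limit = f(a)."""
    return (abs(left_limit(f, a) - f(a)) < tol
            and abs(right_limit(f, a) - f(a)) < tol)

# f(x) = |1 - x + |x|| (NCERT Example 20) is continuous everywhere:
f = lambda x: abs(1 - x + abs(x))
print(looks_continuous_at(f, 0.0))   # True
print(looks_continuous_at(f, 1.0))   # True

# The floor function jumps at every integer, so the check fails there:
print(looks_continuous_at(math.floor, 1.0))  # False
```

A numeric check like this is only a sanity test, not a proof: the actual definition requires the limits to exist, which sampling near the point cannot establish rigorously.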
### Continuity at a point

A function f(x) is said to be continuous at a point x = a of its domain if

lim_(x→a) f(x) = f(a)

Thus, f(x) is continuous at x = a ⟺ lim_(x→a-) f(x) = lim_(x→a+) f(x) = f(a).

If f(x) is not continuous at a point x = a, then it is said to be discontinuous at x = a.

- If lim_(x→a-) f(x) = lim_(x→a+) f(x) ≠ f(a), then the discontinuity is known as a removable discontinuity, because f(x) can be made continuous by redefining it at the point x = a so that f(a) = lim_(x→a) f(x).
- If lim_(x→a-) f(x) ≠ lim_(x→a+) f(x), then f(x) is said to have a discontinuity of the first kind.
- A function f(x) is said to have a discontinuity of the second kind at x = a iff lim_(x→a-) f(x), or lim_(x→a+) f(x), or both do not exist.

A function f(x) is said to be left continuous, or continuous from the left, at x = a iff

1. lim_(x→a-) f(x) exists, and
2. lim_(x→a-) f(x) = f(a).

Right continuity is defined analogously with the right-hand limit. It follows from the above definitions that f(x) is continuous at x = a iff it is both left and right continuous at x = a.

A function f(x) fails to be continuous at x = a for any of the following reasons:

1. lim_(x→a) f(x) exists but is not equal to f(a).
2. lim_(x→a) f(x) does not exist.
3. f is not defined at x = a, that is, f(a) does not exist.

### Continuity in an interval

Let a function y = f(x) be defined on [a, b]. Then the function f(x) is said to be continuous at the left end x = a if

f(a) = lim_(x→a+) f(x)

and continuous at the right end x = b if

f(b) = lim_(x→b-) f(x)

### Discontinuity of a function

A function which is not continuous is said to be a discontinuous function; that is, there is a break of some kind in the graph of the function.

### Kinds of discontinuity

1. Removable discontinuity
2.
Non-removable discontinuity

#### Removable discontinuity

In this type of discontinuity, lim_(x→a) f(x) exists, but either it is not equal to f(a) or f(a) is not defined. It has two types:

- Missing point discontinuity: the function is not defined at the point, but its limit at that point exists. For example: for a function f(x), f(a) does not exist but lim_(x→a) f(x) exists.
- Isolated point discontinuity: the limit of the function exists at the point and the function is also defined there, but the two are not equal. For example: lim_(x→a) f(x) exists but lim_(x→a) f(x) ≠ f(a).

#### Non-removable discontinuity

In this type of discontinuity, the limit does not exist, so it is not possible to redefine the function in any manner to make it continuous. Such functions are said to have a non-removable discontinuity, or a discontinuity of the second kind. Its classifications:

- Infinite discontinuity: one or both of the one-sided limits tend to infinity.
- Finite (jump) discontinuity: both one-sided limits exist but differ; the non-negative difference between the two limits is called the jump. A function having a finite number of jumps in a given interval is called sectionally continuous.
- Oscillatory discontinuity: lim_(x→a) f(x) does not exist, but f(x) oscillates between two finite quantities near a; such a function is said to have an oscillatory discontinuity.

### THEOREMS BASED ON CONTINUITY

THEOREM 1: The sum, difference, product and quotient of two continuous functions is again a continuous function (the quotient wherever the denominator is non-zero).

THEOREM 2: If f(x) is continuous and g(x) is discontinuous at x = a, then the product function φ(x) = f(x)·g(x) is not necessarily discontinuous at x = a.

THEOREM 3: If f(x) and g(x) are both discontinuous at x = a, then the product function φ(x) = f(x)·g(x) is not necessarily discontinuous at x = a.
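The difference between a removable and a jump discontinuity can be seen by estimating the one-sided limits numerically. This is a hedged sketch (the helper name is my own): sin(x)/x is undefined at 0, yet both one-sided limits equal 1, so defining the value at 0 to be 1 removes the discontinuity; the floor function, by contrast, jumps at x = 1, so no choice of value there can repair it.

```python
import math

def one_sided_limits(f, a, h=1e-8):
    """Crude numeric estimates of the left and right limits of f at a."""
    return f(a - h), f(a + h)

# Removable discontinuity: g(x) = sin(x)/x is undefined at 0,
# but both one-sided limits exist and agree (both tend to 1),
# so redefining g(0) = 1 makes the function continuous at 0.
g = lambda x: math.sin(x) / x
left, right = one_sided_limits(g, 0.0)
print(round(left, 6), round(right, 6))   # 1.0 1.0

# Jump (finite, non-removable) discontinuity: floor jumps by 1 at x = 1,
# so the one-sided limits disagree and no value of f(1) restores continuity.
left, right = one_sided_limits(math.floor, 1.0)
print(left, right)   # 0 1
```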
### Intermediate Value Theorem

If f(x) is continuous in [a, b] and f(a) ≠ f(b), then for any value c ∈ (f(a), f(b)), there is at least one number x₁ in (a, b) for which f(x₁) = c.

### DIFFERENTIABILITY

The instantaneous rate of change of a function with respect to the independent variable is called the derivative. For instance, let f(x) be a given function of one variable and let Δx denote a number (positive or negative) to be added to x. Let Δf denote the corresponding change in f; then

Δf = f(x + Δx) − f(x)

Δf/Δx = [f(x + Δx) − f(x)]/Δx

If Δf/Δx approaches a limit as Δx approaches zero, this limit is the derivative of f at the point x. The derivative of a function f is denoted by symbols such as f′(x), df/dx, or df(x)/dx:

df/dx = lim(Δx→0) Δf/Δx = lim(Δx→0) [f(x + Δx) − f(x)]/Δx

The derivative evaluated at a point a is written f′(a), (df(x)/dx)[x=a], (f′(x))[x=a], etc.

### RELATION BETWEEN CONTINUITY AND DIFFERENTIABILITY

If a function is differentiable at a point, then it is continuous at that point as well; a discontinuous function cannot be differentiable. This fact is proved in the following theorem:

THEOREM: If a function is differentiable at a point, it is necessarily continuous at that point. However, the converse is not necessarily true.

## Significance of NCERT Mathematics Book Class 12

1. The content of the NCERT Class 12 books is designed by experts and practicing teachers, so the student need not worry about what to study and what to leave out, as it has just the right amount of study material.
2. A major benefit of this book is its layout; everything is arranged systematically into topics and sub-topics, so that a student learns everything needed before proceeding to the next topic, for the finest understanding.
3. It has applications, exercises, proofs and a summary at the end, so that students become well versed in a topic if they complete it following the pattern of the book.
Moreover, Doubtnut helps students understand the concepts in a fun and easy manner by way of video tutorials and the use of graphics. Students develop an interest in the subject if they practice it with the help of the content available on our website. Therefore, we sincerely advise all students to go through our website, or download our app, for an easier understanding of the subject.
# If f(x)= 2x sin(x) cos(x), how do you find f'(x)?

May 15, 2015

Use trigonometry (the sine of 2x) to rewrite the function first:

$f \left(x\right) = 2 x \sin x \cos x = x \left(2 \sin x \cos x\right) = x \sin \left(2 x\right)$

Now use the product rule together with the chain rule to get:

$f ' \left(x\right) = \left(1\right) \left(\sin \left(2 x\right)\right) + \left(x\right) \left(\cos \left(2 x\right) \cdot 2\right)$

Simplify to get:

$f ' \left(x\right) = \sin \left(2 x\right) + 2 x \cos \left(2 x\right)$

I know it doesn't look the same as the other answer. Use trigonometric identities to see that the answers are equivalent.
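To see the equivalence directly, differentiate the original form with the product rule and then apply the double-angle identities 2 sin x cos x = sin(2x) and cos²x − sin²x = cos(2x):

```latex
\begin{align*}
f'(x) &= \frac{d}{dx}\left[2x \sin x \cos x\right]
       = 2\sin x \cos x + 2x\left(\cos^2 x - \sin^2 x\right) \\
      &= \sin(2x) + 2x\cos(2x)
\end{align*}
```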
# Allen Hatcher

Allen Edward Hatcher (born October 23, 1944) is an American topologist.

## Biography

Hatcher received his Ph.D. under the supervision of Hans Samelson at Stanford University in 1971. He went on to become a professor at the University of California, Los Angeles. Since 1983 he has been a professor at Cornell University.

## Mathematical contributions

He has worked in geometric topology, both in high dimensions, relating pseudoisotopy to algebraic K-theory, and in low dimensions: surfaces and 3-manifolds, such as proving the Smale conjecture for the 3-sphere.

### 3-manifolds

Perhaps his most recognized results in 3-manifolds concern the classification of incompressible surfaces in certain 3-manifolds and their boundary slopes. William Floyd and Hatcher classified all the incompressible surfaces in punctured-torus bundles over the circle. William Thurston and Hatcher classified the incompressible surfaces in 2-bridge knot complements. As corollaries, this gave more examples of non-Haken, non-Seifert fibered, irreducible 3-manifolds and extended the techniques and line of investigation started in Thurston's Princeton lecture notes. Hatcher also showed that irreducible, boundary-irreducible 3-manifolds with toral boundary have at most "half" of all possible boundary slopes resulting from essential surfaces. In the case of one torus boundary, one can conclude that the number of slopes given by essential surfaces is finite.

Hatcher has made contributions to the so-called theory of essential laminations in 3-manifolds. He invented the notion of "end-incompressibility" and several of his students, such as Mark Brittenham, Charles Delman, and Rachel Roberts, have made important contributions to the theory.
### Surfaces

Hatcher and Thurston exhibited an algorithm to produce a presentation of the mapping class group of a closed, orientable surface. Their work relied on the notion of a cut system and moves that relate any two systems.

## Selected publications

### Papers

• Allen Hatcher and William Thurston, A presentation for the mapping class group of a closed orientable surface, Topology 19 (1980), no. 3, 221–237.
• Allen Hatcher, On the boundary curves of incompressible surfaces, Pacific Journal of Mathematics 99 (1982), no. 2, 373–377.
• William Floyd and Allen Hatcher, Incompressible surfaces in punctured-torus bundles, Topology and its Applications 13 (1982), no. 3, 263–282.
• Allen Hatcher and William Thurston, Incompressible surfaces in 2-bridge knot complements, Inventiones Mathematicae 79 (1985), no. 2, 225–246.
• Allen Hatcher, A proof of the Smale conjecture, $\mathrm{Diff}(S^3) \simeq \mathrm{O}(4)$, Annals of Mathematics (2) 117 (1983), no. 3, 553–607.

### Books

• Hatcher, Allen, Algebraic topology. Cambridge University Press, Cambridge, 2002. xii+544 pp. ISBN 0-521-79160-X and ISBN 0-521-79540-0
# Infinite Square Well

1. May 1, 2010

### Slayer537

I've been working at this problem for about an hour and can't seem to make any progress. Any help would greatly be appreciated.

1. The problem statement, all variables and given/known data

Estimate the ground state energy level of a proton in the Al nucleus, which has a potential energy of 100 MeV. Compare your answer to that calculated from the infinite square well model. The radius of the Al nucleus is 5 fm.

2. The attempt at a solution

I thought that for the first part of the question this equation should be used:

En = n²h²/(8mL²)

However, I was getting nowhere close to the answer of 1.72 MeV. For the second part I figured that it would involve Schrödinger's equation and this equation:

ψ = (2/L)^(1/2) sin(nπx/L)

Oddly enough, using the first equation with the diameter instead of the radius I got the right answer for the second part of the question, 2.05 MeV; however, I don't think that I solved it correctly.

2. May 1, 2010

### nickjer

The first part isn't an infinite well. The second part is an infinite well; the first equation you listed gives the energy levels for an infinite well, which is why it worked. Also, "L" is the width of the well, which is the diameter and not the radius (that is why you got the right answer using the diameter). The first part sounds like you will be using a finite potential well, unless you are learning some other method like the shell model.

3. May 2, 2010

### Slayer537

Thanks, that explains the second part of the question. Still can't figure out how to do the first part. We have done the finite potential well, but not the shell model. I looked up the equations in my book and think that I should use Schrödinger's time-independent equation:

-(ħ²/2m)(d²/dx²)Ψ(x) + U(x)Ψ(x) = EΨ(x)

where I would solve for E. Could I then use this for Ψ(x):

Ψ(x) = (2/L)^(1/2) sin(πx/L)?

If so, what value would I use for x, or am I still missing something?

4. May 2, 2010

### Slayer537

Never mind. I just figured it out.
First solve for δ:

δ = ħ/(2mU)^(1/2)

Then use δ to solve for the energy, making sure to use the diameter, not the radius:

E = π²ħ²/(2mL²) ---> E = 1.72 MeV
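A numeric check of both results, using ħc ≈ 197.33 MeV·fm and mₚc² ≈ 938.27 MeV, and assuming the finite-well estimate widens the effective well by the penetration depth δ on each side (L = D + 2δ, with D = 10 fm the diameter):

```latex
\delta = \frac{\hbar c}{\sqrt{2\,(m_p c^2)\,U}}
       = \frac{197.33}{\sqrt{2 \cdot 938.27 \cdot 100}} \approx 0.46\ \mathrm{fm}
\qquad
E_\infty = \frac{\pi^2 (\hbar c)^2}{2\,(m_p c^2)\,D^2} \approx 2.05\ \mathrm{MeV}
\qquad
E \approx \frac{\pi^2 (\hbar c)^2}{2\,(m_p c^2)\,(D + 2\delta)^2} \approx 1.72\ \mathrm{MeV}
```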
# Autocorrelation Autocorrelation refers to the degree of correlation between the values of the same variables across different observations in the data.  The concept of autocorrelation is most often discussed in the context of time series data in which observations occur at different points in time (e.g., air temperature measured on different days of the month).  For example, one might expect the air temperature on the 1st day of the month to be more similar to the temperature on the 2nd day compared to the 31st day.  If the temperature values that occurred closer together in time are, in fact, more similar than the temperature values that occurred farther apart in time, the data would be autocorrelated. However, autocorrelation can also occur in cross-sectional data when the observations are related in some other way.  In a survey, for instance, one might expect people from nearby geographic locations to provide more similar answers to each other than people who are more geographically distant.  Similarly, students from the same class might perform more similarly to each other than students from different classes.  Thus, autocorrelation can occur if observations are dependent in aspects other than time.  Autocorrelation can cause problems in conventional analyses (such as ordinary least squares regression) that assume independence of observations. In a regression analysis, autocorrelation of the regression residuals can also occur if the model is incorrectly specified.  For example, if you are attempting to model a simple linear relationship but the observed relationship is non-linear (i.e., it follows a curved or U-shaped function), then the residuals will be autocorrelated. How to Detect Autocorrelation A common method of testing for autocorrelation is the Durbin-Watson test.  Statistical software such as SPSS may include the option of running the Durbin-Watson test when conducting a regression analysis.  
The Durbin-Watson test produces a test statistic that ranges from 0 to 4.  Values close to 2 (the middle of the range) suggest less autocorrelation, and values closer to 0 or 4 indicate greater positive or negative autocorrelation, respectively.
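Computed from its definition, the statistic is d = Σₜ(eₜ − eₜ₋₁)² / Σₜeₜ², where the eₜ are the regression residuals. A minimal self-contained sketch (class and method names here are illustrative, not from any statistics package):

```java
public class DurbinWatson {
    // Durbin-Watson statistic on a series of regression residuals:
    //   d = sum_{t=2..n} (e_t - e_{t-1})^2  /  sum_{t=1..n} e_t^2
    // d near 2: little autocorrelation; near 0: positive; near 4: negative.
    static double durbinWatson(double[] residuals) {
        double sumSquaredDiffs = 0.0;
        double sumSquares = 0.0;
        for (int t = 0; t < residuals.length; t++) {
            sumSquares += residuals[t] * residuals[t];
            if (t > 0) {
                double diff = residuals[t] - residuals[t - 1];
                sumSquaredDiffs += diff * diff;
            }
        }
        return sumSquaredDiffs / sumSquares;
    }

    public static void main(String[] args) {
        // Perfectly alternating residuals are strongly negatively autocorrelated:
        System.out.println(durbinWatson(new double[] { 1, -1, 1, -1, 1, -1 }));
    }
}
```

For the alternating series 1, −1, 1, −1, 1, −1 the statistic is 20/6 ≈ 3.33, correctly signalling strong negative autocorrelation; an uncorrelated series gives a value near 2.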
# Trying to find limit involving continuity concept 1. Jan 27, 2013 ### Torshi 1. The problem statement, all variables and given/known data Find c such that the function f(x) { x^2-9 while x≤ c and 6x-18 x > c } is continuous everywhere. 2. Relevant equations Given above. Basic algebra. 3. The attempt at a solution I made a number line. Showing that x^2-9 is approaching from the left side and 6x-18 is approaching from the right side given the designation of the inequality. If there were numbers, I could easily do it, but the "c" is throwing me off. An attempt to a similar problem with numbers would simply be chug and plug and knowing which side it comes from left or right, and if there was a variable "a" in there as well you can substitute. I have no idea how to start the problem, would it become c^2-9 and 6c-18? and try to solve for c? 2. Jan 27, 2013 ### SithsNGiggles $f(x)=\begin{cases} x^2-9 & \mbox{if } x \leq c\\ 6x-18 & \mbox{if } x>c\end{cases}$ For $f(x)$ to be continuous at $c$, you must satisfy $\displaystyle\lim_{x\to c^{+}} f(x) = \displaystyle\lim_{x\to c^{-}} f(x)$ That is, $\displaystyle\lim_{x\to c^{+}} x^2-9 = \displaystyle\lim_{x\to c^{-}} 6x-18$ That's basically what (it seems) you've done already, but this is the reasoning behind the step you've taken. 3. Jan 27, 2013 ### SammyS Staff Emeritus IMO: It's more important to know why a method works than to know a method without understanding it. So, here are some questions: What is necessary in order that f(x) be continuous at x = c ? What is $\displaystyle \lim_{x\to c^-}\,x^2-9\ ?$ What is $\displaystyle \lim_{x\to c^+}\,6x-18\ ?$ 4. Jan 27, 2013 ### Torshi But I don't know how to move on from there.. I end up getting something like x/x = √-3 5. Jan 27, 2013 ### Torshi do you have to set the two limits equal? 6. Jan 27, 2013 ### SammyS Staff Emeritus If you solve c2-9 =6c-18 for c, how do you get x/x = √-3 ? There is no x in c2-9 =6c-18 , and x/x = 1 . 7. 
Jan 27, 2013 ### SammyS Staff Emeritus Yes, which should be clear if you answer the question about the function being continuous at x = c . 8. Jan 27, 2013 ### Torshi I put x because the other poster. But, essentially wouldn't it be: c^2-9 = 6c-18 c^2=6c-9 c^2/c = -3 ..idk what to do ... if c was a number in the inequality then i could easily do this problem alright hold on.. 9. Jan 27, 2013 ### Torshi bump nvm got it. Since f is continuous everywhere, it is continuous at x = c. So the left-hand limit must equal the right-hand limit. Thus: c^2 - 9 = 6c - 18 c^2 - 6c + 9 = 0 (c - 3)^2 = 0 c = 3 thank you! Last edited: Jan 27, 2013 10. Jan 27, 2013 ### SithsNGiggles Yes, that's what I was suggesting.
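A quick verification that c = 3 works: both one-sided limits and the function value agree there, which is exactly the continuity condition discussed above.

```latex
\lim_{x \to 3^-} (x^2 - 9) = 9 - 9 = 0,
\qquad
\lim_{x \to 3^+} (6x - 18) = 18 - 18 = 0,
\qquad
f(3) = 3^2 - 9 = 0
```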
# Among the following which has the most polar character.

$(A)\;C-Cl \\ (B)\;C-Br \\(C)\;C-F \\(D)\;C-S$

Since F is the most electronegative element, the $C-F$ bond has the largest electronegativity difference and is therefore the most polar. Hence (C) is the correct answer.
## Introduction

This article describes how the factorial and Gamma functions for non-integer arguments were implemented for the big-math library. For an introduction to the Gamma function see Wikipedia: Gamma Function

## Attempt to use Euler’s definition as an infinite product

Euler’s infinite product definition is easy to implement, but I have some doubts about its usefulness for calculating the result with the desired precision.

public static BigDecimal factorialUsingEuler(BigDecimal x, int steps, MathContext mathContext) {
	MathContext mc = new MathContext(mathContext.getPrecision() * 2, mathContext.getRoundingMode());

	BigDecimal product = BigDecimal.ONE;
	for (int n = 1; n < steps; n++) {
		BigDecimal factor = BigDecimal.ONE.divide(BigDecimal.ONE.add(x.divide(BigDecimal.valueOf(n), mc), mc), mc).multiply(pow(BigDecimal.ONE.add(BigDecimal.ONE.divide(BigDecimal.valueOf(n), mc), mc), x, mc), mc);
		product = product.multiply(factor, mc);
	}

	return product.round(mathContext);
}

Running with an increasing number of steps shows that this approach will not work satisfactorily.

5! in 1 steps = 1
5! in 10 steps = 49.950049950049950050
5! in 100 steps = 108.73995188474609004
5! in 1000 steps = 118.80775820319167518
5! in 10000 steps = 119.88007795802040268
5! in 100000 steps = 119.98800077995800204
5! in 1000000 steps = 119.99880000779995800

## Using Spouge’s Approximation

After reading through several pages of related material I finally found a promising approach: Spouge’s approximation

where a is an arbitrary positive integer that can be used to control the precision and the coefficients are given by

Please note that the coefficients are constants that depend only on a and not on the input argument to factorial.

The relative error when omitting the epsilon part is bound by

It is nice to have a function that defines the error; normally I need to empirically determine the error for a sensible range of input arguments and precisions.
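Since the formula images are not reproduced here, it helps to state Spouge's approximation explicitly: x! ≈ (x+a)^(x+1/2) · e^(−x−a) · (c₀ + Σ_{k=1}^{a−1} c_k/(x+k)), with c₀ = √(2π) and c_k = ((−1)^(k−1)/(k−1)!) · (a−k)^(k−1/2) · e^(a−k). A quick double-precision sanity check of this formula (my own sketch, not the big-math implementation):

```java
public class SpougeDemo {
    // Spouge's approximation in double precision:
    //   x! ~ (x+a)^(x+1/2) * e^(-x-a) * (c0 + sum_{k=1}^{a-1} c_k/(x+k))
    //   c0 = sqrt(2*pi),  c_k = ((-1)^(k-1)/(k-1)!) * (a-k)^(k-1/2) * e^(a-k)
    static double factorialSpouge(double x, int a) {
        double sum = Math.sqrt(2 * Math.PI); // c0
        double sign = 1.0;
        double kMinus1Factorial = 1.0;       // running value of (k-1)!
        for (int k = 1; k < a; k++) {
            double ck = sign * Math.pow(a - k, k - 0.5) * Math.exp(a - k) / kMinus1Factorial;
            sum += ck / (x + k);
            sign = -sign;
            kMinus1Factorial *= k;           // prepare (k-1)! for the next k
        }
        return Math.pow(x + a, x + 0.5) * Math.exp(-x - a) * sum;
    }

    public static void main(String[] args) {
        System.out.println(factorialSpouge(5, 10)); // close to 120
    }
}
```

With a = 10 the relative error bound a^(−1/2)·(2π)^(−a−1/2) is around 10⁻⁹, so the result agrees with 5! = 120 to several decimal places.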
### Expected error of Spouge’s Approximation

Let's implement the error formula and see how it behaves.

public static BigDecimal errorOfFactorialUsingSpouge(int a, MathContext mc) {
	return pow(BigDecimal.valueOf(a), BigDecimal.valueOf(-0.5), mc).multiply(pow(TWO.multiply(pi(mc), mc), BigDecimal.valueOf(-a-0.5), mc), mc);
}

Instead of plotting the error bounds directly, I determine the achievable precision using -log10(error). Using the relative error formula of Spouge’s approximation we see that the expected precision is pretty linear to the chosen value of a for the values [1..1000] (which are a sensible range for the precision the users of the function will use). This will make it easy to calculate a sensible value for a from the desired precision.

Note: While testing this I found a bug in log(new BigDecimal("6.8085176335035800378E-325")). Fixed it before it could run away.

### Caching Spouge’s coefficients (depending on precision)

The coefficients depend only on the value of a. We can cache the coefficients for every value of a that we need:

private static Map<Integer, List<BigDecimal>> spougeFactorialConstantsCache = new HashMap<>();

private static List<BigDecimal> getSpougeFactorialConstants(int a) {
	return spougeFactorialConstantsCache.computeIfAbsent(a, key -> {
		List<BigDecimal> constants = new ArrayList<>(a);
		MathContext mc = new MathContext(a * 15/10);

		BigDecimal c0 = sqrt(pi(mc).multiply(TWO, mc), mc);
		constants.add(c0);

		boolean negative = false;
		for (int k = 1; k < a; k++) {
			BigDecimal bigK = BigDecimal.valueOf(k);
			BigDecimal ck = pow(BigDecimal.valueOf(a-k), bigK.subtract(BigDecimal.valueOf(0.5), mc), mc);
			ck = ck.multiply(exp(BigDecimal.valueOf(a-k), mc), mc);
			ck = ck.divide(factorial(k - 1), mc);
			if (negative) {
				ck = ck.negate();
			}
			constants.add(ck);
			negative = !negative;
		}

		return constants;
	});
}

Calculating the coefficients becomes quite expensive with higher precision. This will need to be explained in the javadoc of the method.
### Spouge’s approximation with pre-calculated constants

Now that we have the coefficients for a specific value of a we can implement the factorial method:

public static BigDecimal factorialUsingSpougeCached(BigDecimal x, MathContext mathContext) {
	MathContext mc = new MathContext(mathContext.getPrecision() * 2, mathContext.getRoundingMode());

	int a = mathContext.getPrecision() * 13 / 10;
	List<BigDecimal> constants = getSpougeFactorialConstants(a);
	BigDecimal bigA = BigDecimal.valueOf(a);

	// the alternating signs are already baked into the constants
	BigDecimal factor = constants.get(0);
	for (int k = 1; k < a; k++) {
		BigDecimal bigK = BigDecimal.valueOf(k);
		factor = factor.add(constants.get(k).divide(x.add(bigK), mc), mc);
	}

	BigDecimal result = pow(x.add(bigA, mc), x.add(BigDecimal.valueOf(0.5), mc), mc);
	result = result.multiply(exp(x.negate().subtract(bigA, mc), mc), mc);
	result = result.multiply(factor, mc);

	return result.round(mathContext);
}

Let's first calculate the factorial function with constant precision over a range of input values. Looks like the argument x does not have much influence on the calculation time. More interesting is the influence that the precision has on the calculation time. The following chart was measured by calculating 5! over a range of precisions:

## Gamma function

The implementation of the Gamma function is trivial, now that we have a running factorial function.

public static BigDecimal gamma(BigDecimal x, MathContext mathContext) {
	return factorialUsingSpougeCached(x.subtract(ONE), mathContext);
}

## Polishing before adding it to BigDecimalMath

Before committing the new methods factorial() and gamma() to BigDecimalMath I need to do some polishing…

The access to the cache must be synchronized to avoid race conditions.

Most important is optimizing the calculation for the special cases of x being an integer value, which can be calculated much faster by calling BigDecimalMath.factorial(int).

Lots of unit tests of course! As usual Wolfram Alpha provides some nice reference values to prove that the calculations are correct (at least for the tested cases).
Writing javadoc also takes some time (and thought). You can check out the final version in github: BigComplexMath.java

## Release 2.0.0 of big-math library now supports complex numbers

The Easter weekend was the perfect time to polish and release version 2.0.0 of the big-math library.

The class BigComplex represents complex numbers in the form (a + bi). It follows the design of BigDecimal with some convenience improvements like overloaded operator methods.

• re
• im
• subtract(BigComplex)
• subtract(BigComplex, MathContext)
• subtract(BigDecimal)
• subtract(BigDecimal, MathContext)
• subtract(double)
• multiply(BigComplex)
• multiply(BigComplex, MathContext)
• multiply(BigDecimal)
• multiply(BigDecimal, MathContext)
• multiply(double)
• divide(BigComplex)
• divide(BigComplex, MathContext)
• divide(BigDecimal)
• divide(BigDecimal, MathContext)
• divide(double)
• reciprocal(MathContext)
• conjugate()
• negate()
• abs(MathContext)
• angle(MathContext)
• absSquare(MathContext)
• isReal()
• re()
• im()
• round(MathContext)
• hashCode()
• equals(Object)
• strictEquals(Object)
• toString()
• valueOf(BigDecimal)
• valueOf(double)
• valueOf(BigDecimal, BigDecimal)
• valueOf(double, double)
• valueOfPolar(BigDecimal, BigDecimal, MathContext)
• valueOfPolar(double, double, MathContext)

A big difference to BigDecimal is that BigComplex.equals() implements mathematical equality and not the strict technical equality. This was a difficult decision because it means that BigComplex behaves slightly differently than BigDecimal, but considering that the strange equality of BigDecimal is a major source of bugs we decided it was worth the slight inconsistency. If you need strict equality use BigComplex.strictEquals().

The class BigComplexMath is the equivalent of BigDecimalMath and contains mathematical functions in the complex domain.
• sin(BigComplex, MathContext)
• cos(BigComplex, MathContext)
• tan(BigComplex, MathContext)
• asin(BigComplex, MathContext)
• acos(BigComplex, MathContext)
• atan(BigComplex, MathContext)
• acot(BigComplex, MathContext)
• exp(BigComplex, MathContext)
• log(BigComplex, MathContext)
• pow(BigComplex, long, MathContext)
• pow(BigComplex, BigDecimal, MathContext)
• pow(BigComplex, BigComplex, MathContext)
• sqrt(BigComplex, MathContext)
• root(BigComplex, BigDecimal, MathContext)
• root(BigComplex, BigComplex, MathContext)

## Aurora Borealis (Northern Lights) in Tromsø

To celebrate my fiftieth birthday the whole family had a great vacation in Tromsø to see the northern lights.

The following videos were all taken using a wireless remote control with a programmable interval. The single shots were joined using ffmpeg.

#!/bin/sh
# $1 = framerate (for aurora timelapse use 1 to 4)
# $2 = start number of first image
# $3 = output file (with .mp4 extension)
ffmpeg -y -r "$1" -start_number "$2" -i IMG_%04d.JPG -s hd1080 -vf "framerate=fps=30:interp_start=0:interp_end=255:scene=100" -vcodec mpeg4 -q:v 1 "$3"

In a few cases the images were a bit underexposed and needed to be brightened. This was done with a simple shell script using the ImageMagick convert tool.

#!/bin/sh
mkdir modulate150
for i in *.JPG
do
	convert $i -modulate 150% modulate150/$i
done

## Adaptive precision in Newton’s Method

This describes a way to improve the performance of a BigDecimal based implementation of Newton’s Method by adapting the precision for every iteration to the maximum precision that is actually possible at this step. As showcase I have picked the implementation of Newton’s Method to calculate the natural logarithm of a BigDecimal value with a determined precision. The source code is available on github: big-math.
Here the mathematical formulation of the algorithm:

$$\require{AMSmath}$$

$$\displaystyle y_0 = \operatorname{Math.log}(x),$$

$$\displaystyle y_{i+1} = y_i + 2 \frac{x - e^{y_i} }{ x + e^{y_i}},$$

$$\displaystyle \ln{x} = \lim_{i \to \infty} y_i$$

Here a straightforward implementation:

private static final BigDecimal TWO = valueOf(2);

public static BigDecimal logUsingNewtonFixPrecision(BigDecimal x, MathContext mathContext) {
	if (x.signum() <= 0) {
		throw new ArithmeticException("Illegal log(x) for x <= 0: x = " + x);
	}

	MathContext mc = new MathContext(mathContext.getPrecision() + 4, mathContext.getRoundingMode());
	BigDecimal acceptableError = BigDecimal.ONE.movePointLeft(mathContext.getPrecision() + 1);

	BigDecimal result = BigDecimal.valueOf(Math.log(x.doubleValue()));
	BigDecimal step;

	do {
		BigDecimal expY = BigDecimalMath.exp(result, mc); // available on https://github.com/eobermuhlner/big-math
		step = TWO.multiply(x.subtract(expY, mc), mc).divide(x.add(expY, mc), mc);
		result = result.add(step, mc); // apply the Newton step
	} while (step.abs().compareTo(acceptableError) > 0);

	return result.round(mathContext);
}

The MathContext mc is created with a precision of 4 digits more than the output is expected to have. All calculations are done with this MathContext and therefore with the full precision. The result is correct but we can improve the performance significantly by adapting the precision for every iteration.

The initial approximation uses Math.log(x.doubleValue()) which has a precision of about 17 significant digits. We can expect that the precision triples with every iteration, so it does not make sense to calculate with a higher precision than necessary.

Here the same implementation with a temporary MathContext that is recreated with a different precision every iteration.
public static BigDecimal logUsingNewtonAdaptivePrecision(BigDecimal x, MathContext mathContext) {
	if (x.signum() <= 0) {
		throw new ArithmeticException("Illegal log(x) for x <= 0: x = " + x);
	}

	int maxPrecision = mathContext.getPrecision() + 4;
	BigDecimal acceptableError = BigDecimal.ONE.movePointLeft(mathContext.getPrecision() + 1);

	BigDecimal result = BigDecimal.valueOf(Math.log(x.doubleValue()));
	int adaptivePrecision = 17; // precision of the initial double approximation
	BigDecimal step = null;

	do {
		adaptivePrecision = adaptivePrecision * 3; // the precision roughly triples every iteration
		if (adaptivePrecision > maxPrecision) {
			adaptivePrecision = maxPrecision;
		}
		MathContext mc = new MathContext(adaptivePrecision, mathContext.getRoundingMode());

		BigDecimal expY = BigDecimalMath.exp(result, mc); // available on https://github.com/eobermuhlner/big-math
		step = TWO.multiply(x.subtract(expY, mc), mc).divide(x.add(expY, mc), mc);
		result = result.add(step, mc);
	} while (adaptivePrecision < maxPrecision || step.abs().compareTo(acceptableError) > 0);

	return result.round(mathContext);
}

The performance comparison between the two implementations is impressive. The following chart shows the time in nanoseconds it takes to calculate the log() of values of x in the range from 0 to 1 with a precision of 300 digits.

Here are some more charts to show the performance improvements of the adaptive precision technique applied to different approximative implementations:

This method can only be applied to approximative methods that improve the result with every iteration and discard the previous result, such as Newton’s Method. It does obviously not work on methods that accumulate the results of each iteration to calculate the final result, such as Taylor series which add the terms.

## BigDecimalMath

$$\require{AMSmath}$$ Java 8 is out and there are still no Math functions for BigDecimal. After playing around with some implementations to calculate Pi I decided to write some implementation of BigDecimalMath to fill this gap. The result of this is available on github: big-math.
The goal was to provide the following functions:

• exp(x)
• log(x)
• pow(x, y)
• sqrt(x)
• root(n, x)
• sin(x), cos(x), tan(x), cot(x)
• asin(x), acos(x), atan(x), acot(x)
• sinh(x), cosh(x), tanh(x)
• asinh(x), acosh(x), atanh(x)

The calculations must be accurate to the desired precision (specified in the MathContext) and the performance should be acceptable and stable for a large range of input values.

## Implementation Details

### Implementation exp(x)

To implement exp() the classical Taylor series was used:

$$\displaystyle e^x = \sum^{\infty}_{n=0} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots$$

### Implementation log()

Note that in Java the function name log() means the natural logarithm, which in mathematical notation is written $$\ln{x}$$. The implementation of log() is based on Newton’s method. We can use the double version Math.log() to give us a good initial value.

$$\displaystyle y_0 = \operatorname{Math.log}(x),$$

$$\displaystyle y_{i+1} = y_i + 2 \frac{x - e^{y_i} }{ x + e^{y_i}},$$

$$\displaystyle \ln{x} = \lim_{i \to \infty} y_i$$

Several optimizations in the implementation transform the argument of log(x) so that it will be nearer to the optimum of 1.0 to converge faster.
\begin{align}
\displaystyle \ln{x} & = \ln{\left(a \cdot 10^b\right)} = \ln{a} + \ln{10} \cdot b & \qquad \text{for } x \leq 0.1 \text{ or } x \geq 10 \\
\displaystyle \ln{x} & = \ln{\left( 2 x \right)} - \ln{2} & \qquad \text{for } x \lt 0.115 \\
\displaystyle \ln{x} & = \ln{\left( 3 x \right)} - \ln{3} & \qquad \text{for } x \lt 0.14 \\
\displaystyle \ln{x} & = \ln{\left( 4 x \right)} - 2 \ln{2} & \qquad \text{for } x \lt 0.2 \\
\displaystyle \ln{x} & = \ln{\left( 6 x \right)} - \ln{2} - \ln{3} & \qquad \text{for } x \lt 0.3 \\
\displaystyle \ln{x} & = \ln{\left( 8 x \right)} - 3 \ln{2} & \qquad \text{for } x \lt 0.42 \\
\displaystyle \ln{x} & = \ln{\left( 9 x \right)} - 2 \ln{3} & \qquad \text{for } x \lt 0.7 \\
\displaystyle \ln{x} & = \ln{\left( \frac{1}{2} x \right)} + \ln{2} & \qquad \text{for } x \lt 2.5 \\
\displaystyle \ln{x} & = \ln{\left( \frac{1}{3} x \right)} + \ln{3} & \qquad \text{for } x \lt 3.5 \\
\displaystyle \ln{x} & = \ln{\left( \frac{1}{4} x \right)} + 2 \ln{2} & \qquad \text{for } x \lt 5.0 \\
\displaystyle \ln{x} & = \ln{\left( \frac{1}{6} x \right)} + \ln{2} + \ln{3} & \qquad \text{for } x \lt 7.0 \\
\displaystyle \ln{x} & = \ln{\left( \frac{1}{8} x \right)} + 3 \ln{2} & \qquad \text{for } x \lt 8.5 \\
\displaystyle \ln{x} & = \ln{\left( \frac{1}{9} x \right)} + 2 \ln{3} & \qquad \text{for } x \lt 10.0
\end{align}

The additional logarithmic functions to different common bases are simple:

$$\displaystyle \operatorname{log}_2{x} = \frac{\ln{x}}{\ln{2}}$$

$$\displaystyle \operatorname{log}_{10}{x} = \frac{\ln{x}}{\ln{10}}$$

Since the precalculated values for $$\ln{2}, \ln{3}, \ln{10}$$ with a precision of up to 1100 digits already exist for the optimizations mentioned above, the log2() and log10() functions could reuse them and are therefore reasonably fast.
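The exp() Taylor series described above can be sketched self-contained with plain java.math. This is a simplified stand-in for the big-math implementation (no argument reduction, a simple absolute error bound, and illustrative names):

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class TaylorExp {
    // exp(x) via the Taylor series 1 + x + x^2/2! + x^3/3! + ...,
    // adding terms until they fall below the target precision.
    // Works well for moderate |x|; larger arguments need reduction first.
    static BigDecimal exp(BigDecimal x, MathContext mathContext) {
        // calculate with a few guard digits, then round at the end
        MathContext mc = new MathContext(mathContext.getPrecision() + 6, mathContext.getRoundingMode());
        BigDecimal acceptableError = BigDecimal.ONE.movePointLeft(mc.getPrecision());

        BigDecimal term = BigDecimal.ONE; // current term x^n / n!
        BigDecimal sum = BigDecimal.ONE;
        int n = 1;
        while (term.abs().compareTo(acceptableError) > 0) {
            term = term.multiply(x, mc).divide(BigDecimal.valueOf(n), mc);
            sum = sum.add(term, mc);
            n++;
        }
        return sum.round(mathContext);
    }

    public static void main(String[] args) {
        // e to 30 significant digits:
        System.out.println(exp(BigDecimal.ONE, new MathContext(30)));
    }
}
```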
### Implementation pow(x)

The implementation of pow() with non-integer arguments is based on exp() and log():

$$\displaystyle x^y = e^{y \ln x}$$

If y is an integer argument then pow() is implemented with multiplications:

$$\displaystyle x^y = \prod_{i \to y} x$$

Actually the implementation is further optimized to reduce the number of multiplications by squaring the argument whenever possible.

### Implementation sqrt(x), root(n, x)

The implementation of sqrt() and root() uses Newton’s method to approximate the result until the necessary precision is reached. In the case of sqrt() we can use the double version Math.sqrt() to give us a good initial value.

$$\displaystyle y_0 = \operatorname{Math.sqrt}(x),$$

$$\displaystyle y_{i+1} = \frac{1}{2} \left(y_i + \frac{x}{y_i}\right),$$

$$\displaystyle \sqrt{x} = \lim_{i \to \infty} y_i$$

Unfortunately the root() function does not exist for double so we are forced to use a simpler initial value.

$$\displaystyle y_0 = \frac{1}{n},$$

$$\displaystyle y_{i+1} = \frac{1}{n} \left[{(n-1)y_i +\frac{x}{y_i^{n-1}}}\right],$$

$$\displaystyle \sqrt[n]{x} = \lim_{i \to \infty} y_i$$

### Implementation sin(x), cos(x), tan(x), cot(x)

The basic trigonometric functions were implemented using Taylor series, or, if this proved more efficient, by their relationship with already implemented functions:

$$\displaystyle \sin x = \sum^{\infty}_{n=0} \frac{(-1)^n}{(2n+1)!} x^{2n+1} = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots$$

$$\displaystyle \cos x = \sum^{\infty}_{n=0} \frac{(-1)^n}{(2n)!} x^{2n} = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots$$

$$\displaystyle \tan x = \frac{\sin x}{\cos x}$$

$$\displaystyle \cot x = \frac{\cos x}{\sin x}$$

### Implementation asin(x), acos(x), atan(x), acot(x)

The inverse trigonometric functions use a Taylor series for arcsin().

$$\displaystyle \arcsin x = \sum^{\infty}_{n=0} \frac{(2n)!}{4^n (n!)^2 (2n+1)} x^{2n+1}$$

This series takes very long to converge, especially when the argument x gets close to 1.
As an optimization, the argument x is transformed to a more efficient range using the following relationship.

$$\displaystyle \arcsin x = \arccos \sqrt{1-x^2} \qquad \text{for } x \gt \sqrt{\frac{1}{2}} \text{ (} \approx 0.707107 \text{)}$$

The remaining functions are implemented by their relationship with arcsin().

$$\displaystyle \arccos x = \frac{\pi}{2} - \arcsin x$$

$$\displaystyle \arctan x = \arcsin \frac{x}{\sqrt{1+x^2}}$$

$$\displaystyle \operatorname{arccot} x = \frac{\pi}{2} - \arctan x$$

### Implementation sinh(x), cosh(x), tanh(x)

Taylor series are efficient for most of the implementations of hyperbolic functions.

$$\displaystyle \sinh x= \sum_{n=0}^\infty \frac{x^{2n+1}}{(2n+1)!} = x + \frac{x^3}{3!} + \frac{x^5}{5!} + \frac{x^7}{7!} +\cdots$$

$$\displaystyle \cosh x = \sum_{n=0}^\infty \frac{x^{2n}}{(2n)!} = 1 + \frac{x^2}{2!} + \frac{x^4}{4!} + \frac{x^6}{6!} + \cdots$$

The Taylor series for tanh() converges very slowly, so we use the relationship with sinh() and cosh() instead.

$$\displaystyle \tanh x = \frac{\sinh x}{\cosh x}$$

### Implementation asinh(x), acosh(x), atanh(x)

The inverse hyperbolic functions can be expressed using the natural logarithm.

$$\displaystyle \operatorname{arsinh} x = \ln(x + \sqrt{x^2 + 1} )$$

$$\displaystyle \operatorname{arcosh} x = \ln(x + \sqrt{x^2-1} )$$

$$\displaystyle \operatorname{artanh} x = \frac12\ln\left(\frac{1+x}{1-x}\right)$$

## Performance calculating different precisions

Obviously it will take longer to calculate a function result with a higher precision than with a lower precision. The following charts show the time needed to calculate the functions with different precisions. The arguments of the functions were:

• log(3.1)
• exp(3.1)
• pow(123.456, 3.1)
• sqrt(3.1)
• root(2, 3.1)
• root(3, 3.1)
• sin(3.1)
• cos(3.1)

While the time to calculate the results grows worse than linearly for higher precisions, the speed is still reasonable for precisions of up to 1000 digits.
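The Newton iteration for sqrt() described above combines naturally with the adaptive-precision technique from the earlier post. A self-contained sketch with plain java.math (method and class names are mine, not the library's; it assumes x > 0):

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class NewtonSqrt {
    // Newton's method for sqrt(x): y <- (y + x/y) / 2, starting from the
    // ~16-digit double estimate. Because the iteration converges
    // quadratically, the working precision can be doubled every round,
    // capped at the target precision.
    static BigDecimal sqrt(BigDecimal x, MathContext mathContext) {
        int maxPrecision = mathContext.getPrecision() + 4; // a few guard digits
        BigDecimal two = BigDecimal.valueOf(2);

        BigDecimal y = BigDecimal.valueOf(Math.sqrt(x.doubleValue()));
        int adaptivePrecision = 17; // precision of the double start value
        do {
            adaptivePrecision = Math.min(adaptivePrecision * 2, maxPrecision);
            MathContext mc = new MathContext(adaptivePrecision, mathContext.getRoundingMode());
            y = y.add(x.divide(y, mc), mc).divide(two, mc);
        } while (adaptivePrecision < maxPrecision);

        return y.round(mathContext);
    }

    public static void main(String[] args) {
        // sqrt(2) to 50 significant digits:
        System.out.println(sqrt(BigDecimal.valueOf(2), new MathContext(50)));
    }
}
```

Only the last one or two iterations run at full precision; all earlier iterations are cheap, which is where the speedup of the adaptive technique comes from.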
## Performance calculating different values

The following charts show the time needed to calculate the functions over a range of values with a precision of 300 digits.

• log(x)
• exp(x)
• pow(123.456, x)
• sqrt(x)
• root(2, x)
• root(3, x)
• sin(x)
• cos(x)

The functions have been separated into a fast group (exp, sqrt, root, sin, cos) and a slow group (exp, log, pow). For comparison reasons the exp() function is contained in both groups.

### Range 0 to 2

The performance of the functions is in a reasonable range and is stable, especially when getting close to 0, where some functions might converge slowly. The functions exp(), sin(), cos() need to be watched at the higher values of x to prove that they do not continue to grow.

The chart shows nicely that log() is more efficient when x is close to 1.0. By using divisions and multiplications with the prime numbers 2 and 3, the log() function was optimized to exploit this fact for values of x that can be brought closer to 1.0. This gives the strange arches in the performance of log(). The pow() function performs fairly constantly, except for the powers of integer values, which are optimized specifically.

### Range 0 to 10

Shows that sin(), cos() have been optimized with the period of 2*Pi (roughly 6.28) so that they do not continue to grow with higher values. This optimization has some cost, which needs to be watched at higher values. exp() has become stable and no longer grows. log() is stable and shows the typical arches with optima at 1.0, 2.0 (divided by 2), 3.0 (divided by 3), 4.0 (divided by 2*2), 6.0 (divided by 2*3), 8.0 (divided by 2*2*2) and 9.0 (divided by 3*3). pow() remains stable.

### Range -10 to 10

Positive and negative values are symmetric for all functions that are defined for the negative range.

### Range 0 to 100

All functions are stable over this range. The pow() function makes the chart somewhat hard to read because of the optimized version for integer powers.
The log() function shows here the effect of another optimization using the exponential form. The range from 10 to 100 is brought down to the range 1 to 10 and the same divisions are applied. This has the effect of showing the same arches again in the range from 10 to 100.

## Bernoulli Numbers

As part of the ongoing development of the BigRational and BigDecimalMath classes I needed to implement a method to calculate the Bernoulli numbers. Since I had a hard time finding a reference list of the Bernoulli numbers, I will put the table of the first few calculated numbers here. For a larger list of Bernoulli numbers have a look at the bernoulli.csv file.

B0 = 1
B1 = -1/2
B2 = 1/6
B3 = 0
B4 = -1/30
B5 = 0
B6 = 1/42
B7 = 0
B8 = -1/30
B9 = 0
B10 = 5/66
B11 = 0
B12 = -691/2730
B13 = 0
B14 = 7/6
B15 = 0
B16 = -3617/510
B17 = 0
B18 = 43867/798
B19 = 0
B20 = -174611/330
B21 = 0
B22 = 854513/138
B23 = 0
B24 = -236364091/2730
B25 = 0
B26 = 8553103/6
B27 = 0
B28 = -23749461029/870
B29 = 0
B30 = 8615841276005/14322
B31 = 0
B32 = -7709321041217/510
B33 = 0
B34 = 2577687858367/6
B35 = 0
B36 = -26315271553053477373/1919190
B37 = 0
B38 = 2929993913841559/6
B39 = 0
B40 = -261082718496449122051/13530
B41 = 0
B42 = 1520097643918070802691/1806
B43 = 0
B44 = -27833269579301024235023/690
B45 = 0
B46 = 596451111593912163277961/282
B47 = 0
B48 = -5609403368997817686249127547/46410
B49 = 0
B50 = 495057205241079648212477525/66
B51 = 0
B52 = -801165718135489957347924991853/1590
B53 = 0
B54 = 29149963634884862421418123812691/798
B55 = 0
B56 = -2479392929313226753685415739663229/870
B57 = 0
B58 = 84483613348880041862046775994036021/354
B59 = 0
B60 = -1215233140483755572040304994079820246041491/56786730
B61 = 0
B62 = 12300585434086858541953039857403386151/6
B63 = 0
B64 = -106783830147866529886385444979142647942017/510
B65 = 0
B66 = 1472600022126335654051619428551932342241899101/64722
B67 = 0
B68 = -78773130858718728141909149208474606244347001/30
B69 = 0
B70 = 1505381347333367003803076567377857208511438160235/4686
B71 = 0
B72 = -5827954961669944110438277244641067365282488301844260429/140100870
B73 = 0
B74 = 34152417289221168014330073731472635186688307783087/6
B75 = 0
B76 = -24655088825935372707687196040585199904365267828865801/30
B77 = 0
B78 = 414846365575400828295179035549542073492199375372400483487/3318
B79 = 0

## Using GLSL to generate gas giant planets

To be completely honest, the code that is described in this blog is already more than a year old. I just wanted to catch up with the current state of my project. I will therefore try to write several blogs in the next couple of days describing what has been going on…

After my first experiments with earth-like planets I wanted to experiment with creating Jupiter-like gas giants in GLSL. The approach is still noise based, but instead of creating a height map we now want to create something that looks like turbulent clouds.

The following screenshots were created using the GLSL code below:

Note how the bands are distributed differently every time and that the turbulence varies from band to band as well as from planet to planet.

Again you will need the excellent noise function from noise2D.glsl.

#ifdef GL_ES
#define LOWP lowp
#define MED mediump
#define HIGH highp
precision highp float;
#else
#define MED
#define LOWP
#define HIGH
#endif

uniform float u_time;

uniform vec3 u_planetColor0;
uniform vec3 u_planetColor1;
uniform vec3 u_planetColor2;

uniform float u_random0;
uniform float u_random1;
uniform float u_random2;
uniform float u_random3;
uniform float u_random4;
uniform float u_random5;
uniform float u_random6;
uniform float u_random7;
uniform float u_random8;
uniform float u_random9;

varying vec2 v_texCoords0;

// INSERT HERE THE NOISE FUNCTIONS ...
float pnoise2(vec2 P, float period) {
	return pnoise(P*period, vec2(period, period));
}

float pnoise1(float x, float period) {
	return pnoise2(vec2(x, 0.0), period);
}

vec3 toColor(float value) {
	float r = clamp(-value, 0.0, 1.0);
	float g = clamp(value, 0.0, 1.0);
	float b = 0.0;
	return vec3(r, g, b);
}

float planetNoise(vec2 P) {
	vec2 rv1 = vec2(u_random0, u_random1);
	vec2 rv2 = vec2(u_random2, u_random3);
	vec2 rv3 = vec2(u_random4, u_random5);
	vec2 rv4 = vec2(u_random6, u_random7);
	vec2 rv5 = vec2(u_random8, u_random9);

	float r1 = u_random0 + u_random2;
	float r2 = u_random1 + u_random2;
	float r3 = u_random2 + u_random2;
	float r4 = u_random3 + u_random2;
	float r5 = u_random4 + u_random2;

	float noise = 0.0;
	noise += pnoise2(P+rv1, 10.0) * (0.2 + r1 * 0.4);
	noise += pnoise2(P+rv2, 50.0) * (0.2 + r2 * 0.4);
	noise += pnoise2(P+rv3, 100.0) * (0.3 + r3 * 0.2);
	noise += pnoise2(P+rv4, 200.0) * (0.05 + r4 * 0.1);
	noise += pnoise2(P+rv5, 500.0) * (0.02 + r5 * 0.15);
	return noise;
}

float jupiterNoise(vec2 texCoords) {
	float r1 = u_random0;
	float r2 = u_random1;
	float r3 = u_random2;
	float r4 = u_random3;
	float r5 = u_random4;
	float r6 = u_random5;
	float r7 = u_random6;

	float distEquator = abs(texCoords.t - 0.5) * 2.0;
	float noise = planetNoise(vec2(texCoords.x+distEquator*0.6, texCoords.y));

	float distPol = 1.0 - distEquator;
	float disturbance = 0.0;
	disturbance += pnoise1(distPol+r1, 3.0+r4*3.0) * 1.0;
	disturbance += pnoise1(distPol+r2, 9.0+r5*5.0) * 0.5;
	disturbance += pnoise1(distPol+r3, 20.0+r6*10.0) * 0.1;
	disturbance = disturbance*disturbance*2.0;

	float noiseFactor = r7 * 0.3;
	float noiseDistEquator = distEquator + noise * noiseFactor * disturbance;
	return noiseDistEquator;
}

float jupiterHeight(float noise) {
	return noise * 5.0;
}

vec3 planetColor(float distEquator) {
	float r1 = u_random0 + u_random3;
	float r2 = u_random1 + u_random3;
	float r3 = u_random2 + u_random3;
	float r4 = u_random3 + u_random3;
	float r5 = u_random4 + u_random3;
	float r6 = u_random5 + u_random3;
	float r7 = u_random6 + u_random3;
	float r8 = u_random7 + u_random3;

	vec3 color1 = u_planetColor0;
	vec3 color2 = u_planetColor1;
	vec3 color3 = u_planetColor2;

	float v1 = pnoise1(distEquator+r1, 2.0 + r4*15.0) * r7;
	float v2 = pnoise1(distEquator+r2, 2.0 + r5*15.0) * r8;
	vec3 mix1 = mix(color1, color2, v1);
	vec3 mix2 = mix(mix1, color3, v2);
	return mix2;
}

void main() {
	float noise = jupiterNoise(v_texCoords0);
	vec3 color = planetColor(noise);
	gl_FragColor.rgb = color;
}

The colors were picked from real images of the gas and ice giants in our solar system (Jupiter, Saturn, Uranus, Neptune). To produce more interesting results the colors are randomized by up to 10% before passing them to the shader. Every planet receives three random colors which are then randomly interpolated.

private static final Color[] JUPITER_COLORS = new Color[] {
	new Color(0.3333f, 0.2222f, 0.1111f, 1.0f),
	new Color(0.8555f, 0.8125f, 0.7422f, 1.0f),
	new Color(0.4588f, 0.4588f, 0.4297f, 1.0f),
	new Color(0.5859f, 0.3906f, 0.2734f, 1.0f),
};

private static final Color[] ICE_COLORS = new Color[] {
	new Color(0.6094f, 0.6563f, 0.7695f, 1.0f),
	new Color(0.5820f, 0.6406f, 0.6406f, 1.0f),
	new Color(0.2695f, 0.5234f, 0.9102f, 1.0f),
	new Color(0.3672f, 0.4609f, 0.7969f, 1.0f),
	new Color(0.7344f, 0.8594f, 0.9102f, 1.0f),
};

private static final Color[][] GAS_PLANET_COLORS = { JUPITER_COLORS, ICE_COLORS };

public Color[] randomGasPlanetColors() {
	return randomGasPlanetColors(GAS_PLANET_COLORS[random.nextInt(GAS_PLANET_COLORS.length)]);
}

public Color[] randomGasPlanetColors(Color[] colors) {
	return new Color[] {
		randomGasPlanetColor(colors),
		randomGasPlanetColor(colors),
		randomGasPlanetColor(colors)
	};
}

public Color randomGasPlanetColor (Color[] colors) {
	return randomDeviation(random, colors[random.nextInt(colors.length)]);
}

private Color randomDeviation (Random random, Color color) {
	return new Color(
		clamp(color.r * nextFloat(random, 0.9f, 1.1f), 0.0f, 1.0f),
		clamp(color.g * nextFloat(random, 0.9f,
		1.1f), 0.0f, 1.0f),
		clamp(color.b * nextFloat(random, 0.9f, 1.1f), 0.0f, 1.0f),
		1.0f);
}

## Using GLSL Shaders to generate Planets

My goal was to write a GLSL shader that would create an earth-like planet. There are lots of good introductions to GLSL shader programming. The following example is based on LibGDX, but it should be easily adapted to another framework.

First we need a little program so that we can experiment with the shader programs and see the results.

private ModelBatch modelBatch;
private PerspectiveCamera camera;
private CameraInputController cameraInputController;
private Environment environment;

private final ModelBuilder modelBuilder = new ModelBuilder();
private final Array instances = new Array();

private static final int SPHERE_DIVISIONS_U = 20;
private static final int SPHERE_DIVISIONS_V = 20;

@Override
public void create () {
	createTest(new UberShaderProvider("planet"), new Material(),
		Usage.Position | Usage.Normal | Usage.TextureCoordinates);

	camera = new PerspectiveCamera(67, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
	camera.position.set(10f, 10f, 10f);
	camera.lookAt(0, 0, 0);
	camera.near = 1f;
	camera.far = 300f;
	camera.update();

	cameraInputController = new CameraInputController(camera);
	Gdx.input.setInputProcessor(cameraInputController);

	environment = new Environment();
	environment.set(new ColorAttribute(ColorAttribute.AmbientLight, Color.DARK_GRAY));

	ModelInstance instance = new ModelInstance(model);
}

@Override
public void render () {
	Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
	Gdx.gl.glClearColor(0.0f, 0.0f, 0.8f, 1.0f);
	Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);

	cameraInputController.update();

	modelBatch.begin(camera);
	modelBatch.render(instances, environment);
	modelBatch.end();
}

Now we can simply replace the string argument to the UberShaderProvider to test a particular pair of vertex and fragment shader programs.

First we will need a simple
vertex shader that transforms the local vertex position into a global position and passes the texture coordinates on to the fragment shader.

attribute vec3 a_position;
attribute vec2 a_texCoord0;

uniform mat4 u_worldTrans;
uniform mat4 u_projViewTrans;

varying vec2 v_texCoords0;

void main() {
	v_texCoords0 = a_texCoord0;
	gl_Position = u_projViewTrans * u_worldTrans * vec4(a_position, 1.0);
}

Now we can write the fragment shader. Let’s start by calculating a color directly from the texture coordinates. Since you typically have no debuggers for shader programs, the easiest way to figure out what is going on is to visualize the intermediate steps as colors. With some experience you will be able to see the values just by looking at the rendered graphics.

#ifdef GL_ES
#define LOWP lowp
#define MED mediump
#define HIGH highp
precision mediump float;
#else
#define MED
#define LOWP
#define HIGH
#endif

varying MED vec2 v_texCoords0;

void main() {
	vec3 color = vec3(v_texCoords0.x, v_texCoords0.y, 0.0);
	gl_FragColor.rgb = color;
}

You can see that the x coordinate of the texture is mapped to the red color of each pixel. The y coordinate of the texture is mapped to the green color of each pixel.

## Convert noise into colors

The next step is to use a noise function that we will then use to create the pseudo-random oceans and continents. You can find an excellent noise function in the webgl-noise project. Copy and paste the source code from the file noise2D.glsl into your shader.

#ifdef GL_ES
#define LOWP lowp
#define MED mediump
#define HIGH highp
precision mediump float;
#else
#define MED
#define LOWP
#define HIGH
#endif

varying MED vec2 v_texCoords0;

// INSERT HERE THE NOISE FUNCTIONS ...
float pnoise2(vec2 P, float period) {
	return pnoise(P*period, vec2(period, period));
}

float earthNoise(vec2 P) {
	vec2 r1 = vec2(0.70, 0.82); // random numbers
	float noise = 0.0;
	noise += pnoise2(P+r1, 9.0);
	return noise;
}

void main() {
	float noise = earthNoise(v_texCoords0);
	gl_FragColor.rgb = vec3(noise);
}

Obviously the noise value 1.0 corresponds to white (= vec3(1.0, 1.0, 1.0)), while the noise value 0.0 corresponds to black (= vec3(0.0, 0.0, 0.0)).

By now you might be wondering why the black areas are so large – is the noise function faulty? Actually the noise function returns values in the range -1.0 to 1.0, but the conversion to RGB colors clamps all negative values to 0.0, hence the large black areas.

As an exercise to prove this (and as a tool to debug negative values in the future) let’s write a function that converts positive values (0.0 to 1.0) into green colors and negative values (-1.0 to 0.0) into red colors.

// lots of code omitted ...

vec3 toColor(float value) {
	float r = clamp(-value, 0.0, 1.0);
	float g = clamp(value, 0.0, 1.0);
	float b = 0.0;
	return vec3(r, g, b);
}

void main() {
	float noise = earthNoise(v_texCoords0);
	gl_FragColor.rgb = toColor(noise);
}

Hint: Try to avoid constructs using if, because the GPU doesn’t like branching. Instead of using if branching you should try to implement your functionality with the provided mathematical functions (clamp, mix, step, smoothstep, …). For a useful reference see: OpenGL ES Shading Language Built-In Functions

## Convert height into colors

We want to treat the result of the noise function as the height of the planet and map this height into the typical colors. The easiest way to implement a function from an input value to a color is to use a one-dimensional texture. The x-axis of the texture corresponds to the height of the planet.
Until about 0.45 we paint all the heights the same deep blue of the ocean, then a few pixels of turquoise for the coastal waters, various greens and yellows for the flora and deserts closer to the coast, then a large dark green block for the deep forest, finishing the whole with some grey mountains and a single white pixel for the snow at the top.

In the Java code that defines the material we now need to specify this texture.

createTest(new UberShaderProvider("planet_step3"),
	new Material(new TextureAttribute(TextureAttribute.Diffuse,
		new Texture("data/textures/planet_height_color.png"))),
	Usage.Position | Usage.Normal | Usage.TextureCoordinates);

// lots of code omitted ...

void main() {
	float noise = earthNoise(v_texCoords0);
	vec3 color = texture2D(u_diffuseTexture, vec2(clamp(noise, 0.0, 1.0), 0.0)).rgb;
	gl_FragColor.rgb = color;
}

We do a lookup with texture2D() using the noise value as the x-coordinate of the texture.

## Tweak the noise frequencies

Now it is time to make our continents a bit more convincing. We want a couple of big continents with coastal areas that vary from smooth like the coasts of south-western Africa to fragmented like the fjords of Norway.
After some experiments I liked the following result:

float earthNoise(vec2 P) {
	vec2 r1 = vec2(0.70, 0.82);
	vec2 r2 = vec2(0.81, 0.12);
	vec2 r3 = vec2(0.24, 0.96);
	vec2 r4 = vec2(0.39, 0.48);
	vec2 r5 = vec2(0.02, 0.25);
	vec2 r6 = vec2(0.77, 0.91);
	vec2 r7 = vec2(0.48, 0.05);
	vec2 r8 = vec2(0.82, 0.48);

	float noise = 0.0;
	// low-frequency noise clamped just slightly above ocean level - this produces the continental plates
	noise += clamp(pnoise2(P+r1, 3.0), 0.0, 0.45);
	// medium-frequency noise to produce the high mountain ranges (can be under and above water)
	noise += pnoise2(P+r2, 9.0) * 0.7;
	// medium-frequency noise for some hilly regions
	noise += pnoise2(P+r3, 14.0) * 0.2 + 0.1;
	// high-frequency noise - but not in all areas
	noise += smoothstep(0.0, 0.1, pnoise2(P+r4, 8.0)) * pnoise2(P+r5, 50.0) * 0.3;
	// very high-frequency noise - but not in all areas
	noise += smoothstep(0.0, 0.1, pnoise2(P+r6, 11.0)) * pnoise2(P+r7, 500.0) * 0.01;
	// very high-frequency noise - only in the highest mountains
	noise += smoothstep(0.8, 1.0, noise) * pnoise2(P+r8, 350.0) * -0.3;
	return noise;
}

The vectors r1 to r8 are random numbers so that not all our generated planets will look the same. Later we can turn them into uniforms and control them from the Java application.

If you want to understand how the different noise parts contribute to the total noise you can use the toColor() function to debug it visually. Comment out all the other noise components and feed the final result into toColor().
noise += clamp(pnoise2(P+r1, 3.0), 0.0, 0.45); // low-frequency noise clamped just slightly above ocean level - this produces the continental plates

noise += pnoise2(P+r2, 9.0) * 0.7; // medium-frequency noise to produce the high mountain ranges (can be under and above water)

noise += pnoise2(P+r3, 14.0) * 0.2 + 0.1; // medium-frequency noise for some hilly regions

noise += smoothstep(0.0, 0.1, pnoise2(P+r4, 8.0)) * pnoise2(P+r5, 50.0) * 0.3; // high-frequency noise - but not in all areas

The very high frequency with an amplitude of 0.01 is not visible within the color range of our toColor() function, and the last smoothstep() uses the calculated noise so that only the high mountain ranges receive the high-frequency noise. If you want to make these visible you need to play around with the functions.

As a last step, let’s have a look at the total output of the noise function using toColor().

## Convert latitude and height into colors

Our planet already looks reasonable, but if you look at a picture of earth you will immediately notice that the latitude also influences the color. High in the north and south we have the polar caps, and closer to the equator we see large desert areas. To implement this we can use a two-dimensional texture. As before, the x-axis encodes the height of the planet, while the y-axis corresponds to the distance to the equator. We see the desert and steppe close to the equator. In the medium latitudes forest becomes predominant before it is replaced by tundra and finally the ice cap at the pole.

Let’s do the two-dimensional lookup in this texture.

void main() {
	float noise = earthNoise(v_texCoords0);
	float distEquator = abs(v_texCoords0.t - 0.5) * 2.0;
	vec3 color = texture2D(u_diffuseTexture, vec2(clamp(noise, 0.0, 1.0), distEquator)).rgb;
	gl_FragColor.rgb = color;
}

By changing the calculation of the distance to the equator we can change the overall climate of the planet. Let’s have a look at how it looks during an ice age.
void main() {
	float distEquator = abs(v_texCoords0.t - 0.5) * 6.0;
}

Actually we can make the whole planet look much nicer by running a Gaussian blur over the texture, so that the colors of the different bio zones mix with each other. Please note that the coast is not blurred into the water.

Here are some more screenshots using the blurred texture.

## Memory Impact of Java Collections

Sometimes it is important to know how much memory you need in order to store the data in a Java collection. Here is a little overview of the memory overhead of the most important Java collections. In some cases I also added my own implementations to see how they shape up.

All collections were filled with 10000 Integer elements, and then the memory was measured by producing a heap dump and analyzing the heap dump with MemoryAnalyzer.

Executed in: Java(TM) SE Runtime Environment (build 1.6.0_23-b05) Java HotSpot(TM) 64-Bit Server VM (build 19.0-b09, mixed mode) on an Intel(R) Core(TM) i7 CPU M 620 2.67GHz (4 CPUs), ~2.7GHz

# Set

The data stored in the sets is always the same: 10000 instances of Integer:

| Class Name | Objects | Bytes |
| --- | --- | --- |
| java.util.Integer | 10,000 | 240,000 |
| Total | 10,000 | 240,000 |

## Memory Footprint

### java.util.HashSet

| Class Name | Objects | Bytes |
| --- | --- | --- |
| java.util.HashMap$Entry | 10,000 | 480,000 |
| java.util.HashMap$Entry[] | 1 | 131,096 |
| java.util.HashMap | 1 | 64 |
| java.util.HashSet | 1 | 24 |
| java.util.HashMap$KeySet | 1 | 24 |
| Total | 10,004 | 611,208 |

### java.util.TreeSet

| Class Name | Objects | Bytes |
| --- | --- | --- |
| java.util.TreeMap$Entry | 10,000 | 640,000 |
| java.util.TreeMap | 1 | 80 |
| java.util.TreeSet | 1 | 24 |
| Total | 10,002 | 640,104 |

### Collections.synchronizedSet(java.util.HashSet)

A synchronized HashSet is created by wrapping a HashSet with the Collections.synchronizedSet() method.
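For reference, the wrapping itself is a one-liner; note that iteration over the wrapped set still requires manual synchronization on the wrapper, as documented in java.util.Collections:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class SynchronizedSetExample {
    public static void main(String[] args) {
        // Every method call on the wrapper is synchronized on the wrapper itself.
        Set<Integer> set = Collections.synchronizedSet(new HashSet<Integer>());
        set.add(42);

        // Iteration is the exception: it must be guarded manually.
        synchronized (set) {
            for (Integer value : set) {
                System.out.println(value);
            }
        }
    }
}
```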
| Class Name | Objects | Bytes |
| --- | --- | --- |
| java.util.HashMap$Entry | 10,000 | 480,000 |
| java.util.HashMap$Entry[] | 1 | 131,096 |
| java.util.HashMap | 1 | 64 |
| java.util.Collections$SynchronizedSet | 1 | 32 |
| java.util.HashSet | 1 | 24 |
| Total | 10,004 | 611,208 |

### Collections.newSetFromMap(ConcurrentHashMap)

Java does not provide a ConcurrentHashSet out of the box, but you can create an equivalent by wrapping a ConcurrentHashMap with Collections.newSetFromMap().

| Class Name | Objects | Bytes |
| --- | --- | --- |
| java.util.concurrent.ConcurrentHashMap$HashEntry | 10,000 | 480,000 |
| java.util.concurrent.ConcurrentHashMap$HashEntry[] | 16 | 131,456 |
| java.util.concurrent.ConcurrentHashMap$Segment | 16 | 768 |
| java.util.concurrent.ConcurrentHashMap$NonfairSync | 16 | 768 |
| java.util.concurrent.ConcurrentHashMap$Segment[] | 1 | 152 |
| java.util.concurrent.ConcurrentHashMap | 1 | 72 |
| java.util.Collections$SetFromMap | 1 | 32 |
| java.util.concurrent.ConcurrentHashMap$KeySet | 1 | 24 |
| Total | 10,052 | 613,272 |

### ch.obermuhlner.collection.ArraySet

This is an experimental implementation of an array-based mutable Set that was designed to have a minimal memory footprint. Access with contains() is O(n).

| Class Name | Objects | Bytes |
| --- | --- | --- |
| java.lang.Object[] | 1 | 80,024 |
| ch.obermuhlner.collection.ArraySet | 1 | 24 |
| Total | 2 | 80,048 |

### ch.obermuhlner.collection.ImmutableArraySet

Similar to the ArraySet above, but immutable. Designed for minimal memory footprint. Access with contains() is O(n).

| Class Name | Objects | Bytes |
| --- | --- | --- |
| java.lang.Object[] | 1 | 80,024 |
| ch.obermuhlner.collection.ImmutableArraySet | 1 | 24 |
| Total | 2 | 80,048 |

### ch.obermuhlner.collection.ImmutableSortedArraySet

Another experimental implementation of an array-based immutable Set. The array is sorted by hash code and contains() uses a binary search, so access with contains() is O(log(n)). The ImmutableSortedArraySet has the option to store the hash codes of all elements in a separate int[], trading additional memory footprint for improved performance.
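The lookup could be sketched roughly like this. This is an illustrative sketch, not the actual ImmutableSortedArraySet code; it assumes the elements array is sorted by hashCode() and that hashes[i] == elements[i].hashCode():

```java
public class HashSortedLookup {
    // Binary search on the precomputed hash array, then an equals() scan
    // over the (usually tiny) run of elements sharing the same hash code.
    static boolean contains(Object[] elements, int[] hashes, Object key) {
        int h = key.hashCode();
        int lo = 0;
        int hi = elements.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            if (hashes[mid] < h) {
                lo = mid + 1;
            } else if (hashes[mid] > h) {
                hi = mid - 1;
            } else {
                // found the hash; scan neighbours with an equal hash code
                for (int i = mid; i >= 0 && hashes[i] == h; i--)
                    if (key.equals(elements[i])) return true;
                for (int i = mid + 1; i < elements.length && hashes[i] == h; i++)
                    if (key.equals(elements[i])) return true;
                return false;
            }
        }
        return false;
    }
}
```

Without the optional int[] the hash codes have to be recomputed during the search, which is why the extra 40,024 bytes in the table below buy a measurable speedup.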
| Class Name | Objects | Bytes |
| --- | --- | --- |
| java.lang.Object[] | 1 | 80,024 |
| int[] | 1 | 40,024 |
| ch.obermuhlner.collection.ImmutableSortedArraySet | 1 | 32 |
| Total | 3 | 120,080 |

## Performance

The performance of the different Sets was measured by running contains() with an existing random key against a set of a specific size. Measuring with up to 20 elements shows that ArraySet and ImmutableArraySet really are linear. With less than 10 elements they are actually faster than the O(log(n)) ImmutableSortedArraySet.

In the next chart we see the performance with up to 1000 elements. The linear Sets are no longer shown because they out-scale everything else.

## Benchmarking Scala

Microbenchmarking is controversial, as the following links show:

Nevertheless I did write a little micro-benchmarking framework in Scala so I could experiment with the Scala language and libraries. It allows running little code snippets such as:

object LoopExample extends ImageReport {
	def main(args: Array[String]) {
		lineChart("Loops", 0 to 2, 0 to 100000 by 10000, new FunBenchmarks[Int, Int] {
			prepare { count => count }
			run("for loop", "An empty for loop.") { count =>
				for (i <- 0 to count) { }
			}
			run("while loop", "A while loop that counts in a variable without returning a result.") { count =>
				var i = 0
				while (i < count) {
					i += 1
				}
			}
		})
	}
}

This produced the image used in the last blog Scala for (i <- 0 to n): nice but slow:

The framework also allows creating an HTML report containing multiple benchmark suites. The following thumbnails are from an example report showing a full run of the suite I wrote to benchmark some basic functionality of Scala (and Java). Let's have a look at some of the more interesting results.

Note: All micro-benchmarking results should always be interpreted with a very critical mindset. Many things can go wrong when measuring a single operation over and over again. It is very easy to screw up and get meaningless results.
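One classic way to get meaningless results is to time a code path before the JIT has compiled it. A quick Java illustration of the idea (the class and numbers here are purely illustrative; the exact timings are machine-dependent, only the trend matters):

```java
public class WarmupDemo {
    // Time one run of a simple summation loop, in nanoseconds.
    static long timeSum(int n) {
        long start = System.nanoTime();
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += i;
        }
        // Use the result so the JIT cannot eliminate the loop entirely.
        if (sum < 0) throw new AssertionError();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        // The first runs are typically much slower: interpretation plus
        // JIT compilation overhead. Serious frameworks discard them.
        for (int run = 0; run < 5; run++) {
            System.out.println("run " + run + ": " + timeSum(1_000_000) + " ns");
        }
    }
}
```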
If you want to analyze a particular benchmark in more detail, follow the details link at the end of the suite chapter. It will show a detailed statistical analysis of this particular benchmark suite.

## Loops

run("for loop", "An empty for loop.") { count =>
	for (i <- 0 to count) { }
}

run("for loop result", "A for loop that accumulates a result in a variable.") { count =>
	var result = 0
	for (i <- 0 to count) {
		result += i
	}
	result
}

run("while loop", "A while loop that counts in a variable without returning a result.") { count =>
	var i = 0
	while (i < count) {
		i += 1
	}
}

run("while loop result", "A while loop that accumulates a result in a variable.") { count =>
	var result = 0
	var i = 0
	while (i < count) {
		result += i
		i += 1
	}
	result
}

run("do while loop", "A do-while loop that counts in a variable without returning a result.") { count =>
	var i = 0
	do {
		i += 1
	} while (i <= count)
}

run("do while loop result", "A do-while loop that accumulates a result in a variable.") { count =>
	var result = 0
	var i = 0
	do {
		result += i
		i += 1
	} while (i <= count)
	result
}

As already discussed in another blog entry, loops in Scala perform surprisingly differently: The for (i <- 1 to count) loop is significantly slower than while (i < count).

## Arithmetic

Not really a surprise, but BigDecimal arithmetic is very slow compared to double arithmetic (on both charts the red line is the reference benchmark that executes in the same time).
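The gap is easy to reproduce outside the framework. A minimal Java comparison (the loop bodies mirror the benchmarked operations; the timing scaffolding is left out for brevity):

```java
import java.math.BigDecimal;

public class ArithmeticGap {
    public static void main(String[] args) {
        int n = 1_000_000;

        // Primitive double: essentially one hardware instruction per addition.
        double d = 0.0;
        for (int i = 0; i < n; i++) {
            d += 1.1;
        }

        // BigDecimal: every addition allocates a new immutable object and
        // works on arbitrary-precision integers internally.
        BigDecimal b = BigDecimal.ZERO;
        BigDecimal increment = new BigDecimal("1.1");
        for (int i = 0; i < n; i++) {
            b = b.add(increment);
        }

        // The trade-off: only BigDecimal gives the exact decimal result.
        System.out.println("double:     " + d);
        System.out.println("BigDecimal: " + b);
    }
}
```

Note the side effect visible in the output: the double sum drifts away from the exact value 1100000.0 because 1.1 is not representable in binary floating point, while the BigDecimal sum is exact.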
## Casts

I doubted the performance of the Scala way of casting a reference, so I compared it with the Java-like cast method:

var any: Any = "Hello"
var result: String = _

run("asInstanceOf", "Casts a value using asInstanceOf.") { count =>
	for (i <- 0 to count) {
		result = any.asInstanceOf[String]
	}
}

run("match case", "Casts a value using pattern matching with the type.") { count =>
	for (i <- 0 to count) {
		result = any match {
			case s: String => s
			case _ => throw new IllegalArgumentException
		}
	}
}

Happily they perform practically the same:

## Immutable Map

I am really fond of immutable maps; they are easy to reason about and perform very well.

run("contains true", "Checks that a map really contains a value.") { map =>
	map.contains(0)
}

run("contains false", "Checks that a map really does not contain a value.") { map =>
	map.contains(-999)
}

run("+", "Adds a new entry to a map.") { map =>
	map + (-1 -> "X-1")
}

run("-", "Removes an existing entry from a map.") { map =>
	map - 0
}

The immutable maps with size 0 to 4 are special classes that store the key/value pairs directly in dedicated fields (tested sequentially) - therefore we see linear behaviour there.
The strong peak when adding another key/value pair to an immutable map with a size of 4 is probably because it switches to the normal scala.collection.immutable.HashMap (and creates 4 tuples) after testing all keys:

// Implementation detail of scala.collection.immutable.Map4
override def updated [B1 >: B] (key: A, value: B1): Map[A, B1] =
	if (key == key1) new Map4(key1, value, key2, value2, key3, value3, key4, value4)
	else if (key == key2) new Map4(key1, value1, key2, value, key3, value3, key4, value4)
	else if (key == key3) new Map4(key1, value1, key2, value2, key3, value, key4, value4)
	else if (key == key4) new Map4(key1, value1, key2, value2, key3, value3, key4, value)
	else new HashMap + ((key1, value1), (key2, value2), (key3, value3), (key4, value4), (key, value))

def + [B1 >: B](kv: (A, B1)): Map[A, B1] = updated(kv._1, kv._2)

## HashMap

The standard Java java.util.HashMap is also interesting for Java programmers.

run("contains true", "Checks that a map really contains a value.") { map =>
	map.containsKey(0)
}

run("contains false", "Checks that a map really does not contain a value.") { map =>
	map.containsKey(-999)
}

run("size", "Calculates the size of a map.") { map =>
	map.size()
}

At first glance containsKey() and size() seem to be constant (as expected), but to be sure an additional benchmark with a larger n of up to 1000 was added:

Surprisingly, all measured methods grow slowly with increasing size (contrary to the expected constant-time behaviour) - this needed to be analyzed in detail. A look at the details of this suite shows that actually all measured elapsed times are either 0.00000 ms or 0.00038 ms, and the apparent increase with growing size is purely a statistical side effect. Obviously this benchmark is at the low end of the measurement accuracy.
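The 0.00038 ms quantum looks like timer granularity. A quick, machine-dependent way to probe the smallest observable step of System.nanoTime() (an illustrative sketch, not part of the benchmark framework):

```java
public class TimerGranularity {
    // Busy-wait until System.nanoTime() changes and return the step size.
    static long smallestTick() {
        long t0 = System.nanoTime();
        long t1;
        do {
            t1 = System.nanoTime();
        } while (t1 == t0);
        return t1 - t0;
    }

    public static void main(String[] args) {
        // Take the minimum over many probes to filter out scheduling noise.
        long min = Long.MAX_VALUE;
        for (int i = 0; i < 1000; i++) {
            min = Math.min(min, smallestTick());
        }
        System.out.println("smallest observed nanoTime step: " + min + " ns");
    }
}
```

Any single measurement shorter than this step is rounded to 0 or to one tick, which is exactly the two-value pattern seen in the details above.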
The measured write operations are:

run("put", "Adds a new entry to a map.") { map =>
	map.put(-1, "X-1")
}

run("remove", "Removes an existing entry from a map.") { map =>
	map.remove(0)
}

run("clear", "Removes all entries from a map.") { map =>
	map.clear()
}

put() and remove() are reasonably constant, albeit they grow very slowly with increasing size. I was very surprised to see that the clear() method is not constant time. A quick look at the implementation shows that it is linear in the size of the internal table (which grows with the number of entries):

// Implementation of java.util.HashMap.clear()
public void clear() {
	modCount++;
	Entry[] tab = table;
	for (int i = 0; i < tab.length; i++)
		tab[i] = null;
	size = 0;
}

## ConcurrentHashMap

The behaviour of java.util.concurrent.ConcurrentHashMap is very similar to HashMap (slightly slower).

## Pattern Matching

Pattern matching is a very nice and powerful feature of Scala. This benchmark tests only the simplest case of pattern matching and compares it with a comparable if-else cascade.

run("match 1", "Matches the 1st pattern with a literal integer value.") { seq =>
	for (value <- seq) {
		value match {
			case 1 => "one"
			case _ => "anything"
		}
	}
}

run("if 1", "Matches the 1st if in an if-else cascade with integer values.") { seq =>
	for (value <- seq) {
		if (value == 1) "one" else "anything"
	}
}

run("match 5", "Matches the 5th pattern with a literal integer value.") { seq =>
	for (value <- seq) {
		value match {
			case 1 => "one"
			case 2 => "two"
			case 3 => "three"
			case 4 => "four"
			case 5 => "five"
			case _ => "anything"
		}
	}
}

run("if 5", "Matches the 5th if in an if-else cascade with integer values.") { seq =>
	for (value <- seq) {
		if (value == 1) "one"
		else if (value == 2) "two"
		else if (value == 3) "three"
		else if (value == 4) "four"
		else if (value == 5) "five"
		else "anything"
	}
}

As you can see, simple pattern matching and the if-else cascade have comparable speed.
# Evidence for resonant structures in e+e−→π+π−hc

Chang-Zheng Yuan

Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China

July 15, 2019

###### Abstract

The cross sections of at center-of-mass energies from 3.90 to 4.42 GeV were measured by the BESIII and the CLEO-c experiments. Resonant structures are evident in the line shape; the fit to the line shape results in a narrow structure at a mass of and a width of  MeV, and a possible wide structure of mass and width  MeV. Here the errors are combined statistical and systematic errors. This may indicate that the state observed in has fine structure in it.

###### pacs: 14.40.Rt, 14.40.Pq, 13.66.Bc

The observation of the -states in the exclusive production of  babary (); belley (); babary_new (); belley_new () and  babar_pppsp (); belle_pppsp (); babar_pppsp_new () from the B-factories is a great puzzle in understanding the vector charmonium states epjc-review (). According to the potential models, there are 5 vector states above the well-known 1D state and below around 4.7 GeV/, namely, the 3S, 2D, 4S, 3D, and 5S states epjc-review (). However, experimentally, besides the three well-known structures observed in the inclusive hadronic cross section, i.e., the , , and  pdg (), there are four -states, i.e., the , , , and  babary (); belley (); babary_new (); belley_new (); babar_pppsp (); belle_pppsp (); babar_pppsp_new (). This suggests that at least some of these structures are not charmonium states, and various scenarios have thus arisen to interpret one or more of them epjc-review ().

The BESIII experiment bes3 () running near the open-charm threshold supplies further information for understanding the properties of these vector states. Among this information, the most relevant measurement is the study of  zc4020 ().
Besides the observation of a charged charmoniumlike state , BESIII reported the cross section measurement of  at 13 center-of-mass (CM) energies from 3.900 to 4.420 GeV zc4020 (). The measurements are listed in Table 1. In these studies, the $h_c$ is reconstructed via its electric-dipole (E1) transition $h_c\to\gamma\eta_c$, with the $\eta_c$ decaying to 16 exclusive hadronic final states: , , , , , , , , , , , , , , , and . The CLEO-c experiment performed a similar analysis, but with a significant signal only at the CM energy 4.17 GeV cleoc_pipihc (); the result is  pb, where the third error is from the uncertainty in . The cross sections are of the same order of magnitude as those of $e^+e^-\to\pi^+\pi^-J/\psi$ measured by BESIII zc3900 () and other experiments babary_new (); belley_new (), but with a different line shape (see Fig. 1). There is a broad structure at high energy with a possible local maximum at around 4.23 GeV.

We use the BESIII and the CLEO-c measurements to extract the resonant structures in $e^+e^-\to\pi^+\pi^-h_c$. As the systematic error ( ) of the BESIII experiment is common to all the data points, we use only the statistical errors in the fits below. The CLEO-c measurement is completely independent of the BESIII experiment, and all its errors added in quadrature ( pb) are taken as the total error and used in the fits. We use a least-$\chi^2$ method with footnote ()

$$\chi^{2}=\sum_{i=1}^{14}\frac{\left(\sigma^{\mathrm{meas}}_{i}-\sigma^{\mathrm{fit}}(m_{i})\right)^{2}}{\left(\Delta\sigma^{\mathrm{meas}}_{i}\right)^{2}},$$

where $\sigma^{\mathrm{meas}}_{i}$ is the experimental measurement and $\sigma^{\mathrm{fit}}(m_{i})$ is the cross section calculated from the model below with the parameters from the fit. Here $m_{i}$ is the CM energy corresponding to the $i$th energy point. As the line shape above 4.42 GeV is unknown, it is not clear whether the large cross section at high energy will decrease or not. We therefore fit the data under two different scenarios.
Assuming the cross section follows the three-body phase space and there is a narrow resonance at around 4.2 GeV, we fit the cross sections with the coherent sum of two amplitudes, a constant term and a constant-width relativistic Breit-Wigner (BW) function, i.e.,

$$\sigma(m)=\left|c\cdot\sqrt{PS(m)}+e^{i\phi}\,\mathrm{BW}(m)\cdot\sqrt{PS(m)/PS(M)}\right|^{2},$$

where $PS(m)$ is the 3-body phase space factor and $\mathrm{BW}(m)$ is the BW function for a vector state with mass , total width , electron partial width , and branching fraction  to ; keep in mind that from the fit we can only extract the product . The constant term $c$ and the relative phase $\phi$ between the two amplitudes are free parameters in the fit, together with the resonant parameters of the BW function.

The fit indicates the existence of a resonance (called the $Y(4220)$ hereafter) with a mass of  MeV/$c^2$ and a width of  MeV; the goodness of the fit is , corresponding to a confidence level of 27%. There are two solutions for the product , which are  eV and  eV. Here all the errors are from the fit only. Fitting the cross sections without the $Y(4220)$ results in a very bad fit, , corresponding to a confidence level of . The statistical significance of the $Y(4220)$ is calculated to be  by comparing the two $\chi^2$ values obtained above and taking into account the change in the number of degrees of freedom. Figure 2 (left panel) shows the final fit with the $Y(4220)$.

Assuming instead that the cross section decreases at high energy, we fit the cross sections with the coherent sum of two constant-width relativistic BW functions, i.e.,

$$\sigma(m)=\left|\mathrm{BW}_{1}(m)\cdot\sqrt{PS(m)/PS(M_{1})}+e^{i\phi}\,\mathrm{BW}_{2}(m)\cdot\sqrt{PS(m)/PS(M_{2})}\right|^{2},$$

where both $\mathrm{BW}_{1}(m)$ and $\mathrm{BW}_{2}(m)$ take the same form as above but with different resonant parameters. The fit indicates the existence of the $Y(4220)$ with a mass of  MeV/$c^2$ and a width of  MeV, as well as a broad resonance, the $Y(4290)$, with a mass of  MeV/$c^2$ and a width of  MeV. The goodness of the fit is , corresponding to a confidence level of 97%, an almost perfect fit. There are two solutions for the product , which are  eV and  eV. Again, the errors here are from the fit only.
Fitting the cross sections without the $Y(4290)$ results in a much worse fit, , corresponding to a confidence level of . The statistical significance of the $Y(4290)$ is calculated to be  by comparing the two $\chi^2$ values obtained above and taking into account the change in the number of degrees of freedom. Figure 2 (right panel) shows the final fit with the $Y(4220)$ and the $Y(4290)$.

From the two fits shown above, we conclude that very likely there is a narrow structure at around 4.22 GeV/$c^2$, although we are not sure whether there is a broad resonance at 4.29 GeV/$c^2$. We average the results of the fits to give the best estimates of the resonant parameters. For the $Y(4220)$, we obtain

$$M(Y(4220)) = (4216\pm 18)~\mathrm{MeV}/c^{2},$$
$$\Gamma_{\mathrm{tot}}(Y(4220)) = (39\pm 32)~\mathrm{MeV},$$
$$\Gamma^{Y(4220)}_{e^{+}e^{-}}\times\mathcal{B}[Y(4220)\to\pi^{+}\pi^{-}h_{c}] = (4.6\pm 4.6)~\mathrm{eV},$$

while for the $Y(4290)$, we obtain

$$M(Y(4290)) = (4293\pm 9)~\mathrm{MeV}/c^{2},$$
$$\Gamma_{\mathrm{tot}}(Y(4290)) = (222\pm 67)~\mathrm{MeV},$$
$$\Gamma^{Y(4290)}_{e^{+}e^{-}}\times\mathcal{B}[Y(4290)\to\pi^{+}\pi^{-}h_{c}] = (18\pm 8)~\mathrm{eV}.$$

Here the errors include both statistical and systematic errors. The results from the two solutions and the two fit scenarios are covered by the enlarged errors; the common systematic error of the cross section measurement is included in the error of the product $\Gamma_{e^{+}e^{-}}\times\mathcal{B}$.

It is noticed that the uncertainties of the resonant parameters are large. This is due to two important facts: one is the lack of data at CM energies above 4.42 GeV, which could discriminate between the two scenarios above; the other is the lack of high-precision measurements around the peak, especially between 4.23 and 4.26 GeV. The two-fold ambiguity in the fits is a natural consequence of the coherent sum of two amplitudes zhuk (); although high-precision data will not resolve this ambiguity, they will reduce the errors of the parameters obtained from the above fits. As the fit with a phase space amplitude predicts a rapidly increasing cross section at high energy, it is very unlikely to be correct, so the results from the fit with two resonances are more likely to be true.
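The fit models above can be prototyped in a few lines of Python. This is a hedged sketch only: the phase-space factor, parameter values, and pseudo-data below are illustrative placeholders rather than the BESIII/CLEO-c numbers, and a real analysis would minimize the $\chi^2$ with a proper fitter (e.g. MINUIT):

```python
import cmath
import math

def phase_space(m):
    # Toy stand-in for the 3-body phase space factor PS(m); the real factor
    # requires an integral over the pi+ pi- h_c phase space (assumption here).
    return max(m - 3.9, 0.0)

def breit_wigner(m, mass, width):
    # Constant-width relativistic Breit-Wigner amplitude (arbitrary norm).
    return mass * width / complex(m * m - mass * mass, mass * width)

def sigma(m, c, phi, mass, width):
    # Coherent sum of a phase-space term and one BW, as in the first scenario:
    # sigma(m) = | c*sqrt(PS(m)) + e^{i phi} BW(m)*sqrt(PS(m)/PS(M)) |^2
    bw = breit_wigner(m, mass, width) * math.sqrt(phase_space(m) / phase_space(mass))
    return abs(c * math.sqrt(phase_space(m)) + cmath.exp(1j * phi) * bw) ** 2

def chi2(data, params):
    # data: (m_i, sigma_meas_i, err_i) triples; least-chi^2 as in the text.
    return sum(((s - sigma(m, *params)) / e) ** 2 for m, s, e in data)

# Toy parameters and self-consistent pseudo-data (NOT the measured values).
params = (1.0, 0.5, 4.22, 0.04)
data = [(m, sigma(m, *params), 0.5) for m in (4.00, 4.09, 4.19, 4.23, 4.31, 4.40)]
```

The relative phase $\phi$ in the interference term is what produces the two-solution ambiguity mentioned in the text.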
More measurements from the BESIII experiment at CM energies above 4.42 GeV and more precise data around the peak will also be crucial to settle all these questions. There are thresholds of  zhaoq1 (),  zhenghq (); yuancz (), and  pdg () in this mass region; these make the identification of the nature of this structure very complicated. The fits described in this paper supply only one possibility for interpreting the data. In Ref. zhaoq2 (), the BESIII measurements zc4020 () were described with the presence of one relative S-wave molecular state and a non-resonant background term, while in Ref. voloshin (), the BESIII data zc4020 () were fitted with a model where the  and  are interpreted as mixtures of two hadrocharmonium states. It is worth pointing out that various QCD calculations indicate that the charmonium hybrid lies in the mass region of these two states ccg_lqcd (), and that the $c\bar{c}$ pair in such a hybrid tends to be in a spin-singlet state. Such a state may couple strongly to a spin-singlet charmonium state such as the $h_c$; this makes the  and/or  good candidates for charmonium-hybrid states.

In summary, we fit the $e^+e^-\to\pi^+\pi^-h_c$ cross sections measured by the BESIII and CLEO-c experiments; evidence for a narrow structure at around 4.22 GeV, as well as a wide one at 4.29 GeV, is observed. More high-precision measurements above 4.42 GeV and around 4.22 GeV are desired to better understand these structures.

This work was supported in part by the Ministry of Science and Technology of China under Contract No. 2009CB825203, and the National Natural Science Foundation of China (NSFC) under Contracts Nos. 10825524, 10935008, and 11235011.

## References

• (1) B. Aubert et al. (BaBar Collaboration), Phys. Rev. Lett. 95, 142001 (2005).
• (2) C. Z. Yuan et al. (Belle Collaboration), Phys. Rev. Lett. 99, 182004 (2007).
• (3) J. P. Lees et al. (BaBar Collaboration), Phys. Rev. D 86, 051102(R) (2012).
• (4) Z. Q. Liu et al. (Belle Collaboration), Phys. Rev. Lett. 110, 252002 (2013).
• (5) B. Aubert et al. (BaBar Collaboration),
Phys. Rev. Lett. 98, 212001 (2007).
• (6) X. L. Wang et al. (Belle Collaboration), Phys. Rev. Lett. 99, 142002 (2007).
• (7) B. Aubert et al. (BaBar Collaboration), arXiv:1211.6271.
• (8) For a recent review, see N. Brambilla et al., Eur. Phys. J. C 71, 1534 (2011).
• (9) J. Beringer et al. (Particle Data Group), Phys. Rev. D 86, 010001 (2012).
• (10) M. Ablikim et al. (BESIII Collaboration), Nucl. Instrum. Methods Phys. Res., Sect. A 614, 345 (2010).
• (11) M. Ablikim et al. (BESIII Collaboration), Phys. Rev. Lett. 111, 242001 (2013).
• (12) M. Ablikim et al. (BESIII Collaboration), Phys. Rev. Lett. 104, 132002 (2010).
• (13) T. K. Pedlar et al. (CLEO Collaboration), Phys. Rev. Lett. 107, 041803 (2011).
• (14) M. Ablikim et al. (BESIII Collaboration), Phys. Rev. Lett. 110, 252001 (2013).
• (15) For the three low-statistics energy points, the  is not well defined. We take the central values listed in Table 1 as nominal values, and vary the central values and statistical errors over a wide range to estimate the possible bias of this assumption. The bias is found to be small and is treated as a systematic error of the results.
• (16) K. Zhu, X. H. Mo, C. Z. Yuan and P. Wang, Int. J. Mod. Phys. A 26, 4511 (2011).
• (17) Q. Wang, C. Hanhart and Q. Zhao, Phys. Rev. Lett. 111, 132003 (2013).
• (18) L. Y. Dai, M. Shi, G.-Y. Tang and H. Q. Zheng, arXiv:1206.6911 [hep-ph].
• (19) C. Z. Yuan, P. Wang and X. H. Mo, Phys. Lett. B 634, 399 (2006).
• (20) M. Cleven, Q. Wang, F.-K. Guo, C. Hanhart, U.-G. Meißner and Q. Zhao, arXiv:1310.2190 [hep-ph].
• (21) X. Li and M. B. Voloshin, arXiv:1309.1681 [hep-ph].
• (22) T. Barnes, F. E. Close and E. S. Swanson, Phys. Rev. D 52, 5242 (1995); P. Guo, A. P. Szczepaniak, G. Galata, A. Vassallo and E. Santopinto, Phys. Rev. D 78, 056003 (2008); J. J. Dudek and E. Rrapaj, Phys. Rev. D 78, 094504 (2008).
The definite integral of f(x) from x = a to x = b, denoted ∫_a^b f(x) dx, is defined as the signed area between the graph of f(x) and the x-axis from the point x = a to the point x = b. More formally, it is the limit of integral (Riemann) sums as the diameter of the partitioning tends to zero, provided that limit exists independently of the partition and of the choice of points inside the elementary segments. Each part of the symbol makes sense: the integral sign can be read as a summation ("add up an infinite number of infinitely skinny rectangles of height f(x) and width dx, from x = a to x = b"), and the dx identifies the integration variable. Unlike an indefinite integral, a definite integral has both a start value and an end value.

Definite and indefinite integrals are tied together by the fundamental theorem of calculus: if f is continuous on [a, b] and F is its continuous antiderivative, then ∫_a^b f(x) dx = F(b) − F(a).

Two sign properties follow directly from the area interpretation. The definite integral of a non-negative function is always greater than or equal to zero: ∫_a^b f(x) dx ≥ 0 if f(x) ≥ 0 in [a, b]; likewise, the definite integral of a non-positive function is always less than or equal to zero. And the definite integral of an odd function over a symmetric interval vanishes: if f(x) = −f(−x), then ∫_{−a}^{a} f(x) dx = 0.

Substitution (u-substitution) identifies a section within the integrand as a new variable u that makes the integral easier, for example u = e^x in ∫ e^x/(e^x + e^{−x}) dx, or u = x² − 3 in ∫ x(x² − 3) dx. In a definite integral, since the variable is changed, the limits of integration must be changed as well; alternatively, integrate in u, back-substitute the original variable, and only then apply the original bounds. Other standard techniques, such as partial fractions (for example ∫_0^1 32/(x² − 64) dx) and trigonometric substitution, carry over to definite integrals in the same way.

A note on symbolic engines: for definite integrals, int restricts the integration variable to the specified integration interval, and if one or both integration bounds a and b are not numeric, it assumes a <= b unless you explicitly specify otherwise. Online calculators such as Symbolab and Wolfram|Alpha solve definite, indefinite, improper, and multiple integrals with the steps shown.
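The rule that the limits must change with the variable can be checked numerically. In this hedged sketch the integrand is illustrative: with u = sin x, the integral ∫_0^{π/2} sin x cos x dx becomes ∫_0^1 u du, and both equal 1/2.

```python
import math

def riemann(f, a, b, n=100_000):
    """Midpoint Riemann sum approximating the definite integral of f on [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Original integral in x: integrand sin(x)*cos(x) on [0, pi/2].
lhs = riemann(lambda x: math.sin(x) * math.cos(x), 0.0, math.pi / 2)

# After u = sin(x): du = cos(x) dx, and the bounds map to sin(0) = 0, sin(pi/2) = 1.
rhs = riemann(lambda u: u, 0.0, 1.0)
```

Forgetting to map the bounds (integrating u from 0 to π/2) gives a different, wrong number, which is exactly the mistake the rule guards against.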
Integration by parts is essentially the reverse of the product rule: it transforms the integral of a product of functions into an integral that is easier to compute. A few common antiderivatives worth memorizing: ∫ sin(x) dx = −cos(x) and ∫ cos(x) dx = sin(x).

The fundamental theorem of calculus also has a "derivative" form. Given a continuous f, define F(x) = ∫_a^x f(t) dt (note the upper limit is the variable x, and we integrate with respect to t); the first fundamental theorem states that F′(x) = f(x). For improper integrals, the evaluation form is written with limits: ∫_a^b f(x) dx = lim_{x→b−} F(x) − lim_{x→a+} F(x).

When the integrand is only known as a table of values rather than a formula, the definite integral can be approximated with the trapezoidal rule. Finally, when typing expressions into a calculator, be careful with grouping: e^3x is parsed as (e^3)·x while e^(3x) means e^{3x}, and 1/x^2 ln(x) is ambiguous without parentheses.
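A minimal trapezoidal-rule sketch for tabulated data (the sample points are illustrative, taken from f(x) = x² on [0, 2], where the exact answer is F(2) − F(0) = 8/3):

```python
def trapezoid(xs, ys):
    """Trapezoidal rule for a table of points (xs must be sorted ascending)."""
    return sum((xs[i + 1] - xs[i]) * (ys[i] + ys[i + 1]) / 2
               for i in range(len(xs) - 1))

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [x * x for x in xs]      # pretend these values came from a table
approx = trapezoid(xs, ys)    # 2.75
exact = 2.0 ** 3 / 3          # F(2) - F(0) with F(x) = x^3 / 3
```

For a convex integrand like x², each trapezoid lies above the curve, so the approximation overshoots slightly; more table points shrink the error.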
# Generalized Empirical Bayes Modeling via Frequentist Goodness of Fit

## Abstract

The two key issues of modern Bayesian statistics are: (i) establishing a principled approach for distilling a statistical prior that is consistent with the given data from an initial believable scientific prior; and (ii) development of a consolidated Bayes-frequentist data analysis workflow that is more effective than either of the two separately. In this paper, we propose the idea of “Bayes via goodness-of-fit” as a framework for exploring these fundamental questions, in a way that is general enough to embrace almost all of the familiar probability models. Several examples, spanning application areas such as clinical trials, metrology, insurance, medicine, and ecology, show the unique benefit of this new point of view as a practical data science tool.

## Introduction

Bayesians and frequentists have long been ambivalent toward each other1,2,3. The concept of “prior” remains the center of this 250-year-old tug-of-war: frequentists view the prior as a weakness that can hamper scientific objectivity and corrupt the final statistical inference, whereas Bayesians view it as a strength for incorporating relevant domain knowledge into the data analysis. The question naturally arises: how can we develop a consolidated Bayes-frequentist data analysis workflow4,5,6,7 that enjoys the best of both worlds? The objective of this paper is to develop one such modeling framework.

We observe samples y = (y1, …, yk) from a known probability distribution f(y|θ), where the unobserved parameters θ = (θ1, …, θk) are independent realizations from an unknown π(θ). Given such a model, Bayesian inference typically aims at answering the following two questions:

• MacroInference: How should we combine the k model parameters to come up with an overall, macro-level aggregated statistical behavior of θ1, …, θk?

• MicroInference: Given the observables yi, how should we simultaneously estimate the individual micro-level parameters θi?
Thanks to Bayes’ rule, answers to these questions are fairly straightforward and automatic once we have the observed data $${\{{y}_{i}\}}_{i=1}^{k}$$ and a specific choice for π(θ). A common practice is to choose π as the parametric conjugate prior g(θ; α, β), where the hyper-parameters are either selected based on an investigator’s expert input or estimated from the data (current/historical) when little prior information is available.

## Motivating Questions

However, an applied Bayesian statistician may find it unsatisfactory to work with an initial believable prior g(θ) at face value, without being able to interrogate its credibility in the light of the observed data8,9, as this choice unavoidably shapes his or her final inferences and decisions. Good statistical practice thus demands greater transparency to address this trust deficit. What is needed is a justifiable class of prior distributions to answer the following pre-inferential modeling questions: Why should I believe your prior? How to check its appropriateness (self-diagnosis)? How to quantify and characterize the uncertainty of the a priori selected g? Can we use that information to “refine” the starting prior (auto-correction), which is to be used for subsequent inference? In the end, the question remains: how can we develop a systematic and principled approach to go from a scientific prior to a statistical prior that is consistent with the current data? A resolution of these questions is necessary to develop a “dependable and defensible” Bayesian data analysis workflow, which is the goal of the “Bayes via goodness-of-fit” technology.

## Summary of Contributions

This paper provides some practical strategies for addressing these questions by introducing a general modeling framework, along with concrete guidelines for applied users.
The major practical advantages of our proposal are: (i) computational ease (it does not require Markov chain Monte Carlo (MCMC), variational methods, or any other sophisticated computational techniques); (ii) simplicity and interpretability of the underlying theoretical framework, which is general enough to include almost all commonly encountered models; and (iii) easy integration with mainstream Bayesian analysis, which makes it readily applicable to a wide range of problems.

The next section introduces a new class of nonparametric priors DS(G, m), along with its role in exploratory graphical diagnostics and uncertainty quantification. The estimation theory, algorithm, and real data examples are discussed in Section 2. Consequences for inference are discussed in Section 3, which include methods of combining heterogeneous studies and a generalized nonparametric Stein-prediction formula that selectively borrows strength from ‘similar’ experiments in an automated manner. Section 3.2 describes a new theory of ‘learning from uncertain data,’ an important problem in many application fields including metrology, physics, and chemistry. Section 3.4 solves a long-standing puzzle of modern empirical Bayes, originally posed by Herbert Robbins10. We conclude the paper with some final remarks in Section 4. Connections with other Bayesian cultures are presented in the supplementary material to ensure the smooth flow of the main ideas.

## Real-Data Applications

To demonstrate the versatility of the proposed “Bayes via goodness-of-fit” data analysis scheme, we selected examples from a wide range of models including normal, Poisson, and binomial distributions. The full catalog of datasets is presented in Supplementary Table 2.

## Notation

The notation g and G denote the density and distribution function of the starting prior, while π and Π denote the density and distribution function of the unknown oracle prior.
We will denote the conjugate prior with hyperparameters α and β by g(θ; α, β). Let $${ {\mathcal L} }^{2}(\mu )$$ be the space of square integrable functions with inner product $$\int \,f(u)g(u)\,{\rm{d}}\mu (u)$$. Legj(u) denotes the jth shifted orthonormal Legendre polynomial on $$[0,1]$$; these polynomials form a complete orthonormal basis for $${ {\mathcal L} }^{2}(0,1)$$. By contrast, $${T}_{j}(\theta ;G):\,={{\rm{Leg}}}_{j}[G(\theta )]$$ denotes the modified shifted Legendre polynomial of the rank-G transform G(θ); these form a basis of the Hilbert space $${ {\mathcal L} }^{2}(G)$$. The composition of functions is denoted by the usual ‘$$\circ$$’ sign.

## The Model

Our model-building approach proceeds sequentially as follows: (i) it starts with a scientific (or empirical) parametric prior g(θ; α, β), (ii) inspects the adequacy and the remaining uncertainty of the elicited prior using a graphical exploratory tool, (iii) estimates the necessary “correction” for the assumed g by looking at the data, (iv) generates the final statistical estimate $$\hat{\pi }(\theta )$$, and (v) executes macro- and micro-level inference. We seek a method that can yield answers in all five phases using only a single algorithm.

### New Family of Prior Densities

This section serves two purposes: it provides a universal class of prior density models, followed by its Fourier nonparametric representation in a specialized orthonormal basis.

### Definition 1

The Skew-G class of density models is given by

$$\pi (\theta )=g(\theta ;\alpha ,\beta )\,d[G(\theta );G,{\rm{\Pi }}],$$

(1.1)

where $$d(u;G,{\rm{\Pi }})=\pi ({G}^{-1}(u))/g({G}^{-1}(u))$$ for 0 < u < 1, and consequently $${\int }_{0}^{1}\,d(u;G,{\rm{\Pi }})\,{\rm{d}}u=1$$.

A few notes on the model specification:

• It has a unique two-component structure that combines the assumed parametric g with the d-function. The function d can be viewed as a “correction” density to counter the possible misspecification bias of g.
• The density function d(u; G, Π) can also be viewed as describing the “excess” uncertainty of the assumed g(θ; α, β). For that reason we call it the U-function.

• The motivation behind the representation (1.1) stems from the observation that d[G(θ); G, Π] is in fact the prior density-ratio π(θ)/g(θ). Hence, it is straightforward to verify that the scheme (1.1) always yields a proper density, i.e., $${\int }_{\theta }\,g(\theta )\,d[G(\theta );G,{\rm{\Pi }}]\,{\rm{d}}\theta =1$$.

Since the square integrable d[G(θ); G, Π] lives in the Hilbert space $${ {\mathcal L} }^{2}(G)$$, we can approximate it by projecting onto the orthonormal basis {Tj} satisfying $$\int \,{T}_{i}(\theta ;G){T}_{j}(\theta ;G)\,{\rm{d}}G={\delta }_{ij}$$. We choose Tj(θ; G) to be $${{\rm{Leg}}}_{j}\circ G(\theta )$$, a member of the LP-class of rank-polynomials11. The system {Tj} possesses two attractive properties: the Tj are polynomials of the rank transform G(θ) and thus constitute a robust basis, and they are orthonormal with respect to $${ {\mathcal L} }^{2}(G)$$ for any continuous G. This is not to be confused with the standard Legendre polynomials Legj(u), 0 < u < 1, which are orthonormal with respect to the Uniform$$[0,1]$$ measure. For more details, see Supplementary Appendix B. The above discussion paves the way for the following definition.

### Definition 2

Θ ~ DS(G, m) distribution if it admits the following representation:

$$\pi (\theta )\,=\,g(\theta ;\alpha ,\beta )\,[1+\sum _{j=1}^{m}\,{\rm{LP}}[j;G,{\rm{\Pi }}]\,{T}_{j}(\theta ;G)].$$

(1.2)

The LP-Fourier coefficients LP[j; G, Π] are the key parameters that help us to express mathematically the “gap” between the a priori anticipated G and the true prior Π. When all the expansion coefficients are zero, we automatically recover g.
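To make representation (1.2) concrete, here is a minimal sketch (our illustration, not the authors' code) that evaluates a DS(Beta(α, β), m) density; the LP coefficient values below are arbitrary placeholders, not fitted values:

```python
import numpy as np
from scipy.stats import beta

def leg(j, u):
    """Orthonormal shifted Legendre polynomial Leg_j on [0, 1]."""
    c = np.zeros(j + 1)
    c[j] = 1.0
    # P_j(2u - 1) scaled by sqrt(2j + 1) so that int_0^1 Leg_j^2 du = 1
    return np.sqrt(2 * j + 1) * np.polynomial.legendre.legval(2 * u - 1, c)

def ds_prior_pdf(theta, a, b, lp_coef):
    """DS(G, m) density (1.2) with base prior g = Beta(a, b):
    g(theta) * [1 + sum_j LP[j] * Leg_j(G(theta))]."""
    u = beta.cdf(theta, a, b)  # rank transform G(theta)
    d = 1.0 + sum(c * leg(j + 1, u) for j, c in enumerate(lp_coef))
    return beta.pdf(theta, a, b) * d

theta = np.linspace(0.001, 0.999, 2001)

# With all LP coefficients zero, DS(G, 0) reduces to the parametric g ...
assert np.allclose(ds_prior_pdf(theta, 2.0, 8.0, []), beta.pdf(theta, 2.0, 8.0))

# ... while a nonzero coefficient tilts g yet still integrates to one,
# since each Leg_j integrates to zero over [0, 1]
pi_hat = ds_prior_pdf(theta, 2.0, 8.0, [0.0, 0.0, -0.3])
```

Because each Leg_j has zero mean on [0, 1], any coefficient choice preserves total mass one; a truncated expansion can, however, dip negative for large coefficients, which is one reason m and the coefficient sizes are kept modest in practice.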
We will now spend a few words on the LP-DS(G, m) class of prior models:

• When π(θ) is a member of the DS(G, m) class of priors, the orthogonal LP-transform coefficients (1.2) satisfy

$${\rm{LP}}[j;G,{\rm{\Pi }}]={\langle d,{T}_{j}\circ {G}^{-1}\rangle }_{{ {\mathcal L} }^{2}(0,1)}={\mathbb{E}}[{T}_{j}({\rm{\Theta }};G);{\rm{\Pi }}].$$

(1.3)

Thus, given a random sample θ1, …, θk from π(θ), we could easily estimate the unknown LP-coefficients, and thus d and π, by computing the sample mean $${k}^{-1}\,{\sum }_{i=1}^{k}\,{T}_{j}({\theta }_{i};G)$$. But unfortunately, the θi are unobserved. Section 2 describes an estimation strategy that can deal with the situation at hand. Before introducing this technique, however, we must acquaint the reader with the role played by the U-function d(u; G, Π) in the uncertainty quantification and characterization of the initial believable prior g. That is the objective of Section 1.2.

• Under Definition 2, we have DS(G, m = 0) ≡ g(θ; α, β). The truncation point m in (1.2) reflects the concentration of permissible π around the known g. While this class of priors is rich enough to approximate any reasonable prior with the desired accuracy in the large-m limit, one can easily exclude absurdly rough densities and focus on a neighborhood around the domain-knowledge-based g by choosing m not “too big.”

• The motivations behind the name ‘DS-Prior’ are twofold. First, our formulation operationalizes I. J. Good’s ‘Successive Deepening’ idea12 for Bayesian data analysis: A hypothesis is formulated, and, if it explains enough, it is judged to be probably approximately correct. The next stage is to try to improve it. The form that this approach often takes in EDA is to examine residuals for patterns, or to treat them as if they were original data (I. J. Good, 1983, p. 289). Secondly, our prior has two components: a Scientific g that encodes an expert’s knowledge and a Data-driven d.
That is to say, our framework embraces both data and science in a testable manner13.

### Exploratory Diagnostics and U-Function

Is your data compatible with the pre-selected g(θ)? If yes, the job is done without getting into the arduous business of nonparametric estimation. If no, we can model the “gap” between the parametric g and the true unknown prior π, which is often far easier than modeling π from scratch (hence, one can learn from a small number of cases)! If the observed y1, …, yk look very unexpected given g(θ; α, β), it is completely reasonable to question the sanctity of such a self-selected prior. Here we provide a formal nonparametric exploratory procedure to describe comprehensively the uncertainty about the choice of g. Using the algorithm detailed in the next section, we estimate U-functions for four real data sets. The first three are binomial and the last one is normal. The results are shown in Fig. 1.

• The rat tumor data14 consist of observations of endometrial stromal polyp incidence in k = 70 groups of female rats. For each group, yi is the number of rats with polyps and ni is the total number of rats in the experiment.

• The terbinafine data15 comprise k = 41 studies, which investigate the proportion of patients whose treatment terminated early due to some adverse effect of an oral anti-fungal agent: yi is the number of terminated treatments and ni is the total number of patients in the experiment.

• The rolling tacks16 data involve flipping a common thumbtack 9 times. The data consist of 320 pairs (9, yi), where yi represents the number of times the thumbtack landed point up.

• The ulcer data consist of forty randomized trials of a surgical treatment for stomach ulcers conducted between 1980 and 198917,18. Each of the 40 trials has an estimated log-odds ratio $${y}_{i}|{\theta }_{i}\sim {\mathscr{N}}({\theta }_{i},{s}_{i}^{2})$$ that measures the rate of occurrence of recurrent bleeding given the surgical treatment.
Throughout, we have used the maximum likelihood estimates (MLE) for the initial starting values of the hyperparameters. However, one can use any other reasonable choice, which may involve an expert’s judgment. What is important to note is the shape of $$\hat{d}$$; more specifically, its departure from uniformity indicates that the assumed conjugate prior g(θ; α, β) needs a ‘repair’ to resolve the prior-data conflict. For example, the flat shape of the estimated $$\hat{d}$$ in Fig. 1(b) indicates that our initial selection of g(θ; α, β) is appropriate for the terbinafine and ulcer data. Therefore, one can proceed in turning the “Bayesian crank” with confidence using the parametric beta and normal priors, respectively. In contrast, Fig. 1(a,c) provide a strong warning against using g = Beta(α, β) for the rat tumor and rolling tacks experiments. The smooth estimated U-functions expose the nature of the discrepancy that exists between g and the observed data by having an “extra” mode. Clearly, the answer does not lie in choosing a different (α, β), as this cannot rectify the missing bimodality. This brings us to an important point: full Bayesian analysis, by assigning a hyperprior distribution to α and β, is not always a fail-safe strategy and should be practiced with caution (not in a blind mechanical way). The bottom line is: uncertainty in the prior probability model ≠ uncertainty in α, β. A foolproof prior uncertainty model thus has to allow ignorance about the functional shape around g. The foregoing discussion motivates the following entropy-like measure of uncertainty.

### Definition 3

The qLP statistic for uncertainty quantification is defined as follows:

$${\rm{qLP}}(G\parallel {\rm{\Pi }})=\sum _{j}\,|{\rm{LP}}[j;G,{\rm{\Pi }}]{|}^{2}.$$

(1.4)

The motivation behind this definition comes from applying Parseval’s identity in (1.2): $${\int }_{0}^{1}\,{d}^{2}(u;G,{\rm{\Pi }})\,{\rm{d}}u=1+{\rm{qLP}}(G\parallel {\rm{\Pi }})$$.
Thus, the proposed measure captures the departure of the U-function from uniformity. The following result connects our qLP statistic with relative entropy.

### Theorem 1

The qLP uncertainty quantification statistic satisfies the following relation:

$${\rm{qLP}}(G\parallel {\rm{\Pi }})\approx 2\times {\rm{KL}}({\rm{\Pi }}\parallel G),$$

(1.5)

where KL(Π||G) is the Kullback–Leibler (KL) divergence between the true prior π and its parametric approximation g.

### Proof

Express the KL-information divergence using U-functions by substituting G(θ) = u:

$${\rm{KL}}({\rm{\Pi }}\parallel G)=\int \,\pi (\theta )\,\mathrm{log}\,\frac{\pi (\theta )}{g(\theta )}\,{\rm{d}}\theta ={\int }_{0}^{1}\,d(u;G,{\rm{\Pi }})\,\mathrm{log}\,d(u;G,{\rm{\Pi }})\,{\rm{d}}u.$$

(1.6)

Complete the proof by approximating d log d in (1.6) via the Taylor series $$(d-1)+\frac{1}{2}{(d-1)}^{2}$$.

We conclude this section with a few additional remarks:

• Our exploratory uncertainty diagnostic tool encourages “interactive” data analysis that is similar in spirit to Gelman et al.19. Subject-matter experts can use this tool to “play” with different hyperparameter choices in order to filter out the reasonable ones. This functionality might be especially valuable when multiple expert opinions are available.

• When $$\hat{d}$$ shows evidence of prior-data conflict, the question remains: what to do next? It is not enough to check the adequacy without offering the user an explanation for the misfit, or identifying the “deeper” structure that is missing in the starting parametric prior. Fortunately, our DS(G, m) model suggests a simple, yet formal, guideline for upgrading: $$\widehat{\pi }(\theta )=g(\theta ;\hat{\alpha },\hat{\beta })\times \hat{d}[G(\theta );G,{\rm{\Pi }}]$$, where the shape of $$\hat{d}(u;G,{\rm{\Pi }})$$ captures the patterns which were not a priori anticipated. Hence our formalism simultaneously addresses the problem of uncertainty quantification and the subsequent model synthesis.
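A quick numerical sanity check of Theorem 1 (a sketch with a hand-picked U-function, not an example from the paper): for the tilt d(u) = 1 + ε Leg₁(u), the qLP statistic equals ε² by Parseval, and twice the exact KL divergence, computed from the identity in the proof above, should nearly match it:

```python
import numpy as np

# Hand-picked U-function: d(u) = 1 + eps * Leg_1(u), a small tilt of G,
# where Leg_1(u) = sqrt(3) * (2u - 1) is the first orthonormal basis element.
eps = 0.1
u = np.linspace(0.0, 1.0, 200001)
d = 1.0 + eps * np.sqrt(3.0) * (2.0 * u - 1.0)

qlp = eps ** 2                               # sum of squared LP coefficients
kl = np.sum(d * np.log(d)) * (u[1] - u[0])   # KL(Pi || G) = int d log d du

# Theorem 1: qLP is approximately 2 * KL, up to higher-order terms in eps
assert abs(qlp - 2.0 * kl) < 1e-3
```

The agreement degrades as the tilt grows, consistent with the second-order Taylor approximation used in the proof.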
## Estimation Method

### Theory

In this section, we lay out the key theoretical results that we use for designing our algorithm. Before deriving the general expressions under the LP-DS(G, m) model, it is helpful to start by recalling the results for the basic conjugate model, i.e., Θ ~ DS(G, m = 0) and $${y}_{i}|{\theta }_{i}\mathop{\sim }\limits^{{\rm{ind}}}f({y}_{i}|{\theta }_{i})$$ for i = 1, …, k. Table 1 provides the marginal $${f}_{G}({y}_{i})={\int }_{{\theta }_{i}}\,f({y}_{i}|{\theta }_{i})g({\theta }_{i})\,{\rm{d}}{\theta }_{i}$$ and the posterior distribution $${\pi }_{G}({\theta }_{i}|{y}_{i})=\frac{f({y}_{i}|{\theta }_{i})g({\theta }_{i})}{{f}_{G}({y}_{i})}$$ for four commonly encountered distributions, with the Bayes estimate of h(Θi) denoted as $${{\mathbb{E}}}_{G}[h({{\rm{\Theta }}}_{i})|{y}_{i}]={\int }_{{\theta }_{i}}\,h({\theta }_{i}){\pi }_{G}({\theta }_{i}|{y}_{i})\,{\rm{d}}{\theta }_{i}$$. The subscript ‘G’ in these expressions underscores the fact that they are calculated for the conjugate g-model. Next, we seek to extend these parametric results to the LP-nonparametric setup in a systematic way. In particular, without deriving analytical expressions for each case separately, we want to establish a more general representation theory that is valid for all of the above and, in fact, extends to any conjugate pair, explicating the underlying unity of our formulation.

### Theorem 2

Consider the following model:

$$\begin{array}{lll}{y}_{i}|{\theta }_{i} & \mathop{\sim }\limits^{{\rm{ind}}} & f({y}_{i}|{\theta }_{i}),\,(i=1,\ldots ,k)\\ {{\rm{\Theta }}}_{i} & \mathop{\sim }\limits^{{\rm{ind}}} & \pi (\theta ),\end{array}$$

where π(θ) is a member of the DS(G, m) family (1.2), G being the associated conjugate prior. Under this framework, the following holds:
(a) The marginal distribution of yi is given by

$${f}_{{\rm{LP}}}({y}_{i})={f}_{G}({y}_{i})\,(1+\sum _{j}\,{\rm{LP}}[j;G,{\rm{\Pi }}]\,{{\mathbb{E}}}_{G}[{T}_{j}({{\rm{\Theta }}}_{i};G)\,|{y}_{i}]),$$

(2.1)

where $${{\mathbb{E}}}_{G}[{T}_{j}({{\rm{\Theta }}}_{i};G)|{y}_{i}]={\int }_{{\theta }_{i}}\,{{\rm{Leg}}}_{j}\circ G({\theta }_{i}){\pi }_{G}({\theta }_{i}|{y}_{i})\,{\rm{d}}{\theta }_{i}$$.

(b) A closed-form expression for the posterior distribution of Θi given yi is

$${\pi }_{{\rm{LP}}}({\theta }_{i}|{y}_{i})=\frac{{\pi }_{G}({\theta }_{i}|{y}_{i})\,(1+{\sum }_{j}\,{\rm{LP}}[j;G,{\rm{\Pi }}]\,{T}_{j}({\theta }_{i};G))}{1+{\sum }_{j}\,{\rm{LP}}[j;G,{\rm{\Pi }}]\,{{\mathbb{E}}}_{G}[{T}_{j}({{\rm{\Theta }}}_{i};G)|{y}_{i}]}$$

(2.2)

(c) For any general random variable h(Θi), the Bayes conditional mean estimator can be expressed as follows:

$${{\mathbb{E}}}_{{\rm{LP}}}[h({{\rm{\Theta }}}_{i})|{y}_{i}]=\frac{{{\mathbb{E}}}_{G}[h({{\rm{\Theta }}}_{i})|{y}_{i}]+{\sum }_{j}\,{\rm{LP}}[j;G,{\rm{\Pi }}]\,{{\mathbb{E}}}_{G}[h({{\rm{\Theta }}}_{i}){T}_{j}({{\rm{\Theta }}}_{i};G)|{y}_{i}]}{1+{\sum }_{j}\,{\rm{LP}}[j;G,{\rm{\Pi }}]\,{{\mathbb{E}}}_{G}[{T}_{j}({{\rm{\Theta }}}_{i};G)|{y}_{i}]}$$

(2.3)

### Proof
The marginal distribution for the DS(G, m)-nonparametric model can be represented as:

$${f}_{{\rm{LP}}}({y}_{i})=\int \,f({y}_{i}|{\theta }_{i})\times \{g({\theta }_{i};\alpha ,\beta )\,d[G({\theta }_{i});G,{\rm{\Pi }}]\}\,{\rm{d}}{\theta }_{i}.$$

Expanding the U-function in the LP-bases (1.2) yields

$${f}_{{\rm{LP}}}({y}_{i})={f}_{G}({y}_{i})+\sum _{j}\,{\rm{LP}}[j;G,{\rm{\Pi }}]\,\int \,{T}_{j}({\theta }_{i};G)f({y}_{i}|{\theta }_{i})g({\theta }_{i};\alpha ,\beta )\,{\rm{d}}{\theta }_{i}.$$

(2.4)

The next step is to recognize that

$$f({y}_{i}|{\theta }_{i})\,g({\theta }_{i};\alpha ,\beta )={f}_{G}({y}_{i})\,{\pi }_{G}({\theta }_{i}|{y}_{i}).$$

(2.5)

Substituting (2.5) in the second term of (2.4) leads to

$$\sum _{j}\,{\rm{LP}}[j;G,{\rm{\Pi }}]\,\int \,{T}_{j}({\theta }_{i};G)f({y}_{i}|{\theta }_{i})g({\theta }_{i};\alpha ,\beta )\,{\rm{d}}{\theta }_{i}={f}_{G}({y}_{i})\,\sum _{j}\,{\rm{LP}}[j;G,{\rm{\Pi }}]\,{{\mathbb{E}}}_{G}[{T}_{j}({{\rm{\Theta }}}_{i};G)|{y}_{i}].$$

(2.6)

Complete the proof of part (a) by substituting (2.6) into (2.4). For part (b), the posterior distribution calculation gives

$${\pi }_{{\rm{LP}}}({\theta }_{i}|{y}_{i})=\frac{f({y}_{i}|{\theta }_{i})\,g({\theta }_{i};\alpha ,\beta )}{{f}_{{\rm{LP}}}({y}_{i})}\{1+\sum _{j}\,{\rm{LP}}[j;G,{\rm{\Pi }}]{T}_{j}({\theta }_{i};G)\}.$$

(2.7)

Combine (2.1) and (2.5) to verify that

$$\frac{f({y}_{i}|{\theta }_{i})\,g({\theta }_{i};\alpha ,\beta )}{{f}_{{\rm{LP}}}({y}_{i})}=\frac{{\pi }_{G}({\theta }_{i}|{y}_{i})}{1+{\sum }_{j}\,{\rm{LP}}[j;G,{\rm{\Pi }}]\,{{\mathbb{E}}}_{G}[{T}_{j}({{\rm{\Theta }}}_{i};G)|{y}_{i}]}.$$

(2.8)

Finish the proof of part (b) by substituting (2.8) into (2.7).
Part (c) is straightforward, since

$${{\mathbb{E}}}_{{\rm{LP}}}[h({{\rm{\Theta }}}_{i})|{y}_{i}]=\int \,h({\theta }_{i})\,{\pi }_{{\rm{LP}}}({\theta }_{i}|{y}_{i})\,{\rm{d}}{\theta }_{i},$$

which, by (2.2), equals

$$\frac{\int \,h({\theta }_{i}){\pi }_{G}({\theta }_{i}|{y}_{i})\,\{1+{\sum }_{j}\,{\rm{LP}}[j;G,{\rm{\Pi }}]{T}_{j}({\theta }_{i};G)\}\,{\rm{d}}{\theta }_{i}}{1+{\sum }_{j}\,{\rm{LP}}[j;G,{\rm{\Pi }}]\,{{\mathbb{E}}}_{G}[{T}_{j}({{\rm{\Theta }}}_{i};G)|{y}_{i}]}.$$

Hence, result (2.3) is immediate.

Our LP-Bayes recipe (2.1)–(2.3) admits an interesting overall structure: the usual ‘parametric’ answer multiplied by a correction factor involving the LP[j; G, Π]’s. This decoupling pays dividends for theoretical interpretation as well as computation.

### Algorithm

The critical parameters of our DS(G, m) model are the LP-Fourier coefficients, which, as is evident from (1.3), could be estimated simply by their empirical counterpart $$\widehat{{\rm{LP}}}[j;G,{\rm{\Pi }}]={k}^{-1}\,{\sum }_{i=1}^{k}\,{T}_{j}({\theta }_{i};G)$$. But as we pointed out earlier, θ1, …, θk are unobservable. How can we then estimate those parameters? While the θi are unseen, it is interesting to note that they have left their footprints in the observables y1, …, yk with distribution $$f({y}_{i})=\int \,f({y}_{i}|{\theta }_{i})\pi ({\theta }_{i})\,{\rm{d}}{\theta }_{i}$$. Following the spirit of the EM algorithm, an obvious proxy for Tj(θi; G) would be its posterior mean $${{\mathbb{E}}}_{{\rm{LP}}}[{T}_{j}({{\rm{\Theta }}}_{i};G)|{y}_{i}]$$, which also naturally arises in the expression (2.1). This leads to the following ‘ghost’ LP-estimates:

$$\tilde{{\rm{LP}}}[j;G,{\rm{\Pi }}]={k}^{-1}\,\sum _{i=1}^{k}\,{{\mathbb{E}}}_{{\rm{LP}}}[{T}_{j}({{\rm{\Theta }}}_{i};G)|{y}_{i}],$$

(2.9)

satisfying $${\mathbb{E}}\{\tilde{{\rm{LP}}}[j;G,{\rm{\Pi }}]\}={\rm{LP}}[j;G,{\rm{\Pi }}]$$ (j = 1, …, m), by virtue of the law of iterated expectations.
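The fixed-point character of (2.9) suggests a simple iteration: start at LP = 0, compute each posterior expectation under the current coefficients via Theorem 2(c), and average. The sketch below (our illustration, not the authors' implementation; the function name is made up) does this for the Beta-Binomial pair using Monte Carlo draws from the conjugate posterior, assuming the implied d-function stays positive:

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)

def leg(j, u):
    """Orthonormal shifted Legendre polynomial Leg_j on [0, 1]."""
    c = np.zeros(j + 1)
    c[j] = 1.0
    return np.sqrt(2 * j + 1) * np.polynomial.legendre.legval(2 * u - 1, c)

def mom2_beta_binomial(y, n, a, b, m=4, n_iter=20, n_mc=4000):
    """Type-II method-of-moments sketch for y_i ~ Bin(n_i, theta_i),
    Theta_i ~ DS(Beta(a, b), m); returns the 'ghost' LP estimates (2.9)."""
    y, n = np.asarray(y), np.asarray(n)
    k = len(y)
    # Monte Carlo draws from each conjugate posterior Beta(a + y_i, b + n_i - y_i)
    T = np.empty((k, n_mc, m))
    for i in range(k):
        th = rng.beta(a + y[i], b + n[i] - y[i], size=n_mc)
        u = beta.cdf(th, a, b)                       # rank transform G(theta)
        for j in range(m):
            T[i, :, j] = leg(j + 1, u)
    lp = np.zeros(m)
    for _ in range(n_iter):
        # Theorem 2(c) with h = T_j: reweight draws by 1 + sum_j LP[j] T_j
        w = 1.0 + T @ lp
        post = (T * w[:, :, None]).mean(axis=1) / w.mean(axis=1)[:, None]
        lp = post.mean(axis=0)                       # eq. (2.9)
    return lp
```

When the data are actually generated from the base prior g, the estimated coefficients hover near zero, and the BIC-based smoothing step described next truncates them away.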
These estimates can then be refined via iteration. The following algorithm implements this strategy. We conclude this section with a few remarks on the algorithm:

• Taking inspiration from I. J. Good’s type II maximum likelihood nomenclature20, we call our algorithm the Type-II Method of Moments (MOM), whose computation is remarkably tractable and does not require any numerical optimization routine.

• To enhance the results, we smooth the output of the MOM-II algorithm as follows: determine the significantly non-zero LP-coefficients via Schwarz’s BIC-based smoothing. Arrange the $$\tilde{{\rm{LP}}}[j;G,{\rm{\Pi }}]$$’s in decreasing magnitude and choose the m that maximizes

$${\rm{BIC}}(m)=\sum _{j=1}^{m}\,|\widehat{{\rm{LP}}}[j;G,{\rm{\Pi }}]{|}^{2}-\frac{m\,\mathrm{log}(k)}{k}.$$

See Supplementary Appendix D for more details. Furthermore, Supplementary Appendix I discusses how the MOM-II Bayes algorithm can be adapted to yield an LP-maximum entropy prior density estimate21.

### Results

In addition to the rat tumor data (cf. Section 1.2), here we introduce and analyze three additional datasets: two binomial and one Poisson example.

• The surgical node data22 involve the number of malignant lymph nodes removed during intestinal surgery. Each of the k = 844 patients underwent surgery for cancer, during which surgeons removed surrounding lymph nodes for testing. Each patient has a pair of data (ni, yi), where ni represents the total nodes removed from patient i and yi ~ Bin(ni, θi) is the number of malignant nodes among them.

• The Navy shipyard data23 consist of k = 5 samples of the number of defects yi found in ni = 5 lots of welding material.

• The insurance data24, shown in Table 4, provide a single year of claims data for an automobile insurance company in Europe. The counts yi ~ Poisson(θi) represent the total number of people who had i claims in a single year.

Figure 2 displays the estimated LP-DS(G, m) priors along with the default parametric (empirical Bayes) counterparts.
The estimated LP-Fourier coefficients together with the choices of hyperparameters (α, β) are summarized below: 1. (a) Rat tumor data, g is the beta distribution with MLE α = 2.30, β = 14.08: $$\hat{\pi }(\theta )=g(\theta ;\alpha ,\beta )\,[1-0.50{T}_{3}(\theta ;G)].$$ (2.10) 2. (b) Surgical node data, g is the beta distribution with MLE α = 0.32, β = 1.00: $$\hat{\pi }(\theta )=g(\theta ;\alpha ,\beta )\,[1-0.07{T}_{3}(\theta ;G)-0.11{T}_{4}(\theta ;G)+0.09{T}_{5}(\theta ;G)+0.13{T}_{7}(\theta ;G)].$$ (2.11) 3. (c) Navy shipyard data, g is the Jeffreys prior with α = 0.5, β = 0.5: $$\hat{\pi }(\theta )=g(\theta ;\alpha ,\beta )\,[1-0.67{T}_{1}(\theta ;G)+0.90{T}_{2}(\theta ;G)].$$ (2.12) 4. (d) Insurance data, g is the gamma distribution with MLE α = 0.70 and β = 0.31: $$\hat{\pi }(\theta )=g(\theta ;\alpha ,\beta )\,[1-0.26{T}_{2}(\theta ;G)].$$ (2.13) The rat tumor data shows a prominent bimodal shape, which should not come as a surprise in light of Fig. 1(a). For the surgical data, DS-prior puts excess mass around 0.4, which concurs with the findings of Efron [22, Sec. 4.2]. In the case of the Navy shipyard data, our analysis corrects the starting “U” shaped Jeffreys prior to make it asymmetric with an extended peak at 0. This is quite justifiable looking at the proportions in the given data: (0/5, 0/5, 0/5, 1/5, 5/5). Finally, for the insurance data, the starting gamma prior requires a second-order (dispersion parameter) correction to yield a bona-fide $$\hat{\pi }$$ (2.13), which makes it slightly wider in the middle with sharper peak and tail. ## Inference ### MacroInference A single study hardly provides adequate evidence for a definitive conclusion due to the limited sample size. Thus, often the scientific interest lies in combining several related but (possibly) heterogeneous studies to come up with an overall macro-level inference that is more accurate and precise than the individual studies. 
This type of inference is a routine exercise in clinical trials and public policy research.

#### Terbinafine data analysis

For the terbinafine data, the aim is to combine k = 41 treatment arms with varying event rates and produce a pooled proportion of patients who withdrew from the study because of the adverse effects of oral anti-fungal agents. Recall that our U-function diagnostic in Fig. 1(b) indicated the parametric beta-binomial model with MLE estimates α = 1.24 and β = 34.7 as a justifiable choice for these data. Thus the adverse event probabilities across the k = 41 studies can be summarized by the prior mean $$\frac{\alpha }{\alpha +\beta }=0.034$$. We apply the parametric bootstrap using the DS(G, m)-sampler (see Supplementary Appendix C) with m = 0 to compute the standard error (SE): 0.034 ± 0.006, highlighted in Fig. 3(b). If one assumes a single binomial distribution for all the groups (i.e., under homogeneity), then the ‘naive’ average $${\sum }_{i=1}^{k}\,{y}_{i}/{\sum }_{i=1}^{k}\,{n}_{i}$$ would lead to an overoptimistic biased estimate, 0.037 ± 0.0034. In this example, heterogeneity arises due to overdispersion among the exchangeable studies. But it can arise in other ways too. An example is given in the following case study.

#### Rat tumor and rolling tacks data analysis

Can we always extract a “single” overall number to aptly describe k parallel studies? Not in general. To appreciate this, let us look at Fig. 3(a,c), which depict the estimated DS-priors for the rat tumor and rolling tacks data. We highlight two key observations:

1. Mixed population. The bimodality indicates the existence of two distinct groups of θi’s. We call this “structured heterogeneity,” which lies in between two extremes: homogeneity and complete heterogeneity (where there is no similarity between the θi’s whatsoever). The presence of two clusters in the rolling tacks data was previously detected by Jun Liu25.
The author further noted, “Clearly, this feature is unexpected and cannot be revealed by a regular parametric hierarchical analysis using the Beta-binomial priors.” One plausible explanation for this two-group structure was attributed to the fact that the tack data were produced by two persons with some systematic difference in their flipping. On the other hand, the bimodal shape of the rat example was not previously anticipated14,26,27. The resulting two groups of rat tumor experiments are enumerated in Table 2. Although we do not have the necessary biomedical background to scientifically justify this new discovery, we are aware that potentially numerous factors (e.g., experimental design, underlying conditions, selection of specific groups of female rats) may contribute to creating this systematic variation.

2. From a single mean to multiple modes. An attempt to combine the two subpopulations using a single prior mean (as carried out for the terbinafine example) would result in overestimating one group and underestimating the other. We prefer the modes of $$\widehat{\pi }(\theta )$$, along with their SEs, as a good representative summary, which can be easily computed by the nonparametric smooth bootstrap via the DS(G, m) sampler.

Learning from big heterogeneous studies is one of the most important yet unsettled matters of modern macroinference18,28. Our key insight is the realization that the ‘science of combining’ critically depends on the shape of the estimated prior. One interesting and commonly encountered case is a multimodal structure in the learned prior. In such situations, instead of the prior-mean summary, we recommend group-specific modes. Our algorithm is also capable of finding data-driven clusters of the partially exchangeable studies in a fully automated manner.
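The DS(G, m)-sampler behind the smooth bootstrap can be realized by simple rejection sampling, assuming the fitted d-function is nonnegative. Below is a sketch of one possible implementation (ours, not the paper's code; the coefficient value is an arbitrary placeholder):

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(1)

def leg(j, u):
    """Orthonormal shifted Legendre polynomial Leg_j on [0, 1]."""
    c = np.zeros(j + 1)
    c[j] = 1.0
    return np.sqrt(2 * j + 1) * np.polynomial.legendre.legval(2 * u - 1, c)

def ds_sampler(size, a, b, lp_coef):
    """Draw theta ~ DS(Beta(a, b), m) by rejection: propose from the base
    prior g and accept with probability d(G(theta)) / M, where the bound
    M >= max_u d(u) uses |Leg_j| <= sqrt(2j + 1) on [0, 1]."""
    M = 1.0 + sum(abs(c) * np.sqrt(2 * (j + 1) + 1)
                  for j, c in enumerate(lp_coef))
    out = np.empty(0)
    while out.size < size:
        th = rng.beta(a, b, size=size)
        d = 1.0 + sum(c * leg(j + 1, beta.cdf(th, a, b))
                      for j, c in enumerate(lp_coef))
        out = np.concatenate([out, th[rng.uniform(0.0, M, size=size) < d]])
    return out[:size]

# e.g., smooth bootstrap draws from a mildly tilted Beta base prior
draws = ds_sampler(2000, 2.0, 14.0, [0.0, 0.0, -0.3])
```

Resampling `draws` and recording the modes of the re-estimated prior in each replicate is one way to attach standard errors to group-specific modes.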
### Learning From Uncertain Data

An important problem of measurement science that routinely appears in metrology, chemistry, physics, biology, and engineering can be stated as follows: measurements are made by k different laboratories in the form of y1, …, yk along with their estimated standard errors s1, …, sk. Given this uncertain data, a fundamental problem of interest is inference concerning: (i) estimation of the consensus value of the measurand, and (ii) evaluation of the associated uncertainty. The data in Table 3 are an example of such an inter-laboratory study involving k = 28 measurements of the level of arsenic in oyster tissue. The study was part of the National Oceanic and Atmospheric Administration’s National Status and Trends Program Ninth Round Intercomparison Exercise29.

#### Arsenic data analysis

We start with the DS-measurement model: $${Y}_{i}|{{\rm{\Theta }}}_{i}={\theta }_{i}\sim {\mathscr{N}}({\theta }_{i},{s}_{i}^{2})$$ and Θi ~ DS(G, m) (i = 1, …, 28) with G being $${\mathscr{N}}(\mu ,{\tau }^{2})$$. The shape of the estimated U-function in Fig. 4(a) indicates that the pre-selected prior $${\mathscr{N}}(\hat{\mu }=13.22,{\hat{\tau }}^{2}={1.85}^{2})$$ is clearly unacceptable for the arsenic data, thereby disqualifying the classical Gaussian random effects model30. The DS-corrected $$\widehat{\pi }$$ shows an interesting asymmetric pattern with two bumps. The left mode represents measurements from three laboratories that are unlike the majority. The result of our macro-inference is shown in Fig. 4(b), which delivers the consensus value 13.6 ± 0.24. This is clearly far more resistant to the fairly extreme low measurements and, surprisingly, also more accurate than the parametric EB estimate 13.22 ± 0.26. Most importantly, our scheme provides an automated solution to the fundamental problem of which measurements from the participating laboratories should be combined (as well as how) to form a best consensus value.
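For comparison, the classical fixed-effect consensus that the Gaussian random-effects framework builds on is just a precision-weighted mean of the (yi, si) pairs. A minimal sketch with invented measurements (not the Table 3 values):

```python
# Precision-weighted (fixed-effect) consensus from (y_i, s_i) pairs.
# The numbers are invented for illustration; they are not the Table 3 data.
measurements = [(13.1, 0.4), (13.9, 0.3), (12.8, 0.5), (9.5, 1.0)]  # (y_i, s_i)

weights = [1.0 / s ** 2 for _, s in measurements]          # w_i = 1 / s_i^2
consensus = sum(w * y for (y, _), w in zip(measurements, weights)) / sum(weights)
se = (1.0 / sum(weights)) ** 0.5                           # SE of the weighted mean

print(f"consensus = {consensus:.2f} +/- {se:.2f}")
```

Note how even the single low measurement drags the weighted mean downward; this sensitivity is exactly why a shape-aware summary, such as the DS-corrected prior above, is preferable when a minority of laboratories behaves differently from the rest.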
Possolo31 fits a Bayesian hierarchical model with a Student’s tν prior, where the degrees of freedom were also treated as a random variable over some arbitrary range {3, …, 118}. Although a heavy-tailed Student’s t-distribution is a good choice to ‘robustify’ the analysis, it fails to capture the inherent asymmetry and the finer modal structure on the left. Distinguishing a long tail from bimodality is an important problem of applied statistics in itself. To summarize, there are several attractive features of our general approach: (i) it adapts to the structure of the data, yet (ii) allows the use of expert opinion to go from a knowledge-based prior to a statistical prior; (iii) if multiple expert opinions are available, one can also use the U-diagnostic for reconciliation–exploratory uncertainty assessment; (iv) it avoids the questionable exercise of detecting and discarding apparently unusual measurements32; and finally (v) our theory is still applicable for a very small number of parallel cases (cf. Fig. 2(c)), a situation that is not uncommon in inter-laboratory studies.

### MicroInference

The objective of microinference is to estimate a specific microlevel θi given yi. Consider the rat tumor example where, along with the earlier k = 70 studies, we have data from an additional current experiment in which y71 = 4 out of n71 = 14 rats developed tumors. How can we estimate the probability of a tumor for this new clinical study? There are at least three ways to answer this question:

• Frequentist MLE estimate: An obvious estimate would be the sample proportion $${\tilde{\theta }}_{71}={y}_{71}/{n}_{71}=0.286$$. This operates in an isolated manner, completely ignoring the additional historical information from the k = 70 earlier studies.

• Parametric empirical Bayes estimate: It is reasonable to expect that the historical data from earlier studies may be related to the current 71st study, so borrowing information can result in an improved estimator of θ71.
The Bayes posterior mean estimate $${\check{{\theta }}}_{i}={{\mathbb{E}}}_{G}[{{\rm{\Theta }}}_{i}|{y}_{i}]$$ operationalizes this heuristic, which in the binomial case takes the following form:

$${\check{\theta }}_{i}=\frac{{n}_{i}}{\alpha +\beta +{n}_{i}}{\tilde{\theta }}_{i}+\frac{\alpha +\beta }{\alpha +\beta +{n}_{i}}{{\mathbb{E}}}_{G}[{\rm{\Theta }}].$$ (3.1)

This is famously known as Stein’s shrinkage formula33,34, as it pulls the sample proportions toward the overall mean of the prior $$\frac{\alpha }{\alpha +\beta }$$. For smaller studies (small ni), the shrinkage intensity is higher, which allows them to learn from other experiments.

• Nonparametric Elastic-Bayes estimate: Is it a wise strategy to shrink all $${\tilde{\theta }}_{i}$$’s toward the grand mean 0.14? Interestingly, this shrinking point is near the valley between the twin peaks of the rat tumor prior density estimate (see Fig. 3(a)) and therefore may not represent a preferred location. Then, where to shrink? Ideally, we want to learn only from the relevant subset of the full dataset–selective shrinkage; e.g., for the rat data, it would be group 2 of Table 2. This brings us to the question: how do we rectify the parametric empirical Bayes estimate $${\check{{\theta }}}_{i}$$? Formula (2.3) gives us the required (nonlinear) adjusting factor:

$${\hat{\theta }}_{i}=\frac{{\check{{\theta }}}_{i}+{\sum }_{j}\,\widehat{{\rm{LP}}}[j;G,{\rm{\Pi }}]\,{{\mathbb{E}}}_{G}[{{\rm{\Theta }}}_{i}{T}_{j}({{\rm{\Theta }}}_{i};G)|{y}_{i}]}{1+{\sum }_{j}\,\widehat{{\rm{LP}}}[j;G,{\rm{\Pi }}]\,{{\mathbb{E}}}_{G}[{T}_{j}({{\rm{\Theta }}}_{i};G)|{y}_{i}]},$$ (3.2)

dictating the magnitude and direction of shrinkage in a completely data-driven manner via LP-Fourier coefficients. Note that when $$d\equiv 1$$, i.e., all the LP[j; G, Π] are zero, (3.2) reproduces the parametric $${\check{{\theta }}}_{i}$$. Due to its flexibility and adaptability, we call this the Elastic-Bayes estimate.
This can be considered a nonparametric class of shrinkage estimators that starts with the classical Stein formula and rectifies it by looking at the data.

#### Rat tumor example

Figure 5 compares Stein’s empirical Bayes estimate with our Elastic-Bayes estimate for all k = 70 tumor rates. The posterior mean, median, and mode of the θi’s are shown side by side in three plots. The departure from the 45° reference line is a consequence of “adaptive shrinkage.” Elastic-Bayes automatically shrinks the empirical $${\tilde{\theta }}_{i}$$ towards the representative modes (0.034 and 0.156), whereas Stein’s PEB estimate uses the grand mean (≈0.14) as the shrinking target for all the tumor rates. This is particularly prominent in Fig. 5(c) for the maximum a posteriori (MAP) estimates. As before, for a heterogeneous population, we prescribe the posterior mode as the final prediction.

#### The Pharma-example

Our DS Elastic-Bayes estimate is especially powerful in the presence of prior-data conflict. To illustrate this point, we report a small simulation study. The goal is to compare the MSE of the frequentist MLE, parametric empirical Bayes, and nonparametric Elastic-Bayes estimates for a new study ynew at various levels of prior-data conflict. To capture the prior-data conflict, we consider the following model for π(θ) and ynew:

$$\begin{array}{rcl}\pi (\theta ) & = & \eta \,{\rm{Beta}}(5,45)+(1-\eta )\,{\rm{Beta}}(30,70)\\ {y}_{{\rm{new}}} & \sim & {\rm{Bin}}(50,0.3).\end{array}$$

The parameter η varies from 0 to 0.50 in increments of 0.05; as η increases we introduce more heterogeneity into the true prior distribution and exacerbate the prior-data conflict between π(θ) and ynew; see Fig. 6(a). We simulated k = 100 θi from π(θ), with which we generated $${y}_{i}|{\theta }_{i}\sim {\rm{Bin}}(60,{\theta }_{i})$$. Using the Type-II MoM algorithm on the simulated data set, we found $$\hat{\pi }$$.
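The data-generating step just described, together with the Stein-type parametric shrinkage rule it is benchmarked against, can be sketched with the standard library alone. This is a hedged illustration, not the paper's implementation (which relies on the BayesGOF R package):

```python
import random

random.seed(0)

# Sketch of the simulation's data-generating step: a two-component Beta
# mixture prior pi(theta) = eta*Beta(5,45) + (1-eta)*Beta(30,70), followed
# by binomial draws y_i | theta_i ~ Bin(n, theta_i).
def simulate(eta, k=100, n=60):
    data = []
    for _ in range(k):
        if random.random() < eta:
            theta = random.betavariate(5, 45)    # contaminating component
        else:
            theta = random.betavariate(30, 70)   # main component, mean 0.3
        y = sum(random.random() < theta for _ in range(n))  # Bin(n, theta)
        data.append((theta, y))
    return data

# Stein-type parametric EB shrinkage of a single study: a weighted
# compromise between the sample proportion y/n and the prior mean.
def peb_shrink(y, n, alpha, beta):
    prior_mean = alpha / (alpha + beta)
    w = n / (alpha + beta + n)
    return w * (y / n) + (1 - w) * prior_mean

data = simulate(eta=0.3)
print(peb_shrink(y=15, n=50, alpha=30, beta=70))  # ~0.3: data and prior agree here
```

The printed case is one where the sample proportion and the prior mean coincide, so shrinkage changes nothing; under prior-data conflict the same weighted form pulls y/n toward a possibly unrepresentative prior mean, which is the failure mode the Elastic-Bayes correction is designed to repair.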
After generating ynew, we then computed the frequentist MLE, parametric EB (PEB), and nonparametric Elastic-Bayes estimates of the mode. For each value of η, we repeated this process 250 times and found the mean squared error (MSE) of each estimate. To better illustrate the impact of prior-data conflict, we used the ratios of PEB MSE to frequentist MSE and of PEB MSE to DS MSE. The results are shown in Fig. 6(b). The Elastic-Bayes estimate outperforms Stein’s estimate for all η. More importantly, the efficiency of our estimate continues to increase with the heterogeneity. This happens because Elastic-Bayes performs selective shrinkage of the sample proportion towards the appropriate mode (near 0.3) and thus gains “strength” by combining information from ‘similar’ studies even as the contamination in the study population increases. An interesting observation is the performance of the frequentist MLE: as the data become more heterogeneous, the frequentist MLE improves relative to Stein’s PEB estimate. Our simulation depicts a scenario that is very common in historic-controlled clinical trials, where the heterogeneity arises due to changing conditions. Additional comparisons with other empirical Bayes procedures can be found in Supplementary Appendix G. Figure 7 shows the posterior plots for specific studies in four of our data sets: surgical node, rat tumor, Navy shipyard, and rolling tacks. In studies like the surgical node data, personalized predictions are typically valuable. Figure 7(a) shows posterior distributions for three selected patients, which are indistinguishable from Efron’s deconvolution answer35 [Fig. 4]; the patient with ni = 32 and yi = 7 almost certainly has θi > 0.5, i.e., he or she is highly prone to positive lymph nodes and thus should be referred for follow-up therapy. With regard to the rat tumor data, Fig.
7(b) depicts the DS-posterior distribution of θ71 along with its parametric counterpart πG(θ71|y71, n71). Interestingly, the DS nonparametric posterior shows less variability; this possibly has to do with the selective learning ability of our method, which learns from similar studies (e.g., group 2) rather than from the whole heterogeneous mix of studies. We see a similar phenomenon in the rolling tacks data, where panel (d) (yi = 3) is more reflective of the first mode and panel (f) (yi = 8) of the second. Panel (e) shows the bimodal posterior for the yi = 6 case. Finally, the Navy shipyard data (Fig. 7(c)) illustrate another advantage of DS priors: they work equally well for small k. The DS-posterior mean estimate for y6 = 0 is 0.0471, which is consistent with the findings of Sivaganesan and Berger36 [p. 117].

### Poisson Smoothing: The Two Cultures

We consider the problem of estimating a vector of Poisson intensity parameters θ = (θ1, …, θk) from a sample of Yi|θi ~ Poisson(θi), where the Bayes estimate is given by:

$${\mathbb{E}}[{\rm{\Theta }}|Y=y]=\frac{{\int }_{0}^{\infty }\,\theta [{e}^{-\theta }\,{\theta }^{y}/y!]\,\pi (\theta )\,{\rm{d}}\theta }{{\int }_{0}^{\infty }\,[{e}^{-\theta }\,{\theta }^{y}/y!]\,\pi (\theta )\,{\rm{d}}\theta };\,y=0,1,2,\ldots .$$ (3.3)

There are two primary approaches to estimating (3.3):

• Parametric Culture37,38: If one assumes π(θ) to be the parametric conjugate Gamma distribution $$g(\theta ;\alpha ,\beta )=\frac{1}{{\beta }^{\alpha }{\rm{\Gamma }}(\alpha )}{\theta }^{\alpha -1}{e}^{-\theta /\beta }$$, then it is straightforward to show that Stein’s estimate takes the analytical form $${\check{{\theta }}}_{i}=\frac{{y}_{i}+\alpha }{{\beta }^{-1}+1}$$, a weighted average of the MLE yi and the prior mean αβ.
• Nonparametric Culture4,7,39: This was born out of Herbert Robbins’ ingenious observation that (3.3) can alternatively be written in terms of the marginal distribution as $$(y+1)\frac{f(y+1)}{f(y)}$$, and thus can be estimated non-parametrically by substituting empirical frequencies. This remarkable “prior-free” representation, however, does not hold in general for other distributions. As a result, there is a need to develop methods that can bite the bullet and estimate the prior π from the data. Two such promising methods are Bayes deconvolution7 and the Kiefer-Wolfowitz nonparametric MLE (NPMLE)39,40. Efron’s technique can be viewed as a smooth nonparametric approach, whereas the NPMLE generates a discrete (atomic) probability measure. For more discussion, see Supplementary Appendix A2.

#### The Third Culture

Each EB modeling culture has its own strengths and shortcomings. For example, PEB methods are extremely efficient when the true prior is Gamma. On the other hand, NEB methods possess extraordinary robustness in the face of a misspecified prior, yet they are inefficient when in fact $$\pi \equiv {\rm{Gamma}}(\alpha ,\beta )$$. Noticing this trade-off, Robbins raised the following intriguing question10: how can this efficiency-robustness dilemma be resolved in a logical manner? To address this issue, we must design a data analysis protocol that offers a mechanism to answer the following intermediate modeling questions (before jumping to estimate $$\hat{\pi }$$): Can we assess whether or not a Gamma prior is adequate in light of the sample information? In the event of a prior-data conflict, how can we estimate the ‘missing shape’ in a completely data-driven manner? All of these questions are at the heart of our ‘Bayes via goodness-of-fit’ formulation, whose goal is to develop a third culture of generalized empirical Bayes (gEB) modeling by uniting the parametric and nonparametric philosophies.
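Before turning to the combined estimate, the two cultures can be contrasted on toy count data in a few lines. A hedged sketch (the counts and the Gamma parameters are invented for illustration, not fitted to anything in this paper):

```python
from collections import Counter

# Toy contrast of the two cultures. The counts and the Gamma parameters
# below are invented for illustration; nothing here is fitted to real data.
counts = [0, 1, 0, 2, 1, 0, 0, 3, 1, 0, 2, 0, 1, 1, 0, 4, 0, 1, 2, 0]
freq = Counter(counts)  # empirical frequencies f(y); a missing y gives 0

def robbins(y):
    """Robbins' 'prior-free' estimate (y + 1) f(y + 1) / f(y), empirical f."""
    if freq[y] == 0:
        return float("nan")
    return (y + 1) * freq[y + 1] / freq[y]

def gamma_peb(y, alpha, beta):
    """Parametric (Gamma-prior) estimate (y + alpha) / (1/beta + 1)."""
    return (y + alpha) / (1.0 / beta + 1.0)

for y in range(4):
    print(y, robbins(y), gamma_peb(y, alpha=1.0, beta=1.0))
```

Even on these invented counts, Robbins' rule is already erratic at the tail: with a single count of 3 and a single count of 4 it jumps to robbins(3) = 4, while the Gamma-PEB line stays smooth in y. This is precisely the tail 'jumpiness' illustrated by the insurance data below.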
We compute the DS Elastic-Bayes estimate by substituting $${\check{{\theta }}}_{i}=\frac{{y}_{i}+\alpha }{{\beta }^{-1}+1}$$ into Eq. (3.2); this reduces to the PEB answer when $$d(u;G,{\rm{\Pi }})\equiv 1$$ (i.e., the true prior is a Gamma) and modifies it non-parametrically only when needed, thereby turning Robbins’ vision into action (see Supplementary Appendices A and G for more discussion of this point).

#### The insurance data

Table 4 reports the Bayes estimates $${\mathbb{E}}[\theta |Y=y]$$ for the insurance data. We compare five methods: parametric Gamma, classical Robbins EB, Efron’s Deconvolve, Koenker’s NPMLE, and our procedure. The raw nonparametric Robbins estimator is clearly erratic at the tail due to data sparsity. The PEB estimate overcomes this limitation and produces a stable estimate; but is it dependable? Should we stop here and report this as our final result? Our exploratory U-diagnostic (see Sec. 2.3) tells us that the PEB estimate needs a second-order correction to resolve the discrepancy between the Gamma prior and the data. The improved LP-nonparametric Stein estimates are shown in the last row of Table 4.

#### The butterfly data

The next example is Corbet’s butterfly data37 – one of the earliest examples of empirical Bayes. Alexander Corbet, a British naturalist, spent two years in Malaysia trapping butterflies in the 1940s. The data consist of the number of species trapped exactly y times in those two years, for y = 1, …, 24. Figure 8(b) plots the different Bayes estimates. Robbins’ procedure suffers from similar ‘jumpiness.’ The blue dotted line represents the linear PEB estimate with α = 0.104 and β = 89.79 (same as in Efron and Hastie24, Eq. 6.24), estimated from the zero-truncated negative binomial marginals. Our DS-estimate is almost sandwiched between the PEB and Deconvolve answers. The NPMLE method (the orange curve) yields a strange-looking sinusoidal pattern, probably due to overfitting.
In conclusion, we must say that the triumph of our procedure over the other Bayes estimators lies in the automatic adaptability that Robbins alluded to in his 1980 article10.

## Discussions

We laid out a new mechanics of data modeling that effectively consolidates Bayesian and frequentist, parametric and nonparametric, subjective and objective, quantile and information-theoretic philosophies. However, at a practical level, the main attractions of our “Bayes via goodness-of-fit” framework lie in its (i) ability to quantify and protect against prior-data conflict using exploratory graphical diagnostics; and (ii) theoretical simplicity, which lends itself to analytic closed-form solutions, avoiding computationally intensive techniques such as MCMC or variational methods. We have developed the concepts and principles progressively through a range of examples, spanning application areas such as clinical trials, metrology, insurance, medicine, and ecology, highlighting the core of our approach, which gracefully combines the Bayesian way of thinking (probability on parameters, where prior knowledge can be encoded) with a frequentist way of computing via goodness-of-fit (evaluation and synthesis of the prior distribution). If our efforts can help to make Bayesian modeling more attractive and transparent for practicing statisticians (especially non-Bayesians) by even a tiny fraction, we will consider it a success.

### Data availability

All datasets and computing codes are available via the free and open-source R package BayesGOF: https://CRAN.R-project.org/package=BayesGOF.

## References

1. Efron, B. Why isn’t everyone a Bayesian? The Am. Stat. 40, 1–5 (1986).
2. Sims, C. Understanding non-Bayesians. Unpubl. chapter, Dep. Econ. Princet. Univ. (2010).
3. Stigler, S. M. Thomas Bayes’s Bayesian inference. J. Royal Stat. Soc. Ser. A (General) 125, 250–258 (1982).
4. Robbins, H. An empirical Bayes approach to statistics.
In Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics, 157–164 (1956).
5. Good, I. The Bayes/non-Bayes compromise: A brief review. J. Am. Stat. Assoc. 87, 597–606 (1992).
6. Rubin, D. B. Bayesianly justifiable and relevant frequency calculations for the applied statistician. The Annals Stat. 12, 1151–1172 (1984).
7. Efron, B. Robbins, empirical Bayes and microarrays. The Annals Stat. 31, 366–378 (2003).
8. Dempster, A. P. A subjectivist look at robustness. Bull. Intern. Stat. Inst 46, 349–374 (1975).
9. Berger, J. O. An overview of robust Bayesian analysis (with discussion). Test 3, 5–124, https://doi.org/10.1007/BF02562676 (1994).
10. Robbins, H. An empirical Bayes estimation problem. Proc. Natl. Acad. Sci. 77, 6988–6989 (1980).
11. Mukhopadhyay, S. & Parzen, E. LP approach to statistical modeling. arXiv preprint arXiv:1405.2601 (2014).
12. Good, I. J. The philosophy of exploratory data analysis. Philos. Science 50, 283–295 (1983).
13. Gelman, A., Simpson, D. & Betancourt, M. The prior can often only be understood in the context of the likelihood. Entropy 19, 555 (2017).
14. Gelman, A. et al. Bayesian Data Analysis, Third Edition. Chapman & Hall/CRC Texts in Statistical Science (Taylor & Francis, 2013).
15. Young-Xu, Y. & Chan, K. A. Pooling overdispersed binomial data to estimate event rate. BMC Med. Res. Methodol. 8, 58 (2008).
16. Beckett, L. & Diaconis, P. Spectral analysis for discrete longitudinal data. Adv. Math. 103, 107–128 (1994).
17. Sacks, H. S., Chalmers, T. C., Blum, A. L., Berrier, J. & Pagano, D. Endoscopic hemostasis: an effective therapy for bleeding peptic ulcers. J. Am. Med. Assoc. 264, 494–499 (1990).
18. Efron, B. Empirical Bayes methods for combining likelihoods. J. Am. Stat. Assoc. 91, 538–550 (1996).
19. Gelman, A., Meng, X.-L. & Stern, H.
Posterior predictive assessment of model fitness via realized discrepancies. Stat. Sinica 733–760 (1996).
20. Good, I. J. Good thinking: The foundations of probability and its applications. (Univ. Minnesota Press, Minneapolis, 1983).
21. Mukhopadhyay, S. Large-scale mode identification and data-driven sciences. Electron. J. Stat. 11, 215–240 (2017).
22. Efron, B. Empirical Bayes deconvolution estimates. Biom. 103, 1–20 (2016).
23. Martz, H. & Lian, M. Empirical Bayes estimation of the binomial parameter. Biom. 61, 517–523 (1974).
24. Efron, B. & Hastie, T. Computer Age Statistical Inference, vol. 5 (Cambridge University Press, 2016).
25. Liu, J. S. Nonparametric hierarchical Bayes via sequential imputations. The Annals Stat. 911–930 (1996).
26. Tarone, R. E. The use of historical control information in testing for a trend in proportions. Biom. 38, 215–220 (1982).
27. Dempster, A. P., Selwyn, M. R. & Weeks, B. J. Combining historical and randomized controls for assessing trends in proportions. J. Am. Stat. Assoc. 78, 221–227 (1983).
28. Cox, D. R. Comment: The 1988 Wald Memorial Lectures: The present position in Bayesian statistics. Stat. Sci. 5, 76–78 (1990).
29. Willie, S. & Berman, S. Ninth round intercomparison for trace metals in marine sediments and biological tissues. NRC/NOAA (1995).
30. Rukhin, A. L. & Vangel, M. G. Estimation of a common mean and weighted means statistics. J. Am. Stat. Assoc. 93, 303–308 (1998).
31. Possolo, A. Five examples of assessment and expression of measurement uncertainty. Appl. Stoch. Model. Bus. Ind. 29, 1–18 (2013).
32. Toman, B. & Possolo, A. Laboratory effects models for interlaboratory comparisons. Accreditation Qual. Assur. 14, 553–563 (2009).
33. Stein, C. Inadmissibility of the usual estimator for the mean of a multivariate normal distribution. Proc. Third Berkeley Symp. on Math. Stat. Probab. 1, 197–206 (1955).
34. Efron, B. & Morris, C.
Data analysis using Stein’s estimator and its generalizations. J. Am. Stat. Assoc. 70, 311–319 (1975).
35. Cox, D. & Efron, B. Statistical thinking for 21st century scientists. Sci. Adv. 3, e1700768 (2017).
36. Sivaganesan, S. & Berger, J. Robust Bayesian analysis of the binomial empirical Bayes problem. Can. J. Stat. 21, 107–119 (1993).
37. Fisher, R. A., Corbet, A. S. & Williams, C. B. The relation between the number of species and the number of individuals in a random sample of an animal population. The J. Animal Ecol. 42–58 (1943).
38. Maritz, J. Empirical Bayes estimation for the Poisson distribution. Biom. 56, 349–359 (1969).
39. Gu, J. & Koenker, R. On a problem of Robbins. Int. Stat. Rev. 84, 224–244 (2016).
40. Kiefer, J. & Wolfowitz, J. Consistency of the maximum likelihood estimator in the presence of infinitely many incidental parameters. The Annals Math. Stat. 887–906 (1956).

## Acknowledgements

We dedicate this work to Brad Efron’s 80th birthday anniversary, in appreciation of his preeminent role in the development of Empirical Bayes and in gratitude for many stimulating conversations.

## Author information

### Contributions

S.M. conceived the project. D.F. developed computational algorithms and created the R package BayesGOF. S.M. and D.F. participated in analyzing the data, building the models, thorough literature survey, and writing the main manuscript. All authors reviewed the manuscript.

## Ethics declarations

### Competing Interests

The authors declare no competing interests.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Mukhopadhyay, S., Fletcher, D. Generalized Empirical Bayes Modeling via Frequentist Goodness of Fit. Sci Rep 8, 9983 (2018).
https://doi.org/10.1038/s41598-018-28130-5
# $\zeta(2)$ via partial fractions

I was looking at old complex analysis exams, and there is one problem I can't figure out.

"Use the partial fraction expansion of $\frac{z}{e^z-1}$ to show $\sum_1^\infty 1/k^2=\frac{\pi^2}{6}$."

I recognize that as the generating function for the Bernoulli numbers, but I think the point of the problem is to solve it "from scratch", without that kind of knowledge. The function has simple poles at $2\pi i k$ for nonzero $k\in\mathbb{Z}$ (the singularity at $0$ is removable), with residue $2\pi i k$ at $2\pi i k$. Unfortunately, the obvious series, with terms of the form $\frac{2\pi i k}{z-2\pi i k}$, doesn't converge. Adding convergence terms (as in the proof of Mittag-Leffler's theorem) I get a series with terms of the form $\frac{z^2}{z^2-k^2}$, modulo some constants, but I don't see where to go from there, because it vanishes at 0. I think the point is that we are supposed to evaluate the partial fraction decomposition at 0, as the function is clearly 1 there. Thanks for the help.

As noted in the comments, there is an answer here that looks similar to what is intended: http://math.stackexchange.com/a/8373/1102 , but it seems much too involved for an exam setting, and is deliberately not rigorous. Would it be possible to modify it to be simpler and faster?

Edit: I managed to figure out a fairly slick solution that's much better than the accepted answer. I don't have time to write it up right now. If you read this and want to see it, ping me by posting a comment to this question.

- See Qiaochu's answer in the dupe: math.stackexchange.com/a/8373/1102 which answers this question, I believe. – Aryabhata Apr 17 '12 at 20:43
- Qiaochu's answer isn't rigorous (the series he manipulates doesn't converge), and it seems like quite an involved answer to come up with on an exam. – Potato Apr 17 '12 at 20:49
- I believe Qiaochu deliberately posted it "Euler style". As for too involved, depends on the course/professor etc etc.
If you have questions about that answer perhaps you can try commenting on that answer there, or editing your current question with a link to it, noting that it is not rigorous and asking how it can be made rigorous and simple. – Aryabhata Apr 17 '12 at 20:51
- I have edited the question to address your concerns. – Potato Apr 17 '12 at 21:02
- After the edit, I retract my close vote. – Aryabhata Apr 17 '12 at 21:04

## 1 Answer

Potato, this is just an idea, but note that $$\int\limits_0^\infty {\frac{x}{{{e^x} - 1}}dx} = \frac{{{\pi ^2}}}{6}$$ With a change of variables one has that $$\int\limits_0^\infty {\frac{x}{{{e^x} - 1}}dx} = - \int\limits_0^1 {\frac{{\log \left( {1 - x} \right)}}{x}dx}$$ Now use $$- \frac{{\log \left( {1 - x} \right)}}{x} = \sum\limits_{k = 1}^\infty {\frac{{{x^{k - 1}}}}{k}} \text{ ; } |x|<1$$ from which $$- \int\limits_0^x {\frac{{\log \left( {1 - t} \right)}}{t}dt} = \sum\limits_{k = 1}^\infty {\frac{{{x^k}}}{{{k^2}}}}$$ This means that $$\int\limits_0^\infty {\frac{x}{{{e^x} - 1}}dx} = - \int\limits_0^1 {\frac{{\log \left( {1 - t} \right)}}{t}dt} = \sum\limits_{k = 1}^\infty {\frac{1}{{{k^2}}}}$$ So you might want to use residues (which I don't know about) to calculate $$\int\limits_0^\infty {\frac{z}{{{e^z} - 1}}dz}$$ since the integrand has singularities at every $z_k=2\pi k i$. Another known approach is given here, starting at $(35)$: $$\frac{z}{2} + \frac{z}{{{e^z} - 1}} = \frac{z}{2}\coth \frac{z}{2}$$ from which $$\frac{z}{2}\coth \frac{z}{2} = \sum\limits_{n = 0}^\infty {{B_{2n}}\frac{{{z^{2n}}}}{{\left( {2n} \right)!}}}$$ and then $$z\coth z = \sum\limits_{n = 0}^\infty {\frac{{{2^{2n}}{B_{2n}}}}{{\left( {2n} \right)!}}{z^{2n}}}$$ Replacing $z$ by $iz$ gives $$z\cot z = \sum\limits_{n = 0}^\infty {{{\left( { - 1} \right)}^n}\frac{{{2^{2n}}{B_{2n}}}}{{\left( {2n} \right)!}}{z^{2n}}}$$ They go on with partial fraction expansions, but I remember seeing elsewhere this: $$\sin z = z\prod\limits_{n = 1}^\infty {\left( {1 - \frac{{{z^2}}}{{{n^2}{\pi ^2}}}} \right)}$$ then $$\log \sin z = \log z + \sum\limits_{n = 1}^\infty {\log \left( {1 - \frac{{{z^2}}}{{{n^2}{\pi ^2}}}} \right)}$$ Differentiating and multiplying by $z$ gives $$z\cot z = 1-2\sum\limits_{n = 1}^\infty {\dfrac{{ \dfrac{{{z^2}}}{{{n^2}{\pi ^2}}}}}{{1 - \dfrac{{{z^2}}}{{{n^2}{\pi ^2}}}}}}$$ but since $$\frac{{\frac{{{z^2}}}{{{n^2}{\pi ^2}}}}}{{1 - \frac{{{z^2}}}{{{n^2}{\pi ^2}}}}} = \sum\limits_{k = 1}^\infty {\frac{1}{{{n^{2k}}{\pi ^{2k}}}}} {z^{2k}}$$ we can write $$z\cot z = 1- 2\sum\limits_{n = 1}^\infty {\sum\limits_{k = 1}^\infty {\frac{1}{{{n^{2k}}}}} \frac{{{z^{2k}}}}{{{\pi ^{2k}}}}}$$ and then, changing the order of summation, $$z\cot z = 1 - 2\sum\limits_{k = 1}^\infty {\frac{{\zeta \left( {2k} \right)}}{{{\pi ^{2k}}}}{z^{2k}}}$$ Thus, since $$z\cot z = 1 + \sum\limits_{n = 1}^\infty {{{\left( { - 1} \right)}^n}\frac{{{2^{2n}}{B_{2n}}}}{{\left( {2n} \right)!}}{z^{2n}}}$$ comparing coefficients gives $${\left( { - 1} \right)^{n + 1}}\frac{{{2^{2n - 1}}{B_{2n}}}}{{\left( {2n} \right)!}}{\pi ^{2n}} = \zeta \left( {2n} \right)$$

- Thanks for your opinion. However I deleted it definitely due to a downvote. +1 for yours. – Américo Tavares Apr 17 '12 at 23:46
- @AméricoTavares Mine was downvoted too. I guess someone is not very happy with these contributions. I made it a CW because: 1. It doesn't fully answer the question, but is rather a very long comment-hint. 2. Anyone can add more info, correct an error, or provide a better hint. If someone is not happy with the contribution, comment and/or edit. Downvoting is rather pointless, IMO. – Pedro Tamaroff Apr 17 '12 at 23:50
- I've seen the downvote in yours. When I undeleted mine I made it a CW too, having seen yours was a CW. – Américo Tavares Apr 17 '12 at 23:54
## Real Analysis – Limits and Continuity

— 6. Limits and Continuity —

After introducing sequences and gaining some knowledge of their properties (I, II, III, and IV) we are ready to embark on the study of real analysis.

— 6.1. Preliminary Definitions —

Physics is expressed best and most powerfully in the language of mathematics, and a very useful mathematical concept for physics is the concept of a function. Generally speaking, a function is an association between the elements of two sets (it transforms an input signal from the first set into an output signal of the second set). The sequences we studied are a special case of functions: they take natural numbers (or a subset of them) as their input signals and map them to real numbers. Now, more formally, we introduce:

Definition 23 A function is a mapping from a set of real numbers to another set of real numbers

$\displaystyle f:D\subset \mathbb{R} \rightarrow \mathbb{R} \ \ \ \ \ (15)$

The set ${D}$ is called the domain of the function. The set of values taken by the output signals is called the range of the function. We represent the output signal by ${f(x)}$; thus the range can be written as ${\left\lbrace f(x):x \in D \right\rbrace = f\left[ D \right] }$.

Sometimes we may not be interested in how the function maps the whole of ${D}$ but just a particular subset of ${D}$. So it makes sense to introduce:

Definition 24 Given ${E \subset D}$, the set ${f\left[ E \right] = \left\lbrace f(x):x \in E \right\rbrace }$ is the image of ${E}$ under ${f}$.

As we did for sequences, we can also define what is a bounded above function, a bounded below function, a bounded function, and so on. As an example we’ll give:

Definition 25 ${f}$ is said to be bounded iff ${\exists \, \alpha > 0 : |f(x)| \leq \alpha \quad \forall x \in D }$

— 6.2. Introduction to Topology —

We will now introduce a few topological notions in order to shed some light on the study of limits and continuity.
Definition 26 Given ${E \subset \mathbb{R}}$ we’ll say that ${c \in \overline{\mathbb{R}}}$ is a limit point of ${E}$ if there exists a sequence ${x_n}$ of points in ${E \setminus \left\lbrace c \right\rbrace }$ such that ${\lim x_n = c}$.

The set of limit points of ${E}$ will be represented by ${E^\prime}$. The points of ${E}$ that aren’t limit points will be called isolated points.

Once again, so that we don’t let things get too abstract, let us give an example:

$\displaystyle E = \left] 0,1\right[ \cup \left\lbrace 2 \right\rbrace$

It is easy to see (and we won’t give a rigorous proof of this) that ${E^\prime= \left[ 0,1 \right] }$ and that ${2}$ is the only isolated point of ${E}$.

Definition 27 We’ll use the symbol ${\displaystyle \lim _{x \rightarrow c^+}}$ to denote approximation to ${c}$ by real numbers bigger than ${c}$. In an analogous way we can also define ${\displaystyle \lim _{x \rightarrow c^-}}$. Thus, we define ${\displaystyle \lim _{x \rightarrow c^+} f(x) = a}$ if to every sequence ${x_n \in D}$ such that ${x_n \rightarrow c^+}$ there corresponds a sequence ${f(x_n)}$ such that ${f(x_n) \rightarrow a}$.

Definition 28 The symbol ${D_{c^+}}$ will be used to denote ${D \cap \left] c, \infty \right[ }$ and the symbol ${D_{c^-}}$ will denote ${D \cap \left] - \infty , c \right[ }$

As an example let us calculate

$\displaystyle \lim _{x \rightarrow 0^+} \frac{1}{x}$

In this case it is ${D_{0^+} = \left] 0, \infty \right[ }$ and ${0 \in D^\prime_{0^+}}$, so the limit we intend to calculate indeed makes sense. If ${x_n}$ is a sequence of points in ${D_{0^+}}$ such that ${x_n \rightarrow 0^+}$ then it follows that ${\lim f(x_n)=\lim \dfrac{1}{x_n}=\dfrac{1}{0^+}=+\infty }$

Theorem 28 Given ${D \subset \mathbb{R}}$, ${f : D \rightarrow \mathbb{R}}$, ${c \in D^\prime}$, let us suppose that ${\displaystyle \lim_{x \rightarrow c} f(x) = a}$. Then, if ${c \in D^\prime_{c^+}}$ it is also ${\displaystyle \lim_{x \rightarrow c^+} f(x) = a }$.
If ${c \in D^\prime_{c^-}}$ it also holds that ${\displaystyle \lim_{x \rightarrow c^-} f(x) = a }$. Proof: Let ${x_n}$ be a sequence of points in ${D_{c^+}}$ such that ${x_n \rightarrow c}$. Since ${x_n}$ is a sequence of points in ${D \setminus \left\lbrace c \right\rbrace }$ (by our choice of ${x_n}$) and ${\displaystyle \lim_{x \rightarrow c} f(x) = a}$ (by the hypothesis of the theorem), it follows from the definition of limit that ${ \lim f(x_n)= a}$. But this is just ${\displaystyle \lim_{x \rightarrow c^+} f(x) = a}$ by definition. The case ${\displaystyle \lim_{x \rightarrow c^-}}$ is proven with the same kind of reasoning. $\Box$ As an application of theorem 28 let us examine the following limit $\displaystyle \lim_{x \rightarrow 0} \dfrac{1}{x}$ It is easy to see that this limit doesn’t exist. Letting ${f(x)=\dfrac{1}{x}}$, we have ${\displaystyle \lim_{x \rightarrow 0^+} f(x) = +\infty}$ and ${\displaystyle \lim_{x \rightarrow 0^-} f(x) = -\infty}$. Since the limit from the left is different from the limit from the right we can conclude that ${\displaystyle\lim_{x \rightarrow 0}\dfrac{1}{x}}$ doesn’t exist. Definition 29 ${ +\infty }$ is a limit point of ${E}$ if ${E}$ isn’t bounded above in ${ \mathbb{R} }$. ${ -\infty }$ is a limit point of ${E}$ if ${E}$ isn’t bounded below in ${\mathbb{R}}$. If you’re having trouble understanding these definitions, just think that if ${E}$ isn’t bounded above then ${ \exists x_n \in E: \quad \lim x_n = +\infty }$, which is just the definition of limit point. Definition 30 ${c}$ is said to be a limit point of ${E}$ if $\displaystyle \forall \delta > 0 \; V(c,\delta) \cap E \setminus \left\lbrace c \right\rbrace \neq \emptyset$ Definition 31 Let ${D \subset \mathbb{R} }$, ${f : D \rightarrow \mathbb{R}}$, ${c \in D^\prime}$ and ${ a \in \mathbb{R} }$. ${f}$ has limit ${a}$ at the point ${c}$ if for all sequences ${x_n \in D \setminus \left\lbrace c \right\rbrace }$ such that ${\lim x_n = c}$ it follows that ${\lim f(x_n) = a}$.
We’ll only define the limit of a function at limit points of its domain. Notice that in this way we can also define the limit at points that don’t belong to the domain of the function. As always, a few examples will be provided in order for us to test our knowledge. • Calculate ${\displaystyle \lim_{x \rightarrow + \infty} \dfrac{1}{x} }$. ${ D = \mathbb{R} \setminus \left\lbrace 0 \right\rbrace }$ and ${ + \infty \in D^\prime }$ since ${D}$ isn’t bounded above in ${ \mathbb{R} }$. Thus the limit we set ourselves to calculate makes sense in our theory of limits. Let ${x_n}$ be a sequence of points in ${D}$ such that ${ x_n \rightarrow + \infty }$ and let ${f(x)=\dfrac{1}{x}}$; then ${f(x_n)=\dfrac{1}{x_n}}$ and it always is the case that ${\lim f(x_n)=0}$. • Calculate ${\displaystyle \lim_{x \rightarrow + \infty} \sin x }$ Choosing ${f(x)= \sin x}$ we see that the domain is ${D = \mathbb{R}}$. Thus ${+\infty \in D^\prime}$. Let us choose ${x_n = n \pi}$. Then ${x_n \rightarrow +\infty }$ and ${f(x_n)=\sin x_n = 0}$. In this case it trivially is ${\lim f(x_n)=0}$. Now if we choose ${y_n=\pi/2 + 2n\pi}$ it also is the case that ${y_n \rightarrow + \infty}$, but ${f(y_n)= \sin (\pi/2+2n\pi)=1}$ and so ${\lim f(y_n)=1}$. Thus we were able to find ${x_n}$, ${y_n}$ such that ${\lim x_n = \lim y_n = + \infty}$ but ${\lim f(x_n) \neq \lim f(y_n)}$. Thus ${\displaystyle \lim_{x \rightarrow +\infty} \sin x }$ doesn’t exist. In order for us to proceed deeper into the study of limits and continuity we’ll introduce the notion of one-sided limits. We’ll use the symbol ${\displaystyle \lim_{x \rightarrow c^+}}$ to denote approximation to ${c}$ by real numbers that are bigger than ${c}$. In an analogous way we can also define ${\displaystyle \lim_{x \rightarrow c^-}}$ to denote the approximation to ${c}$ by real numbers that are smaller than ${c}$.
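The ${\sin x}$ example can also be checked numerically. The short sketch below (the sequence names mirror the ${x_n}$ and ${y_n}$ above) evaluates ${f}$ along the two sequences and confirms they approach different values, so the limit at ${+\infty}$ cannot exist:

```python
import math

def f(x):
    return math.sin(x)

# Two sequences that both tend to +infinity...
n_values = range(1, 6)
along_x = [f(n * math.pi) for n in n_values]                     # x_n = n*pi
along_y = [f(math.pi / 2 + 2 * n * math.pi) for n in n_values]   # y_n = pi/2 + 2n*pi

# ...but f(x_n) -> 0 while f(y_n) -> 1.
print(all(abs(v) < 1e-9 for v in along_x))       # True
print(all(abs(v - 1) < 1e-9 for v in along_y))   # True
```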
Formalizing the previous notions: Definition 32 We’ll say that ${\displaystyle \lim_{x \rightarrow c^+} f(x)=a}$ if to every sequence ${x_n \in D}$ such that ${ x_n \rightarrow c^+}$ there corresponds a sequence ${f(x_n)}$ such that ${f(x_n) \rightarrow a}$. The symbol ${D_{c^+}}$ will be used to denote ${D \cap \left] c, +\infty \right[ }$ and the symbol ${D_{c^-}}$ will denote ${D \cap \left] -\infty, c \right[ }$. The definitions of ${\displaystyle \lim_{x \rightarrow c^-} f(x)=a}$ and ${D_{c^-}}$ are made analogously. ### 3 Responses to “Real Analysis – Limits and Continuity” 1. thaian2784 Says: How do you type the LaTeX theorem environments as above? Please tell me! Thanks! 2. I just sent you an email with the modification I made to the latex2wp script. Let me know if you found it useful. 3. Thanks for letting me know about latex2wp. But the formula in my post shows as text like $e^2 = .....$ The WP LaTeX plugin is installed, but it gives “formula does not parse”. How do you make it work?
# How do you find the magnitude and direction angle of the vector v=3(cos60i+sin60j)? Oct 26, 2017 $3$ ${60}^{o}$ or $\frac{\pi}{3}$ radians #### Explanation: To find the magnitude we use a similar idea as used to find the distance between two points. For a vector $\underline{c} = a i + b j$ with components $a$ and $b$, the magnitude is $|c| = \sqrt{a^2 + b^2}$ Here: $3 \cos \left(60\right) = \frac{3}{2}$ $3 \sin \left(60\right) = \frac{3 \sqrt{3}}{2}$ $| v | = \sqrt{{\left(\frac{3}{2}\right)}^{2} + {\left(\frac{3 \sqrt{3}}{2}\right)}^{2}} = \sqrt{\frac{9}{4} + \frac{27}{4}} = \sqrt{9} = 3$ (the negative root is not applicable, since a magnitude is nonnegative) So: $| v | = 3$ The direction is the angle formed between the vector and the x axis. So we find the tangent ratio, which is: $\tan \left(\theta\right) = \frac{3 \sin \left(60\right)}{3 \cos \left(60\right)} = \frac{\sin \left(60\right)}{\cos \left(60\right)} = \frac{\frac{\sqrt{3}}{2}}{\frac{1}{2}} = \sqrt{3}$ $\arctan \left(\sqrt{3}\right) = {60}^{o}$ or $\frac{\pi}{3}$ radians
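As a sanity check, the same computation can be done numerically. This is a minimal sketch (the helper names `magnitude` and `direction_deg` are ours, not from the answer); `atan2` is used instead of a bare arctangent so the angle is correct in every quadrant:

```python
import math

def magnitude(a, b):
    # |v| = sqrt(a^2 + b^2)
    return math.hypot(a, b)

def direction_deg(a, b):
    # atan2 handles all quadrants, unlike arctan of b/a alone
    return math.degrees(math.atan2(b, a))

# v = 3(cos 60 i + sin 60 j)
a = 3 * math.cos(math.radians(60))
b = 3 * math.sin(math.radians(60))

print(round(magnitude(a, b), 6))      # 3.0
print(round(direction_deg(a, b), 6))  # 60.0
```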
# zbMATH — the first resource for mathematics Nonnegative Ricci curvature and the Brownian coupling property. (English) Zbl 0584.58045 This paper shows that if $$M$$ is a complete Riemannian manifold with Ricci curvatures all nonnegative then $$M$$ has the Brownian coupling property. From this one may immediately draw deductions concerning the nonexistence of certain harmonic maps. ##### MSC: 58J65 Diffusion processes and stochastic analysis on manifolds 32H25 Picard-type theorems and generalizations for several complex variables 53C21 Methods of global Riemannian geometry, including PDE methods; curvature restrictions 60J65 Brownian motion Full Text: ##### References: [1] Elworthy K. D., 70, in: Stochastic Differential Equations on Manifolds (1982) · Zbl 0514.58001 [2] Goldberg S. I., J. Diff. Geom 10 pp 619– (1975) [3] Kendall W. S., Stochastic differential geometry, a coupling property, and harmonic maps · Zbl 0573.58029 [4] Kendall W. S., The radial part of Brownian motion on a manifold; semimartingale properties · Zbl 0647.60086 [5] Kendall W. S., Stochastic differential geometry:An introduction [6] DOI: 10.1214/aop/1176992442 · Zbl 0593.60076 [7] Lyons T., J.Diff. Geom 19 pp 299– (1984) [8] Yau S. T., J. Math. pures et appl 57 pp 191– (1978) This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
## Cryptology ePrint Archive: Report 2007/448 Generalized Correlation and Higher Order Nonlinearity for Probabilistic Algebraic Attacks Author: Sergiy Pometun Abstract: Algebraic attacks are a relatively new and interesting subject in cryptanalysis. Algebraic attacks were introduced in [1], where several possible attack scenarios were given. Much attention has been paid to the deterministic scenarios among these. In this paper, probabilistic scenarios are studied. The concepts of conditional correlation and partial higher order nonlinearity of a Boolean function are introduced (briefly, the definition of conditional correlation: $C(g,f|f = a): = \Pr (g = f|f = a) - \Pr (g \ne f|f = a)$ ). It is shown that both types of scenarios can be seen as one unified attack - a higher order correlation attack which uses conditional correlation. Clear criteria for the vulnerability of a Boolean function to both types of scenarios are given. Accordingly, the notion of algebraic immunity is extended. There are functions that are very vulnerable to the probabilistic scenario. Calculations show that if a function with a very low partial higher order nonlinearity were used in a cipher like SFINKS [8], a simple attack would require only about $2^{42}$ operations and $32Kb$ of keystream. The question of the relation between partial higher order nonlinearity and algebraic immunity remains open. Category / Keywords: foundations / cipher, algebraic attack, Boolean function, algebraic immunity, conditional correlation, partial higher order nonlinearity. Publication Info: Not published before Date: received 30 Nov 2007, last revised 30 Nov 2007 Contact author: pomu at mail ru Available format(s): PDF | BibTeX Citation Short URL: ia.cr/2007/448 [ Cryptology ePrint archive ]
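To make the definition of conditional correlation concrete, here is a small illustrative script (a toy example of ours, not taken from the paper) that evaluates $C(g,f|f=a)$ for two 3-variable Boolean functions by exhaustive enumeration over uniform inputs:

```python
from itertools import product

def conditional_correlation(g, f, a, n):
    """C(g,f|f=a) = Pr(g=f | f=a) - Pr(g!=f | f=a), inputs uniform over {0,1}^n."""
    inputs = [x for x in product((0, 1), repeat=n) if f(*x) == a]
    agree = sum(1 for x in inputs if g(*x) == f(*x))
    return (2 * agree - len(inputs)) / len(inputs)

f = lambda x, y, z: x ^ (y & z)   # toy "cipher" function
g = lambda x, y, z: x             # low-degree approximation of f

# g equals f whenever y&z = 0, i.e. on 6 of the 8 inputs.
print(conditional_correlation(f, f, 1, 3))  # 1.0 : f always agrees with itself
print(conditional_correlation(g, f, 0, 3))  # 0.5 : conditional bias given f = 0
```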
# Is this how stars’ right ascensions correlate to planets’ longitudes on a 2d map? This is my first ever online question, so please bear with me: I’m working on an A1-sized map of at least 100 of the largest solar system bodies (henceforth just called ‘planets’, ignoring the IAU definition), including moons, spacecraft and the nearest stars. This is just a hobby, and I’ve got no academic experience. I’ve got the planets’ heliocentric longitudes, latitudes and distances from JPL Horizons, and the stars’ right ascensions, declinations and distances from Wikipedia (unless there are better sources?). I’d like to check that I’m plotting hours of right ascension correctly onto my map, which uses degrees longitude for direction. I’ve attached a rough sketch, depicted looking ‘down’ from above the ecliptic. I always put 0° longitude on the left so that Pluto is towards the top. Are these assumptions correct? — that 0h right ascension is the same direction as 0° longitude? — that hours of right ascension increase anti-clockwise in the same direction as degrees longitude? — that 6h right ascension is the same direction as 90° longitude, 12h = 180° and 18h = 270°? — if these are correct, are my star directions correct, so that Barnard’s Star would be at the top of the page in the vague direction of Pluto, and Alpha Centauri is roughly the same direction from the Sun as Earth is in June? — the planets’ locations are heliocentric, but presumably star coordinates are geocentric; as we’re so close to our Sun in astronomical terms, would it make much difference, or is there a source of heliocentric coordinates for stars? — I’ve assumed that stars’ degrees declination are the same direction as planets’ degrees latitude, so that if a star’s declination is +45°, is that the same latitudinal direction as a planet which is 45° above the ecliptic?
I’m plotting each planet’s location over the period of my lifetime, to get a better understanding of what’s around us, and how it changes over time. The distance scale will be quasi-logarithmic, and will be enlarged around planets to show some moons. The final map will therefore be a cross between the map on the left (Reddit/dawierha), showing planets with moons and trojans, and the map on the right (Wikipedia/Voyager 1), showing planets and spacecraft over a period of decades. I’ve never found a map which shows more than a few dozen planets with logarithmic scales, and/or which shows planets with moons, and/or which shows locations over time. Sorry this question is so long, but I’ve edited it down as much as I can! Thank you. PS: One more excellent map that I found is this one from https://eleanorlutz.com/mapping-18000-asteroids — I think it's very beautiful, probably the best that I've found so far: • Welcome to astronomy SE, and thank you for the nice question! Jun 2, 2021 at 12:35 • I suspect you have some confusion between ecliptic and equatorial coordinate systems. See en.wikipedia.org/wiki/Celestial_coordinate_system Jun 2, 2021 at 12:54 • @PM thanks for the feedback; I believe the main difference is that equatorial coordinates are geocentric, but ecliptic are heliocentric. I hope that as our nearest stars are 260,000 – 500,000 AU away, using geocentric coordinates to find the direction from the Sun shouldn’t make much difference for a map only for my personal use (but obviously no good for navigation!). I’ve found this source of galactic coordinates (which I think have a different 0° point), but I haven’t found a source of ecliptic coordinates: icc.dur.ac.uk/~tt/Lectures/Galaxies/LocalGroup/Back/50lys.html Jun 2, 2021 at 17:09 • Traditionally, both systems were geocentric, but you can also have heliocentric versions of either system.
The main difference is that the horizontal plane of the equatorial system is the celestial equator, but the horizontal plane of the ecliptic system is the ecliptic plane, i.e., the orbital plane of the Earth. Jun 2, 2021 at 17:49 • @PM 2Ring thank you; so presumably that difference would only create discrepancies in latitude/declination. As my map is mostly 2D, that's acceptable. Jun 2, 2021 at 18:52 Most of your assumptions are correct. However, right ascension and declination are an equatorial system, while (heliocentric or geocentric) longitude and latitude are an ecliptic system. The reference planes do not coincide, even though the main reference point, the vernal equinox, does—this is because that point is where the ecliptic crosses the equator from south to north, so both planes intersect there and at the autumnal equinox. This means too that stars’ declinations do not coincide with latitudes. A conversion is needed between the two: $$\displaystyle \tan \lambda = \frac { \sin \alpha \cos \epsilon + \tan \delta \sin \epsilon } { \cos \alpha }$$ and $$\displaystyle \sin \beta = \sin \delta \cos \epsilon - \cos \delta \sin \epsilon \sin \alpha$$ and in the reverse, $$\displaystyle \tan \alpha = \frac { \sin \lambda \cos \epsilon - \tan \beta \sin \epsilon } { \cos \lambda }$$ and $$\displaystyle \sin \delta = \sin \beta \cos \epsilon + \cos \beta \sin \epsilon \sin \lambda$$ where ε is the obliquity of Earth on its orbit (approx. 23° 27′), α and δ are the right ascension and declination, and λ and β are the longitude and latitude.
Your assumption is correct that stars are so distant that it doesn’t matter whether their position is measured with respect to the Sun or with respect to the Earth (although technically, it’s with respect to the Sun, as we average out positions measured throughout the year, which are affected by aberration and—barely—by parallax), so these formulas are OK for the stars; but you would need to convert heliocentric λ and β to geocentric λ and β should you need to convert planetary positions. • This is really helpful, merci beaucoup. Jun 3, 2021 at 17:50
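The forward conversion above can be sketched in a few lines of code. This is a hedged sketch (the function name is ours, and a fixed J2000-era mean obliquity is assumed); `atan2` is used instead of the bare $\tan\lambda$ formula to avoid quadrant ambiguity:

```python
import math

OBLIQUITY = math.radians(23.4393)  # epsilon, approximate J2000 mean obliquity

def equatorial_to_ecliptic(ra_deg, dec_deg, eps=OBLIQUITY):
    """Convert right ascension/declination (degrees) to ecliptic
    longitude/latitude (degrees), per the formulas above."""
    a, d = math.radians(ra_deg), math.radians(dec_deg)
    lam = math.atan2(math.sin(a) * math.cos(eps) + math.tan(d) * math.sin(eps),
                     math.cos(a))
    beta = math.asin(math.sin(d) * math.cos(eps)
                     - math.cos(d) * math.sin(eps) * math.sin(a))
    return math.degrees(lam) % 360.0, math.degrees(beta)

# A point on the ecliptic at RA = 6h (90 deg), dec = +epsilon should map to
# longitude 90 deg, latitude 0 (up to floating-point rounding).
lam, beta = equatorial_to_ecliptic(90.0, math.degrees(OBLIQUITY))
print(round(lam, 6), round(beta, 6))  # expect approximately 90.0 and 0.0
```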
# Function (Redirected from Functions) x² − 5x + 6 = 0 x = ? This article/section deals with mathematical concepts appropriate for a student in mid to late high school. A function "f" is a fixed method for calculating a unique output "f(x)" for every given input "x". For example, the polynomial function f(x) = x² takes numbers as inputs and outputs their squares. So in this case, if 1 is input, 1 is output, whilst if 2 is input, 4 is output. A function is typically written with a formula for the output, in terms of the input. For example, in the case of f above, the formula is f(x) = x². When we want to input a particular value, we insert it in the place of x. So f(3) = 3² = 3 * 3 = 9. ## Examples If the inputs and outputs are numbers, then the most obvious examples are the polynomials. f(x) = x + 1 is a simple example, a function that adds one to the input. f(1) = 1 + 1 = 2, f(2) = 2 + 1 = 3, and so on. Also, g(x) = 2x is a function that doubles the input. g(3) = 6, g(4) = 8. Another example is h(x) = 2x + 1. This doubles the input, and adds 1. The result is that h(1) = 2*1 + 1 = 3, h(2) = 2*2 + 1 = 5. i(x) = x is a function that does nothing to x. So i(1) = 1, i(2) = 2, and so on. This is called the identity function. Some functions do not have numbers as outputs. For example, we can define a function "pres" that takes a number n, and returns the name of the nth American President. So, pres(1) = George Washington (the first president), pres(2) = John Adams (the second president), and so on. Similarly, not all functions take numbers as inputs. For example, we could define a function "presnumber" that takes the name of an American President (the nth in sequence), and outputs n. Thus presnumber(George Washington) = 1, presnumber(John Adams) = 2. This would be the inverse of pres defined above. (However, due to a historical curiosity, this function is not properly defined; see below.)
## Some Properties and Terminology The set of possible inputs is called the domain of the function. It is usually a set of numbers, such as the set of all real numbers. The set of all possible outputs is called the range. It, too, is often the set of real numbers. A function might not produce every item in the range as an output. The set of items in the range that are actually produced is called the image. For example, the function f(x) = x² has the real numbers as its domain and range, but its image is just the non-negative numbers. A function must be "single-valued", that is, have a rule that uniquely gives its result for any input. The "presnumber" function that we attempted to define above fails, because presnumber(Grover Cleveland) is not properly defined. If a function can give as output every item in the range, it is said to be onto or surjective. That is, a function is onto if its image is the same as its range. If a function has only one input that yields any given output, it is said to be one-to-one or injective. A function that is both surjective and injective is said to be a one-to-one correspondence, or bijective. For such a function, there is a unique correspondence between every point in the domain and every point in the range. The function f(x) = x² fails to be onto because there is no input that gives a result of, say, −3. It fails to be one-to-one because f(3) and f(−3) are the same number. The fact that a function is a one-to-one correspondence does not mean that its domain and image are the same. There is, for example, a one-to-one correspondence between the set of all reals and the interval (−π/2, π/2); the arctangent function provides one. ## Composition We can take 2 functions, and combine them to form another function, by applying them one after another. This is called composition. The notation for this is a small circle written between the two function names. Beware!!
The function to the right of the circle is evaluated first, followed by the function to the left. For example, take f(x) = x + 1, and g(x) = 2x. We can apply g to x, and then f to the output g(x), to get f $\circ$ g, which is the function we get by combining the functions f and g in this way. We can think of this as like a factory production line, where the output of one process or machine becomes the input of the next. Here, the output of g becomes the input of the function f. In our example, (f $\circ$ g)(x) = f(g(x)) = f(2x) = 2x + 1. So the combined function f $\circ$ g is in fact h, which we met earlier. To test this, we can take (f $\circ$ g) applied to 8. g(8) = 2*8 = 16. Then we apply f to 16, to get f(16) = 16 + 1 = 17. 17 = 2*8 + 1, as expected. We can also apply the functions in the opposite order to get g $\circ$ f, where we apply f first then g. f(x) = x + 1, so g(f(x)) = 2(x + 1) = 2x + 2. Hence (g $\circ$ f)(x) = 2x + 2. So, the order in which we combine the functions makes a difference to the final result. Again, we test this formula by applying the function to 8. f(8) = 9, and g(9) = 18. 18 = 2*8 + 2, as expected. ## Non-numeric functions There are functions that take objects other than numbers as inputs. For example, rotation can be thought of as a function that takes a shape as an input, and outputs that shape rotated by a certain angle.
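The composition examples above translate directly into code. A small sketch (the helper name `compose` is ours):

```python
def compose(f, g):
    """Return f o g, i.e. the function x -> f(g(x)): g runs first, then f."""
    return lambda x: f(g(x))

f = lambda x: x + 1   # add one
g = lambda x: 2 * x   # double

h = compose(f, g)     # (f o g)(x) = 2x + 1
k = compose(g, f)     # (g o f)(x) = 2x + 2 : order matters

print(h(8))  # 17
print(k(8))  # 18
```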
# Synopsis: Magnetic joystick A two-dimensional trap takes advantage of the magnetic domain walls in a narrow wire to guide the thermal motion of magnetic particles. Magnetic particles can be guided with external fields through small-scale fluidic environments, bringing with them a biological molecule hitching a ride. A paper appearing in Physical Review Letters presents a two-dimensional magnetic trap that uses this type of magnetic remote control to guide the thermal motion of submicron magnetic beads. Following a magnetic trap design from their earlier work, Aaron Chen at The Ohio State University in Columbus and his colleagues deposit a $2$-micron-wide magnetic wire in the shape of a zigzag on a silicon surface. Chen et al. apply a one-time, large, in-plane magnetic field of $1000$ oersted to polarize the legs of the zigzag shape, resulting in a sequence of head-to-head and tail-to-tail magnetic domain walls which meet at the kinks in the wire. Embedding the trap in a solution of magnetic beads, the team coaxes the beads to the large magnetic trapping gradients near the kinks using fairly weak (less than 100 oersted) external magnetic fields. The key control parameter is the strength of the external field perpendicular to the trap. This setup allows exploration between two types of particle motion: one where the beads are tightly confined near a wire kink and another where the motion, driven by thermal fluctuations, spreads out around the kink. A magnetic trap such as this has the additional benefit that it does not rely on strong fields to move the particles or generate heat, both of which could perturb the environment studied. – Jessica Thomas
# Bus Numbers Find numbers from $10 - 999$ such that the number can be given in the form $x^4 + y^4$ in $n$ distinct ways where $x, y$ are all positive integers. Make a list. These are the bus numbers (Reason $1$, those numbers are all bus routes. Reason $2$, I love buses.) All integers are positive. Note by Yajat Shamji 11 months, 2 weeks ago
Sort by: Do you mean sum of digits of the number? - 11 months, 2 weeks ago @Yajat Shamji, since the freedom is there for $n$ to be positive and negative, any number can be achieved, right? Have I misunderstood the question, because it looks trivial then - 11 months, 2 weeks ago I guess maybe $n$ is a constant, so we have to decide considering it to be constant. - 11 months, 2 weeks ago I agree with @Mahdi Raza. We can choose $n$, so any number, not even till $999$ can be made. - 11 months, 2 weeks ago @Mahdi Raza, @Vinayak Srivastava, @Aryan Sanghi - I have changed the conditions - check again. If nobody gives the full list before Friday, I'll post the first number. - 11 months, 2 weeks ago @Mahdi Raza, you know the definition of the Hardy-Ramanujan taxicab numbers, right? All I did is change the exponent from $3$ to $4$ and named it bus numbers. - 11 months, 2 weeks ago Clue $1$ of $2$: Here is the definition of the Hardy-Ramanujan taxicab number: In mathematics, the $n$th taxicab number, typically denoted Ta($n$) or Taxicab($n$), also called the $n$th Hardy–Ramanujan number, is defined as the smallest integer that can be expressed as a sum of two positive integer cubes in $n$ distinct ways. The most famous taxicab number is $1729 = \text{Ta}(2) = 1^3 + 12^3 = 9^3 + 10^3$.
- 11 months, 2 weeks ago Here it is # Code 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 b = [] for i in range(0, 10000000): b.append(0) for x in range(1, 32): for y in range(1, 32): b[x * x * x * x + y * y * y * y] += 1 for i in range(1, 26): print("For n = %s:" % (i), end = ' ') for j in range(1, 1000000): if b[j] == i: print(j, end = ' ') print() # Output 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 For n = 1: 2 32 162 512 1250 2592 4802 8192 13122 20000 29282 41472 57122 76832 101250 131072 167042 209952 260642 320000 388962 468512 559682 663552 781250 913952 For n = 2: 17 82 97 257 272 337 626 641 706 881 1297 1312 1377 1552 1921 2402 2417 2482 2657 3026 3697 4097 4112 4177 4352 4721 5392 6497 6562 6577 6642 6817 7186 7857 8962 10001 10016 10081 10256 10625 10657 11296 12401 14096 14642 14657 14722 14897 15266 15937 16561 17042 18737 20737 20752 20817 20992 21202 21361 22032 23137 24641 24832 27297 28562 28577 28642 28817 29186 29857 30736 30962 32657 35122 35377 38417 38432 38497 38561 38672 39041 39712 40817 42512 43202 44977 48416 49297 50626 50641 50706 50881 51250 51921 53026 53057 54721 57186 59152 60625 65266 65537 65552 65617 65792 66161 66832 66977 67937 69632 71361 72097 75536 79186 80177 83522 83537 83602 83777 84146 84817 85922 86272 87617 89041 90082 93521 94097 98162 103952 104257 104977 104992 105057 105232 105601 106272 107377 109072 111537 112082 114976 116161 119617 121937 125712 130322 130337 130402 130577 130946 131617 132722 133537 134146 134417 136882 140321 143392 144962 149057 151057 155601 158882 160001 160016 160081 160256 160625 161296 162401 164096 166561 168737 170000 170512 174641 180736 180946 188497 188561 194482 194497 194562 194737 195106 195777 195857 196882 198416 198577 201042 204481 209122 210625 213842 215217 223042 225536 232897 234257 234272 234337 234512 234881 235297 235552 236657 238352 240817 243521 244256 245106 248897 254992 260017 262817 264976 272672 278002 279842 279857 279922 280097 280466 281137 
282242 283937 284881 286402 289841 290321 294482 299457 299792 300577 308402 317777 318257 324802 330466 331777 331792 331857 332032 332401 333072 334177 335872 338337 339232 341776 345377 346417 352512 354481 360337 363362 364577 370192 382401 384817 390626 390641 390706 390881 391250 391921 393026 394256 394721 397186 397312 400625 405266 410162 411361 415297 419186 428737 429041 436752 439841 441250 456161 456977 456992 457057 457232 457601 458272 459377 461072 462097 463537 466976 471617 474146 474322 477712 485537 491776 495392 495601 507601 514097 520946 522512 526257 531442 531457 531522 531697 532066 532737 533842 535537 538002 540497 541441 546082 550625 552177 560002 561952 566032 569857 582066 585106 587297 596977 611617 614657 614672 614737 614912 614962 615281 615952 616976 617057 618752 621217 624656 624881 629297 635392 636417 643217 651457 653072 661762 665281 670466 680192 691232 691441 698177 707282 707297 707362 707537 707906 708577 709682 711377 713842 717281 719632 721922 722401 725922 728017 735842 736817 744977 745697 757906 765697 772817 774656 788752 790802 809137 810001 810016 810081 810256 810625 811282 811296 812257 812401 814096 816561 820000 824641 830736 837602 838561 847601 848416 848912 860625 863217 867281 875536 893521 894497 901762 914976 922066 923522 923537 923602 923777 924146 924817 925922 927617 930082 933521 938162 940321 941537 944257 946432 952082 961937 970000 974146 987122 988417 989057 For n = 3: For n = 4: For n = 5: For n = 6: For n = 7: For n = 8: For n = 9: For n = 10: For n = 11: For n = 12: For n = 13: For n = 14: For n = 15: For n = 16: For n = 17: For n = 18: For n = 19: For n = 20: For n = 21: For n = 22: For n = 23: For n = 24: For n = 25: - 11 months, 2 weeks ago - 11 months, 2 weeks ago I request you to show proof (i.e. show all the $n$ distinct ways for their respective number where $a = x^4 + y^4$.) 
since nobody else has replied, and I need it to strengthen your list because, after all, I need proof before announcing that you have found the list. (as well as keeping you busy - that's the main reason overall.) Also, a challenge: try and do the numbers from $10 - 9999$ after giving the proof for your list above. And if you've finished, show proof for that list. Then, I will ask you to find the limit for the $n$ distinct ways where $a = x^4 + y^4$. After that, you're free. P.S. If you want, after the limit question, I've got a super-challenge: find the ratio of the numbers that satisfy the condition $a = x^4 + y^4$ in $n$ distinct ways to the number ($n$) of distinct ways. P.S.S. $a$ is the number which fulfills the condition $x^4 + y^4$ in $n$ distinct ways. P.S.S.S. You don't have to do anything after finding the list for $10 - 9999$ and giving the proof for that list. It's just that I want to keep you busy. If you don't want to do anything after finding the list for $10 - 9999$ and giving the proof for that list, then ask. - 11 months, 2 weeks ago See now @Yajat Shamji, it's the list till $10^6$. I have a few points • Brute force checks all combinations, so if it says it's a number, it is a number • You can't find the ratio as no number will go above $n = 2$ • I am not free as I have lots of studies for my JEE, so please stop saying I am free. If anything was offensive, I am sorry for it. - 11 months, 2 weeks ago What about the proof for the list? - 11 months, 2 weeks ago I mean, I still need it. - 11 months, 2 weeks ago Because once you've given the proof, I can tell Mahdi and Vinayak. - 11 months, 2 weeks ago One of $x$ and $y$ must be less than $\sqrt[4] {1000}$, i.e., must be in the range $[2,5]$, while the other is in the range $[1,4]$. If by $n$ distinct ways we mean the pairs $(x_i,y_i)$ and $(y_i,x_i)$ are not distinct (as is the case with taxicab numbers), then there is no solution in that range.
In fact, the smallest such number is a nine-digit number, $635318657$, which can be expressed as the sum of the fourth powers of two integers in two different ways: $635318657=59^4+158^4=133^4+134^4$. - 11 months, 1 week ago
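The claim about $635318657$ can be checked by brute force. A short sketch of our own (runtime is modest, since only bases up to $\lfloor 635318657^{1/4} \rfloor = 158$ matter):

```python
def fourth_power_representations(n):
    """All pairs x <= y of positive integers with x^4 + y^4 == n."""
    limit = round(n ** 0.25) + 1
    return [(x, y)
            for x in range(1, limit + 1)
            for y in range(x, limit + 1)
            if x ** 4 + y ** 4 == n]

# The smallest number expressible as a sum of two fourth powers
# in two genuinely distinct ways (Euler's example):
print(fourth_power_representations(635318657))
# [(59, 158), (133, 134)]
```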
# Computation Effort of Algorithms Consider the strictly convex unconstrained optimization problem $\mathcal{O} := \min_{x \in \mathbb{R}^n} f(x).$ Let $x_\text{opt}$ denote its unique minimizer and $x_0$ be a given initial approximation to $x_\text{opt}.$ We will call a vector $x$ an $\epsilon$-close solution of $\mathcal{O}$ if $$\frac{||x - x_{\text{opt}}||_2}{||x_0 - x_\text{opt}||_2} \leq \epsilon.$$ Suppose that there exist two iterative algorithms $\mathcal{A}_1$ and $\mathcal{A}_2$ to find an $\epsilon$-close solution of $\mathcal{O}$ with the following properties: 1. For any $\epsilon > 0,$ the total computational effort, i.e. effort required per iteration $\times$ total number of iterations, to find an $\epsilon$-close solution is the same for both algorithms. 2. The per iteration effort for $\mathcal{A}_1$ is $O(n),$ say, while that of $\mathcal{A}_2$ is $O(n^2).$ Are there situations where one would prefer one algorithm over the other? Why?
### DISCLAIMER: Very rough notes from class, with some additional side notes. These are notes for the UofT course PHY2403H, Quantum Field Theory I, taught by Prof. Erich Poppitz fall 2018. ## Relativistic normalization. We will continue looking at the unitary operator $$\hatU(\Lambda)$$ implementing Lorentz transformations, which has the property \label{eqn:qftLecture11:40} \hatU(\Lambda) \ket{0} = \ket{0}, That is \label{eqn:qftLecture11:760} \hatU(\Lambda) = \mathbf{1} + \text{operators that annihilate the vacuum state}. The action on a field was \label{eqn:qftLecture11:60} \hatU(\Lambda) \phihat(x) \hatU^\dagger(\Lambda) = \phihat(\Lambda x), and the action on the annihilation operator was \label{eqn:qftLecture11:300} \hatU(\Lambda) \sqrt{ 2 \omega_\Bp } \hat{a}_\Bp \hatU^\dagger(\Lambda) = \sqrt{ 2 \omega_{\Lambda \Bp} } \hat{a}_{\Lambda \Bp}. If $$\ket{\Bp_1}$$ is the one-particle state with momentum $$\Bp_1$$, then that momentum state can be generated from the ground state with the following normalized creation operation \label{eqn:qftLecture11:780} \ket{\Bp_1} = \sqrt{ 2 \omega_{\Bp_1} } \hat{a}_{\Bp_1}^\dagger \ket{0}. We can compute the matrix element between two momentum states using the creation operator representation \label{eqn:qftLecture11:80} \begin{aligned} \braket{\Bp_2}{\Bp_1} &= \sqrt{ 2 \omega_{\Bp_1} } \sqrt{ 2 \omega_{\Bp_2} } \bra{0} \hat{a}_{\Bp_2} \hat{a}_{\Bp_1}^\dagger \ket{0} \\ &= \sqrt{ 2 \omega_{\Bp_1} } \sqrt{ 2 \omega_{\Bp_2} } \bra{0} \lr{ \hat{a}_{\Bp_1}^\dagger \hat{a}_{\Bp_2} + (2 \pi)^3 \delta^3(\Bp_1 - \Bp_2) } \ket{0} \\ &= \sqrt{ 2 \omega_{\Bp_1} } \sqrt{ 2 \omega_{\Bp_2} } (2 \pi)^3 \delta^3(\Bp_1 - \Bp_2) \\ &= 2 \omega_{\Bp_1} (2 \pi)^3 \delta^3(\Bp_1 - \Bp_2). \end{aligned} ## Spacelike surfaces. fig. 0. Constant spacelike surface. If $$x^\mu, p^\mu$$ are four vectors, then $$p^\mu x_\mu = \text{invariant} = {p'}^\mu x'_\mu$$.
The light cone is the surface $$p_0^2 = \Bp^2$$, whereas timelike four-momenta form a paraboloid surface $$p_0^2 - \Bp^2 = m^2$$ (i.e. $$E = \sqrt{ m^2 c^4 + \Bp^2 c^2 }$$). The surface for constant spacelike points (i.e. all related by a Lorentz transformation) is illustrated in fig. 0. A boost moves a point up or down that surface along the energy axis. It is therefore possible to use a sequence of boosts and rotations to transform a point $$(E, \Bp) \rightarrow (-E, \Bp) \rightarrow (-E, -\Bp)$$. That is, any spacelike four-vector $$x$$ may be transformed to $$-x$$ using a Lorentz transformation. ## Condition on microcausality. We defined operators $$\phihat(\Bx)$$, which was a Hermitian operator for the real scalar field. For the complex scalar field we used $$\phihat(\Bx) = (\phihat_1 + \phihat_2)/\sqrt{2}$$, where each of $$\phihat_1, \phihat_2$$ were Hermitian operators. i.e. we can think of these operators as “observables”, that is $$\phihat(\Bx) = \phihat^\dagger(\Bx)$$. We now want to show that these operators commute at spacelike separations, and see how this relates to the question of causality. In particular, we want to see that an observation of one operator will not affect the measurement of the other. The condition of microcausality is \begin{equation*} \antisymmetric{\phihat(x)}{\phihat(y)} = 0 \end{equation*} if $$x \sim y$$, that is $$(x - y)^2 < 0$$. That is, $$x, y$$ are spacelike separated.
We wrote \label{eqn:qftLecture11:160} \phihat(x) = \int \frac{d^3 p}{(2 \pi)^3 \sqrt{2 \omega_\Bp}} \evalbar{ e^{-i p \cdot x} }{p^0 = \omega_\Bp} \hat{a}_\Bp + \int \frac{d^3 p}{(2 \pi)^3 \sqrt{2 \omega_\Bp}} \evalbar{ e^{i p \cdot x} }{p^0 = \omega_\Bp} \hat{a}^\dagger_\Bp , or $$\phihat(x) = \phihat_{-}(x) + \phihat_{+}(x)$$, where \label{eqn:qftLecture11:180} \begin{aligned} \phihat_{-}(x) &= \int \frac{d^3 p}{(2 \pi)^3 \sqrt{2 \omega_\Bp}} \evalbar{ e^{-i p \cdot x} }{p^0 = \omega_\Bp} \hat{a}_\Bp \\ \phihat_{+}(x) &= \int \frac{d^3 p}{(2 \pi)^3 \sqrt{2 \omega_\Bp}} \evalbar{ e^{i p \cdot x} }{p^0 = \omega_\Bp} \hat{a}^\dagger_\Bp \end{aligned} Compute the commutator \label{eqn:qftLecture11:200} \begin{aligned} D(x) &= \antisymmetric{\phihat_{-}(x)}{\phihat_{+}(0)} \\ &= \int \frac{d^3 p}{(2 \pi)^3 \sqrt{2 \omega_\Bp}} \evalbar{ e^{-i p \cdot x} }{p^0 = \omega_\Bp} \int \frac{d^3 k}{(2 \pi)^3 \sqrt{2 \omega_\Bk}} \evalbar{ e^{i k \cdot 0} }{k^0 = \omega_\Bk} \antisymmetric{\hat{a}_\Bp }{\hat{a}_\Bk^\dagger } \\ &= \int \frac{d^3 p}{(2 \pi)^3 \sqrt{2 \omega_\Bp}} \evalbar{ e^{-i p \cdot x} }{p^0 = \omega_\Bp} \int \frac{d^3 k}{(2 \pi)^3 \sqrt{2 \omega_\Bk}} (2 \pi)^3 \delta^3(\Bp - \Bk), \end{aligned} \label{eqn:qftLecture11:800} \boxed{ D(x) = \int \frac{d^3 p}{(2 \pi)^3 2 \omega_\Bp} \evalbar{ e^{-i p \cdot x} }{p^0 = \omega_\Bp}. } Now consider the commutator at two spacetime points \label{eqn:qftLecture11:220} \begin{aligned} \antisymmetric{\phihat(x)}{\phihat(y)} &= \antisymmetric{\phihat_{-}(x) + \phihat_{+}(x)}{\phihat_{-}(y) + \phihat_{+}(y)} \\ &= \antisymmetric{\phihat_{-}(x)}{\phihat_{+}(y)} + \antisymmetric{\phihat_{+}(x)}{\phihat_{-}(y)} \\ &= -D(y - x) + D(x - y) \end{aligned} Find \label{eqn:qftLecture11:240} \begin{aligned} \antisymmetric{\phihat(x)}{\phihat(y)} &= D(x - y) - D(y - x) \\ \antisymmetric{\phihat(x)}{\phihat(0)} &= D(x) - D(- x) \end{aligned} Let's look at $$D(x)$$, \ref{eqn:qftLecture11:800}, a bit more closely.
### Claim: D(x) is Lorentz invariant (it has the same value for all $$x^\mu, {x'}^\mu$$ related by a Lorentz transformation). We can see this by writing this out as \label{eqn:qftLecture11:280} D(x) = \int \frac{d^3 p \, dp^0}{(2 \pi)^3 } \delta( p_0^2 - \Bp^2 - m^2) \Theta(p^0) e^{-i p \cdot x} The exponential is Lorentz invariant, and the delta function has been put into a Lorentz invariant form. ### Claim 1: $$D(x) = D(x')$$ where $$x^2 = {x'}^2$$. ### Claim 2: $$x^\mu, -x^\mu$$ are related by Lorentz transformations if $$x^2 < 0$$. From the figure, we see that $$D(x) = D(-x)$$ for a spacelike point, which implies that $$\antisymmetric{\phihat(x)}{\phihat(0)} = 0$$ for a spacelike point $$x$$. We've shown this for free fields, but later we will see that this is the case for interacting fields too. ## Harmonic oscillator. \label{eqn:qftLecture11:320} L = \inv{2} \qdot^2 - \frac{\omega^2}{2} q^2 - j(t) q The term $$j(t)$$ shifts the origin in a time-dependent fashion (graphical illustration in class wiggling a hockey stick, as an example of a driven harmonic oscillator). \label{eqn:qftLecture11:340} H = \frac{p^2}{2} + \frac{\omega^2}{2} q^2 + j(t) q \label{eqn:qftLecture11:360} \begin{aligned} i \qdot_H(t) &= \antisymmetric{q_H}{H} = i p_H \\ i \pdot_H(t) &= \antisymmetric{p_H}{H} = -i \omega^2 q_H - i j(t) \end{aligned} \label{eqn:qftLecture11:380} \ddot{q}_H(t) = - \omega^2 q_H(t) - j(t) or \label{eqn:qftLecture11:400} (\partial_{tt} + \omega^2 ) q_H(t) = - j(t) \label{eqn:qftLecture11:420} q_H(t) = q_H^0( t ) + \int G_R(t - t') j(t') dt' This solves the equation provided $$G_R(t - t')$$ has the property that \label{eqn:qftLecture11:440} \boxed{ (\partial_{tt} + \omega^2) G_R(t - t') = - \delta(t - t') } That is \label{eqn:qftLecture11:460} (\partial_{tt} + \omega^2) q_H(t) = (\partial_{tt} + \omega^2) q_H^0( t ) + (\partial_{tt} + \omega^2) \int G_R(t - t') j(t') dt' This function $$G_R$$ is called the retarded Green's function.
We want to find this function, and as usual, we do this by taking the Fourier transform of \ref{eqn:qftLecture11:440} \label{eqn:qftLecture11:480} \begin{aligned} \int dt e^{i p t} (\partial_{tt} + \omega^2) G_R(t - t') &= -\int_{-\infty}^\infty dt e^{i p t} \delta(t - t') \\ &= - e^{i p t'} \end{aligned} Let \label{eqn:qftLecture11:500} G(t - t') = \int \frac{dp'}{2 \pi} e^{- i p'(t - t')} \tilde{G}(p'), so \label{eqn:qftLecture11:520} \begin{aligned} - e^{i p t'} &= \int dt e^{i p t} (\partial_{tt} + \omega^2) \int \frac{dp'}{2 \pi} e^{- i p'(t - t')} \tilde{G}(p') \\ &= \int dt e^{i p t} \int \frac{dp'}{2 \pi} \lr{ -{p'}^2 + \omega^2 } e^{- i p'(t - t')} \tilde{G}(p') \\ &= \int dp' \lr{ -{p'}^2 + \omega^2 } e^{i p' t'} \delta(p - p') \tilde{G}(p') \\ &= \lr{ -{p}^2 + \omega^2 } \tilde{G}(p) e^{i p t'} \end{aligned} so \label{eqn:qftLecture11:540} \tilde{G}(p) = \inv{p^2 - \omega^2} Now \label{eqn:qftLecture11:560} G(t) = \int \frac{dp}{2 \pi} e^{-i p t} \tilde{G}(p) Let's write the momentum space Green's function as \label{eqn:qftLecture11:580} \tilde{G}(p) = \inv{(p - \omega)(p + \omega)} The solution contained \label{eqn:qftLecture11:600} \int G(t - t') j(t') dt'. Suppose $$j(t) = 0$$ for all $$t < t_0$$. We want the effect of $$j(t)$$ to be felt in the future, for example, $$j(t)$$ is an impulse starting at some time. We want $$G(t)$$ to vanish at negative times. We want the integral \label{eqn:qftLecture11:620} G(t) = \int \frac{dp}{2 \pi} e^{-i p t} \inv{(p - \omega)(p + \omega)} to vanish when $$t < 0$$. Start with $$t > 0$$ (that is $$t' < t$$), so that $$e^{-i p t} = e^{-i p \Abs{t}}$$, which means that we have to close the contour in the lower half plane as in fig. 1, where the imaginary part of $$p$$ is negative so the exponential decays; but for $$t < 0$$ (that is $$t' > t$$), we want an upper plane contour like fig. 2. fig. 1. Lower plane contour. fig. 2. Upper plane contour. Question: since we are integrating over the real line, how can we get away with deforming the contour?
Answer: it works. If we do this we get a Green's function that makes sense (better answer later?) We add an infinite circle, so that we can integrate over a closed contour, and pick the contour so that it is zero for $$t < 0$$ and non-zero (enclosed poles) for $$t > 0$$. \label{eqn:qftLecture11:640} \begin{aligned} G_R(t > 0) &= \int_C \frac{dp}{2 \pi} e^{-i p t} \inv{(p - \omega)(p + \omega)} \\ &= \inv{2 \pi} (-2 \pi i) \lr{ \frac{e^{-i \omega t}}{2 \omega} - \frac{e^{i \omega t}}{2 \omega} } \\ &= -\frac{\sin(\omega t)}{\omega}. \end{aligned} Now we write the Green's function for all time as \label{eqn:qftLecture11:660} \boxed{ G_R(t) = -\frac{\sin(\omega t)}{\omega} \Theta(t). } The question of which contour to pick can now be justified by the result, since this $$G_R$$ satisfies the defining equation \ref{eqn:qftLecture11:440}; the possible deformations around the poles are sketched in fig. 3. In particular, the bumps up and down contour will be used to derive the "Feynman propagator" that we'll use later. fig. 3. All possible deformations around the poles. ## Field theory (where we are going). We will consider a massive real scalar field theory with an external source with action \label{eqn:qftLecture11:680} S = \int d^4 x \lr{ \inv{2} \partial_\mu \phi \partial^\mu \phi - \frac{m^2}{2} \phi^2 + j(x) \phi(x) } We don't have examples of currents that create scalar fields, but to study such a system, recall that in electromagnetism we added sources to the field by adding a term like \label{eqn:qftLecture11:700} \int d^4 x A^\mu(x) j_\mu(x), to our action. The equation of motion can be found to be \label{eqn:qftLecture11:720} \lr{ \partial_\mu \partial^\mu + m^2 } \phi(x) = j(x). We want to study the Green's function of this Klein-Gordon equation, defined to obey \label{eqn:qftLecture11:740} \lr{ \partial_\mu \partial^\mu + m^2 }_x G(x - y) = -i \delta^4(x - y), where the $$-i$$ factor is for convenience. This is analogous to the Green's function that we just studied for the QM harmonic oscillator. ## Question: Compute $$D(x-y)$$ from the commutator.
Generalize the derivation \ref{eqn:qftLecture11:800} by computing the commutator at two different spacetime points $$x, y$$. Let \label{eqn:qftLecture11:860} \begin{aligned} D(x - y) &= \antisymmetric{\phihat_{-}(x)}{\phihat_{+}(y)} \\ &= \int \frac{d^3 p}{(2 \pi)^3 \sqrt{2 \omega_\Bp}} \evalbar{ e^{-i p \cdot x} }{p^0 = \omega_\Bp} \int \frac{d^3 k}{(2 \pi)^3 \sqrt{2 \omega_\Bk}} \evalbar{ e^{i k \cdot y} }{k^0 = \omega_\Bk} \antisymmetric{\hat{a}_\Bp }{\hat{a}_\Bk^\dagger } \\ &= \int \frac{d^3 p}{(2 \pi)^3 \sqrt{2 \omega_\Bp}} \evalbar{ e^{-i p \cdot x} }{p^0 = \omega_\Bp} \int \frac{d^3 k}{(2 \pi)^3 \sqrt{2 \omega_\Bk}} \evalbar{ e^{i k \cdot y} }{k^0 = \omega_\Bk} (2 \pi)^3 \delta^3(\Bp - \Bk) \\ &= \int \frac{d^3 p}{(2 \pi)^3 2 \omega_\Bp} \evalbar{ e^{-i p \cdot (x - y)} }{p^0 = \omega_\Bp}. \end{aligned} ## Question: Verification of harmonic oscillator Green's function. Take the derivatives of a convolution of the Green's function \ref{eqn:qftLecture11:660} to show that it satisfies \ref{eqn:qftLecture11:440}. Let \label{eqn:qftLecture11:880} \begin{aligned} q(t) &= \int_{-\infty}^\infty G(t - t') j(t') dt' \\ &= -\inv{\omega} \int_{-\infty}^\infty \sin(\omega(t - t')) \Theta(t - t') j(t') dt'. \end{aligned} We are free to add any $$q_0(t)$$ that satisfies the homogeneous wave equation $$q_0''(t) + \omega^2 q_0(t) = 0$$ to our assumed convolution solution \ref{eqn:qftLecture11:880}, but that isn't interesting for this exercise. Since $$\Theta(t - t') = 0$$ for $$t - t' < 0$$, or $$t' > t$$, the convolution can be written as \label{eqn:qftLecture11:900} q(t) = -\inv{\omega} \int_{-\infty}^t \sin(\omega(t - t')) j(t') dt', which is now in a convenient form to take derivatives. We have contributions from the boundary's time dependence and from the integrand. In particular \label{eqn:qftLecture11:920} \ddt{} \int_{a(t)}^{b(t)} g(x, t) dx = g(b(t)) b'(t) - g(a(t)) a'(t) + \int_a^b \frac{\partial}{\partial t} g(x, t) dx.
Assuming that $$j(-\infty) = 0$$, this gives \label{eqn:qftLecture11:940} \begin{aligned} \ddt{q(t)} &= -\inv{\omega} \evalbar{\sin(\omega(t - t')) j(t') }{t' = t} -\int_{-\infty}^t \cos(\omega(t - t')) j(t') dt' \\ &= -\int_{-\infty}^t \cos(\omega(t - t')) j(t') dt'. \end{aligned} For the second derivative we have \label{eqn:qftLecture11:960} \begin{aligned} q''(t) &= - \evalbar{ \cos(\omega(t - t')) j(t') }{t' = t} +\omega \int_{-\infty}^t \sin(\omega(t - t')) j(t') dt' \\ &= -j(t) -\omega^2 \int_{-\infty}^t \frac{-\sin(\omega(t - t'))}{\omega} j(t') dt', \end{aligned} or \label{eqn:qftLecture11:980} q''(t) = -j(t) - \omega^2 q(t), which is our forced harmonic oscillator equation.
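As a quick numerical cross-check of this verification (my own sketch, not part of the lecture), we can discretize the convolution with $$G_R(t) = -\sin(\omega t)/\omega \, \Theta(t)$$ and confirm that the result satisfies the forced oscillator equation on a grid. The frequency, source pulse, and step size below are arbitrary choices:

```python
import math

w = 2.0                                  # oscillator frequency (arbitrary)
dt = 2e-3
ts = [i * dt for i in range(1500)]
j = [math.exp(-(t - 1.0) ** 2 / 0.02) for t in ts]   # a smooth source pulse

# q(t) = int G_R(t - t') j(t') dt', with G_R(t) = -sin(w t)/w for t' <= t
q = []
for i, t in enumerate(ts):
    acc = 0.0
    for k in range(i + 1):
        acc += -math.sin(w * (t - ts[k])) / w * j[k] * dt
    q.append(acc)

# check q'' + w^2 q = -j using a central second difference
residual = max(abs((q[i + 1] - 2 * q[i] + q[i - 1]) / dt ** 2
                   + w * w * q[i] + j[i])
               for i in range(1, len(ts) - 1))
print(residual)  # tiny compared to the source amplitude max|j| = 1
```

The residual shrinks with the step size, as expected for the retarded solution.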
# Analysis of a corner layer problem in anisotropic interfaces • We investigate a model of anisotropic diffuse interfaces in ordered FCC crystals introduced recently by Braun et al and Tanoglu et al [3, 18, 19], focusing on parametric conditions which give extreme anisotropy. For a reduced model, we prove existence and stability of plane wave solutions connecting the disordered FCC state with the ordered $Cu_3Au$ state described by solutions to a system of three equations. These plane wave solutions correspond to planar interfaces. Different orientations of the planes in relation to the crystal axes give rise to different surface energies. Guided by previous work based on numerics and formal asymptotics, we reduce this problem in the six dimensional phase space of the system to a two dimensional phase space by taking advantage of the symmetries of the crystal and restricting attention to solutions with corresponding symmetries. For this reduced problem a standing wave solution is constructed that corresponds to a transition that, in the extreme anisotropy limit, is continuous but not differentiable. We also investigate the stability of the constructed solution by studying the eigenvalue problem for the linearized equation. We find that although the transition is stable, there is a growing number, $O(\frac{1}{\epsilon})$, of critical eigenvalues, where $\frac{1}{\epsilon} \gg 1$ is a measure of the anisotropy. Specifically we obtain a discrete spectrum with eigenvalues $\lambda_n = \epsilon^{2/3}\mu_n$, with $\mu_n \sim Cn^{2/3}$ as $n \to + \infty$. The scaling characteristics of the critical spectrum suggest a previously unknown microstructural instability. Mathematics Subject Classification: 74A50, 34B16, 34L15.
# Is true-RMS meter required for audio signals? Status Not open for further replies. #### Elerion ##### Member I know what true-rms is, but I wonder if it is required to measure a real audio signal. With "real" I mean a mix of many frequencies, not just a test pure sine wave. I suppose an average meter should work fine, but I've got no true-rms to compare. Would average and true-rms meters differ in readings? Thanks! #### Nigel Goodwin ##### Super Moderator They would differ in readings, but most meters wouldn't work accurately on audio anyway (they mostly work only with low frequencies) - if you want to measure audio, either use an oscilloscope or an audio millivoltmeter. Personally I use a scope (as I have scopes); this makes it easy to use p-p measurements, which is mostly what you want for audio. ##### Well-Known Member Would average and true-rms meters differ in readings? Thanks! Yes, they would differ considerably. Let's use an example. Since I don't have a good audio source we can look at the output of a UPS, which when on battery outputs a MSW (Modified Sine Wave). This first image is looking at mains voltage. A clean sine wave. Now both meters measuring the MSW output of the UPS inverter: Both meters are looking at the same signal. The meter on the left is an Average Responding RMS Indicating meter and while it did well with a nice sine wave it does not fare well at all with the modified sine wave. The meter on the right is an RMS responding RMS indicating meter costing considerably more than the meter on the left. Additionally, any meter used to try and measure varying-amplitude, varying-frequency audio will need a fast sample rate to capture the signal. I believe a good digital scope with a record function would be a better solution. That or a good A/D converter with a storage function. <EDIT> I see Nigel addressed this well as I was typing.
</EDIT> Ron #### JimB ##### Super Moderator I know what true-rms is, but I wonder if it is required to measure a real audio signal. I guess it all depends on what it is you are measuring and why. My understanding is that audio people often use meters which measure the peak voltage of the audio signal. This would make sense if you do not want to overload the input of some other device such as an amplifier or radio transmitter. Measuring the RMS voltage of the audio signal would give a good indication of the average power of the signal, and hence how "loud" it sounded overall, not just the peaks. JimB #### crutschow ##### Well-Known Member I suspect that an average responding meter would read reasonably close to the RMS value for typical music signals (since it's calibrated to read the RMS of a sinewave), at least close enough for most audio measurements. The average responding meter would generally read somewhat lower than a true RMS responding one. Music with large peaks such as drums or loud bass would be more in error. #### Elerion ##### Member Thank you everyone. Very useful. An oscilloscope is the obvious piece of equipment, but I was curious if I would benefit from a true-rms meter, as I need to buy a new one soon, and it is only around 25€/$ more. #### Nigel Goodwin ##### Super Moderator Most Helpful Member Thank you everyone. Very useful. An oscilloscope is the obvious piece of equipment, but I was curious if I would benefit from a true-rms meter, as I need to buy a new one soon, and it is only around 25€/$ more. It's worth buying a true RMS meter regardless, but neither type is particularly useful for audio signals. I mentioned AC millivoltmeters for audio earlier; it might be worth you knowing that those are NOT true RMS. It doesn't really matter for audio, as anything you're measuring usually uses sine waves.
#### alec_t ##### Well-Known Member RMS as a measure of audio signals is pretty meaningless unless the interval over which the 'mean' is calculated is known. One cycle? Ten minutes? ...? #### Elerion ##### Member It's worth buying a true RMS meter I never really missed a true-rms. Apart from diode rectification, most situations I faced were AC or some other kind of sinusoidal wave. Square waves (digital clock signals, etc.) are only 10% off, and most of the time this is fine. Are there any common situations I could have missed? Opinions, please #### crutschow ##### Well-Known Member Are there any common situations I could have missed? As shown in post #3, the average responding meter is nearly 40% low as compared to the true RMS meter when measuring the modified sinewave from a UPS converter. #### schmitt trigger ##### Well-Known Member If you plan to measure input current on most low power supplies, the input current will not be PF-corrected and there will be significant distortion. A non-true RMS meter will display very significant errors. Likewise with SCR or Triac phase-control circuits. The output voltage is distorted, and the readings without a true RMS meter will be meaningless. #### dknguyen ##### Well-Known Member If you plan to measure input current on most low power supplies, the input current will not be PF-corrected and there will be significant distortion. A non-true RMS meter will display very significant errors. Likewise with SCR or Triac phase-control circuits. The output voltage is distorted, and the readings without a true RMS meter will be meaningless. Not necessarily. Know that "true-RMS" doesn't always mean true-RMS. At work, we have a Fluke meter labelled "true-RMS", but it's not. It was giving us weird readings with some special switched LCR waveforms so we compared it to a bench power meter and an oscilloscope's calculated RMS values (both devices confirmed to numerically calculate RMS from samples).
The bench meter and oscilloscope agreed and the Fluke meter was off by a factor of 0.7. I asked the guy who owns it and he told us that it is "true-RMS" only under certain conditions, so it seems that if the waveform strays too far from a sinusoid then it's no longer valid, since it's working under some set of assumptions rather than numerically calculating the RMS. #### crutschow ##### Well-Known Member The Fluke meter has an AC frequency measurement limit, which may have been the source of your observed large measurement error. A Fluke 114 true RMS meter, for example, has an upper frequency limit of 1 kHz, which would seriously limit its accuracy when trying to measure high frequency signals, such as from a switching power supply. A fundamental rule is, you have to know the limits of the instruments you are using if you want to make accurate measurements. The Fluke is indeed a true RMS meter when used within its specification limits, which you apparently didn't do. #### ronsimpson ##### Well-Known Member Is true-RMS meter required for audio signals? For measuring true power in a resistor .... an RMS meter would be nice. Because most meters do not work well above 1 kHz I can not use the meter. For audio, I do not need an RMS meter. A scope will tell me what level is distorting. (clipping) Any meter will measure relative voltage. >Example: I am injecting a 100 mV 1 kHz sine wave and getting out a 10 V 1 kHz sine wave. The amp should have a gain of 100. Even if the meter was really strange it would measure both input and output the same way, and give a good gain reading. >Example: For testing stereo, you often are just comparing a working right amp against a not working left amp. Again just comparing. >Example: I have a 20 input mixer. For a test I can inject the same signal into all inputs. With any meter I can look to see if all channels are working the same. I really don't care if the voltage reads 1.00V or 0.707V or 1.414V. I just want to know that channel 7 is very weak.
Many low cost meters don't work well with signals that are not sine waves. In some manuals it may say that signals with 10% duty cycle will not read right. Some digital scopes have math functions. (will measure the signal at 10,000 points and do the math) This math is good and works to 100 MHz. (top end of your scope) A meter might measure 60 Hz at 100 points or 600 Hz at 10 points and 6 kHz at 1 point. #### KeepItSimpleStupid ##### Well-Known Member I asked the guy who owns it and he told us that it is "true-RMS" only under certain conditions, so it seems that if the waveform strays too far from a sinusoid then it's no longer valid, since it's working under some set of assumptions rather than numerically calculating the RMS. There is usually something called "Crest factor" and frequency response to consider. The thermal sensors are now obsolete. #### Elerion ##### Member I finally bought the true-rms version for a little more (by the way an Amprobe AM-550, which also has switchable input impedance for VDC/VAC). I think the best way to learn is from practice, and it sometimes (always?) has a price. The meter seems well built, although it doesn't have the bulky back cover like Fluke's and many other cheap ones, which gives a meter a very sturdy look. This one is one piece, but very light (<400 g). Its AC bandwidth is just 1 kHz, but I haven't seen meters which go beyond that which fit my budget. Hopefully, I won't do any switching power supply work. If someone considers getting a meter in the series AM-520 up to AM-570, feel free to ask any technical question. #### MrAl ##### Well-Known Member I know what true-rms is, but I wonder if it is required to measure a real audio signal. With "real" I mean a mix of many frequencies, not just a test pure sine wave. I suppose an average meter should work fine, but I've got no true-rms to compare. Would average and true-rms meters differ in readings? Thanks!
Hi, Not sure if anyone mentioned this yet, but the true RMS value and the average value are in general not the same. To state this in a more definitive way, they are almost never the same. They can get pretty close and maybe we can find a wave that does show the same values, but it would take a little work to find this. This means that a random signal should be considered to have different values for RMS and AVG. To show how simple this is to prove, all we have to do is find the average and true RMS values for a sine wave. If it doesnt work for a sine wave, then we have to go to great lengths to find a combination that might actually work, and this is not what we normally will see. The average value for a sine wave mathematically is zero, but for power line and other measurements we usually find the average value of the absolute value of the sine wave, and for a sine wave of amplitude equal to 1 unit this comes out to simply 0.636620 with six digits of accuracy. The RMS value of that sine wave on the other hand is 0.707107 to six digits. The exact values are 2/pi and 1/sqrt(2). The ratio of average to RMS is 0.900316 to six digits, so right away we see that they are not the same already. Since this holds true EVEN for a single sine wave, we will find in general the two will not be equal. If we look a little we might find some signals that have the same RMS and average values, well, close anyway. I think the signal: sin(w*t)+sin(3*w*t)/3+sin(5*w*t)/5 comes close but still not exactly the same. This happens to be the first three harmonics of a square wave. If we do the first 119 harmonics of a square wave we get a ratio of 0.998310 which suggests that a perfect square wave has equal RMS and AVG values. This is not a usual audio signal though, but if you could pass a perfect square wave through an audio amp then you'd see equal RMS and AVG values. Again, this is far from typical. 
So the assumption should be that they are never the same, even though on rare occasion we'll see them the same. Of all the meters I own now, I only have one that goes up far above 1 kHz. It was not cheap though. The usual meters are used for power line frequencies so they don't have to go up too high. You can also build your own peak detecting meter. You can calibrate it with a scope and then use it to measure audio and other stuff. The main parts are a very fast diode and a resistor and capacitor. If you don't have a scope you can get someone else to calibrate it for you. Last edited: #### Elerion ##### Member If we do the first 119 harmonics of a square wave we get a ratio of 0.998310 which suggests that a perfect square wave has equal RMS and AVG values. This seems strange, if you look at this: Although this, at the same time, is somewhat contradictory: #### Nigel Goodwin ##### Super Moderator If we do the first 119 harmonics of a square wave we get a ratio of 0.998310 which suggests that a perfect square wave has equal RMS and AVG values. This is not a usual audio signal though, but if you could pass a perfect square wave through an audio amp then you'd see equal RMS and AVG values. Again, this is far from typical. Sorry MrAl, but you need to stop smoking whatever illegal substance you appear to be using Normal multimeters read the 'average', calibrated solely for a sinewave - on the assumption that it's going to be used for mains measurements - as it's designed to be. Obviously measuring a squarewave will be massively incorrect, as it's not calibrated for that. #### Elerion ##### Member Any idea why the two images above disagree? I suppose that average multimeters are just calibrated for a sine wave, and not for a real average. Maybe that explains what MrAl explained.
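For reference, MrAl's figures are easy to reproduce with a short script (a sketch of mine, not from any post in the thread): sample one period of each waveform, compute the rectified average and the true RMS, and compare.

```python
import math

N = 20000
ts = [2 * math.pi * k / N for k in range(N)]

def avg_and_rms(samples):
    avg = sum(abs(s) for s in samples) / len(samples)   # rectified average
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return avg, rms

# unit sine wave: avg -> 2/pi = 0.636620, rms -> 1/sqrt(2) = 0.707107
sa, sr = avg_and_rms([math.sin(t) for t in ts])
print(round(sa, 6), round(sr, 6), round(sa / sr, 6))

# partial square wave built from the odd harmonics 1, 3, ..., 119
square = [sum(math.sin((2 * n + 1) * t) / (2 * n + 1) for n in range(60))
          for t in ts]
qa, qr = avg_and_rms(square)
print(round(qa / qr, 6))  # close to 1, approaching it as harmonics are added
```

The first ratio comes out near 0.9003, matching the 0.900316 quoted above.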
# Problem of dice The initial purpose of this post was to give a proper proof of a problem posted on Twitter by @jamestanton (it is hard within the 140 char limit), but the post was later extended to cover some other problems. Show that a four-sided and a nine-sided die cannot be used to simulate the probability distribution of the product of outcomes when using two six-sided dice. In the original wording: Is there a 4-sided die & a 9-sided die that together roll each of the products 1,2,3,…,30,36 w the same prob as two ordinary 6-sided dice? We make an argument by contradiction, considering only the possible outcomes without taking the actual probabilities into account. Obviously, to reach the same outcomes as for two normal dice $\{1,2,3,4,5,6\} \times \{1,2,3,4,5,6\}$, we need both dice to have the identity $\{1\}$ (otherwise, we would not be able to reach $1 \cdot 1 = 1$). So, $\{1,*,*,*\} \times \{1,*,*,*,*,*,*,*,*\}$. Now, consider the prime $5$. It must be on both dice, or we would have $\mathbb{P}(5^2\cdot b)>0, b>1$. So, $\{1,5,*,*\} \times \{1,5,*,*,*,*,*,*,*\}$. Also, since $5$ appears on both dice, neither die can contain a product of the primes $\{2,3,5\}$ and their powers (e.g. $2^2 \cdot 3$) that does not exist on the original dice, because then impossible products could be reached. Hence, $6$ must be on both dice, giving $\{1,5,6,*\} \times \{1,5,6,*,*,*,*,*,*\}$. There are $6$ sides left on the larger die but we have more even products, so $2$ must also be on each die. $\{1,5,6,2\} \times \{1,5,6,2,*,*,*,*,*\}$. Now, there is no space left for $3$ on the smaller die. This means that $3^2$ must be on the larger die, but then $\mathbb{P}(3^2\cdot 5)>0$, which is a contradiction. (@FlashDiaz gave a shorter proof) Project Euler 205 Peter has nine four-sided (pyramidal) dice, each with faces numbered $1, 2, 3, 4$. Colin has six six-sided (cubic) dice, each with faces numbered $1, 2, 3, 4, 5, 6$.
Peter and Colin roll their dice and compare totals: the highest total wins. The result is a draw if the totals are equal. What is the probability that Pyramidal Pete beats Cubic Colin? Give your answer rounded to seven decimal places in the form $0.abcdefg$. The probability functions of the nine four-sided dice and the six six-sided dice are given by the generating functions $\frac{1}{4^9} \cdot (x^1+x^2+x^3+x^4)^9$ and $\frac{1}{6^6} \cdot (y^1+y^2+y^3+y^4+y^5+y^6)^6$, respectively. Let $X_1,...,X_9$ be i.i.d. random variables taking values in the range $[1,4]$ and let $Y_1,...,Y_6$ be i.i.d. random variables taking values in the range $[1,6]$. We want to determine the probability $\rho = \mathbb{P}(X_1+...+X_9 > Y_1+...+Y_6)$. The distributions can be computed as

def rec_compute_dist(sides, nbr, side_sum):
    global dist
    if nbr == 1:
        for i in range(1, sides+1):
            dist[side_sum+i] += 1
    else:
        for i in range(1, sides+1):
            rec_compute_dist(sides, nbr-1, side_sum+i)

dist = [0]*37
rec_compute_dist(4, 9, 0)
dist_49 = dist

dist = [0]*37
rec_compute_dist(6, 6, 0)
dist_66 = dist

To determine $\rho$, we may express it as $\begin{array}{rl} \rho = & \sum_{t=6}^{36} \mathbb{P}(X_1+...+X_9 > t| Y_1+...+Y_6 = t)\cdot \mathbb{P}(Y_1+...+Y_6 = t) \\\\ = & \sum_{t=6}^{36} \mathbb{P}(X_1+...+X_9 > t)\cdot \mathbb{P}(Y_1+...+Y_6 = t) \end{array}$, where the conditioning drops in the second line by independence of the $X_i$ and $Y_j$. Computing the sum using the following code,

probability = 0
for i in range(6, 36+1):
    for j in range(i+1, 36+1):
        probability += dist_66[i]*dist_49[j]
print(1.0 * probability / (6**6 * 4**9))

we obtain the answer. Great 🙂 Project Euler 240 There are $1111$ ways in which five six-sided dice (sides numbered $1$ to $6$) can be rolled so that the top three sum to $15$. Some examples are: $\begin{array}{rcl} D_1,D_2,D_3,D_4,D_5 &=& 4,3,6,3,5\\ D_1,D_2,D_3,D_4,D_5 &=& 4,3,3,5,6\\ D_1,D_2,D_3,D_4,D_5 &=& 3,3,3,6,6\\ D_1,D_2,D_3,D_4,D_5 &=& 6,6,3,3,3 \end{array}$ In how many ways can twenty twelve-sided dice (sides numbered $1$ to $12$) be rolled so that the top ten sum to $70$?
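Before solving, the $1111$ figure quoted in the statement can be confirmed by exhausting all $6^5 = 7776$ ordered rolls (a quick sanity check added here, not part of the original post):

```python
from itertools import product

# count ordered rolls of five six-sided dice whose top three faces sum to 15
count = sum(1 for roll in product(range(1, 7), repeat=5)
            if sum(sorted(roll)[-3:]) == 15)
print(count)  # 1111
```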
Let us first consider the simpler problem $\left\{ d_1+\cdots+d_{10}=70 \right\}$. If we restrict the remaining ten dice to be less than or equal to the minimum value of the top ten dice, we can then compute the cardinality. Let $n_i$ denote the number of $i$'s we got. Then, $n_1 \cdot 1 + n_2 \cdot 2 + \cdots + n_{12} \cdot 12 = 70$ where $n_1 + n_2 + \cdots + n_{12} = 10, n_i \geq 0$. All histograms of top-ten dice can be computed with

from copy import copy

d = [0] * 12
possible = []

def rec_compute(i, j, sum):
    global d
    if j == 0:
        if sum == 70:
            possible.append(copy(d))
        return
    while i > 0:
        if sum + i <= 70:
            d[i - 1] += 1
            rec_compute(i, j - 1, sum + i)
            d[i - 1] -= 1
        i -= 1

rec_compute(12, 10, 0)

The code exhausts all solutions in about 200 ms. Call any solution $H$. For instance,

H = [0, 0, 0, 0, 0, 0, 10, 0, 0, 0, 0, 0]

The remaining dice can take any values in the range $[1, j]$, where $j$ is the left-most non-zero index (starting from $1$). The number of configurations for this particular solution is then given by $20! \cdot \left((10+H_7)!H_6!H_5!H_4!H_3!H_2!H_1!\right)^{-1}$, where $\sum^7_1 H_i = 10$. Unfortunately, there is no good analytical way of computing this. So, the easiest way is to enumerate all possible $H_i$. Disregarding $H_7$, we compute all permutations of a given histogram in the same way (hence, we can make the search space a lot smaller) and then use the multiplicity to determine the exact count.
All in all, the following code gives our answer (with the imports and constants the snippet needs spelled out; possible_top_dice is the list computed above):

from math import factorial as fact
from collections import Counter

DICE, REMAINING = 20, 10
possible_top_dice = possible  # the histograms computed above

def configurations(i, j, x, s, l):
    if sum(x) == s:  # we can stop here as the sum cannot get smaller
        multiplicity = fact(l) / fact(l-len(x)) / \
            reduce(lambda m, n: m * n,
                   [fact(y) for y in Counter(x).values()])
        return fact(DICE) * multiplicity / \
            reduce(lambda m, n: m * n,
                   [fact(y) for y in x])
    if j == 0 or i == 0:
        return 0
    return configurations(i-1, j, x, s, l) + \
           configurations(i, j-1, x + [i], s, l)

S = 0
for H in possible_top_dice:
    min_index = next((i for i, x in enumerate(H) if x), None)
    for j in range(0, REMAINING+1):
        u = reduce(lambda m, n: m * n, [fact(y) for y in H])
        if j < REMAINING:
            q = configurations(REMAINING-j, min_index,
                               [], REMAINING-j, min_index) / u
        else:
            q = fact(DICE) / u
        H[min_index] += 1
        S += q

print S

# Breaking affine ciphers – a matrix approach

An affine cipher is one of the remnants of classical cryptography, easily broken with today's computational power. The cipher defines a symbol mapping $f :\{A,B,\ldots\} \mapsto \mathbb{Z}_n$. Each cipher symbol is then computed as $a \cdot x + b \rightarrow y$, where $a \in \mathbb{Z}^*_n$ and $b \in \mathbb{Z}_n$. Decryption is then done by computing $x= (y - b) \cdot a^{-1}$. In this blog post, I will show how to break this cipher in time faster than trying all keys.

Let us first sketch the general idea. Consider an expected distribution $\hat{P}$ of the symbols and an observed distribution $P$. The integral $\int (\hat{P}(x) - P(x))^2\, dx$ defines a statistical distance between the distributions (this corresponds to the squared Euclidean distance), which we would like to minimize. Now, clearly $(\hat{P}(x) - P(x))^2 = \hat{P}(x)^2 - 2\hat{P}(x)P(x) + P(x)^2$. Trivially, $\hat{P}(x)^2$ and $P(x)^2$ remain constant over any keypair $(a,b)$, so instead of minimizing the above, we can maximize $\hat{P}(x)P(x)$. Therefore, the minimization problem can be turned into a maximization problem $\max_{a,b} \int \hat{P}(x)P_{a,b}(x)\, dx$. Cool.
In terms of our cipher, which is discrete, the maximization problem becomes a sum $\max_{a,b} \sum \hat{P}(x)P_{a,b}(x)$. The observant reader may notice that this looks like a term in a matrix multiplication. There is just one caveat: both key indices appear in the same factor. There is an easy way to get around this. Instead of applying the transformations only to $P$, we may split them among the two distributions. So, by instead computing $\max_{a,b} \sum \hat{P}_a(x) P_{b}(x)$, we have achieved what we desired. This means that we shuffle $\hat{P}$ with $a$ and ${P}$ with $b$.

Let us express this in Python. The expected distribution of an alphabet ABCDEFGHIJKLMNOPQRSTUVWXYZ ,. may be as follows (depending on the observation):

P_hat = {' ': 0.05985783763561542, ',': 0.0037411148522259637,
         '.': 0.0028058361391694723, 'A': 0.0764122708567153,
         'C': 0.02600074822297044, 'B': 0.012065095398428732,
         'E': 0.11878039655817432, 'D': 0.03974934530490086,
         'G': 0.018892630003741116, 'F': 0.020856715301159744,
         'I': 0.0651889263000374, 'H': 0.05695847362514029,
         'K': 0.00720164609053498, 'J': 0.0014029180695847362,
         'M': 0.02254021698466143, 'L': 0.03769173213617658,
         'O': 0.07023943135054246, 'N': 0.06313131313131314,
         'Q': 0.0009352787130564909, 'P': 0.01805087916199027,
         'S': 0.05920314253647587, 'R': 0.0560231949120838,
         'U': 0.025813692480359144, 'T': 0.08473625140291807,
         'W': 0.022072577628133184, 'V': 0.00916573138795361,
         'Y': 0.01842499064721287, 'X': 0.0014029180695847362,
         'Z': 0.0006546950991395436}

The transformations are done by computing the matrices

# compute first matrix for transformed P_hat
for i in range(1, N):
    for element in priori_dist:
        X[i, (look_up.index(element) * i) % N] = priori_dist[element]

# compute second matrix for transformed P
for j in range(N):
    for element in dist:
        Y[(look_up.index(element) - j) % N, j] = dist[element]

Here, the $i$th row in $X$ corresponds to $\hat{P}$ transformed by $a = i$. Moreover, the $j$th column in $Y$ corresponds to ${P}$ transformed by $b = j$.
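To see that the construction works, here is a toy check of my own (a hypothetical 7-symbol alphabet with a made-up distribution and a known key, not the data from the post): building $X$ and $Y$ exactly as above, the argmax of $Z = XY$ should recover the key.

```python
import numpy as np

N = 7                    # toy alphabet size (prime, so every a != 0 is invertible)
look_up = 'ABCDEFG'
a_true, b_true = 3, 5    # the key the attack should recover

# Expected distribution (made up, all values distinct) ...
priori_dist = dict(zip(look_up, [0.30, 0.20, 0.15, 0.12, 0.10, 0.08, 0.05]))
# ... and the ciphertext distribution induced by x -> a*x + b (mod N).
dist = {look_up[(a_true * i + b_true) % N]: priori_dist[look_up[i]]
        for i in range(N)}

X = np.zeros((N, N))
Y = np.zeros((N, N))
for i in range(1, N):                      # row i of X: P_hat scaled by a = i
    for element in priori_dist:
        X[i, (look_up.index(element) * i) % N] = priori_dist[element]
for j in range(N):                         # column j of Y: P shifted by b = j
    for element in dist:
        Y[(look_up.index(element) - j) % N, j] = dist[element]

Z = np.dot(X, Y)
a, b = np.unravel_index(Z.argmax(), Z.shape)
print(a, b)  # (3, 5)
```

By the rearrangement inequality, the maximum of $\sum_x \hat{P}(x)\hat{P}(\pi(x))$ over permutations $\pi$ is attained at the identity, which is exactly the $(a, b)$ entry matching the true key.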
For some distribution, they may look like

As we can see, $Y$ is only shifted (by the subtraction of $b$), while in $X$ the indices are reordered by multiplication with the row index $i$. Taking advantage of the matrix-multiplication property, we may now compute $Z=XY$. Any entry in $Z$ is $Z_{a,b} = \sum_x X_{a,x} Y_{x,b}$, so finding a maximum element in $Z$ is equivalent to computing $\max_{a,b} \sum_x X_{a,x} Y_{x,b}$. Looks familiar? It should. This is the maximization problem we stated earlier. Therefore, we may solve the problem using

Z = numpy.dot(X, Y)
a, b = numpy.unravel_index(Z.argmax(), Z.shape)

This breaks the affine cipher.

Some notes on complexity

So, what is the complexity of the matrix approach? Computing the matrices takes $O(N^2)$ modular operations. The matrix multiplication naively takes $O(N^3)$ operations, but for large $N$ this can be done faster: for instance, Strassen's algorithm takes $O(N^{2.807})$, and faster algorithms exist. Also, taking advantage of symmetry and structure could probably decrease the complexity further. This dominates the total complexity of the approach. Compare this with brute force: guessing the key takes $O(N^2)$ guesses and, for each guess, computing the distance takes $O(N)$ operations, which in total yields $O(N^3)$. It should be noted that the complexity of this approach may be reduced by picking $a,b$ in an order which minimizes the number of tries.

Example implementation for the id0-rsa.pub: github

# Custom terminal for Vagrant/SSH

Short story: I wanted to distinguish my terminal windows between local sessions, ssh sessions and vagrant sessions.

SSH_THEME="SSH"
VAGRANT_THEME="Vagrant"

set_th () {
    osascript -e "tell app \"Terminal\" to set current settings of first window to settings set \"$1\""
}

set_id () {
    osascript -e "tell app \"Terminal\" to set current settings of first window to $1 $2 $3 $4" # $@ does not work!
}

get_id () {
    cur_id=$(osascript -e "tell app \"Terminal\" to get current settings of first window")
}

ssh(){
    #!/bin/sh
    get_id
    set_th $SSH_THEME
    /usr/bin/ssh "$@"
    set_id $cur_id
}

vagrant(){
    #!/bin/sh
    if [ "$1" = "ssh" ]; then
        get_id
        set_th $VAGRANT_THEME
        /opt/vagrant/bin/vagrant "$@"
        set_id $cur_id
    else
        /opt/vagrant/bin/vagrant "$@"
    fi
}

The code saves the current theme in a temporary variable before switching, so when the session ends, the original theme is restored instead of a fixed one. Putting the above code in your .bash_profile gives the following nice behavior:

Color coding your sessions is a great way to visualize things and make sure you do not take any unwanted action by mistake 🙂 Of course, the code can be used to wrap any application. For instance, one could use it to make the interactive interpreter of Python/Sage or terminal sessions using torsocks appear in different colors or even fonts.

# Re-mapping KBT Pure Pro in OS X

For my everyday-use computer, I use a modded KBT Pure Pro; this is a small mechanical keyboard with an aluminium base and background lighting, perfect for programming and typing. The size of the keyboard is 60 % of a normal one, making it suitable for spatially constrained workspaces. To my experience, it is also more ergonomic. Below is a comparison of the Pure Pro and a wireless Apple keyboard. For those in the process of buying a keyboard, I recommend this one 🙂

For quite a while, I have used Linux on this computer. But after installing OS X, the keyboard map went whack, so to speak. Many keys were mapped incorrectly. Using Ukulele, I created a customized layout with correct mapping (don't mind the duplicate keys):

The layout covers all keys and can be found here. NOTE: this is a layout for the KBT Pure Pro with British ISO layout and not ANSI.

# BackdoorCTF16 – Collision Course

With 350 points and a description as follows:

In today's world, hash collisions are becoming more and more popular.
That is why, one must rely on standardized hashing techniques, such as bcrypt. However, n00bster shall never learn, and he has implemented his own hash function which he proudly calls foobar. Attached is an implementation of the hash function and a file with which you are supposed to find a collision. He believes that you will not be able to find a collision for the file, especially since he hasn't even given you the hashing algorithm, but has packaged it as a black box application. Prove to him that he is wrong.

Note: Multiple collisions are possible, but only one of them is a valid flag. You will realize you've gotten it once you do.

The hash is given as follows:

So, we start off by looking at the binary. Using Hopper, we obtain the following pseudo code by decompilation:

int hash(int input) {
    eax = _rotr(input ^ 0x24f50094, (input ^ 0x24f50094) & 0xf);
    eax = _rotl(eax + 0x2219ab34, eax + 0x2219ab34 & 0xf);
    eax = eax * 0x69a2c4fe;
    return eax;
}

int main() {
    esp = (esp & 0xfffffff0) - 0x20;
    puts(0x80486d0);
    gets(0x804a060);
    stack[2039] = "\nBar:";
    puts(stack[2039]);
    while (stack[2039] < *(esp + 0x18)) {
        stack[2039] = *(stack[2039] + stack[2039] * 0x4);
        *(esp + 0x14) = *(esp + 0x14) ^ hash(stack[2039]);
        eax = _rotr(stack[2039], 0x7);
        printf("%08lx", stack[2039]);
        *(esp + 0x10) = *(esp + 0x10) + 0x1;
    }
    eax = putchar(0xa);
    return eax;
}

We sketch the above code as a block scheme below:

The first thing to note is that we can find an infinite number of collisions just by appending arbitrary data after 10 blocks. While this completely defeats the requirements on a secure cryptographic hash function, it is not what we are after. This Merkle-Damgård-like structure allows us to solve the blocks iteratively, starting from the first. Here is how. Starting from the first block, we can find an input to the function $H$ such that its output, when rotated 7 steps, equals block 0 (here denoted $B_0$). Hence, the problem we solve is to find an $x$ such that $H(x) \ll 7 = B_0$.
This is a simple thing for Z3. Then, we take the next block and solve for $(H(x) \oplus B_0) \ll 7 = B_1$, and so forth. Implemented in Python/Z3, it may look like the following:

from z3 import *
import binascii, string, itertools

bits = 32
mask = 2**bits - 1
allowed_chars = string.printable

def convert_to_hex(s):
    return ''.join([hex(ord(x))[2:].zfill(2) for x in s[::-1]])

def convert_to_string(h):
    return ''.join([chr(int(x, 16)) for x in list(map(''.join, zip(*[iter(hex(h)[2:])]*2)))[::-1]])

def rot(val, steps):
    return (val << (bits-steps)) | LShR(val, steps)

def hash_foobar(input):
    eax = rot(input ^ 0x24f50094, (input ^ 0x24f50094) & 0xf)
    eax = rot(eax + 0x2219ab34, bits - (eax + 0x2219ab34 & 0xf))
    eax = eax * 0x69a2c4fe
    return eax & mask

def break_iteratively(hashdata, i):
    if i == 0:
        prev_block = 0
    else:
        prev_block = hashdata[i-1]
    s = Solver()
    j = BitVec('current_block', bits)
    eax = rot(prev_block ^ hash_foobar(j), 7)
    s.add(eax == hashdata[i])
    block_preimages = []
    while s.check() == sat:
        sol = s.model()
        s.add(j != sol[j].as_long())
        block_string = convert_to_string(sol[j].as_long())
        if all(c in allowed_chars for c in block_string):
            block_preimages.append(block_string)
    return block_preimages

known = '9513aaa552e32e2cad6233c4f13a728a5c5b8fc879febfa9cb39d71cf48815e10ef77664050388a3' # this is the hash of the file
data = list(map(''.join, zip(*[iter(known)]*8)))
hashdata = [int(x, 16) for x in data]

print '[+] Hash:', ''.join(data)
print '[+] Found potential hashes:\n'
for x in itertools.product(*[break_iteratively(hashdata, i) for i in range(10)]):
    print ' * ' + ''.join(x)

This code is surprisingly fast, thanks to Z3, and runs in 0.3 seconds.
Taking all possible collisions into consideration…

[+] Hash: 9513aaa552e32e2cad6233c4f13a728a5c5b8fc879febfa9cb39d71cf48815e10ef77664050388a3
[+] Found potential hashes:

 * CTFEC0nstra1nts_m4keth_fl4g}
 * CTFEC0nstra1nts_m4keth_nl4g}
 * CTFEC0nstra1nws_m4keth_fl4g}
 * CTFEC0nstra1nws_m4keth_nl4g}
 * CTFEC0nstra9nts_m4keth_fl4g}
 * CTFEC0nstra9nts_m4keth_nl4g}
 * CTFEC0nstra9nws_m4keth_fl4g}
 * CTFEC0nstra9nws_m4keth_nl4g}
 * CTF{C0nstra1nts_m4keth_fl4g}
 * CTF{C0nstra1nts_m4keth_nl4g}
 * CTF{C0nstra1nws_m4keth_fl4g}
 * CTF{C0nstra1nws_m4keth_nl4g}
 * CTF{C0nstra9nts_m4keth_fl4g}
 * CTF{C0nstra9nts_m4keth_nl4g}
 * CTF{C0nstra9nws_m4keth_fl4g}
 * CTF{C0nstra9nws_m4keth_nl4g}

…we finally conclude that the flag is the SHA-256 of C0nstra1nts_m4keth_fl4g.

# BackdoorCTF16 – Baby

Worth 200 points, this challenge was presented with the following:

z3r0c00l has a safe repository of files. The filename is signed using z3r0c00l's private key (using the PKCS-1 standard). Anyone willing to read a file, has to ask for a signature from z3r0c00l. But z3r0c00l is currently unavailable. Can you still access a file named "flag" on z3rc00l's repository? nc hack.bckdr.in 9001

Let us take a look at the public key… 3072 bits and public exponent $e = 3$. Hmm… having a small exponent is usually not good practice. First, I tried computing the roots to $x^3 - s \bmod n$, where $s$ is the signature and $n$ is the modulus, but then I realized that this was not the way to go. What if we use a non-modular cube root, plain old Babylonian style? After looking around, I also realized that this is Bleichenbacher's $e = 3$ attack, which I probably should have known about. There is a lot of information about this attack (therefore, I will not describe it here) and, of course, lots of people have already written code for it.
Being lazy/efficient, I rewrote functional code into the following:

from libnum import *
from gmpy2 import mpz, iroot, powmod, mul, t_mod
import hashlib, binascii, rsa, os

def get_bit(n, b):
    """ Returns the b-th rightmost bit of n """
    return ((1 << b) & n) >> b

def set_bit(n, b, x):
    """ Returns n with the b-th rightmost bit set to x """
    if x == 0:
        return ~(1 << b) & n
    if x == 1:
        return (1 << b) | n

def cube_root(n):
    return int(iroot(mpz(n), 3)[0])

snelhest = hashlib.sha256('flag')
ASN1_blob = rsa.pkcs1.HASH_ASN1['SHA-256']
suffix = b'\x00' + ASN1_blob + snelhest.digest()
sig_suffix = 1
for b in range(len(suffix)*8):
    if get_bit(sig_suffix ** 3, b) != get_bit(s2n(suffix), b):
        sig_suffix = set_bit(sig_suffix, b, 1)

while True:
    prefix = b'\x00\x01' + os.urandom(3072//8 - 2)
    sig_prefix = n2s(cube_root(s2n(prefix)))[:-len(suffix)] + b'\x00' * len(suffix)
    sig = sig_prefix[:-len(suffix)] + n2s(sig_suffix)
    if b'\x00' not in n2s(s2n(sig) ** 3)[:-len(suffix)]:
        break

print hex(s2n(sig))[2:-1]

Ok, so let's try it:

Great!

# Defcon CTF – b3s23 (partial?)

The server runs a program (game of life) which has a $110 \times 110$ board with cells (bits). After a fixed number $n$ of iterations, the simulation stops and the program jumps to the first bit of the memory containing the board. We want to create an input which contains shellcode in this area after $n$ iterations. Obviously, we could choose any shellcode and run game of life backwards. Cool, let us do that then! Uh-oh, inverting game of life is in fact a very hard problem… so it is not really feasible 😦 What to do, then?

Game of life

Game of Life is a cellular automaton, devised by John Conway, and is based on the following rules:

1. A cell is born if it has exactly 3 neighbours. Neighbours are defined as the vertically, horizontally and diagonally adjacent cells.
2. A cell dies if it has fewer than two or more than three neighbours.

Stable code (still life)

Still life consists of cell structures with repeating cycles having period 1.
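The two rules can be written down as a one-step update; here is a minimal sketch of my own (independent of the challenge binary), together with a check that a $2\times2$ block is indeed still life:

```python
from collections import Counter

def step(board):
    """One Game of Life iteration; board is a set of live (row, col) cells."""
    neighbours = Counter((r + dr, c + dc)
                         for (r, c) in board
                         for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                         if (dr, dc) != (0, 0))
    # Born with exactly 3 live neighbours; survives with 2 or 3.
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in board)}

block = {(0, 0), (0, 1), (1, 0), (1, 1)}
print(step(block) == block)  # True: period 1, a still life
```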
Here are the building blocks I used to craft the shellcode. Of course, the still life is invariant under rotation and mirroring.

Shellcode

So, I tried to find the shortest shellcode that would fit one line (110 bits). This one is 8 bytes. Great.

08048334 <main>:
 8048334: 99       cltd
 8048335: 6a 0b    push   $0xb
 8048337: 58       pop    %eax
 8048338: 60       pusha
 8048339: 59       pop    %ecx
 804833a: cd 80    int    $0x80

In binary, this translates to:

000001101001100101101010000010110101100001100000010110011100110110000000

Ok, so we note that 110101 ... 01110 cannot be constructed by our building blocks (there most certainly exist such blocks, but I didn't consider them). So, I use a padding trick. By inserting an operation which does nothing specific,

10110011 00000000    mov bl,0x0

we are able to use the blocks given in the previous section. This Python code gives the binary (still-life-solvable) sequence:

from pwn import *

binary_data = ''.join([bin(ord(opcode))[2:].zfill(8) for opcode in shellcode])
context(arch = 'i386', os = 'linux')
print disasm(shellcode)
print binary_data[0:110]

which is

0000011010011001101100110000000001101010000010110101100001100000010110011100110110000000

The following cellular automaton is stable, and the first line contains our exploit:

As can be seen in the animation below, we have found a still-life shellcode. When feeding it to the program, we find that it remains in memory after any number of iterations:

Nice! Unfortunately, the code did not give me a shell, but at least the intended code was executed. I had a lot of fun progressing this far 🙂

# TU CTF – Secure Auth

This was a 150-point challenge with the description:

We have set up this fancy automatic signing server! We also uses RSA authentication, so it's super secure!

nc 104.196.116.248 54321

Connecting to the service, we get the following

Obviously, we cannot feed the message get_your_hands_off_my_RSA! to the oracle.
So, we will only receive signatures, with no way to verify them; this means we know neither the public modulus nor the public exponent. But, of course, we could guess the public exponent… there are a few standard ones: $3, 17, 65537...$

First, I obtained the signatures for $3$ and $4$ from the provided service. Denote these $s_3, s_4$, respectively. We note that, given a correct public exponent $e$, we may compute $s_3^e = 3 + k \cdot N$ and $s_4^e = 4 + l \cdot N$. Inevitably, $\textnormal{gcd}(s_3^e-3,s_4^e-4) = \textnormal{gcd}(k,l)\cdot N$. Hoping for $\textnormal{gcd}(k,l)$ to be small, we can use several pairs until we find one that works. Trying all the listed (guessed) public exponents, we find that $e = 65537$ (this was performed surprisingly fast in Sage with my Intel hexacore). Hence, we have now determined the modulus

$\begin{array}{rl} N = & 24690625680063774371747714092931245796723840632401231916590850908498671935961736 \\ &33219586206053668802164006738610883420227518982289859959446363584099676102569045 \\ &62633701460161141560106197695689059405723178428951147009495321340395974754631827 \\ &95837468991755433866386124620786221838783092089725622611582198259472856998222335 \\ &23640841676931602657793593386155635808207524548748082853989358074360679350816769 \\ &05321318936256004057148201070503597448648411260389296384266138763684110173009876\\ &82339192115588614533886473808385041303878518137898225847735216970008990188644891 \\ &634667174415391598670430735870182014445537116749235017327.\end{array}$

Now, note that

libnum.strings.s2n('get_your_hands_off_my_RSA!') % 3 == 0

OK, so we may split this message $m$ into a product of two message factors: $m_1 = 3$ and $m_2 = 166151459290300546021127823915547539196280244544484032717734177$ and sign them. Then, we compute the final signature $s = m^d = (m_1 \cdot m_2)^d = m_1^d \cdot m_2^d = s_1 \cdot s_2 \bmod N$. Mhm, so what now?
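Both steps can be re-enacted offline with a key of my own (tiny toy primes standing in for the server's 3072-bit key; requires Python 3.8+ for the modular inverse via pow):

```python
from math import gcd

# Hypothetical toy key; in the real challenge only the signing oracle was known.
p, q, e = 10007, 10009, 65537
N = p * q
d = pow(e, -1, (p - 1) * (q - 1))
sign = lambda m: pow(m, d, N)        # the oracle we may query

# Step 1: recover the modulus from the signatures on 3 and 4.
s3, s4 = sign(3), sign(4)
g = gcd(pow(s3, e) - 3, pow(s4, e) - 4)
print(g % N == 0)                    # True: g = gcd(k, l) * N

# Step 2: the oracle refuses the target message, but will sign its factors.
m1, m2 = 3, 49999                    # toy stand-ins for the forbidden message
forged = (sign(m1) * sign(m2)) % N
print(forged == sign(m1 * m2))       # True: m1^d * m2^d = (m1*m2)^d (mod N)
```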
Phew 🙂

# TU CTF – Hash'n'bake

This challenge, worth 200 points, exhibits a trivial (and, obviously, non-secure) hash function, with the objective to find a keyed hash. The description:

A random goat from Boston hashed our password! Can you find the full output?

The hash function is defined as:

def to_bits(length, N):
    return [int(i) for i in bin(N)[2:].zfill(length)]

def from_bits(N):
    return int("".join(str(i) for i in N), 2)

CONST2 = to_bits(65, (2**64) + 0x1fe67c76d13735f9)

def hash_n_bake(mesg):
    mesg += CONST
    shift = 0
    while shift < len(mesg) - 64:
        if mesg[shift]:
            for i in range(65):
                mesg[shift + i] ^= CONST2[i]
        shift += 1
    return mesg[-64:]

def xor(x, y):
    return [g ^ h for (g, h) in zip(x, y)]

The following computations will give the hash

PLAIN_1 = "goatscrt"
PLAIN_2 = "tu_ctf??"

def str_to_bits(s):
    return [b for i in s for b in to_bits(8, ord(i))]

def bits_to_hex(b):
    return hex(from_bits(b)).rstrip("L")

if __name__ == "__main__":
    with open("key.txt") as f:
        print PLAIN_1, "=>", bits_to_hex(hash_n_bake(xor(KEY, str_to_bits(PLAIN_1))))
        print "TUCTF{" + bits_to_hex(hash_n_bake(xor(KEY, str_to_bits(PLAIN_2)))) + "}"

# Output
# goatscrt => 0xfaae6f053234c939
# TUCTF{****REDACTED****}

So, the problem is: we need to compute the hash without knowing the key (or brute-forcing it). The first observation we make is that the hash function is a truncated affine function, i.e., $h(m) = f((m \cdot 2^{64} \oplus \texttt{CONST})\cdot \texttt{CONST}_2)$, with $f(a \oplus b) = f(a) \oplus f(b)$. A quite simple relation emerges: $h(k \oplus m) = h(k) \oplus h(m) \oplus h(0)$ (note: $h(0)$ here denotes the hash of the empty input). Using this relation, we can do the following. We know $h(k \oplus m_1)$ and $h(m_2)$ and want to determine $h(k \oplus m_2)$. Consider the following relation:

$\begin{array}{rl} h(k \oplus m_2) = & h(k) \oplus h(m_2) \oplus h(0) \\ = & h(k) \oplus h(m_1) \oplus h(0) \oplus h(m_1) \oplus h(m_2) \phantom{\bigg(} \\ = & h(k \oplus m_1) \oplus h(m_1) \oplus h(m_2).
\end{array}$

All terms on the last line of the above equation are known, so we can easily compute the hash, even without knowing the key. Ha-ha! Computing the above relation using Python can be done in the following manner:

xor(xor(to_bits(64, 0xfaae6f053234c939),
        hash_n_bake(str_to_bits(PLAIN_1))),
    hash_n_bake(str_to_bits(PLAIN_2)))

This gives the flag TUCTF{0xf38d506b748fc67}. Sweet 🙂

# TU CTF – Pet Padding Inc.

A web challenge worth 150 points, with description

We believe a rouge whale stole some data from us and hid it on this website. Can you tell us what it stole? http://104.196.60.112/

Visiting the site, we see that there is a cookie youCantDecryptThis. Alright… let's try to fiddle with it. We run the following command

curl -v --cookie "youCantDecryptThis=aaaa" http://104.196.60.112/

and we observe that there is an error which is not present when running it with the correct cookie set, i.e.,

curl -v --cookie "youCantDecryptThis=0KL1bnXgmJR0tGZ/E++cSDMV1ChIlhHyVGm36/k8UV/3rmgcXq/rLA==" http://104.196.60.112/

Clearly, this is a padding error (actually, there is an explicit padding-error warning, but it is not shown by curl). OK, so decryption can be done by a simple padding-oracle attack. This attack is rather simple to implement (basically, use the relation $P_i = D_K(C_i) \oplus C_{i-1}$ and the definition of PKCS#7 padding; see the Wikipedia page for a better explanation), but I decided to use PadBuster.
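Before reaching for a tool, the relation can be demonstrated end-to-end with a toy CBC construction of my own (a byte-substitution "block cipher" standing in for AES; only the oracle's yes/no answer is used to recover a plaintext byte):

```python
import os
import random

BS = 8
sbox = list(range(256))
random.Random(1).shuffle(sbox)          # toy key: a fixed byte permutation
inv_sbox = [0] * 256
for i, s in enumerate(sbox):
    inv_sbox[s] = i
E = lambda block: bytes(sbox[b] for b in block)
D = lambda block: bytes(inv_sbox[b] for b in block)

def encrypt(pt, iv):
    pad = BS - len(pt) % BS
    pt += bytes([pad]) * pad            # PKCS#7 padding
    out, prev = b'', iv
    for i in range(0, len(pt), BS):
        prev = E(bytes(a ^ b for a, b in zip(pt[i:i+BS], prev)))
        out += prev
    return out

def oracle(iv, block):
    """True iff D_K(block) xor iv ends in valid PKCS#7 padding."""
    pt = bytes(a ^ b for a, b in zip(D(block), iv))
    pad = pt[-1]
    return 1 <= pad <= BS and pt[-pad:] == bytes([pad]) * pad

iv = os.urandom(BS)
ct = encrypt(b'padding', iv)            # one block: 7 bytes + one 0x01 pad byte

# P_i = D_K(C_i) xor C_{i-1}: tamper with the last IV byte so that the guess
# that yields valid padding reveals the last plaintext byte.
recovered = next(g for g in range(256)
                 if oracle(iv[:-1] + bytes([iv[-1] ^ g ^ 0x01]), ct))
print(recovered)  # 1, the single PKCS#7 pad byte of the 7-byte message
```

Iterating the same trick over every byte position and every block is exactly the bookkeeping that PadBuster automates.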
The following (modified example) code finds the decryption; the imports, the cookie handling, and the padding-error check (the exact encoding and error marker are my assumptions, noted in the comments) have been filled in:

import logging
import socket
import time
from base64 import b64encode
from urllib import quote

import requests
from paddingoracle import BadPaddingException, PaddingOracle

class PadBuster(PaddingOracle):
    def __init__(self, **kwargs):
        super(PadBuster, self).__init__(**kwargs)
        self.session = requests.Session()
        self.wait = kwargs.get('wait', 2.0)

    def oracle(self, data, **kwargs):
        # send the tampered ciphertext as the cookie (assuming it is the
        # URL-quoted Base64 encoding, as the original cookie suggests)
        self.session.cookies['youCantDecryptThis'] = quote(b64encode(data))
        while 1:
            try:
                response = self.session.get('http://104.196.60.112',
                                            stream=False, timeout=5, verify=False)
                break
            except (socket.error, requests.exceptions.RequestException):
                logging.exception('Retrying request in %.2f seconds...',
                                  self.wait)
                time.sleep(self.wait)
                continue
        self.history.append(response)
        # the page shows an explicit padding-error warning when the padding
        # is invalid (the exact marker string is site-specific)
        if 'padding' in response.text.lower():
            raise BadPaddingException

The decrypted flag we get is TUCTF{p4dding_bec4use_5ize_m4tt3rs}!
## Bresenham’s Pie

29/03/2022

Approximating $\pi$ is always fun. Some approximations are well known, like Zu Chongzhi’s $\frac{355}{113}$, which is fairly precise (it gives six digits). We can derive a good approximation using continued fractions, series, or some spigot algorithm. We can also use Monte Carlo methods, such as drawing uniformly distributed values inside a square, counting those that fall inside the circle, and then using the ratio of inside points to total points to estimate $\pi$. This converges slowly. How do we evaluate the area of the circle, then? Well, we could do it exhaustively, but in a smart way.

## Binary Trees are optimal… except when they’re not.

20/07/2021

The best-case depth for a search tree is $O(\log_b n)$, if $b$ is the arity (or branching) of the tree. Intuitively, we know that if we increase $b$, the depth goes down, but is that all there is to it? And if not, how do we choose the optimal branching $b$?

While it’s true that an optimal search tree will have depth $O(\log_b n)$, we can’t just increase $b$ indefinitely, because we also increase the number of comparisons we must do in each node. A $b$-ary tree will have (in the worst case) $b-1$ keys, and for each, we must do comparisons for lower-than and equal (the right-most child will be searched without comparison; it is the “else” of all the comparisons). We must first understand how these comparisons affect average search time.

## Fixed-Points

25/08/2020

Preparing lecture notes (or videos) sometimes brings you to revisit things you’ve known for a while, but haven’t really taken time to formalize properly. One of those things is fixed-point arithmetic.

## Mirror, Mirror on the Tripod, Who’s the Fairest of them All?

04/08/2020

This week, I’ll show my mirror assembly to reverse the image “in hardware” for the lightboard.

## Møre Lïtbørd

28/07/2020

Last time, I gave the instructions on how to build a lightboard, but not much in terms of how you actually use it.
Now that I’ve been giving lectures from it (with graduate students as test subjects) and have started recording for the undergrad courses, I’ve tweaked my setup and learnt a few tricks. This week, I’ll discuss some of them.

## just_enough

30/06/2020

The C99 <stdint.h> header provides a plethora of type definitions for platform-independent, safe code: int_fast16_t, for example, provides an integer that plays well with the machine but has at least 16 bits. The int_fastxx_t and int_leastxx_t defines don’t guarantee a tight fit; they provide a machine-efficient fit. They find the fastest type of integer for that machine that respects the constraint. But let’s take the problem the other way around: what about defines that give you the smallest integer, not for the number of bits (because that’s trivial with intxx_t) but from the maximum value you need to represent?

## Lïtbørd (more than some assembly required)

09/06/2020

As you may have noticed, a global pandemic got many of us working from home. While one can argue that you can do accounting from home, it’s a lot more complicated to teach from home. Pretty much everyone is trying to figure that one out. For my part, I decided that zoom and the virtual whiteboard are not very interesting. Like many, I decided to use a lightboard. So, the problem is, where do you get a lightboard on short notice? Well, you can build one. Let’s see how:

## Evaluating polynomials

05/05/2020

Evaluating polynomials is not a thing I do very often. When I do, it’s for interpolation and splines; and traditionally those are done with relatively low-degree polynomials—cubic at most. There are a few rather simple tricks you can use to evaluate them efficiently, and we’ll have a look at them.

## Factorial Approximations

31/03/2020

$n!$ (and its logarithm) keep showing up in the analysis of algorithms. Unfortunately, it’s very often unwieldy, and we use approximations of $n!$ (or $\log n!$) to simplify things. Let’s examine a few!

## How many bits?
24/03/2020

In this quarantine week, let’s answer a (not that) simple question: how many bits do you need to encode sound and images with a satisfying dynamic range? Let’s see what hypotheses are useful, and how we can use them to get a good idea of the number of bits needed.
Hong Qi (Cardiff University)

Cosmological Inference using Gravitational Wave Standard Sirens

The observation of the binary neutron star merger GW170817 provided the first constraint on the Hubble constant $H_0$ using the bright-counterpart standard-siren method. In the case of a dark standard siren, where an electromagnetic counterpart is not identified, a statistical method with a galaxy catalog can be exploited to estimate the Hubble constant. With multiple events, a combined posterior can be obtained in a Bayesian framework. In this lunch talk, I will walk you through Hubble constant inference via a series of mock data challenges from an O2-like detectability simulation of hundreds of binary neutron star mergers, using both the counterpart and the statistical standard-siren methods, with a focus on gravitational-wave and electromagnetic selection effects. I will also talk about Hubble constant measurement with real gravitational wave detections from LIGO. [link]

Place: Room 2907, Department of Astronomy
Time: Mon, 2019-12-30 12:00 to 13:00
# Einstein: speed or velocity?

1. Mar 16, 2005

### philosophking

This is an interesting point that I just thought of while listening to one of Einstein's own lectures on his theory of relativity. In it he said that energy is equal to mass times the velocity of light squared. But doesn't velocity have direction? And wouldn't this imply energy has direction? Obviously it doesn't. Speed, on the other hand, is a scalar, so I would think it would be mass times the speed of light squared.

$$\vec{v}^2 = \vec{v} \cdot \vec{v}$$
## Results (1-50 of 541 matches) Next Label $\alpha$ $A$ $d$ $N$ $\chi$ $\mu$ $\nu$ $w$ prim arith $\mathbb{Q}$ self-dual $\operatorname{Arg}(\epsilon)$ $r$ First zero Origin 2-1323-1.1-c1-0-1 $3.25$ $10.5$ $2$ $3^{3} \cdot 7^{2}$ 1.1 $$1.0 1 0 0 0.478678 Modular form 1323.2.a.v.1.1 2-1323-1.1-c1-0-11 3.25 10.5 2 3^{3} \cdot 7^{2} 1.1$$ $1.0$ $1$ $0$ $0$ $0.876824$ Modular form 1323.2.a.u.1.1 2-1323-1.1-c1-0-13 $3.25$ $10.5$ $2$ $3^{3} \cdot 7^{2}$ 1.1 $$1.0 1 0 0 0.936863 Modular form 1323.2.a.be.1.3 2-1323-1.1-c1-0-14 3.25 10.5 2 3^{3} \cdot 7^{2} 1.1$$ $1.0$ $1$ $0$ $0$ $0.946329$ Modular form 1323.2.a.z.1.1 2-1323-1.1-c1-0-18 $3.25$ $10.5$ $2$ $3^{3} \cdot 7^{2}$ 1.1 $$1.0 1 0 0 1.16277 Modular form 1323.2.a.ba.1.2 2-1323-1.1-c1-0-19 3.25 10.5 2 3^{3} \cdot 7^{2} 1.1$$ $1.0$ $1$ $0$ $0$ $1.16384$ Modular form 1323.2.a.bd.1.3 2-1323-1.1-c1-0-2 $3.25$ $10.5$ $2$ $3^{3} \cdot 7^{2}$ 1.1 $$1.0 1 0 0 0.628411 Modular form 1323.2.a.bd.1.2 2-1323-1.1-c1-0-45 3.25 10.5 2 3^{3} \cdot 7^{2} 1.1$$ $1.0$ $1$ $0.5$ $1$ $1.90124$ Modular form 1323.2.a.bb.1.3 2-1323-1.1-c1-0-46 $3.25$ $10.5$ $2$ $3^{3} \cdot 7^{2}$ 1.1 $$1.0 1 0 0 1.90129 Modular form 1323.2.a.w.1.2 2-1323-9.7-c1-0-29 3.25 10.5 2 3^{3} \cdot 7^{2} 9.7$$ $1.0$ $1$ $0.387$ $0$ $1.61224$ Modular form 1323.2.f.f.883.5 2-1323-9.7-c1-0-3 $3.25$ $10.5$ $2$ $3^{3} \cdot 7^{2}$ 9.7 $$1.0 1 0.335 0 0.423101 Modular form 1323.2.f.h.883.4 2-1323-1.1-c1-0-0 3.25 10.5 2 3^{3} \cdot 7^{2} 1.1$$ $1.0$ $1$ $0$ $0$ $0.401169$ Elliptic curve 1323.d Modular form 1323.2.a.d Modular form 1323.2.a.d.1.1 2-1323-1.1-c1-0-10 $3.25$ $10.5$ $2$ $3^{3} \cdot 7^{2}$ 1.1 $$1.0 1 0 0 0.794908 Elliptic curve 1323.i Modular form 1323.2.a.i Modular form 1323.2.a.i.1.1 2-1323-1.1-c1-0-12 3.25 10.5 2 3^{3} \cdot 7^{2} 1.1$$ $1.0$ $1$ $0$ $0$ $0.913197$ Elliptic curve 1323.c Modular form 1323.2.a.c Modular form 1323.2.a.c.1.1 2-1323-1.1-c1-0-15 $3.25$ $10.5$ $2$ $3^{3} \cdot 7^{2}$ 1.1 $$1.0 1 0 0 0.980257 Elliptic curve 1323.l Modular form 
1323.2.a.l Modular form 1323.2.a.l.1.1 2-1323-1.1-c1-0-16 3.25 10.5 2 3^{3} \cdot 7^{2} 1.1$$ $1.0$ $1$ $0$ $0$ $1.06074$ Elliptic curve 1323.g Modular form 1323.2.a.g Modular form 1323.2.a.g.1.1 2-1323-1.1-c1-0-17 $3.25$ $10.5$ $2$ $3^{3} \cdot 7^{2}$ 1.1 $$1.0 1 0 0 1.12292 Elliptic curve 1323.f Modular form 1323.2.a.f Modular form 1323.2.a.f.1.1 2-1323-1.1-c1-0-22 3.25 10.5 2 3^{3} \cdot 7^{2} 1.1$$ $1.0$ $1$ $0$ $0$ $1.24852$ Elliptic curve 1323.p Modular form 1323.2.a.p Modular form 1323.2.a.p.1.1 2-1323-1.1-c1-0-23 $3.25$ $10.5$ $2$ $3^{3} \cdot 7^{2}$ 1.1 $$1.0 1 0 0 1.26108 Elliptic curve 1323.r Modular form 1323.2.a.r Modular form 1323.2.a.r.1.1 2-1323-1.1-c1-0-24 3.25 10.5 2 3^{3} \cdot 7^{2} 1.1$$ $1.0$ $1$ $0.5$ $1$ $1.26210$ Elliptic curve 1323.a Modular form 1323.2.a.a Modular form 1323.2.a.a.1.1 2-1323-1.1-c1-0-26 $3.25$ $10.5$ $2$ $3^{3} \cdot 7^{2}$ 1.1 $$1.0 1 0.5 1 1.32164 Elliptic curve 1323.e Modular form 1323.2.a.e Modular form 1323.2.a.e.1.1 2-1323-1.1-c1-0-29 3.25 10.5 2 3^{3} \cdot 7^{2} 1.1$$ $1.0$ $1$ $0$ $0$ $1.46860$ Modular form 1323.2.a.bd.1.4 2-1323-1.1-c1-0-3 $3.25$ $10.5$ $2$ $3^{3} \cdot 7^{2}$ 1.1 $$1.0 1 0 0 0.664539 Modular form 1323.2.a.w.1.1 2-1323-1.1-c1-0-30 3.25 10.5 2 3^{3} \cdot 7^{2} 1.1$$ $1.0$ $1$ $0.5$ $1$ $1.47161$ Elliptic curve 1323.h Modular form 1323.2.a.h Modular form 1323.2.a.h.1.1 2-1323-1.1-c1-0-32 $3.25$ $10.5$ $2$ $3^{3} \cdot 7^{2}$ 1.1 $$1.0 1 0.5 1 1.53430 Elliptic curve 1323.b Modular form 1323.2.a.b Modular form 1323.2.a.b.1.1 2-1323-1.1-c1-0-33 3.25 10.5 2 3^{3} \cdot 7^{2} 1.1$$ $1.0$ $1$ $0$ $0$ $1.54854$ Elliptic curve 1323.s Modular form 1323.2.a.s Modular form 1323.2.a.s.1.1 2-1323-1.1-c1-0-34 $3.25$ $10.5$ $2$ $3^{3} \cdot 7^{2}$ 1.1 $$1.0 1 0.5 1 1.55390 Modular form 1323.2.a.bc.1.2 2-1323-1.1-c1-0-35 3.25 10.5 2 3^{3} \cdot 7^{2} 1.1$$ $1.0$ $1$ $0$ $0$ $1.55674$ Modular form 1323.2.a.ba.1.3 2-1323-1.1-c1-0-36 $3.25$ $10.5$ $2$ $3^{3} \cdot 7^{2}$ 1.1 $$1.0 1 0.5 1 1.62851 Elliptic curve 
1323.j Modular form 1323.2.a.j Modular form 1323.2.a.j.1.1 2-1323-1.1-c1-0-37 3.25 10.5 2 3^{3} \cdot 7^{2} 1.1$$ $1.0$ $1$ $0.5$ $1$ $1.63769$ Elliptic curve 1323.k Modular form 1323.2.a.k Modular form 1323.2.a.k.1.1 2-1323-1.1-c1-0-38 $3.25$ $10.5$ $2$ $3^{3} \cdot 7^{2}$ 1.1 $$1.0 1 0.5 1 1.64011 Modular form 1323.2.a.t.1.1 2-1323-1.1-c1-0-39 3.25 10.5 2 3^{3} \cdot 7^{2} 1.1$$ $1.0$ $1$ $0.5$ $1$ $1.64117$ Modular form 1323.2.a.x.1.1 2-1323-1.1-c1-0-4 $3.25$ $10.5$ $2$ $3^{3} \cdot 7^{2}$ 1.1 $$1.0 1 0 0 0.716540 Modular form 1323.2.a.z.1.2 2-1323-1.1-c1-0-40 3.25 10.5 2 3^{3} \cdot 7^{2} 1.1$$ $1.0$ $1$ $0.5$ $1$ $1.70267$ Modular form 1323.2.a.bc.1.1 2-1323-1.1-c1-0-41 $3.25$ $10.5$ $2$ $3^{3} \cdot 7^{2}$ 1.1 $$1.0 1 0 0 1.72287 Modular form 1323.2.a.be.1.4 2-1323-1.1-c1-0-42 3.25 10.5 2 3^{3} \cdot 7^{2} 1.1$$ $1.0$ $1$ $0$ $0$ $1.75514$ Modular form 1323.2.a.v.1.2 2-1323-1.1-c1-0-43 $3.25$ $10.5$ $2$ $3^{3} \cdot 7^{2}$ 1.1 $$1.0 1 0.5 1 1.77299 Modular form 1323.2.a.y.1.2 2-1323-1.1-c1-0-44 3.25 10.5 2 3^{3} \cdot 7^{2} 1.1$$ $1.0$ $1$ $0$ $0$ $1.81289$ Modular form 1323.2.a.z.1.3 2-1323-1.1-c1-0-47 $3.25$ $10.5$ $2$ $3^{3} \cdot 7^{2}$ 1.1 $$1.0 1 0.5 1 1.95951 Modular form 1323.2.a.bc.1.3 2-1323-1.1-c1-0-48 3.25 10.5 2 3^{3} \cdot 7^{2} 1.1$$ $1.0$ $1$ $0.5$ $1$ $2.09371$ Modular form 1323.2.a.y.1.3 2-1323-1.1-c1-0-49 $3.25$ $10.5$ $2$ $3^{3} \cdot 7^{2}$ 1.1 $$1.0 1 0.5 1 2.15760 Modular form 1323.2.a.t.1.2 2-1323-1.1-c1-0-5 3.25 10.5 2 3^{3} \cdot 7^{2} 1.1$$ $1.0$ $1$ $0$ $0$ $0.726954$ Elliptic curve 1323.m Modular form 1323.2.a.m Modular form 1323.2.a.m.1.1 2-1323-1.1-c1-0-50 $3.25$ $10.5$ $2$ $3^{3} \cdot 7^{2}$ 1.1 $$1.0 1 0.5 1 2.22037 Modular form 1323.2.a.x.1.3 2-1323-1.1-c1-0-51 3.25 10.5 2 3^{3} \cdot 7^{2} 1.1$$ $1.0$ $1$ $0.5$ $1$ $2.25012$ Elliptic curve 1323.q Modular form 1323.2.a.q Modular form 1323.2.a.q.1.1 2-1323-1.1-c1-0-52 $3.25$ $10.5$ $2$ $3^{3} \cdot 7^{2}$ 1.1 $$1.0 1 0.5 1 2.26481 Elliptic curve 1323.o Modular form 1323.2.a.o 
Modular form 1323.2.a.o.1.1 2-1323-1.1-c1-0-6 3.25 10.5 2 3^{3} \cdot 7^{2} 1.1$$ $1.0$ $1$ $0$ $0$ $0.770225$ Elliptic curve 1323.n Modular form 1323.2.a.n Modular form 1323.2.a.n.1.1 2-1323-1.1-c1-0-7 $3.25$ $10.5$ $2$ $3^{3} \cdot 7^{2}$ 1.1 $$1.0 1 0 0 0.774291 Modular form 1323.2.a.be.1.2 2-1323-1.1-c1-0-8 3.25 10.5 2 3^{3} \cdot 7^{2} 1.1$$ $1.0$ $1$ $0$ $0$ $0.783165$ Modular form 1323.2.a.ba.1.1 2-1323-1.1-c1-0-9 $3.25$ $10.5$ $2$ $3^{3} \cdot 7^{2}$ 1.1 $$1.0 1 0 0 0.792466 Modular form 1323.2.a.bd.1.1 2-1323-21.20-c1-0-0 3.25 10.5 2 3^{3} \cdot 7^{2} 21.20$$ $1.0$ $1$ $0.113$ $0$ $0.0572175$ Modular form 1323.2.c.d.1322.11
## Introduction

From simple automatic lights in the halls of our buildings to the cruise control of cars, photodetectors (PDs) are playing a major role in everyday life1. Often, fast detection of faint signals is required, which is currently provided by inorganic avalanche photodiodes2. As the automotive industry is moving towards self-driving cars3, properties like lower cost, higher sensitivity, wavelength selectivity, and form-free devices are required4,5. PDs made from organic semiconductors can offer these properties, but need further research to optimize these devices for low-intensity signals, i.e., to reach high specific detectivities6. Photomultiplication-type organic photodetectors (PM-OPDs) are capable of amplifying small photocurrents without requiring external/additional circuit components. This can be achieved by a photo-induced enhanced injection via energy level bending caused by charge accumulation near the injecting electrode7,8. Following its observation in single-material active layers9 and donor-acceptor (D-A) heterojunctions10, different strategies have been introduced to achieve such charge accumulation and thereby the required energy level bending: a lack of percolation paths for one charge carrier type11,12,13,14, energetic barriers via interfacial layers15,16, as well as intentionally inserted trap states17,18. All of these strategies include an accumulation of one carrier type near the contact, such that the electrical field caused by these charges bends the energy level, enabling the opposite charge to be injected via tunneling across the injection barrier7. If the transit time of the injected charge carriers is shorter than the lifetime of the accumulated photo-generated charges, an EQE > 100% is observed. Here, we would like to stress that prior to the photomultiplication process the photon needs to be absorbed by the active layer.
We therefore conclude that the minimum criterion for photomultiplication is that the internal quantum efficiency (IQE) is larger than unity. The effect described above has been applied in organic and hybrid PDs, leading to external quantum efficiencies (EQEs) as high as 10^5 %19,20,21. Nonetheless, the specific detectivity (D*) achieved by these devices, which takes into account not only EQE but also the device noise current, ranges from 10^10 to 10^15 cm Hz^1/2 W^−1 (Jones) in the visible range7, values comparable to those of diode-like OPDs. Guo et al. presented two different polymers blended with zinc-oxide nanoparticles, reaching D* of 10^15 Jones in the ultraviolet region17. In the near-infrared (NIR) at 1200 nm, using colloidal lead sulfide (PbS) quantum dots, Lee et al. achieved D* of 10^13 Jones22, while 10^14 Jones was attained for polymer-based devices in the visible range23. Recently, imagers24 and dual-band25 OPDs were also fabricated using photomultiplication, with D* of ~10^14 and ~10^13 Jones, respectively. Moreover, photomultiplication has also been explored in perovskites26, for which EQE of 4500% and D* of 10^13 Jones were demonstrated at around 600 nm27. Despite the remarkable performance achieved by PM-OPDs in terms of increased EQE, limitations are still present in this class of devices. PM-OPDs suffer from high noise, a result of the field-dependent dark currents observed in these devices. In fact, this represents the main limitation in PM-OPDs, as the gain acquired by biasing the device might also result in an increased dark current. Photomultiplication has been extensively exploited in solution-processed organic/hybrid devices. However, despite the many advantages offered by sublimable small molecules, fewer examples have been demonstrated in fully vacuum-processed devices10,15,28,29,30. Huang et al. demonstrated EQE higher than 1000% in devices based on C60.
These values were attributed to the disordered structure of C60 and to interfacial traps at the C60/hole-transport-layer interface31. Similar results were achieved by interfacial blocking layers in hybrid (solution- and vacuum-processed)18,32 and fully vacuum-processed devices16,33, which are used to avoid charge extraction, thereby causing the necessary band bending. In general, vacuum deposition provides the possibility of depositing a vertical gradient of donor or acceptor molecules in the blend, as well as fine tuning the mixing ratio. Yet such fine tuning, extensively used in solution-processed PM-OPDs, has not been investigated in vacuum-processed devices. Besides that, vacuum deposition offers the possibility of sequentially stacking multiple layers, the well-established doping technology34,35, and straightforward fabrication of matrices of individual pixels, and it is currently the preferred manufacturing technique for commercial organic optoelectronic devices. Another aspect not considered in PM-OPDs concerns photomultiplication in the extended charge-transfer (CT) state absorption region. With the aid of a Fabry-Perot microcavity, this rather weak absorption region has been used in NIR narrowband organic photodetectors (CT-OPDs)36,37,38. Such narrowband OPDs could significantly benefit from the increased IQE if photomultiplication also took place upon direct excitation of CT states. However, it is unclear whether direct excitation of CT states can result in a photomultiplication process. Utilizing the intermolecular CT state absorption makes it possible to detect NIR photons beyond 1700 nm while using rather small, sublimable organic semiconductors, where the absorption profile can be easily tuned by the D-A system. On the other hand, the weak intermolecular absorption cross section39,40 challenges the overall performance, and here photomultiplication could improve the electrical performance by increasing the gain for every absorbed photon.
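The gain condition stated above (injected-carrier transit time shorter than the trapped-carrier lifetime) corresponds to the textbook photoconductive gain G = τ_lifetime/τ_transit. A minimal numerical sketch follows; all parameter values are illustrative assumptions on our part, not values reported in this work:

```python
# Textbook photoconductive-gain estimate for a PM-OPD.
# All parameter values below are illustrative assumptions, not from this work.

def transit_time(thickness_m, mobility_cm2_Vs, voltage_V):
    """Drift transit time t = d^2 / (mu * V) across a layer of thickness d."""
    mobility_m2_Vs = mobility_cm2_Vs * 1e-4   # cm^2/(V s) -> m^2/(V s)
    return thickness_m ** 2 / (mobility_m2_Vs * voltage_V)

def photoconductive_gain(trap_lifetime_s, transit_time_s):
    """G = tau_lifetime / tau_transit; G > 1 allows EQE > 100%."""
    return trap_lifetime_s / transit_time_s

# Assumed: 400 nm active layer, hole mobility 1e-4 cm^2/(V s), 10 V dropped
# across the layer, 1 ms lifetime of the trapped photo-generated electrons.
t_tr = transit_time(400e-9, 1e-4, 10.0)    # ~1.6e-6 s
gain = photoconductive_gain(1e-3, t_tr)    # ~625 injected holes per trapped electron
```

With these assumed numbers, each trapped electron enables several hundred holes to transit the device before it recombines or detraps, which is the mechanism behind EQE values far above 100%.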
Here, we report a fully vacuum-processed PM-OPD based on a low-acceptor-content (3 wt%) ZnPc:C60 material system, with a maximum EQE of almost 2000% achieved at −10 V. Additionally, an optimum operation regime maximizing EQE while keeping a low dark current is found, leading to a D* of 2.2 × 10^12 Jones at 670 nm. Sensitively measured EQE spectra reveal that direct excitation of CT states also results in photomultiplication, which is confirmed by an IQE higher than 100% over the entire absorption region. Under −5 V reverse bias, the EQE of the PM-OPD surpasses that of an optimized pin-photodiode, demonstrating the potential for application in microcavity CT-OPDs. Indeed, by exchanging the transparent ITO contact with semitransparent Ag mirrors, while varying the thicknesses of the optical microcavity from 355 to 400 nm, peaks in the NIR region originating from cavity-enhanced CT absorption arise. Narrowband PM-OPDs show peak EQEs from 20 to 80% under −10 V with full width at half maximum (FWHM) from 20 to 40 nm, and D* of around 10^11 Jones for all the resonant wavelengths. These results are comparable with narrowband organic pin-photodiodes based on cavities36,38, and higher than those of narrowband photomultiplication-type devices based on charge injection narrowing (CIN)41. The concept presented here can be used to boost the EQE of CT-OPDs, which so far was mainly limited by the low absorption cross section of CT states, the low internal quantum efficiency38, and the parasitic absorption of the contacts.

## Results

Controlling the mixing ratio is essential for the working principle of previously reported PM-OPDs. For enhanced hole injection in reverse bias, electrons must accumulate near the cathode: we designed our devices based on a low acceptor content (3 wt%), such that few percolation paths are formed. The well-known ZnPc:C60 system is chosen given the LUMO energy offset between these materials.
At this concentration, electrons are intentionally trapped within the LUMO level of C60, and the bending caused by electron accumulation in the C60 phase leads to EQEs above 100%. Below, we describe how this can be achieved in this system and how this effect can be used in narrowband PM-OPDs.

### Enhancing the external quantum efficiency

The PM-OPD operation in the dark and under illumination, as well as the architecture, are shown in Fig. 1a–c. The bulk heterojunction comprising a low C60 content (3 wt%) is sandwiched between two contacts, a top aluminum electrode and bottom ITO, from which light enters the device, as shown in Fig. 1c. Pristine HATNA-Cl6 is used as a hole blocking layer (HBL) between the active layer and the Al electrode to reduce the reverse dark current. However, the thickness of HATNA-Cl6 must be carefully controlled such that injection is enabled upon band bending. Under reverse bias in the dark, the high injection barriers (Fig. 1a) prevent holes and electrons from being injected into ZnPc and C60, respectively. Under forward bias, electrons and holes are injected into the device, such that the device behaves similarly to a photodiode. When the device is exposed to illumination, excitons are formed, and free charges are generated at the D-A interface. Due to the absence of percolation paths, electrons are trapped in the acceptor phase, and their transport is further hindered by the low electron mobility of the electrically undoped HATNA-Cl6 layer42. While n-doped HATNA-Cl6 has already been employed as an electron transport layer, in this device we intentionally use a pristine layer such that the electron extraction is hindered and slowed down, which helps the electron accumulation at the cathode.
This accumulation of electrons upon illumination causes the energy levels to bend in the vicinity of the contact, enabling holes from the external circuit to tunnel through the energy barrier imposed by the HATNA-Cl6 layer into the donor phase, where they are efficiently transported together with the photo-induced holes towards the anode. From the process described above, a voltage-dependent increase in EQE is expected, as a higher reverse voltage further decreases the energy barrier for injection. The black line in Fig. 1d shows the EQE measured at 0 V, for which a maximum of 0.5% is achieved. This rather low value can be explained by the interrupted percolation path for electrons43 and the limited charge separation probability at this concentration, as well as by the unoptimized device architecture for 0 V operation. Clearly, no photomultiplication is observed. To prove the described working mechanism, we increase the applied reverse bias from 0 to −10 V. Indeed, the EQE rises accordingly, reaching almost 2000% at −10 V. A further increase is expected for higher negative bias, as can be seen from Fig. 1e, where no saturation in the relative enhancement is observed. However, as will be discussed below, an optimum operation regime exists in the range of −2.5 V, where the highest D* is achieved. In spite of the extensive work performed by different groups on PM device structures10,16, this effect has not yet been utilized to increase EQE in the spectral region of CT state absorption. In D-A systems, interaction between donor and acceptor results in an extended but weak absorption band related to an optical transition from the HOMO of the donor to the LUMO of the acceptor. Recently, enhanced CT state absorption photodetectors (CT-OPDs) have been introduced36,37,38, which could benefit from the high gain for absorbed photons provided by photomultiplication. However, it is not clear whether photomultiplicative gain can be achieved for photons that directly excite the CT states.
Before investigating PM gain in the CT absorption band, we first determine the optimum D-A concentration to achieve PM, as well as its relation to the dark current. Below, we investigate these issues in ZnPc:C60-based devices.

### Effect of acceptor concentration on the photomultiplication

In the previous section, we showed that EQE can be enhanced by three orders of magnitude by photomultiplication based on electron accumulation. In polymeric systems based on the same effect, it is well accepted that a low concentration of one material type (D or A) is necessary for attaining photomultiplication, a condition which has not been investigated for small-molecule-based devices. Zhang et al. reported an efficient photovoltaic effect at around 5 wt% donor content, which suggests that at such concentrations, hole transport takes place efficiently44. The minimum acceptor concentration required for a BHJ to work as a D-A photodiode has not been established for low-acceptor-content systems. To investigate the concentration dependence, we fabricated devices comprising concentrations from 1 to 4 wt%. The results are depicted in Fig. 2. Devices comprising 1 wt% and 2 wt% mixing ratios do not show any amplification and behave as unoptimized photodiodes. For these devices, the EQE does not exceed 100% and is limited by the poor free charge carrier generation of the system, which explains the slightly higher EQE of the blend at 2 wt%, where more exciton dissociation centers are available. At 3 wt%, the photocurrent increases by almost one order of magnitude at a given reverse voltage. This abrupt enhancement is a result of a sufficient accumulation of charges at the contact, leading to an increased injection. At 4 wt%, the device still shows amplification, but the performance deteriorates, which we attribute to a more efficient extraction of electrons at this concentration.
If the concentration is further increased, percolation paths are formed and efficient extraction of both charge carrier types takes place. The device will then behave as a typical organic photodiode. From these results, we infer that an optimum concentration exists (in our case, 3 wt%), where sufficient charges are trapped to cause energy level bending while providing enough free charge carrier generation. At concentrations higher than that, the injection is reduced either because charges are extracted more efficiently or because the bimolecular recombination rate increases. The maximum amplification found at a very specific concentration demonstrates the importance of the highly controlled mixing ratios and morphology achievable in vacuum-processed devices. The dark current of the 1 wt% device is lower than that of the 2 wt% device, which we attribute to the smaller number of D-A interfaces as well as to an increased number of traps45,46,47. However, comparing the dark currents of the 3 wt% and 4 wt% devices, we see that the former has a higher dark current and therefore behaves differently from the devices comprising 1 wt% and 2 wt%. Analyzing the four devices together, we observe that an enhanced photocurrent, and thereby EQE, seems to be correlated with an increased dark current. Daanoune et al. suggested that this correlation is an intrinsic consequence of the working principle of devices based on enhanced injection by charge accumulation48. In the dark, charges are thermally activated over the bandgap of the system, which in a D-A heterojunction corresponds to the energy of the CT states. In an ideal diode, this current corresponds to the saturation current J0 (ref. 46). In PM-OPDs, charge carriers forming J0 accumulate in the same way that photo-generated carriers do, leading to enhanced injection of the dark current as well. In the same study, the authors also correlate the rather slow speed of PM-OPDs to the slow trap dynamics.
This not only explains the observed trend, but also indicates that this dark current effect might be detrimental to the specific detectivity (D*) of PM-OPDs. This aspect is addressed in the following section.

### The role of dark current in photomultipliers

The specific detectivity D* of photodetectors depends on the EQE, which, as shown, can be enhanced by photomultiplication. However, it also depends on the noise of the device. We can express D* in terms of the device spectral noise density, Sn, as follows: $${D}^{\ast }=\frac{q\lambda }{hc}\frac{{\rm{EQE}}}{{S}_{{\rm{n}}}},$$ (1) where q is the elementary charge, λ the wavelength, h the Planck constant, and c the speed of light. In the absence of a frequency-dependent component of the noise, Sn reads: $${S}_{{\rm{n}}}=\sqrt{2q{J}_{{\rm{D}}}+\frac{4{k}_{{\rm{B}}}T}{{R}_{{\rm{sh}}}}},$$ (2) which takes into account the shot and thermal noise, the first and second term in Eq. 2, respectively, where JD is the dark current density, kB the Boltzmann constant, T the temperature, and Rsh the shunt resistance normalized by the area (Ω m^2), extracted from the inverse of the derivative of the JV curve around 0 V. In most organic devices, however, the high reverse dark current makes the shot component the main source of noise, which represents a limitation also in diode-like organic photodetectors. In polymer-based devices, different material systems have been reported to show high EQE; however, the values of the dark current have not always been presented. As mentioned before, the increase in photocurrent is usually correlated with an increased dark current. Therefore, both parameters have to be analyzed concomitantly in order to identify whether photomultiplication indeed yields a higher D* than an equivalent pin-photodiode. In Fig. 3a, the dark current of the same devices shown in Fig. 2a is compared to that of a pin-photodiode comprising the same concentration.
As can be seen, while the photocurrent reaches values two orders of magnitude higher than that of the pin-photodiode at −10 V, the dark current is four orders of magnitude higher, see Fig. 3a. Therefore, in order to overcome the performance of a pin-photodiode in terms of signal detectivity, the EQE has to be as high as possible to compensate for such an increase in dark current. We have already shown that EQEs of almost 2000% can be achieved for small-molecule devices. Now we must investigate if D* is indeed higher than that of equivalent pin-photodiodes. In most well-performing pin-photodiodes, EQE is weakly dependent on applied bias, which can also be seen in Fig. 3a, where the photocurrent does not increase significantly with increasing reverse bias. Nonetheless, we measured the voltage-dependent EQE (in Fig. 3b) and approximated its maximum value as a function of voltage by a 5th-order polynomial, which provides estimated EQE values at every voltage. From the fit, together with the measured dark current and shunt resistance, the voltage-dependent D* can be calculated according to Eq. (1) and Eq. (2). The same procedure is used for the PM-OPD. The results are compared in Fig. 3b. From Fig. 3 it is obvious that increasing the EQE alone is not sufficient to achieve high detectivities. As the dark current usually changes by orders of magnitude as a function of applied bias, the latter dominates D*. Given this tradeoff, an optimum operation region has to be found, where the effect of the dark current does not overcome the enhancement in EQE. In the most favorable operation region, a D* of 2.2 × 10^12 Jones is obtained for the photomultiplier device, comparable to results reported for PM-OPDs and higher than the D* provided by the equivalent pin-photodiode, where D* is on the order of 10^11 Jones over the entire range measured.
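The voltage-dependent D* evaluation described above can be sketched directly from Eq. (1) and Eq. (2). The operating-point numbers below (EQE, dark current density, shunt resistance) are assumptions chosen for illustration, not the measured device parameters:

```python
import math

# CODATA values of the physical constants in Eqs. (1) and (2).
Q = 1.602176634e-19    # elementary charge q (C)
H = 6.62607015e-34     # Planck constant h (J s)
C = 2.99792458e8       # speed of light c (m/s)
KB = 1.380649e-23      # Boltzmann constant k_B (J/K)

def noise_density(j_dark, r_sh, temperature=300.0):
    """S_n of Eq. (2): shot term 2*q*J_D plus thermal term 4*k_B*T/R_sh.
    j_dark in A m^-2, r_sh in Ohm m^2 (area-normalized shunt resistance)."""
    return math.sqrt(2.0 * Q * j_dark + 4.0 * KB * temperature / r_sh)

def detectivity_jones(eqe, wavelength_m, j_dark, r_sh):
    """D* of Eq. (1), converted from SI (m Hz^1/2 W^-1) to Jones (cm Hz^1/2 W^-1)."""
    responsivity = Q * wavelength_m / (H * C) * eqe           # A/W
    d_star_si = responsivity / noise_density(j_dark, r_sh)
    return d_star_si * 100.0                                  # 1 m = 100 cm

# Assumed operating point at 670 nm: EQE = 300% (eqe = 3.0),
# J_D = 1e-3 A m^-2, R_sh = 1e3 Ohm m^2.
d_star = detectivity_jones(3.0, 670e-9, 1e-3, 1e3)
```

Sweeping assumed EQE(V) and J_D(V) curves through these two functions reproduces the tradeoff visible in Fig. 3b: at large reverse bias the shot term grows with J_D and eventually outpaces the gain in EQE, so D* peaks at an intermediate voltage.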
Whether the abrupt increase in dark current is indeed an intrinsic consequence of the photomultiplication process, as previously suggested, is still not clear. While it seems to be the case for most reported photomultipliers, some examples combine a high EQE with low dark currents, leading to high performance17. If the dark current of the PM-OPDs were comparable to that of the pin-photodiode, D* could be improved by two orders of magnitude. This shows that further investigations are needed to understand the origin of the dark current in this device class.

### Enhancement of charge-transfer state absorption/response in photomultipliers

We have shown that by controlling the D-A mixing ratio, photomultiplication can also be achieved in vacuum-processed organic blends in the spectral range of strong donor absorption. Whether the same effect is present when exciting in the CT absorption region is an important and so far unaddressed question, which is relevant for microcavity CT-OPDs. However, the low acceptor concentration required for photomultiplication to take place decreases the number of D-A interfaces, establishing a tradeoff between enhanced EQE and absorption. In order to be useful, the amplified EQE in the CT absorption region should exceed the EQE of a standard photodiode-based device, in which a higher density of CT states is present. In Fig. 3, we showed that the photomultiplier is superior to the pin-photodiode based on the same blend ratio. However, the ZnPc:C60 pin-photodiode was shown to produce maximum photocurrent when used at a 50 wt% mixing ratio49. Therefore, in Fig. 4, the sensitively measured EQE spectra of a PM-OPD (3 wt%) at different biases are compared to that of a standard ZnPc:C60 (50 wt%) pin-photodiode at zero bias. The CT band is observed for wavelengths longer than 800 nm in the EQE spectra of both devices.
In the pin-photodiode, the CT band is more pronounced due to the higher density of CT states provided by the larger amount of D-A interfaces. In the same region, the PM-OPD shows a weaker absorption shoulder, which also extends into the near-infrared region. As expected, at zero bias, the EQE of the PM-OPD is orders of magnitude lower than that of the pin-photodiode, as no enhanced injection takes place. By applying −2 V, however, the EQE in the visible range overcomes that of the pin-photodiode, demonstrating the enhanced injection upon illumination. When −5 V is applied to the device, the entire EQE spectrum of the PM-OPD surpasses that of the pin-photodiode, confirming that direct excitation of CT states can also trigger the photomultiplication process in these devices. While the PM effect is commonly accompanied by an EQE above 100%, it is the IQE which better defines the physical phenomenon behind this effect. In order to induce PM, free charge carriers must first be generated, requiring photons to be absorbed. As a means of quantifying whether absorbed photons induce enhanced injection in the sub-gap absorption region in our devices, we estimated their IQE, which accounts only for absorbed photons. To that end, we employ the transfer matrix method50,51 (TMM) using ellipsometrically derived n,k-values to simulate the absorption in our devices and adjust the IQE to reproduce the magnitude of the measured EQE spectra, see Supplementary Fig. 6 for more details. Under −10 V, a constant IQE of 1750% over the full wavelength range, i.e., including CT absorption, is required to describe the experimental data, as shown in Supplementary Fig. 1. Figure 4a together with the TMM simulation data demonstrate the potential of combining such systems with optical cavities to accomplish high-performance narrowband photodetectors. In order to test whether such devices could be achieved, we embedded the best-performing PM-OPD, i.e., 3 wt%, into an optical microcavity, see inset in Fig.
4c for the device structure. Due to the higher Ag work function as compared to that of ITO, we inserted a 10-nm-thick MeO-TPD layer to hinder hole injection in reverse bias. With the aid of TMM, we simulate the optical photoresponse of a device comprising an active layer thickness of 400 nm, which leads to a resonant peak around 880 nm. Different resonant peaks can be achieved by varying the thickness of the active layer, leading to tunable near-infrared detection, as shown in Supplementary Fig. 3. The JV and EQE characteristics of devices comprising thicknesses from 355 to 400 nm are shown in Fig. 4b, c, respectively. As predicted by the optical simulation, narrowband peaks arise in the EQE spectra. As a demonstration, we tune the response wavelength from ~830 nm to ~880 nm, which, under −10 V, reaches maximum EQEs of 20–80%, with a FWHM varying from 20 to 40 nm. To prove that photomultiplication also takes place in the narrowband devices, we estimate the IQE of these devices. Indeed, for the device with a detection wavelength of 828 nm, an IQE of 160% is achieved. The three other devices show IQEs of around 40%, from which it is not possible to infer whether such values are a result of the PM effect or other phenomena. Therefore, to elucidate this, we increase the bias voltage to −15 V. This leads to EQEs and IQEs above 100% for all four devices, with peak values of ~430% and ~920%, respectively, see Supplementary Fig. 4. Also in the microcavity devices, the dark current plays an important role in the final D*. Although in these devices a better on/off ratio is maintained across the reverse bias region as compared to those of Fig. 2, see Fig. 4b, the on/off ratio decreases as the reverse voltage increases, pointing to a decreased D* at high reverse bias. Therefore, we also estimate an optimum operation regime, where the tradeoff between EQE and dark current is optimized. As depicted in Fig.
4d, we obtain D* as high as 6 × 10^11 Jones in narrowband devices, which is comparable to narrowband pin-devices based on cavities36,38. Moreover, it is superior to that of narrowband photomultiplication-type devices based on CIN41, where, in addition, excessively thick devices are required, which increases the operation voltage.

### Transient photocurrent

Another important figure-of-merit of photodetectors is the response speed. In PM-OPDs, the temporal response is believed to be limited by the trapping/detrapping dynamics17,48, while other timescales, such as the charge carrier transit time, should be much shorter. In order to investigate the response speed of our devices, transient current measurements are performed. The rise time (from 10 to 90% of the device saturated signal) and fall time (from 90 to 10% of the device off signal) are summarized in Supplementary Table 1. The rise time of both broad- and narrowband devices ranges from 20 to 600 µs, corresponding to −3 dB cutoff frequencies of ~19.5 to ~0.4 kHz. These values are comparable to the best performing PM-OPDs reported so far7 and are suitable for health monitoring and video-frame-rate imaging applications.

## Discussion

In CT-OPDs, the thicknesses required are much smaller than those used in narrowband devices based on charge collection narrowing (CCN)52,53 or on CIN41,54. In the latter, for example, the thickness of the active layer must be much larger than the inverse of the absorption coefficient of the active layer, such that under illumination charges are generated close to the injecting contact, thereby causing the necessary band bending54. Moreover, in CT-OPDs the response can be further redshifted not only by increasing the thickness of the active layer, but also by introducing spacer/transport layers, which make the device electrically thin, but optically thick36,37,38.
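The cutoff frequencies quoted for the transient response can be approximated from the 10–90% rise times with the common single-pole relation f_3dB ≈ 0.35/t_rise. This conversion is an assumption on our part (the relation used by the authors is not stated) and only reproduces the reported values to within a few tens of percent:

```python
# Hedged sketch: single-pole bandwidth estimate from the 10-90% rise time.
# The 0.35/t_rise rule is a standard first-order approximation, assumed here;
# it is not necessarily the conversion used in the original work.

def cutoff_frequency_hz(rise_time_s):
    """f_3dB ~ 0.35 / t_rise for a first-order (single-pole) response."""
    return 0.35 / rise_time_s

# Rise times reported for the devices span 20 to 600 us.
f_fast = cutoff_frequency_hz(20e-6)    # ~17.5 kHz (reported: ~19.5 kHz)
f_slow = cutoff_frequency_hz(600e-6)   # ~0.6 kHz (reported: ~0.4 kHz)
```

The order-of-magnitude agreement supports the view that a single slow process, plausibly the trap dynamics, dominates the temporal response.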
By combining the concept used in CT-OPDs with photomultiplication, we are able to achieve much thinner devices, as demonstrated in Fig. 4b–d, where active layers of 355 nm were used for spectral response at ~830 nm, compared to 2.5 μm reported for spectral response at 650 nm when using CIN combined with photomultiplication41. The concept presented here can further benefit from the properties of microcavity devices while keeping the EQE enhanced by photomultiplication at reasonable thicknesses and operation voltages. Moreover, in PM-OPDs, the active layer can be placed at an optimized position, either near the contact to enhance injection or such that optical overtones are minimized. There are systems combining low dark currents with enhanced EQE17, which, together with our concept, can potentially overcome the performance of state-of-the-art near-infrared narrowband devices. In summary, we investigate the photomultiplication effect in fully vacuum-processed organic photodetecting devices. At 3 wt% of C60, a significant increase in EQE is observed under reverse bias, attributed to the accumulation of electrons caused by the lack of percolation paths. In the optimum operation regime, a specific detectivity D* of ~10^12 Jones is achieved. In addition, sensitively measured EQE spectra reveal that the enhancement extends to the CT absorption region, which indicates that these states also trigger photomultiplication, making microcavity CT-OPDs with photomultiplication possible. Indeed, by replacing the top and bottom contacts with semitransparent mirrors, narrowband NIR PM-OPDs with response from 830 to 880 nm can be realized, achieving D* of ~10^11 Jones and FWHM as low as 20 nm. The combination of optical microcavities with the photomultiplication effect can potentially boost NIR CT-OPDs, which so far were limited by the low EQE in the CT absorption region.
Furthermore, much thinner devices are sufficient to achieve narrowband detection, as compared to the CIN approach. Additionally, the method presented here allows placing the active layer at different positions within the device, or using gradients of the D-A mixing ratio, thereby enhancing injection and diminishing the effect of optical overtones, a critical problem in CT-OPDs.

## Methods

### Device preparation

Organic layers used in the devices were thermally evaporated onto glass substrates covered by a pre-structured ITO contact (32 Ω □^−1, Thin Film Devices) under ultrahigh vacuum (pressure < 10^−7 mbar). Before deposition, substrates were cleaned for 15 min in successive ultrasonic baths with NMP solvent, deionized water, and ethanol, followed by O2 plasma treatment for 10 min. Organic materials were purified 2–3 times via sublimation. The overlap of the bottom and top contact (Al, 100 nm, Kurt J. Lesker) defines the device active area (6.44 mm²). After evaporation, samples were directly transferred to a glovebox with inert atmosphere, where they were encapsulated with a cover glass fixed by UV-hardened epoxy glue. A moisture getter (Dynic Ltd.) was inserted between the top contact and the glass to hinder degradation.

### Current-voltage characteristics

Illuminated JV characteristics were measured using a source measurement unit (Keithley SMU 2400). The devices were illuminated at an intensity of 100 mW cm^−2 provided by a sun simulator (Solarlight Company Inc., USA). The intensity was calibrated with a Hamamatsu S1337 silicon photodiode. Dark JV characteristics were measured with a sensitive SMU (Keithley SMU 2635). Every measurement data point was acquired after steady-state conditions were achieved. The measurement was controlled by the software SweepMe! (https://sweep-me.net/).
### External quantum efficiency (EQE)

The current generated by the device under monochromatic light chopped at 170 Hz (Oriel Xe Arc-Lamp Apex Illuminator combined with a Cornerstone 260 1/4 m monochromator (Newport, USA)) is measured with a lock-in amplifier (Signal Recovery SR 7265). A mask (2.78 mm²) is used to avoid edge effects. The same procedure is followed to monitor the light intensity, measured with a calibrated silicon photodiode (Hamamatsu S1337, calibrated by Fraunhofer ISE). EQE is obtained as the ratio of the number of charge carriers generated by the device to the number of incoming photons.

### Sensitive external quantum efficiency (sEQE)

Chopped monochromatic light (140 Hz, quartz halogen lamp (50 W) used with a Newport Cornerstone 260 1/4 m monochromator) is shone onto the device. The current generated at short-circuit conditions or at applied bias is fed into a current-voltage preamplifier (DHPCA-100, FEMTO Messtechnik GmbH) before being measured by a lock-in amplifier (Signal Recovery 7280 DSP). The time constant of the lock-in amplifier was chosen to be 500 ms, and the amplification of the preamplifier was increased to resolve low photocurrents. The light intensity was obtained using calibrated silicon (Si) and indium gallium arsenide (InGaAs) photodiodes.

### Transient current measurements

To record current transients, the measured device was held at short circuit, connected to the low-impedance (50 Ω) input of an oscilloscope (DPO7354C, Tektronix). A 100 Hz square waveform from a signal generator (Agilent 33600 A Series) was used to control a MOSFET (IRF630N) driving a high-power white LED (LED Engin, Osram Sylvania Inc.). The pulse length was set to 5 ms, allowing the device to reach a steady state before the light was switched off. The signal from the device was pre-amplified (DHPCA-100, FEMTO Messtechnik GmbH) prior to being recorded on the oscilloscope.

### Ellipsometry

Variable-angle spectroscopic ellipsometry was performed with an M2000 UI (J.A.
Woollam Co., Inc., Lincoln, USA; wavelength range: 245–1690 nm). The uniaxial optical dispersion of a 100 nm ZnPc layer doped with 3 wt% C60 was obtained using an optical model (Si/SiO2(1 µm)/ZnPc:C60(100 nm; 1 Tauc-Lorentz and 5 Gaussian oscillators, energy positions coupled z to xy)) with sharp interfaces and an additional EMA (50% void, 50% ZnPc:C60) roughness top layer.
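The figures of merit used throughout (EQE, responsivity R, and specific detectivity D*) are related by standard textbook formulas. As a minimal sketch, assuming shot-noise-limited detection; the numerical inputs below are illustrative assumptions, not values measured in this work:

```python
import math

Q = 1.602176634e-19   # elementary charge (C)
H = 6.62607015e-34    # Planck constant (J s)
C0 = 2.99792458e8     # speed of light (m/s)

def responsivity(eqe, wavelength_m):
    """R = EQE * q * lambda / (h c), in A/W."""
    return eqe * Q * wavelength_m / (H * C0)

def detectivity_shot_limited(eqe, wavelength_m, j_dark_a_per_cm2):
    """Shot-noise-limited D* = R / sqrt(2 q J_d), in Jones (cm Hz^1/2 W^-1)
    when the dark current density J_d is given in A/cm^2."""
    return responsivity(eqe, wavelength_m) / math.sqrt(2 * Q * j_dark_a_per_cm2)

# Illustrative only: unity EQE at 830 nm with an assumed J_d of 1e-7 A/cm^2
print(responsivity(1.0, 830e-9))                     # ~0.67 A/W
print(detectivity_shot_limited(1.0, 830e-9, 1e-7))   # ~3.7e12 Jones
```

Note that in photomultiplication devices the EQE can exceed 100%, so values of `eqe` above 1 are meaningful here.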
Since the full figured-bass signature of a seventh chord in third inversion (6/4/2) includes a 6, it is abbreviated to "4/2." You can add sevenths to these chords, either a major seventh or a dominant seventh, depending on the chord. You want the V6-5 chord of B minor; I think that is where the fuzziness in expressing the notation came about. Note that the harmonic rhythm is a half note long, so think of beats 3 and 4 in measure 6 as part of a single harmony. Key: F major scale. Name: major triads I-ii-iii-IV-V-vi-vii°. Notes: F G A Bb C D E (the major scale harmonized with triad chords). B minor: I I6 IV V43/ii ii V V7 I. Here's a diagram showing the F major scale on the piano keyboard. Any chord might show up in any key, but some chords are much more likely than others. Example 2. Standard voice-leading paradigm when the Neapolitan resolves to the dominant, with and without a cadential 6/4 and with and without a seventh on the dominant. While the Neapolitan often goes directly to the dominant (with or without a cadential 6/4), an applied chord commonly occurs between it and the dominant. It functions the same as ii6 and can be used in the same context, but it has a more dramatic effect because of its chromatic root (ra). The V chord of B minor is F#. Less often, however, the Neapolitan can be found in root position, and it may lead to an inverted dominant instead of the root-position version. Because it resolves to a dominant chord, (ra) will ultimately resolve down to the closest member of the dominant triad, which is the leading tone (ti). Like ii6, it is typically used in a cadential context. Example 4. Frédéric Chopin, Nocturne in F minor, Op.
And each of these functions tends to participate in certain kinds of chord progressions more than others. Roman numerals indicate each chord's position relative to the scale. Key takeaways: this could also mean (and it usually does) that the I chord, in any inversion, has the third in the bass. Instead, it is treated like most chord combinations: with the smoothest voice leading possible. [footnote] These hybrid forms come from William Caplin (2013), Analyzing Classical Form. This will help you learn how to play melodies on these four ukulele instruments in F major.

By John Danek | December 13, 2020, FORT WAYNE, Ind.

Chord chart:

- Dm (D minor): D F A. Very important. Alternate symbols: Dmin, D-
- Dm6 (D minor sixth): D F A B. Occasionally used. Alternate symbols: Dminor6, Dmin6, D-6
- Dm7 (D minor seventh): D F A C. Frequently used. Alternate symbols: Dmin7, D-7
- Dm(maj7) (D minor with major seventh): D F A C#

While the name "Neapolitan" is a reference to the Italian city of Naples (Napoli), the historical connection is quite shallow, as the chord was used in many other European cities in the 18th and 19th centuries. To change a ii6 into the Neapolitan sixth, lower scale degree 2 (re to ra). The added diminished chord intensifies the push toward the expected dominant. Chord I, C major, consists of the notes C – E – G, while C major seventh consists of C – E – G – B. Chord ii, D minor, consists of the notes D – F – A. A root-position dominant will often take the form of any one of the following options, and each provides an essentially equivalent overall harmonic effect. Open Music Theory by Brian Jarvis and John Peterson is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License, except where otherwise noted. If we want this ii chord to move smoothly to the dominant chord, we would need to do this: A -> B, F -> G, D -> D.
This leaves us with a … The assignment includes spelling, figured bass realization, four-part voice leading with Roman numerals, and analysis of a musical excerpt. You would have noticed above that Roman numerals are used to represent each chord. For example, put a G7 chord in third inversion and the notes will read, from the bottom up, F, G, B, D (the upper three notes can be in any order). Thus the supertonic seventh in C major would consist of the notes D-F-A-C, and in A major it would consist of B-D-F#-A. The chord chart below lists all the common triads and four-note extended chords belonging to the key of F major. The trick is to select a note in the F major scale. Use the first, fourth and fifth notes of the scale to build the primary triads. Chord I, F major: F A C. It is written in upper case to denote that the chord is a major chord. We will look at basic triad chords as well as four-note extended chords (with sevenths). So what are the notes of these chords? There is a standard voice leading associated with the Neapolitan sixth.
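The harmonization described above, building a triad on each degree of the F major scale, can be sketched programmatically; a minimal illustration, assuming simple letter-name spelling:

```python
# Build the diatonic triads of F major by stacking scale thirds (degrees 1-3-5)
# over each scale step, as the chord chart describes.
F_MAJOR = ["F", "G", "A", "Bb", "C", "D", "E"]
NUMERALS = ["I", "ii", "iii", "IV", "V", "vi", "vii°"]  # vii° is diminished

def triad(scale, degree):
    """Return the 1-3-5 triad built on the given 0-indexed scale degree."""
    return [scale[(degree + step) % len(scale)] for step in (0, 2, 4)]

for degree, numeral in enumerate(NUMERALS):
    print(numeral + ":", " ".join(triad(F_MAJOR, degree)))
# I: F A C, ii: G Bb D, iii: A C E, IV: Bb D F, V: C E G, vi: D F A, vii°: E G Bb
```

Stacking one more scale third (degrees 1-3-5-7) yields the four-note seventh chords the text mentions, e.g. the supertonic seventh in F major: G Bb D F.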
The major chord built on A is spelled A-C#-E, and the dominant seventh chord built on A is spelled A-C#-E-G. Notes used in the F major chord: F + A + C. © 2009-2020 Piano-Keyboard-Guide.com. Overview: the Neapolitan sixth is a chromatic predominant chord, a major triad built on (ra), typically found in first inversion; (ra) resolves down to ti. Here's a diagram of the F major key signature and the notes of the F major scale on the treble and bass clefs.
Resources tagged with "Working systematically", similar to Prime Magic (122 results). Stage: 3 Challenge Level: How many different symmetrical shapes can you make by shading triangles or squares? Triangles to Tetrahedra Stage: 3 Challenge Level: Starting with four different triangles, imagine you have an unlimited number of each type. How many different tetrahedra can you make? Convince us you have found them all. Squares in Rectangles Stage: 3 Challenge Level: A 2 by 3 rectangle contains 8 squares and a 3 by 4 rectangle contains 20 squares. What size rectangle(s) contain(s) exactly 100 squares? Can you find them all? You Owe Me Five Farthings, Say the Bells of St Martin's Stage: 3 Challenge Level: Use the interactivity to listen to the bells ringing a pattern. Now it's your turn! Play one of the bells yourself. How do you know when it is your turn to ring? Summing Consecutive Numbers Stage: 3 Challenge Level: Many numbers can be expressed as the sum of two or more consecutive integers. For example, 15=7+8 and 10=1+2+3+4. Can you say which numbers can be expressed in this way? When Will You Pay Me? Say the Bells of Old Bailey Stage: 3 Challenge Level: Use the interactivity to play two of the bells in a pattern. How do you know when it is your turn to ring, and how do you know which bell to ring? Tetrahedra Tester Stage: 3 Challenge Level: An irregular tetrahedron is composed of four different triangles. Can such a tetrahedron be constructed where the side lengths are 4, 5, 6, 7, 8 and 9 units of length? Games Related to Nim Stage: 1, 2, 3 and 4 This article for teachers describes several games, found on the site, all of which have a related structure that can be used to develop the skills of strategic planning.
Isosceles Triangles Stage: 3 Challenge Level: Draw some isosceles triangles with an area of $9$cm$^2$ and a vertex at (20,20). If all the vertices must have whole number coordinates, how many is it possible to draw? More Magic Potting Sheds Stage: 3 Challenge Level: The number of plants in Mr McGregor's magic potting shed increases overnight. He'd like to put the same number of plants in each of his gardens, planting one garden each day. How can he do it? Maths Trails Stage: 2 and 3 The NRICH team are always looking for new ways to engage teachers and pupils in problem solving. Here we explain the thinking behind maths trails. Cuboids Stage: 3 Challenge Level: Find a cuboid (with edges of integer values) that has a surface area of exactly 100 square units. Is there more than one? Can you find them all? Special Numbers Stage: 3 Challenge Level: My two digit number is special because adding the sum of its digits to the product of its digits gives me my original number. What could my number be? Intersection Sums Sudoku Stage: 2, 3 and 4 Challenge Level: A Sudoku with clues given as sums of entries. Ratio Sudoku 3 Stage: 3 and 4 Challenge Level: A Sudoku with clues as ratios or fractions. Product Sudoku 2 Stage: 3 and 4 Challenge Level: Given the products of diagonally opposite cells - can you complete this Sudoku? Twin Corresponding Sudoku Stage: 3, 4 and 5 Challenge Level: This sudoku requires you to have "double vision" - two Sudoku's for the price of one Twin Corresponding Sudokus II Stage: 3 and 4 Challenge Level: Two sudokus in one. Challenge yourself to make the necessary connections. Corresponding Sudokus Stage: 3, 4 and 5 This second Sudoku article discusses "Corresponding Sudokus" which are pairs of Sudokus with terms that can be matched using a substitution rule. Problem Solving, Using and Applying and Functional Mathematics Stage: 1, 2, 3, 4 and 5 Challenge Level: Problem solving is at the heart of the NRICH site. 
All the problems give learners opportunities to learn, develop or use mathematical concepts and skills. Read here for more information. Cinema Problem Stage: 3 and 4 Challenge Level: A cinema has 100 seats. Show how it is possible to sell exactly 100 tickets and take exactly £100 if the prices are £10 for adults, 50p for pensioners and 10p for children. Ratio Sudoku 1 Stage: 3 and 4 Challenge Level: A Sudoku with clues as ratios. Consecutive Negative Numbers Stage: 3 Challenge Level: Do you notice anything about the solutions when you add and/or subtract consecutive negative numbers? Wallpaper Sudoku Stage: 3 and 4 Challenge Level: A Sudoku that uses transformations as supporting clues. Intersection Sudoku 1 Stage: 3 and 4 Challenge Level: A Sudoku with a twist. Difference Sudoku Stage: 3 and 4 Challenge Level: Use the differences to find the solution to this Sudoku. Seasonal Twin Sudokus Stage: 3 and 4 Challenge Level: This pair of linked Sudokus matches letters with numbers and hides a seasonal greeting. Can you find it? Ratio Sudoku 2 Stage: 3 and 4 Challenge Level: A Sudoku with clues as ratios. First Connect Three for Two Stage: 2 and 3 Challenge Level: First Connect Three game for an adult and child. Use the dice numbers and either addition or subtraction to get three numbers in a straight line. Fence It Stage: 3 Challenge Level: If you have only 40 metres of fencing available, what is the maximum area of land you can fence off? Integrated Sums Sudoku Stage: 3 and 4 Challenge Level: The puzzle can be solved with the help of small clue-numbers which are either placed on the border lines between selected pairs of neighbouring squares of the grid or placed after slash marks on. . . . Magic Potting Sheds Stage: 3 Challenge Level: Mr McGregor has a magic potting shed. Overnight, the number of plants in it doubles. He'd like to put the same number of plants in each of three gardens, planting one garden each day. Can he do it? 
Stage: 3 and 4 Challenge Level: Four numbers on an intersection that need to be placed in the surrounding cells. That is all you need to know to solve this sudoku. Diagonal Sums Sudoku Stage: 2, 3 and 4 Challenge Level: Solve this Sudoku puzzle whose clues are in the form of sums of the numbers which should appear in diagonal opposite cells. Twin Corresponding Sudoku III Stage: 3 and 4 Challenge Level: Two sudokus in one. Challenge yourself to make the necessary connections. 9 Weights Stage: 3 Challenge Level: You have been given nine weights, one of which is slightly heavier than the rest. Can you work out which weight is heavier in just two weighings of the balance? Where Can We Visit? Stage: 3 Challenge Level: Charlie and Abi put a counter on 42. They wondered if they could visit all the other numbers on their 1-100 board, moving the counter using just these two operations: x2 and -5. What do you think? Sticky Numbers Stage: 3 Challenge Level: Can you arrange the numbers 1 to 17 in a row so that each adjacent pair adds up to a square number? Intersection Sudoku 2 Stage: 3 and 4 Challenge Level: A Sudoku with a twist. Pole Star Sudoku Stage: 4 and 5 Challenge Level: A Sudoku based on clues that give the differences between adjacent cells. First Connect Three Stage: 2 and 3 Challenge Level: The idea of this game is to add or subtract the two numbers on the dice and cover the result on the grid, trying to get a line of three. Are there some numbers that are good to aim for? Difference Dynamics Stage: 4 and 5 Challenge Level: Take three whole numbers. The differences between them give you three new numbers. Find the differences between the new numbers and keep repeating this. What happens? A Long Time at the Till Stage: 4 and 5 Challenge Level: Try to solve this very difficult problem and then study our two suggested solutions. How would you use your knowledge to try to solve variants on the original problem? 
Stage: 3 Challenge Level: If you take a three by three square on a 1-10 addition square and multiply the diagonally opposite numbers together, what is the difference between these products? Why? Gr8 Coach Stage: 3 Challenge Level: Can you coach your rowing eight to win? Twin Chute-swapping Sudoku Stage: 4 and 5 Challenge Level: A pair of Sudokus with lots in common. In fact they are the same problem but rearranged. Can you find how they relate to solve them both? I've Submitted a Solution - What Next? Stage: 1, 2, 3, 4 and 5 In this article, the NRICH team describe the process of selecting solutions for publication on the site. Magnetic Personality Stage: 2, 3 and 4 Challenge Level: 60 pieces and a challenge. What can you make and how many of the pieces can you use creating skeleton polyhedra? Stage: 3 and 4 Challenge Level: Four small numbers give the clue to the contents of the four surrounding cells.
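Several of the problems above invite systematic search. For instance, the "Summing Consecutive Numbers" question can be explored with a short script (a sketch for exploration, not a published NRICH solution):

```python
def consecutive_sums(n):
    """All ways to write n as a sum of two or more consecutive positive integers."""
    ways = []
    for start in range(1, n):
        total, k = 0, start
        while total < n:
            total += k
            k += 1
        if total == n and k - start >= 2:
            ways.append(list(range(start, k)))
    return ways

print(consecutive_sums(15))  # [[1, 2, 3, 4, 5], [4, 5, 6], [7, 8]]
# The numbers with no such representation turn out to be exactly the powers of two:
print([n for n in range(1, 33) if not consecutive_sums(n)])  # [1, 2, 4, 8, 16, 32]
```

A brute-force search like this is often how a "working systematically" conjecture is first spotted before proving it.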
Question: A store has 5 years remaining on its lease in a mall. Rent is $2,000 per month, 60 payments remain, and the next payment is due in 1 month. The mall's owner plans to sell the property in a year and wants rent at that time to be high so that the property will appear more valuable. Therefore, the store has been offered a "great deal" (owner's words) on a new 5-year lease. The new lease calls for no rent for 9 months, then payments of $2,750 per month for the next 51 months. The lease cannot be broken, and the store's WACC is 12% (or 1% per month).

A) Should the new lease be accepted?
B) If the store owner decided to bargain with the mall's owner over the new lease payment, what new lease payment would make the store owner indifferent between the new and old lease?
C) The store owner is not sure of the 12% WACC - it could be higher or lower. At what nominal WACC would the store owner be indifferent between the two leases?
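Parts A and B of the lease question above reduce to comparing present values of the two payment streams at 1% per month. This is only a sketch, not an official solution; the helper name `annuity_pv` is chosen here, not taken from the textbook:

```python
# Sketch of parts A and B, discounting at 1% per month.

def annuity_pv(pmt, i, n):
    """Present value of an ordinary annuity: pmt per period for n periods."""
    return pmt * (1 - (1 + i) ** -n) / i

i = 0.01

# Old lease: $2,000/month for 60 months, first payment in 1 month.
pv_old = annuity_pv(2000, i, 60)             # roughly $89,910

# New lease: nothing for 9 months, then $2,750/month for 51 months.
# Value the 51-payment annuity as of month 9, then discount back 9 months.
factor = annuity_pv(1, i, 51) / (1 + i) ** 9
pv_new = 2750 * factor                       # roughly $100,070

# A) The new lease has the higher PV of costs, so it should be refused.
assert pv_new > pv_old

# B) Payment that equates the two PVs, i.e. the indifference payment.
pmt_indiff = pv_old / factor                 # roughly $2,470.80
print(pv_old, pv_new, pmt_indiff)
```

Part C would be found the same way, searching (e.g. by bisection) for the monthly rate at which the two present values cross.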
# matching hats problem, expected value of number of matching hats

I've been reading a book where the following problem is stated: "Suppose that $n$ people throw their hats in a box and then each picks up one hat at random. What is the expected value of $X$, the number of people that get back their own hat?" It then continues to say: "For the $i$th person, we introduce a random variable $X_i$ that takes the value $1$ if the person selects his/her own hat, and takes the value $0$ otherwise. Since $P(X_i = 1) = \frac{1}{n}$ and $P(X_i = 0) = 1 − \frac{1}{n}$, the mean of $X_i$ is $E[X_i] = \sum_k k \cdot P(X_i = k) = \frac{1}{n}$. The total number of people with their own hat is $X = X_1 + X_2 + ··· + X_n$, and $E[X] = E[X_1] + E[X_2] + ··· + E[X_n] = n\cdot \frac{1}{n} = 1$."

However, this seems a little suspicious to me. The variables $X_i$ are not independent of one another. If they were, the probability of having exactly $n-1$ people grab their own hats would be non-zero. However, we know it has to be zero: everybody but one person getting their own hat is impossible, since that would mean the last person also gets theirs. So there's an interaction among the different possibilities. But the calculation above seems to work! Running a simulation, the result is actually 1. So I'm a bit confused as to why the argument above works. To me, this cannot be modeled by $n$ random variables that happily take values independently from one another.

• Think about it as everyone getting assigned a random hat at the same time. Or as everyone grabbing a random hat at the same time. This is the same as choosing a random hat from the box for each person. We expect this to result in exactly one person coincidentally getting their own hat. – The Count Aug 9 '18 at 16:47

This is a crucial concept called the linearity of expectation. You don't need the variables to be independent to be able to add the expectations. As an example, think of flipping a coin.
The expected number of heads is $\frac 12$, as is the expected number of tails. These are not independent, they are perfectly anticorrelated. Even so, the expected number of faces is $\frac 12+\frac 12=1$
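The simulation the asker mentions can be reproduced in a few lines; the sample mean comes out close to 1 regardless of $n$ (this code is an illustration, not from the original thread):

```python
import random

def avg_matches(n, trials, seed=0):
    """Average number of people who get their own hat back,
    estimated over `trials` random hat assignments."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        hats = list(range(n))
        rng.shuffle(hats)                      # everyone grabs a random hat
        total += sum(i == h for i, h in enumerate(hats))
    return total / trials

print(avg_matches(n=10, trials=100_000))  # close to 1.0, for any n
```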
# Triangulation on Euclidean Space

I have a couple of questions about triangulations of the Euclidean space:

• Is it possible to have an infinite triangulation of the Euclidean space $\mathbb{R}^2$ such that only a finite number of vertices have degree less or equal than 6?
• If not, is it possible to have a triangulation where the average degree is greater or equal than 7? Here by average degree I mean the limit in $r$ of the average degree of all the points in the ball centered at the origin with radius $r$.

Thanks! Jim below answered my question with a nice example! Now I have a follow up related question:

• Consider a density in the Euclidean space and randomly deploy points according to this density. Now generate the corresponding Delaunay triangulation. Does there exist a density whose average degree is greater or equal than 7 almost surely?

- Your related question is not going to work for a finite number of "randomly deployed points" –  Henry Oct 14 '11 at 11:18
No, of course. The number of deployed points will be infinite. –  ght Oct 14 '11 at 11:23

Sure it's possible. You can make it have constant valence of any degree $n$ higher than $6$. Here's one construction. Take a circle, call it $C_1$, and $n$ points on this circle. Connect the center to each of these points. So the center now has valence $n$ and all the points have valence $3$. So now take a larger circle, $C_2$, around this first one. Scatter points on this larger circle so that there are $n-3$ edges coming out from each point on $C_1$ and hitting $C_2$ in distinct points, except that the outermost edges from neighboring points on $C_1$ have to connect to the same point on $C_2$ to get a triangulation. This yields points on $C_2$ of valences $3$ and $4$. Now repeat this process with a new circle $C_3$, and proceed ad infinitum. Here is a picture of the first $3$ stages when $n=7$. As you can see, the triangles are getting scrunched together as you move outwards.
This is because this is really a triangulation of the hyperbolic plane, so you have to fit a lot of area (assuming all triangles are the same size) into a small Euclidean area. @ght: I think you can do that if you choose a homeomorphism to the hyperbolic plane, and pull back the hyperbolic area form to $\mathbb R^2$. Then sprinkle your points according to that density. So it will be getting denser from the Euclidean perspective as you move outward. –  Grumpy Parsnip Oct 14 '11 at 0:18
fan96's latest activity
• fan96 replied to the thread Hard Proofs Question. I think there's a possibility that there's some algebraic trick you can do (independent of the previous parts) to find C once you know...
• fan96 replied to the thread Inequalities question help. Hint: Make the substitutions $a = e^\alpha$, $b = e^\beta$, $c = e^\gamma$.
• If you plot this using software the graph you get is not the same as the original parametric equation. (You only get the upper half of...
• Observe that $x + y = 9 \cos \theta$ and $5x - 4y = -9 \sin \theta$, so $\left(x+y\right)^2+\left(5x-4y\right)^2=81$.
• fan96 replied to the thread discrete mathematics. Are we talking about the UNSW course? If so then I didn't really like it. Out of the seven math courses I've done so far I'd rank this...
• fan96 replied to the thread Polynomial question help!!. That isn't quite what I said - I said if $a, b, c$ were constants then the expression is a polynomial. What sort of polynomial it is, we...
• fan96 replied to the thread Polynomial question help!!. The question introduced $(a-2)x^2+(1-3b)x+(5-2c)$ as simply a polynomial (which it is, if we assume $a, b, c$ are constants). It never...
• fan96 replied to the thread Linear Algebra. "Linear Algebra" is a massive field of study, so a comprehensive formula sheet would be at least the size of a textbook. You would have...
# Mensuration Class 8

Mensuration class 8 – By constructing EC || AB, we can split the given figure (AEDCBA) into two parts: Triangle ECD, right angled at C, and Rectangle AECB. Here, b = a + c = 30 m.

Now, Area of Triangle DCE: $\frac{1}{2}\times CD\times EC=\frac{1}{2}\times c\times h=\frac{1}{2}\times 10\times 12=60\;m^{2}$

Also, Area of rectangle AECB = $AB\times BC=h\times a=12\times 20=240\;m^{2}$

Therefore, Area of trapezium AEDB = Area of Triangle DCE + Area of rectangle AECB = 60 + 240 = $300\;m^{2}$

For a general quadrilateral with diagonal $AC = d$ and perpendicular heights $h_1$, $h_2$ dropped onto it from the other two vertices, the area is $\frac{1}{2}\times AC\times h_{1}+\frac{1}{2}\times AC\times h_{2}=\frac{1}{2}\times d\times (h_{1}+h_{2})$
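The arithmetic above can be double-checked against the direct trapezium formula $\frac{1}{2}(a+b)h$; a small sketch:

```python
# Dimensions from the worked example (all in metres).
a, c, h = 20, 10, 12            # BC, CD, and the height EC = AB
b = a + c                       # the longer parallel side, 30 m

area_triangle = 0.5 * c * h     # triangle DCE
area_rectangle = h * a          # rectangle AECB
total = area_triangle + area_rectangle

# Same answer from the trapezium formula (1/2)(a + b)h:
assert total == 0.5 * (a + b) * h == 300.0
print(total)  # 300.0 square metres
```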
# Failing to Bound Kissing Numbers

https://en.wikipedia.org/wiki/Kissing_number

Cody brought up the other day the kissing number problem. Kissing numbers are the number of equal sized spheres you can pack around another one in d dimensions. It's fairly self-evident that the number is 2 for 1-d and 6 for 2d, but 3d isn't so obvious and in fact puzzled great mathematicians for a while. He was musing that it was interesting that the kissing numbers for some dimensions are not currently known, despite the fact that the first order theory of the real numbers is decidable https://en.wikipedia.org/wiki/Decidability_of_first-order_theories_of_the_real_numbers

I suggested on knee jerk that Sum of Squares might be useful here. I see inequalities and polynomials, and then it is the only game in town that I know anything about. Apparently that knee jerk was not completely wrong https://arxiv.org/pdf/math/0608426.pdf Somehow SOS/SDP was used for bounds here. I had an impulse that the problem feels SOS-y but I do not understand their derivation.

One way the problem can be formulated is by finding or proving there is no solution to the following set of equations constraining the centers $x_i$ of the spheres. Set the central sphere at (0,0,0,…). Make the radii 1. Then $\forall i.\ |x_i|^2 = 2^2$ and $\forall i \neq j.\ |x_i - x_j|^2 \ge 2^2$

I tried a couple different things and have basically failed. I hope maybe I'll someday have a follow up post where I do better.

So I had one idea on how to approach this via a convex relaxation. Make a vector $x = \begin{bmatrix} x_0 & y _0 & x_1 & y _1 & x_2 & y _2 & ... \end{bmatrix}$. Take the outer product of this vector: $x^T x = X$. Then we can write the above equations as linear equalities and inequalities on X. If we forget that we need X to be the outer product of x (the relaxation step), this becomes a semidefinite program. Fingers crossed, maybe the solution comes back as a rank 1 matrix.
Other fingers crossed, maybe the solution comes back and says it's infeasible. In either case, we have solved our original problem. Didn't work though. Sigh. It's conceivable we might do better if we start packing higher powers into x?

Ok, Round 2. Let's just ask z3 and see what it does. I'd trust z3 with my baby's soft spot. It solves for 5 and below. Z3 grinds to a halt on N=6 and above. It ran for days doing nothing on my desktop.

Ok. A different tack. Try to use a Positivstellensatz proof. If you have a bunch of polynomial inequalities and equalities, and you can sum polynomial multiples of these constraints, with the inequalities having sum-of-squares multiples, in such a way that the total equals -1, that shows that there is no real solution to them. We have the distance from the origin as an equality constraint and the distance from each other sphere as an inequality constraint. I intuitively think of the Positivstellensatz as deriving an impossibility from false assumptions. You can't add a bunch of zero and positive numbers and get a negative number, hence there is no real solution.

I have a small set of helper functions for combining sympy and cvxpy for sum of squares optimization. I keep it here along with some other cute little constructs https://github.com/philzook58/cvxpy-helpers and here is the attempted Positivstellensatz. It worked in 1-d, but did not work in 2d. At order 3 polynomials, N=7, I maxed out my RAM. I also tried doing it in Julia, since sympy was killing me. Julia already has an SOS package. It was faster to encode, but it's using the same solver (SCS), so basically the same thing. I should probably be reducing the system with respect to the equality constraints since they're already in a Groebner basis. I know that can be really important for reducing the size of your problem. I dunno.

Blah blah blah blah. A bunch of unedited trash: https://github.com/peterwittek/ncpol2sdpa Peter Wittek has probably died in an avalanche? That is very sad.
These notes https://web.stanford.edu/class/ee364b/lectures/sos_slides.pdf Positivstellensatz. Kissing number. Review of sum of squares. Minimum of a sample as an LP: ridiculous problem, min t s.t. f(x_i) - t >= 0. Dual -> one dual variable per sample point. The only dual variable that will be nonzero is the one actually selecting the minimum. Hm. Yeah, that's a decent analogy. How does the dual even have a chance of knowing about poly arithmetic? It must be during the SOS conversion process. In building the SOS constraints, we build a finite, limited version of polynomial multiplication: x as a matrix. x is a shift matrix. In producing the characteristic polynomial, x is a shift matrix with the last line using the polynomial known to be zero; the eigenvalues of this matrix are zeros of the poly. SOS does not really rely on polynomials per se. It relies on closure of the squaring operation. Maybe set one sphere just at x=0, y=2. That breaks some symmetry. Set the next sphere in a plane or something. Random plane through the origin? Order the y components - that breaks some of the permutation symmetry. No, why not order in a random direction. That seems better for symmetry breaking.
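As a sanity check on the constraint formulation above (an addition, not from the original post): in 2d, six unit circles arranged in a hexagon at radius 2 satisfy $|x_i|^2 = 4$ and $|x_i - x_j|^2 \ge 4$, while seven equally spaced ones violate the pairwise gap. Of course the seven-point check alone proves nothing, since other configurations exist; it only illustrates the constraints.

```python
import math

def ring(n, radius=2.0):
    """n equally spaced sphere centres on a circle of the given radius."""
    return [(radius * math.cos(2 * math.pi * k / n),
             radius * math.sin(2 * math.pi * k / n)) for k in range(n)]

def feasible(points):
    """Check |x_i|^2 = 4 and |x_i - x_j|^2 >= 4 for all pairs."""
    eps = 1e-9
    for ax, ay in points:
        if abs(ax * ax + ay * ay - 4.0) > eps:
            return False
    return all((ax - bx) ** 2 + (ay - by) ** 2 >= 4.0 - eps
               for i, (ax, ay) in enumerate(points)
               for bx, by in points[i + 1:])

print(feasible(ring(6)))  # True: 6 kisses work in 2d
print(feasible(ring(7)))  # False: the symmetric 7-ring gaps are too small
```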
RoseCode Problem #383: Squarefree Factorisations (by Philippe_57721)

Let $n$ be an integer with the following factorisation: $n = a_1^{e_1} \times a_2^{e_2} \times \dots \times a_p^{e_p}$ where the $a_i$ are squarefree and $\forall i \in \{1,\dots, p-1 \}\quad a_i \textrm{ divides }a_{i+1}$

For instance:
$56 = 2^2 \times 14^1$
$5040 = 2^2 \times 6^1 \times 210^1$
$526773121875 = 3^2 \times 15^3 \times 1155^1 \times 15015^1$

It can be proved that this factorisation is unique. For such a factorisation, let's consider all the divisors of n: $a_1^{f_1} \times a_2^{f_2} \times \dots a_p^{f_p}\textrm{ where }0 \le f_i \le e_i$

Define $\sigma^\prime(n) = \sum\limits_{d} d$ where $d$ runs over the divisors of $n$ as defined above.

$\sigma^\prime(5040) = 1 + 2 + 4 + 6 + 12 + 24 + 210 + 420 + 840 + 1260 + 2520 + 5040 = 10339$

We say that n is a champion if the ratio $\frac{\sigma^\prime(n)}{n}$ is greater than any ratio $\frac{\sigma^\prime(m)}{m}$ with $m \lt n$. Here are the first 10 champions:

$1 \centerdot 1 \Rightarrow 2$
$2 \centerdot 24 \Rightarrow 2,04166666666667$
$3 \centerdot 48 \Rightarrow 2,1875$
$4 \centerdot 96 \Rightarrow 2,26041666666667$
$5 \centerdot 192 \Rightarrow 2,296875$
$6 \centerdot 384 \Rightarrow 2,31510416666667$
$7 \centerdot 768 \Rightarrow 2,32421875$
$8 \centerdot 1152 \Rightarrow 2,3515625$
$9 \centerdot 2304 \Rightarrow 2,37022569444444$
$10 \centerdot 4608 \Rightarrow 2,37955729166667$

What is the $66^{th}$ champion? [My timing: 5 sec]
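The worked value $\sigma^\prime(5040) = 10339$ is easy to verify by brute force over the exponent tuples; the helper below is an illustration, not part of the problem statement:

```python
from itertools import product

def sigma_prime(bases, exps):
    """Sum of the divisors a1^f1 * ... * ap^fp with 0 <= fi <= ei."""
    total = 0
    for fs in product(*(range(e + 1) for e in exps)):
        d = 1
        for a, f in zip(bases, fs):
            d *= a ** f
        total += d
    return total

# 5040 = 2^2 * 6^1 * 210^1, and the sum factors as (1+2+4)(1+6)(1+210):
print(sigma_prime((2, 6, 210), (2, 1, 1)))  # 10339
```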
# Can The Cat Get His Dinner?

A mouse starts running on a circular path of Radius = 28 m with constant speed u = 4 m/s. A cat starts from the center of the path to catch the mouse. The cat always remains on the radius connecting the center of the circle and the mouse, and it maintains the magnitude of its velocity at a constant v = 4 m/s. How long (in sec) is the chase? Use $$\pi = \frac{22}{7}$$.
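Not part of the original problem page, but the standard approach: staying on the mouse's radius forces the cat to match the mouse's angular speed $\omega = u/R$, leaving radial speed $dr/dt = \sqrt{v^2 - \omega^2 r^2}$; integrating from $r=0$ to $R$ gives $t = \frac{1}{\omega}\arcsin\frac{\omega R}{v} = \frac{7\pi}{2} \approx 11$ s with $\pi = \frac{22}{7}$. A rough Euler simulation agrees:

```python
import math

R, u, v = 28.0, 4.0, 4.0
omega = u / R                 # cat matches the mouse's angular speed

# Radial speed left over for the cat: dr/dt = sqrt(v^2 - omega^2 r^2).
t, r, dt = 0.0, 0.0, 1e-4
while r < R:
    r += math.sqrt(max(0.0, v * v - omega * omega * r * r)) * dt
    t += dt

exact = (1 / omega) * math.asin(omega * R / v)   # = 7 * pi / 2
print(t, exact)                                  # both close to 11 s
```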
Question: World War II aircraft had instruments with glowing radium-painted dials (see Figure 31.2). The activity of one such instrument was $1.0\times 10^{5}\textrm{ Bq}$ when new. (a) What mass of $^{226}\textrm{Ra}$ was present? (b) After some years, the phosphors on the dials deteriorated chemically, but the radium did not escape. What is the activity of this instrument 57.0 years after it was made?

1. $2.73\ \mu\textrm{g}$
2. $9.8\times 10^{4}\textrm{ Bq}$

OpenStax College Physics Solution, Chapter 31, Problem 59 (Problems & Exercises)
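A quick check of both answers (a sketch, assuming the standard half-life of $^{226}$Ra, 1600 y, and a year of $3.156\times 10^7$ s). Note that the computation gives about 2.7 micrograms, not nanograms, so a listed answer of "2.73 ng" would be a unit typo:

```python
import math

half_life = 1600 * 3.156e7        # s (half-life of Ra-226, 1600 yr)
lam = math.log(2) / half_life     # decay constant, 1/s
R0 = 1.0e5                        # initial activity, Bq

# (a) mass from R = lam * N and m = N * M / N_A
N = R0 / lam
mass_g = N * 226 / 6.022e23
print(mass_g)          # about 2.7e-6 g, i.e. roughly 2.7 micrograms

# (b) activity after 57.0 years of decay
R = R0 * 2 ** (-57.0 / 1600)
print(R)               # about 9.8e4 Bq
```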
# American Institute of Mathematical Sciences

ISSN: 1534-0392, eISSN: 1553-5258

## Communications on Pure and Applied Analysis

June 2004, Volume 3, Issue 2

2004, 3(2): 161-173. doi: 10.3934/cpaa.2004.3.161
Abstract: The zero solution of a vector valued differential equation with an autonomous linear part and a homogeneous nonlinearity multiplied by an almost periodic function is shown to undergo pitchfork or transcritical bifurcations to small nontrivial almost periodic solutions as a leading simple real eigenvalue of the linear part crosses the imaginary axis.

2004, 3(2): 175-182. doi: 10.3934/cpaa.2004.3.175
Abstract: A scalar non-autonomous periodic differential equation with delays arising from a delay host macroparasite model is studied. Two results are presented for the equation to have at least two positive periodic solutions: the hypotheses of the first result involve delays, while the second result holds for arbitrary delays.

2004, 3(2): 183-195. doi: 10.3934/cpaa.2004.3.183
Abstract: A class of generalized space-time symmetries is defined by extending the notions of classical symmetries and reversing symmetries for a smooth flow to arbitrary constant reparameterizations of time. This class is shown to be the group-theoretic normalizer of the abelian group of diffeomorphisms generated by the flow. Also, when the flow is nontrivial, this class is shown to be a nontrivial subgroup of the group of diffeomorphisms of the manifold, and to have a one-dimensional linear representation in which the image of a generalized symmetry is its unique constant reparameterization of time. This group of generalized symmetries and several groups derived from it (among which are the multiplier group and the reversing symmetry group) are shown to be nontrivial but incomplete invariants of the smooth conjugacy class of a smooth flow. Several examples are given throughout to illustrate the theory.

2004, 3(2): 197-216. doi: 10.3934/cpaa.2004.3.197
Abstract: We study the initial-value problem for a system of equations that models the low-speed flow of an inviscid, incompressible fluid with capillary stress effects. The system includes hyperbolic equations for the density and velocity, and an algebraic equation (the equation of state). We prove the local existence of a unique, classical solution to an initial-value problem with suitable initial data. We also derive a new, a priori estimate for the density, and then use this estimate to show that, if the regularity of the initial data for the velocity alone is increased, then the regularity of the solution for the density and the velocity may be increased, by a bootstrapping argument.

2004, 3(2): 217-235. doi: 10.3934/cpaa.2004.3.217
Abstract: This work is concerned with the construction and analysis of high order product integration methods for a class of Volterra integral equations with logarithmic singular kernel. Sufficient conditions for the methods to be convergent are derived and it is shown that optimal convergence orders are attained if the exact solution is sufficiently smooth. The case of non-smooth solutions is dealt with by making suitable transformations so that the new equation possesses smooth solutions. Two particular methods are considered and their convergence proved. A sample of numerical examples is included.

2004, 3(2): 237-252. doi: 10.3934/cpaa.2004.3.237
Abstract: We prove the existence of a global attractor for a damped-forced Kadomtsev-Petviashvili equation. We also establish that this equation features an asymptotic smoothing effect. We use energy estimates in conjunction with a suitable splitting of the solutions.

2004, 3(2): 253-265. doi: 10.3934/cpaa.2004.3.253
Abstract: In this paper, the existence of multiple solutions to a nonlinear elliptic equation with a parameter $\lambda$ is studied. Initially, the existence of two nonnegative solutions is shown for $0 < \lambda < \hat \lambda$. The first solution has a negative energy while the energy of the second one is positive for $0 < \lambda < \lambda_0$ and negative for $\lambda_0 < \lambda < \hat \lambda$. The values $\lambda_0$ and $\hat \lambda$ are given under variational form and we show that every corresponding critical point is a solution of the nonlinear elliptic problem (with a suitable multiplicative term). Finally, the existence of two classes of infinitely many solutions is shown via the Lusternik-Schnirelman theory.

2004, 3(2): 267-290. doi: 10.3934/cpaa.2004.3.267
Abstract: In this paper we study the effects of a small viscosity term and the far-field boundary conditions for systems of convection-diffusion equations in the zero viscosity limit. The far-field boundary conditions are classified and the corresponding solution structures are analyzed. It is confirmed that the Neumann type of far-field boundary condition is preferred. On the other hand, we also identify a class of improperly coupled boundary conditions which lead to catastrophic reflection waves dominating the inlet in the zero viscosity limit. The analysis is performed on the linearized convection-diffusion model which well describes the behavior at the far field for many physical and engineering systems such as fluid dynamical equations and electro-magnetic equations. The results obtained here should provide some theoretical guidance for designing effective far field boundary conditions.

2004, 3(2): 291-300. doi: 10.3934/cpaa.2004.3.291
Abstract: This paper is concerned with the existence of almost periodic solutions of neutral functional differential equations of the form $\frac{d}{dt}Dx_t = Lx_t+f(t)$, where $D,$ $L$ are bounded linear operators from $\mathcal C := C([-r, 0], \mathbb C^n )$ to $\mathbb C^n$, and $f$ is an almost (quasi) periodic function. We prove that if the set of imaginary solutions of the characteristic equations is bounded and the equation has a bounded, uniformly continuous solution, then it has an almost (quasi) periodic solution with the same set of Fourier exponents as $f$.

2004, 3(2): 301-318. doi: 10.3934/cpaa.2004.3.301
Abstract: In this paper, we treat the weakly damped, forced KdV equation on $\dot{H}^s$. We are interested in the lower bound of $s$ to assure the existence of the global attractor. The KdV equation has infinitely many conservation laws, each of which is defined in $H^j (j\in\mathbb Z, j\ge 0)$. The existence of the global attractor is usually proved by using those conservation laws. Because the KdV equation on $\dot{H}^s$ has no conservation law for $s<0$, it seems a natural question whether we can show the existence of the global attractor for $s<0$. Moreover, because the conservation laws restrict the behavior of solutions, the time global behavior of solutions for $s<0$ may be different from that for $s\ge 0$. By using a modified energy, we prove the existence of the global attractor for $s > -3/8$, which is identical to the global attractor for $s \ge 0$.

2004, 3(2): 319-328. doi: 10.3934/cpaa.2004.3.319
Abstract: This paper deals with an initial-boundary value problem for the damped Boussinesq equation $u_{t t} - a u_{t t x x} - 2 b u_{t x x} = - c u_{x x x x} + u_{x x} + \beta(u^2)_{x x},$ where $t > 0,$ $a,$ $b,$ $c$ and $\beta$ are constants. For the case $a \geq 1$ and $a + c > b^2$, corresponding to an infinite number of damped oscillations, we derive the global solution of the equation in the form of a Fourier series. The coefficients of the series are related to a small parameter present in the initial conditions and are expressed as uniformly convergent series of the parameter. We also prove that the long time asymptotics of the solution in question decays exponentially in time.

2020 Impact Factor: 1.916. 5 Year Impact Factor: 1.510. 2020 CiteScore: 1.9
# How to prove a set belongs to Borel sigma-algebra?

I am working on this problem on measure theory like this:

Suppose ##X## is the set of real numbers, ##\mathcal B## is the Borel ##\sigma##-algebra, and ##m## and ##n## are two measures on ##(X, \mathcal B)## such that ##m((a, b))=n((a, b))< \infty## whenever ##−\infty<a<b<\infty##. Prove that ##m(A)=n(A)## whenever ##A\in \mathcal B##.​

Here is what I am envisioning but I am not so sure: Since ##a, b \in \mathbb R## and since ##A## is an arbitrary set of ##\mathcal B##, if only I can prove that ##(a, b) \in \mathcal B##, then I am done. But here is my question: How do I go ahead proving that ##(a, b) \in \mathcal B##? Can I just use the classic formula that if ##\forall a \in (a, b) \rightarrow a \in \mathcal B##, then ##(a, b) \in \mathcal B##? Any other step I need to follow?​ Thanks for your time and effort.

pasmith, Homework Helper: The Borel $\sigma$-algebra on a topological space is by definition the algebra generated by the open subsets of that space. Since $(a,b) \subset \mathbb{R}$ is open, it's in the Borel $\sigma$-algebra of $\mathbb{R}$.

Stephen Tashi:
> Since ##a, b \in \mathbb R## and since ##A## is an arbitrary set of ##\mathcal B##, if only I can prove that ##(a, b) \in \mathcal B##, then I am done.
Why would you be done after that step?

> The Borel $\sigma$-algebra on a topological space is by definition the algebra generated by the open subsets of that space. Since $(a,b) \subset \mathbb{R}$ is open, it's in the Borel $\sigma$-algebra of $\mathbb{R}$.
Yes, I also thought along that line of reasoning similar to yours, but it looks too easy to be true, therefore I am not so sure about it. Thanks again.

> Why would you be done after that step?
My reasoning is that since ##(a, b) \in \mathcal B## and since ##A## is an arbitrary set of ##\mathcal B##, therefore ##m(A) = n(A)##. Let me know if it is flawed. Thanks again.
Dick, Homework Helper:
> My reasoning is that since ##(a, b) \in \mathcal B## and since ##A## is an arbitrary set of ##\mathcal B##, therefore ##m(A) = n(A)##. Let me know if it is flawed. Thanks again.
I don't follow that at all. You are only given that the two measures are equal for sets that are open intervals. Not all of the sets that are elements of ##\mathcal B## are open intervals.

> I don't follow that at all. You are only given that the two measures are equal for sets that are open intervals. Not all of the sets that are elements of ##\mathcal B## are open intervals.
My reasoning was shaky at best to begin with, and for that reason I posted this question here. I did receive some input on solutions to this problem, but all of them require big-tool theorems such as Dynkin's ##\pi - \lambda## theorem, measurable functions, etc., all of which are out of range for the time being. In fact this question comes only from the 3rd chapter of Richard F. Bass' online book http://homepages.uconn.edu/~rib02005/rags010213.pdf on entry-level analysis, therefore those big tools are not yet in the background. I am totally lost but I am still hopeful I can get a solution. Thanks again.
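For reference, the standard argument (via Dynkin's $\pi$-$\lambda$ theorem, which as noted in the thread goes beyond the tools available in the book's third chapter) can be sketched as follows:

```latex
% Sketch only; assumes Dynkin's pi-lambda theorem.
\mathcal{P} := \{(a,b) : -\infty < a < b < \infty\}
  \text{ is a } \pi\text{-system with } \sigma(\mathcal{P}) = \mathcal{B}.
\text{For fixed } k, \text{ let }
  \mathcal{L}_k := \{A \in \mathcal{B} : m(A \cap (-k,k)) = n(A \cap (-k,k))\}.
% L_k is a lambda-system: the finiteness of m((-k,k)) = n((-k,k))
% allows subtraction for complements, and countable disjoint unions
% pass through by countable additivity.
\mathcal{P} \subset \mathcal{L}_k \;\Rightarrow\;
  \mathcal{B} = \sigma(\mathcal{P}) \subset \mathcal{L}_k \text{ for every } k.
% Letting k -> infinity and using continuity from below:
m(A) = n(A) \quad \text{for all } A \in \mathcal{B}.
```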
# Iwasawa manifold

In mathematics, in the field of differential geometry, an Iwasawa manifold is a compact quotient of the 3-dimensional complex Heisenberg group by a cocompact, discrete subgroup. An Iwasawa manifold is a nilmanifold, of real dimension 6. Iwasawa manifolds give examples where the first two terms E1 and E2 of the Frölicher spectral sequence are not isomorphic. As a complex manifold, such an Iwasawa manifold is an important example of a compact complex manifold which does not admit any Kähler metric.

References

Ketsetzis, Georgios; Salamon, Simon (2004), "Complex structures on the Iwasawa manifold", Advances in Geometry 4 (2): 165–179, arXiv:math.DG/0112295, doi:10.1515/advg.2004.012.
Griffiths, P.; Harris, J. (1994), Principles of Algebraic Geometry, Wiley Classics Library, Wiley Interscience, p. 444, ISBN 0-471-05059-8.
# How to Solve Trigonometric Equations

Author Info | 10 References. Updated: May 6, 2019

Did you get homework from your teacher that was about solving trigonometric equations? Did you maybe not pay full attention in class during the lesson on trigonometric equations? Do you even know what "trigonometric" means? If you answered yes to these questions, then you don't need to worry, because this wikiHow will teach you how to solve trigonometric equations.

## Steps

1. Know the solving concept.[1]
• To solve a trig equation, transform it into one or many basic trig equations. Solving trig equations ultimately comes down to solving 4 types of basic trig equations.
2. Know how to solve basic trig equations.[2]
• There are 4 types of basic trig equations:
• sin x = a ; cos x = a
• tan x = a ; cot x = a
• Solving basic trig equations proceeds by studying the various positions of the arc x on the trig circle, and by using a trig conversion table (or calculator). To fully learn how to solve these basic trig equations, and similar ones, see the book titled "Trigonometry: Solving trig equations and inequalities" (Amazon E-book 2010).
• Example 1. Solve sin x = 0.866. The conversion table (or calculator) gives the answer: x = Pi/3. The trig circle gives another arc (2Pi/3) that has the same sin value (0.866). The trig circle also gives an infinity of answers that are called extended answers.
• x1 = Pi/3, and x2 = 2Pi/3. (Answers within period (0, 2Pi))
• x1 = Pi/3 + 2k Pi, and x2 = 2Pi/3 + 2k Pi. (Extended answers)
• Example 2. Solve: cos x = -1/2. Calculators give x = 2Pi/3. The trig circle gives another x = -2Pi/3.
• x1 = 2Pi/3, and x2 = -2Pi/3. (Answers within period (0, 2Pi))
• x1 = 2Pi/3 + 2k Pi, and x2 = -2Pi/3 + 2k Pi. (Extended answers)
• Example 3. Solve: tan (x - Pi/4) = 0.
• x = Pi/4 (Answer)
• x = Pi/4 + k Pi (Extended answer)
• Example 4. Solve cot 2x = 1.732.
Calculators and the trig circle give
• x = Pi/12 (answer)
• x = Pi/12 + k·Pi/2 (extended answers; note cot 2x has period Pi/2 in x)
3. Learn the transformations used in solving trig equations.[3]
• To transform a given trig equation into basic trig equations, use common algebraic transformations (factoring, common factors, polynomial identities...), definitions and properties of trig functions, and trig identities. There are about 31 of them; the identities numbered 19 to 31 are called transformation identities, since they are used in the transformation of trig equations.[4] See the book mentioned above.
• Example 5. The trig equation sin x + sin 2x + sin 3x = 0 can be transformed, using trig identities, into a product of basic trig equations: 4·cos x·sin (3x/2)·cos (x/2) = 0. The basic trig equations to be solved are: cos x = 0, sin (3x/2) = 0, and cos (x/2) = 0.
4. Find the arcs whose trig functions are known.[5]
• Before learning to solve trig equations, you must know how to quickly find the arcs whose trig functions are known. Conversion values of arcs (or angles) are given by trig tables or calculators.[6]
• Example: After solving, you get cos x = 0.732. Calculators give the solution arc x = 42.95 degrees. The trig unit circle will give other solution arcs that have the same cos value.
5. Graph the solution arcs on the trig unit circle.[7]
• You can graph the solution arcs on the trig unit circle to illustrate them. The terminal points of these solution arcs form regular polygons on the trig circle. For example:
• The terminal points of the solution arcs x = Pi/3 + k·Pi/2 form a square on the trig unit circle.
• The solution arcs x = Pi/4 + k·Pi/3 are represented by the vertices of a regular hexagon on the trig unit circle.
6. Learn the approaches to solving trig equations.[8]
• If the given trig equation contains only one trig function, solve it as a basic trig equation.
If the given equation contains two or more trig functions, there are two approaches to solving it, depending on what transformations are possible.
• A. Approach 1.
• Transform the given trig equation into a product of the form f(x)·g(x) = 0 or f(x)·g(x)·h(x) = 0, in which f(x), g(x) and h(x) are basic trig equations.
• Example 6. Solve 2cos x + sin 2x = 0. (0 < x < 2Pi)
• Solution. Replace sin 2x in the equation using the identity sin 2x = 2·sin x·cos x:
• 2cos x + 2·sin x·cos x = 2cos x·(sin x + 1) = 0. Next, solve the two basic trig equations cos x = 0 and sin x + 1 = 0.
• Example 7. Solve cos x + cos 2x + cos 3x = 0. (0 < x < 2Pi)
• Solution. Transform it into a product using trig identities: cos 2x·(2cos x + 1) = 0. Next, solve the two basic trig equations cos 2x = 0 and 2cos x + 1 = 0.
• Example 8. Solve sin x - sin 3x = cos 2x. (0 < x < 2Pi)
• Solution. Transform it into a product using trig identities: -cos 2x·(2sin x + 1) = 0. Then solve the two basic trig equations cos 2x = 0 and 2sin x + 1 = 0.
• B. Approach 2.
• Transform the given trig equation into a trig equation having only one trig function as variable. There are a few tips on how to select the appropriate variable. The common variables to select are: sin x = t, cos x = t, cos 2x = t, tan x = t and tan (x/2) = t.
• Example 9. Solve 3sin^2 x - 2cos^2 x = 4sin x + 7. (0 < x < 2Pi)
• Solution. Replace cos^2 x in the equation by (1 - sin^2 x), then simplify:
• 3sin^2 x - 2 + 2sin^2 x - 4sin x - 7 = 0, that is, 5sin^2 x - 4sin x - 9 = 0. Call sin x = t. The equation becomes 5t^2 - 4t - 9 = 0. This is a quadratic equation with two real roots, t1 = -1 and t2 = 9/5. The root t2 is rejected since 9/5 > 1. Next, solve t = sin x = -1 --> x = 3Pi/2.
• Example 10. Solve tan x + 2 tan^2 x = cot x + 2.
• Solution. Call tan x = t. Transform the given equation into an equation with t as variable: (2t + 1)(t^2 - 1) = 0. Solve for t from this product, then solve the basic trig equation tan x = t for x.
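The factorization in Example 5 and the quadratic substitution in Example 9 can be spot-checked numerically. A minimal Python sketch (the sample points are arbitrary):

```python
import math

# Example 5: sin x + sin 2x + sin 3x should equal 4 cos x * sin(3x/2) * cos(x/2)
for x in [0.3, 1.1, 2.5, 4.0]:
    lhs = math.sin(x) + math.sin(2 * x) + math.sin(3 * x)
    rhs = 4 * math.cos(x) * math.sin(1.5 * x) * math.cos(0.5 * x)
    assert abs(lhs - rhs) < 1e-12   # the identity holds at every sample point

# Example 9: with t = sin x, the equation reduces to 5t^2 - 4t - 9 = 0
a, b, c = 5, -4, -9
disc = math.sqrt(b * b - 4 * a * c)
t1, t2 = (-b - disc) / (2 * a), (-b + disc) / (2 * a)
print(t1, t2)  # -1.0 1.8 ; t2 is rejected because sin x cannot exceed 1
```

Only t1 = -1 survives, giving sin x = -1 and hence x = 3Pi/2, as in the worked solution.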
7. Solve special types of trig equations.
• There are a few special types of trig equations that require specific transformations. Examples:
• a·sin x + b·cos x = c ; a·(sin x + cos x) + b·cos x·sin x = c ;
• a·sin^2 x + b·sin x·cos x + c·cos^2 x = 0
8. Learn the periodic property of trig functions.[9]
• All trig functions are periodic, meaning they return to the same value after a rotation of one period.[10] Examples:
• The function f(x) = sin x has period 2Pi.
• The function f(x) = tan x has period Pi.
• The function f(x) = sin 2x has period Pi.
• The function f(x) = cos (x/2) has period 4Pi.
• If the period is specified in the problem/test, you only have to find the solution arc(s) x within this period.
• NOTE: Solving trig equations is tricky work that often leads to errors and mistakes. Therefore, answers should be carefully checked. After solving, you can check the answers by using a graphing calculator to directly graph the given trig equation R(x) = 0. The answers (real roots) will be given in decimals. For example, Pi is given by the value 3.14.

## Community Q&A

• Question: If I know the two legs of a right triangle, how can I find the hypotenuse (to the nearest whole number)?
Answer: Square the lengths of the two sides you know, add the results together, then take the square root of that result and round it to the nearest whole number.
• Question: How do I find the third side of a triangle if I'm only given 1 side length and 1 angle?
Answer (Donagan): Knowing one side and one angle is not enough information to find any of a triangle's other components.
• Question: What type of calculator do I need to do trigonometry?
Answer: A normal scientific calculator is fine. A graphing calculator will also work.

## Article Info

wikiHow is a "wiki," similar to Wikipedia, which means that many of our articles are co-written by multiple authors. To create this article, 15 people, some anonymous, worked to edit and improve it over time.
Together, they cited 10 references. This article has also been viewed 200,317 times. Categories: Trigonometry
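The note in step 8 recommends checking answers; instead of a graphing calculator, candidate roots can be plugged back in with a few lines of Python. Shown here for Example 6, whose roots on (0, 2Pi) from the factorization 2cos x·(sin x + 1) = 0 are Pi/2 and 3Pi/2:

```python
import math

def R(x):
    # left-hand side of Example 6: 2 cos x + sin 2x = 0
    return 2 * math.cos(x) + math.sin(2 * x)

for root in [math.pi / 2, 3 * math.pi / 2]:
    assert abs(R(root)) < 1e-12   # each candidate really is a root
print("all roots check out")
```

The same pattern works for any of the examples: define R(x) from the original equation and verify that it vanishes (up to floating-point error) at each proposed solution.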
# zbMATH — the first resource for mathematics

Quantum effects in classical systems having complex energy. (English) Zbl 1145.81416

Summary: On the basis of extensive numerical studies it is argued that there are strong analogies between the probabilistic behavior of quantum systems defined by Hermitian Hamiltonians and the deterministic behavior of classical mechanical systems extended into the complex domain.
Three models are examined: the quartic double-well potential $V(x) = x^4 - 5x^2$, the cubic potential $V(x) = \frac{1}{2}x^2 - gx^3$, and the periodic potential $V(x) = -\cos x$. For the quartic potential, a wave packet that is initially localized in one side of the double-well can tunnel to the other side. Complex solutions to the classical equations of motion exhibit a remarkably analogous behavior. Furthermore, classical solutions come in two varieties, which resemble the even-parity and odd-parity quantum-mechanical bound states. For the cubic potential, a quantum wave packet that is initially in the quadratic portion of the potential near the origin will tunnel through the barrier and give rise to a probability current that flows out to infinity. The complex solutions to the corresponding classical equations of motion exhibit strongly analogous behavior. For the periodic potential, a quantum particle whose energy lies between -1 and 1 can tunnel repeatedly between adjacent classically allowed regions and thus execute a localized random walk as it hops from region to region. Moreover, if the energy of the quantum particle lies in a conduction band, then the particle delocalizes and drifts freely through the periodic potential. A classical particle having complex energy executes a qualitatively analogous local random walk, and there exists a narrow energy band for which the classical particle becomes delocalized and moves freely through the potential.

##### MSC:
81V05 Strong interaction, including quantum chromodynamics
81T15 Perturbative methods of renormalization (quantum theory)
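The classical side of these analogies is easy to experiment with. The sketch below is not taken from the paper; the initial condition, step size, and integration time are illustrative. It integrates Hamilton's equations for the double-well potential $V(x) = x^4 - 5x^2$ with a complex initial position and checks that the (complex) energy is conserved along the trajectory:

```python
# Complex classical trajectory in the quartic double well, via fourth-order
# Runge-Kutta. Python's complex arithmetic handles the analytic continuation.

def V(x):        # double-well potential
    return x**4 - 5 * x**2

def dV(x):       # its derivative
    return 4 * x**3 - 10 * x

def rk4_step(x, p, dt):
    # Hamilton's equations x' = p, p' = -V'(x), extended to complex x, p
    def f(x, p):
        return p, -dV(x)
    k1x, k1p = f(x, p)
    k2x, k2p = f(x + 0.5 * dt * k1x, p + 0.5 * dt * k1p)
    k3x, k3p = f(x + 0.5 * dt * k2x, p + 0.5 * dt * k2p)
    k4x, k4p = f(x + dt * k3x, p + dt * k3p)
    x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
    p += dt * (k1p + 2 * k2p + 2 * k3p + k4p) / 6
    return x, p

x, p = 1.0 + 0.5j, 0.0 + 0.0j    # complex initial position gives complex energy
E0 = p * p / 2 + V(x)            # energy is a constant of the motion
for _ in range(1000):            # integrate to t = 1 with dt = 1e-3
    x, p = rk4_step(x, p, 1e-3)
E1 = p * p / 2 + V(x)
print(abs(E1 - E0))              # should be tiny: energy is conserved
```

Plotting the sequence of x values in the complex plane (e.g. with matplotlib) reproduces the kind of complex trajectories the summary describes.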
# Functorial kernel in derived category

By the work of Verdier, we know that cones in a triangulated category $$\mathcal{T}$$ are functorial if and only if $$\mathcal{T}$$ is semisimple abelian. However, in these notes, it is said that

> In the context of triangulated categories, it is well known that cones are not functorial. However, we have just proven that if a triangulated category arises as the homotopy category of a stable $$\infty$$-category, cones in it are indeed functorial at the $$\infty$$-level.

I must admit that I don't understand what it means that cones are functorial at the $$\infty$$-level. Since the derived category of an abelian category (with enough injectives) is the homotopy category of the derived $$\infty$$-category, which is a stable $$\infty$$-category, this would mean that we have functorial cones in derived categories of abelian categories. What does that mean? Can I "take kernels"?

Let $$\mathcal{C}$$ be a stable $$\infty$$-category. Then $$\mathcal{C}$$ has a homotopy category $$h\mathcal{C}$$, which is triangulated. The collection of morphisms $$f: X \rightarrow Y$$ of $$\mathcal{C}$$ can be organized into an $$\infty$$-category $$\mathrm{Fun}( \Delta^1, \mathcal{C} )$$. The operation $$f \mapsto \mathrm{Cone}(f)$$ can be obtained from a functor of $$\infty$$-categories $$\mathrm{Fun}( \Delta^1,\mathcal{C} ) \rightarrow \mathcal{C}$$. You can pass to homotopy to get a functor of ordinary categories $$\mathrm{Cone}: h \mathrm{Fun}( \Delta^1, \mathcal{C} ) \rightarrow h \mathcal{C}.$$ The functor $$\infty$$-category $$\mathrm{Fun}( \Delta^1, \mathcal{C} )$$ is equipped with an evaluation functor $$e: \Delta^1 \times \mathrm{Fun}( \Delta^1, \mathcal{C} ) \rightarrow \mathcal{C}$$.
You can take homotopy categories here to get a functor of ordinary categories $$[1] \times h\mathrm{Fun}( \Delta^1, \mathcal{C} ) \rightarrow h\mathcal{C}$$, which can be identified with a functor (again of ordinary categories) $$U: h \mathrm{Fun}( \Delta^1, \mathcal{C} ) \rightarrow \mathrm{Fun}( [1], h\mathcal{C} ).$$ Here $$[1]$$ denotes the category $$\{ 0 < 1 \}$$ consisting of two objects and one morphism between them. The phenomenon you're asking about is due to the fact that $$U$$ is not an equivalence of categories. Moreover, the functor $$\mathrm{Cone}$$ does not factor through $$U$$. If $$f: X \rightarrow Y$$ and $$f': X' \rightarrow Y'$$ are morphisms of $$\mathcal{C}$$, then morphisms from $$f$$ to $$f'$$ in the $$\infty$$-category $$\mathrm{Fun}( \Delta^1, \mathcal{C} )$$ can be thought of as triples $$(u,v,h)$$, where $$u: X \rightarrow X'$$ and $$v: Y \rightarrow Y'$$ are morphisms of $$\mathcal{C}$$ and $$h$$ is a homotopy from $$f' \circ u$$ to $$v \circ f$$. In these terms, the functor $$U$$ is given on morphisms by the construction $$[(u,v,h)] \mapsto ( [u], [v] )$$, where $$[s]$$ denotes the homotopy class of a morphism $$s$$. In particular, $$U$$ "forgets" the data of the homotopy $$h$$, and fails to be a faithful functor.
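Concretely, the triple $$(u, v, h)$$ can be drawn as a square that commutes only up to the chosen homotopy $$h$$. A sketch in tikz-cd (assuming the tikz-cd LaTeX package is available):

```latex
% A morphism (u, v, h) from f to f' in Fun(\Delta^1, C):
% the square commutes up to the chosen homotopy h : f' \circ u \Rightarrow v \circ f
\begin{tikzcd}
X \arrow[r, "f"] \arrow[d, "u"'] & Y \arrow[d, "v"] \\
X' \arrow[r, "f'"'] & Y'
\end{tikzcd}
```

Passing to the homotopy category remembers only that the square commutes, discarding which homotopy $$h$$ witnesses it; this lost datum is exactly what the cone construction depends on.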
Working Group (Groupe de Travail)

Organizers: Patrick LE MEUR, Dominique MANCHON and Jean-Marie LESCURE

Talks take place on Fridays at 2 pm in room 2222 of the mathematics building (see the access map to the laboratory).

Next seminar: Friday 21 June 2013 - no seminar (duality and noncommutative algebra days)

July 2013
• Friday 5 July 2013 - Rufus Willett (University of Hawaii)
Ghosts, monsters, and exact crossed products
Ghost operators are 'nearly compact' operators on Hilbert space. Ghost operators in crossed product (and related) C*-algebras associated to so-called 'Gromov monster' groups give rise to pathological properties in the associated K-theory groups. As an application, all known counterexamples to the Baum-Connes conjecture with coefficients arise in essentially this way. I'll discuss geometric conditions leading to the existence of ghosts (related to expanding graphs and property (T)), and some ways to ameliorate the problems caused by ghosts using the idea of exactness. This is joint work with John Roe, and with Paul Baum and Erik Guentner.

June 2013
• Friday 28 June 2013 - Pierre Clavier (LPTHE Paris 6 Jussieu)
We will study the Schwinger-Dyson equation of the massless Wess-Zumino model. After writing this equation in differential form, we will extract the asymptotic behavior of its solutions. This will allow us to use an ansatz that simplifies the computations by separating the solutions into different components, each with a distinct behavior. Finally, we will see how zeta values appear in the solutions.
• Friday 14 June 2013 - Kevin Langlois (UJF Grenoble)
• Thursday 6 June 2013 - 2 pm - Nigel Higson, Pennsylvania State University.
Contractions of Lie Groups and Representation Theory
The contraction of a Lie group G to a subgroup K is a Lie group that approximates G to first order near K. It is usually easier to understand than G itself. The name "contraction" comes from the mathematical physicists, who examined the Galilean group as a contraction of the Poincare group of special relativity. My focus will be on a related but different class of examples: the prototype is the group of isometric motions of Euclidean space, viewed as a contraction of the group of isometric motions of hyperbolic space. It is natural to expect some sort of limiting relation between representations of the contraction and representations of G itself. But in the 1970s George Mackey discovered an interesting rigidity phenomenon: as the contraction group is deformed to G, the representation theory remains in some sense unchanged. In particular the irreducible representations of the contraction group parametrize the irreducible representations of G. I shall formulate a reasonably precise conjecture along these lines that was inspired by subsequent developments in C*-algebra theory and noncommutative geometry, and describe the evidence in support of it, which is by now substantial. However a conceptual explanation for Mackey's rigidity phenomenon remains elusive.

May 2013
• Friday 31 May 2013 - Quimey Vivas (Universidad de Buenos Aires)
Automorphisms and isomorphisms of quantum generalized Weyl algebras
• Friday 24 May 2013 - Simon Riche (CNRS UBP)
• Friday 10 May 2013 - Rene Schulz (by videoconference from Göttingen)
Global Fourier integral operators via tempered oscillatory integrals with inhomogeneous phase functions
The theory of global Fourier integral operators is a field of active research, with many open questions.
In our approach, we study certain families of oscillatory integrals, parametrised by phase functions and amplitude functions globally defined on the Euclidean space, which give rise to tempered distributions, avoiding the standard homogeneity requirement on the phase function. The singularities of these distributions are described both from the point of view of the lack of smoothness as well as with respect to the decay at infinity. In particular, the latter will depend on a version of the set of stationary points of the phase function, including elements lying at the boundary of the radial compactification of the Euclidean space. We then consider classes of global Fourier integral operators on the Euclidean space, defined in terms of kernels of the form of such oscillatory integrals. As an example we consider the solution operator of the Klein-Gordon equation. This talk is based on joint work with Sandro Coriasco from the University of Torino, Italy.

April 2013
• Friday 5 April 2013 - Ali Baklouti (Faculté des Sciences de Sfax)
Monomial representations of discrete type on exponential solvable Lie groups
In this talk, I will pose some problems related to a monomial representation $\tau = \mathrm{ind}_H^G \chi$, where $G$ is an exponential solvable Lie group, $H$ an analytic subgroup and $\chi$ a unitary character of $H$. I will focus on the situation where the multiplicities of $\tau$ are of discrete type, and give a counterexample to a conjecture posed by Michel Duflo in this context.

March 2013
• Friday 29 March 2013 - Amaury Freslon (by videoconference from Metz)
Operator-valued inequalities in noncommutative harmonic analysis
Computing the norm of a linear map on a C*-algebra is a very difficult problem in general, because it requires fine estimates for the norm of many elements of the algebra.
However, if the C*-algebra comes from the regular representation of a group, harmonic analysis provides us with powerful tools to estimate the norms. We will recall these tools and then address the case of discrete quantum groups, where noncommutative harmonic analysis is again the key to useful norm estimates. Our main goal is to use these tools to prove approximation properties and structure results for the operator algebras associated to these quantum groups.
• Friday 22 March 2013 - Dominique Manchon (CNRS-UBP)
Feynman graphs and exterior structures (part 2)
• Friday 15 March 2013 - François Gautero (Université de Nice)
Tilings, invariant measures and the asymptotic Thurston norm
• Friday 1 March 2013 - Olivier Gabriel (Göttingen)
Lie group actions, spectral triples and generalised crossed products
The aim of this talk is to generalise the constructions of spectral triples on noncommutative tori and quantum Heisenberg manifolds (QHM) to broader settings. After a few reminders about noncommutative tori and spectral triples, we prove that an ergodic action of a compact Lie group G on a unital C*-algebra A yields a natural spectral triple structure on A. In the second part, we investigate "permanence properties" for the previous sort of spectral triples. We first introduce the notion of generalised crossed product (GCP) and illustrate it in the case of QHM. A GCP contains a sub-C*-algebra called its "basis". A spectral triple on the basis can induce a spectral triple on the GCP, under some assumptions which we make explicit. This talk is based on work in progress in collaboration with M. Grensing. If time permits, we will relate these new results to our previous work in this direction.
February 2013
• Friday 22 February 2013 - Michaël Bulois (ICJ and Université Jean Monnet, St Etienne)
• Friday 8 February 2013 - Sonia Natale (Córdoba, Argentina)
On weakly group-theoretical non-degenerate braided fusion categories
We shall discuss some results on the structure of the class of braided fusion categories of the title, in particular concerning their class in the Witt group of non-degenerate braided fusion categories introduced by Davydov, Mueger, Nikshych and Ostrik. We shall also present some results that give sufficient conditions for a braided fusion category to be weakly group-theoretical or solvable, in terms of the factorization of its Frobenius-Perron dimension and the Frobenius-Perron dimensions of its simple objects; these imply that every non-degenerate braided fusion category whose Frobenius-Perron dimension is a small natural number is indeed weakly group-theoretical.
• Friday 1 February 2013 - Rachel Taillefer
I will talk about certain algebras that generalize symmetric Nakayama algebras: I will give a description of them by quiver and relations, then a classification up to derived equivalence, presenting some of the tools we used. This is joint work with Nicole Snashall.

January 2013
• Friday 25 January 2013 - Patrick Le Meur (UBP)
• Friday 18 January 2013 - Dominique Manchon (CNRS/Université Blaise Pascal)
Feynman graphs and exterior structures
• Friday 11 January 2013 - no seminar

December 2012
• Friday 14 December 2012 - Martijn Caspers (Besançon)
Quantum groups, Lp-spaces and Fourier theory
In this talk we address several questions related to quantum groups and non-commutative Lp-spaces. We introduce Fourier transforms on non-commutative Lp-spaces associated with a quantum group. We show that it is imperative to use the techniques of Lp-spaces constructed on type III von Neumann algebras, even if we are dealing with the semi-finite case.
We find a spherical analogue of the Fourier transform and are able to describe it explicitly in the case of "extended-SUq(1,1)". This involves the so-called quantum Duflo-Moore operators.
• Friday 7 December 2012 - Charlotte Wahl (Hannover, by videoconference from Potsdam)
Rho-invariants and the classification of differential structures on closed manifolds
In her talk, Sara Azzali explained the use of rho-invariants associated to the spin Dirac operator for the classification of metrics with positive scalar curvature. Analogously, rho-invariants associated to the signature operator can be used to distinguish differential structures on closed manifolds. However, since the signature operator is not invertible in general, their study tends to be more difficult. I will discuss three types of rho-invariants - the L^2-rho-invariants of Cheeger and Gromov, Lott's higher rho-invariants, and the rho-invariants associated to 2-cocycles studied by Sara Azzali and myself - and I will explain what is known about their properties for the signature operator.

November 2012
• Friday 30 November 2012 - Seunghun Hong (Göttingen). Unusual time: 1:15 pm
A Lie-algebraic approach to the local index theorem on compact homogeneous spaces
Using a K-theory point of view, Bott related the Atiyah-Singer index theorem for elliptic operators on compact homogeneous spaces to the Weyl character formula. In this talk, I will explain how to prove the local index theorem for compact homogeneous spaces using Lie algebra methods. The method follows in outline the proof of the local index theorem due to Berline and Vergne. But the use of Kostant's cubic Dirac operator in place of the Riemannian Dirac operator leads to substantial simplifications. An important role is also played by the quantum Weil algebra of Alekseev and Meinrenken.
• Friday 23 November 2012 - Yves Stalder (Clermont-Ferrand)
Highly transitive actions of free products
Let G be a countable group acting on an (infinite) set X. The action is called highly transitive if it is k-transitive for every natural number k, which amounts to saying that the image of G in the group Sym(X) is dense. Using Baire-category genericity methods, Dixon proved that non-abelian free groups admit faithful, highly transitive actions. I will explain how to exploit his ideas to determine which free products admit faithful, highly transitive actions. This is joint work with Soyoung Moon.
• Friday 16 November 2012 - Sara Azzali (Paris 7)
Eta invariants and positive scalar curvature

October 2012
• Friday 26 October 2012 - Ivan Angiono (Córdoba, Argentina)
Finite pointed tensor categories over abelian groups
It is known that the category of representations of a Hopf algebra is a tensor category. But there are tensor categories which are not equivalent to the category of representations of a Hopf algebra. In this context, Drinfeld introduced the notion of quasi-Hopf algebra in 1990. Moreover, Etingof and Ostrik characterized the tensor categories coming from quasi-Hopf algebras as categories whose objects have integer Frobenius-Perron dimensions. It is therefore interesting to know examples of quasi-Hopf algebras which are not Hopf algebras. Gelaki defined a few years ago a family of quasi-Hopf algebras over cyclic groups and proved, in joint work with Etingof, that this covers all examples of finite tensor categories whose group of invertible objects is cyclic of prime order.
In this talk we will present an extension of this classification to a more general family of cyclic groups, extending the results of Etingof-Gelaki, and a nice way to construct more examples of finite-dimensional quasi-Hopf algebras with abelian group of invertible objects; it involves de-equivariantization of Hopf algebras.
• Friday 19 October 2012 - Peng Shan (Caen)
Affine Lie algebras and cyclotomic Cherednik algebras
Varagnolo and Vasserot conjectured an equivalence of categories between a parabolic category O of affine Lie algebras of type gl and the category O of cyclotomic Cherednik algebras. I will explain a proof of this conjecture and some applications. This is joint work with R. Rouquier, M. Varagnolo and E. Vasserot.
• Friday 12 October 2012 - Elmar Schrohe (Hannover, by videoconference from Potsdam)
A Families Index Theorem for Boundary Value Problems
• Friday 5 October 2012 - Claire Renard (ENS Cachan)

September 2012
• Friday 28 September 2012 - Robin Deeley (Göttingen)
Relative constructions in geometric K-homology
K-homology provides a useful framework for the study of problems from index theory. The Baum-Douglas (i.e., (M,E,phi)) model provides a geometric realization of K-homology. After introducing this model, we will discuss a number of "relative" constructions within the framework of geometric K-homology. We will also show how a number of interesting index theorems arise from such constructions. Two examples are the Freed-Melrose index theorem and an R/Z-valued index theorem which is similar to the index theorem for flat vector bundles of Atiyah-Patodi-Singer.
# Statistics question: attendance and final grades

Suppose a teacher recorded the attendance of her students in a recent statistics class because she wanted to investigate the linear relationship between the number of classes they missed and their final grades. The accompanying table shows these data for a random sample of nine students. Complete parts a through c.

Classes missed | Final grade
--- | ---
4 | 73
6 | 81
1 | 92
4 | 72
0 | 94
2 | 86
0 | 89
5 | 87
2 | 96

a. Calculate the correlation coefficient for this sample. (Type an integer or decimal rounded to three decimal places as needed.)

b. Using α = 0.05, test to determine whether the population correlation coefficient is less than zero. State the hypotheses for this test, the test statistic (rounded to two decimal places), and the p-value (rounded to three decimal places). Then decide whether there is sufficient evidence, at the 0.05 level of significance, to conclude that the population correlation coefficient is less than zero.

c. What conclusions can be drawn from these results about a linear relationship between missed classes and final grade?
## Introduction

Neuropathic pain is a debilitating condition arising from injury to the somatosensory neurons which triggers the development of allodynia and hyperalgesia1. Several conditions such as traumatic accidents, surgery and diseases affecting the peripheral and central nervous systems contribute to the etiopathogenesis of neuropathic pain. Experimental studies reveal that inflammation is the main cause of neuropathic pain. The inflammatory cytokines and oxidative stress, along with allogeneic mediators released subsequent to a nerve injury, induce neuropathic pain through sensitization of the nociceptive receptors2,3. The palliative treatment of neuropathic pain includes the use of gabapentin, tricyclic antidepressants, α2-δ calcium channel ligands, selective serotonin reuptake inhibitors and local anesthetics4. Gabapentin is recommended as the standard of care in neuropathic pain5. However, the absorption of gabapentin follows saturation kinetics upon oral administration, and even at higher doses its bioavailability does not improve. This renders a need for dose-titration to achieve the expected therapeutic effects6. Other antineuropathic drugs, including anticonvulsants, tricyclic antidepressants and opioids, have a limited effect and possess abuse liabilities or intolerable side-effects. Thus, there is a continuous demand for novel antineuropathic drugs that could be safer and more effective. The recent approach of repositioning therapeutic agents for discrete indications may prove useful in the exploration of certain neuroprotective, anti-inflammatory and antioxidant agents to be added as an add-on therapy to the primary recommended therapy for neuropathy and associated pain. In view of this repositioning approach, many recent preclinical and clinical evaluations have highlighted diverse effects of omeprazole other than its proton pump inhibitory actions. Omeprazole is a proton pump inhibitor that is widely used in the treatment of peptic ulcer7.
In vitro and in vivo evaluations have shown that omeprazole has carbonic anhydrase inhibitory8, anti-inflammatory9 and antioxidant10 activities. Omeprazole, following in vitro co-incubation with isolated human neutrophils, was shown to reduce chemotaxis and oxidative radical production induced by the bacterial chemotactic tetrapeptide N-formyl-Met-Leu-Phe (fMLP)11. Additionally, omeprazole also antagonizes the inflammatory response of mouse macrophages infected with Salmonella enterica. In this in vitro study, it delayed IκB degradation, blocked nitric oxide production and reduced the secretion of proinflammatory cytokines from the infected macrophages12. The anti-inflammatory activity of omeprazole and other proton pump inhibitors (PPIs) in a murine model of asthma, mediated through inhibition of STAT-6 (signal transducer and activator of transcription-6) activation in IL-4 and IL-13 signalling, has also been reported13. On the other hand, omeprazole also reduces the LPS-induced release of TNF-α and IL-6 from human microglial and human monocyte cultures in vitro. This effect is proposed to contribute to the neuroprotective effect of omeprazole against microglial and monocytic toxicity14. Further, it has also been revealed to reduce IFN-γ-induced astrocyte toxicity and phosphorylation of STAT314. Moreover, it also inhibits carrageenan-induced acute paw inflammation in rats10,15. Considering these findings, we hypothesized that omeprazole should reduce post-injury nerve damage and inhibit the development of nerve injury-related neuroinflammation and neuropathic pain. The present study was designed to investigate the effects of chronic oral administration of omeprazole employing the CCI-induced neuropathic pain model in rats.

## Results

### Effect of omeprazole on cold, warm & mechanical allodynia in CCI-induced neuropathic pain

Figure 1A–C illustrates the effects of omeprazole on cold, warm & mechanical allodynia during CCI-induced neuropathic pain in rats.
In the plantar test to determine cold allodynia, the paw withdrawal latency (PWL) was significantly decreased in the CCI-control group as compared to the sham group (3.86 ± 1.16 sec vs. 14.4 ± 1.17 sec, P < 0.001). Oral treatment with omeprazole (50 mg/kg/day) for 14 days significantly increased the PWL as compared to the CCI-control animals [9.37 ± 0.56 sec vs. 3.86 ± 1.16 sec; F (117.3) DF = 4, P < 0.001]. A similar effect was observed in the GP-treated group when compared with its respective control group (7.62 ± 0.34 sec vs. 3.86 ± 1.16 sec; P < 0.001; Fig. 1A). Figure 1B shows the effect of omeprazole on warm allodynia after CCI surgery. The baseline PWL of the CCI-induced hind paw was 2.610 ± 0.94 sec when challenged with warm water, whereas the sham-operated rats did not show any change in warm allodynia. Omeprazole (50 mg/kg/day/oral) treatment for 14 days significantly increased the PWL as compared to the CCI-control group (8.82 ± 0.57 sec vs. 2.610 ± 0.94 sec; [F (42.74) DF = 4, P < 0.001]). The paw withdrawal pressure threshold (PWT) was estimated using the electronic von Frey apparatus. The CCI-control group showed a significant decrease in PWT as compared to sham-operated animals (6.103 ± 0.86 g vs. 18.7 ± 0.19 g, P < 0.001, Fig. 1C). The omeprazole treatment significantly increased the PWT as compared to the respective control group (13.31 ± 1.27 g vs. 6.103 ± 0.86 g; [F(17.23) DF = 4, P < 0.001]).

### Effect of omeprazole on motor nerve conduction velocity in CCI-induced neuropathic pain

MNCV was evaluated on the 14th day after the respective treatments. There was a significant reduction (17.68 ± 1.01 mm/sec; P < 0.001) in the MNCV in the CCI-control group. The rats treated with omeprazole 50 mg/kg/day/oral for 14 days showed a significant increase (30.50 ± 0.91 mm/sec) in the MNCV as compared to the CCI-control group [F (15.62) DF = 4, P < 0.001]. It is noteworthy that the alleviative effect of omeprazole on the MNCV was comparable to that of gabapentin (Fig. 2).
### Effect of omeprazole on biochemical parameters in CCI-induced neuropathic pain

Oxidative stress status was evaluated by the measurement of MDA as a marker of lipid peroxidation, and of GSH as well as SOD and catalase as markers of the non-enzymatic and enzymatic antioxidant defence systems. CCI induced a state of marked oxidative stress, as demonstrated by a significant increase (405.4 ± 55.97 μg/mg of protein) in the MDA level and a significant decrease in the GSH level (18.45 ± 2.59 μg/mg of protein) and in the activities of SOD (7.613 ± 1.77 U/mg of protein) and catalase (7.84 ± 1.32 U/mg of protein) in sciatic nerve homogenates as compared to the control group. Treatment with omeprazole 50 mg/kg/day/oral for 14 days alleviated the effect of CCI-induced oxidative stress on the levels of MDA and GSH. It also had a significant restorative effect on the activities of the SOD and catalase enzymes as compared to the CCI-control group (Fig. 3).

### Effect of omeprazole on proinflammatory cytokines in CCI-induced neuropathic pain

To further define the mechanisms by which cytokine signalling may be associated with CCI-induced neuropathic pain, we examined the effect of omeprazole on the expression of the pro-inflammatory cytokines TNF-α, IL-1β and IL-6 in the nerve tissue homogenate. A significant increase in the cytokine levels was observed in the CCI-control group (TNF-α [38.8 ± 1 vs. 23.2 ± 0.9 pg/mg of protein respectively], IL-6 [761.7 ± 1.8 vs. 675.5 ± 0.7 pg/mg of protein respectively] and IL-1β [859 ± 0.7 vs. 656.4 ± 1.2 pg/mg of protein respectively]) compared to the normal group. The groups treated with omeprazole (50 mg/kg/day/oral) for 14 days showed a significant reduction in these elevated cytokines (TNF-α [25.5 ± 0.8 vs. 23.2 ± 0.9 pg/mg of protein respectively], IL-6 [676.2 ± 1.0 vs. 656.4 ± 1.2 pg/mg of protein respectively] and IL-1β [666.5 ± 1.1 vs.
656.4 ± 1.2 pg/mg of protein respectively]) almost to the normal levels, indicating that omeprazole exerts a consistent inhibitory effect on cytokine release (Fig. 4).

### Effect of omeprazole on tissue architecture of sciatic nerve in CCI-induced neuropathic pain

CCI induced a noticeable histological perturbation, as revealed in the longitudinal nerve sections, including axonal swelling, neutrophil migration, an increase in the number of Schwann satellite cells and derangement of the nerve architecture. Treatment with omeprazole 50 mg/kg/day/oral for 14 days protected the sciatic nerve from the CCI-induced structural damage and inflammatory changes (Fig. 5; Table 1).

### Effect of omeprazole on LPS-induced oxidative stress in primary glioblastoma U-87 cells

To study the impact of omeprazole in vitro, we first induced ROS by LPS treatment (500 ng/ml for 20 min) in primary glioblastoma U-87 cells. After LPS stimulation, we treated the cells with graded concentrations of omeprazole and carried out a series of cell-based assays. To check the antiproliferative activity of omeprazole on LPS-induced cells, an MTT cell viability assay was performed. Figure 6A revealed that omeprazole decreased the viability of LPS-stimulated, ROS-induced cells dose-dependently, with an IC50 of 100 μM. However, no appreciable cytotoxicity was observed when cells were treated with omeprazole (0–200 μM) without LPS stimulation (data not shown). Interestingly, no cytotoxicity was noted after treatment with LPS alone for 20 min either (data not shown). Flow cytometric analysis of ROS production by DCFH-DA staining revealed a 25% ROS-positive population after LPS induction, which was reduced to 15% after 50 μM omeprazole treatment. A completely diminished (less than 1%) ROS-positive population was noted at 100 μM omeprazole exposure in LPS pre-treated cells (Fig. 6B). H2O2 was used as a positive control for ROS generation.
LPS-treated cells displayed reduced activities of SOD and catalase, which were restored after omeprazole exposure (Fig. 6C,D). Omeprazole elevated the SOD and catalase activities even above basal levels at 100 μM (Fig. 6C,D). Next, we measured the cytokine profile (TNF-α, IL-6 and IL-1β) after omeprazole exposure in LPS pre-treated cells. LPS increased cytokine levels compared to untreated cells. Interestingly, omeprazole dose-dependently decreased the LPS-induced cytokine levels in the cells. Omeprazole treatment brought the cytokine levels almost down to basal levels at 200 μM (Fig. 6E–G).

## Discussion

The present study demonstrated that in addition to its PPI effect, omeprazole exhibits a protective effect by improving nerve conduction velocity and PWL; restoring the endogenous antioxidant system; preventing the increased levels of inflammatory cytokines such as TNF-α, IL-1β and IL-6; reducing lipid peroxide metabolism; and preserving the structure and morphology of the sciatic nerve via inhibiting proinflammatory cytokines. The present study shows for the first time that omeprazole exerts its neuroprotective effect not only through its antioxidant effect but also in part by inhibiting cytokine release. Therefore, our study provides direct evidence that the cytokine signalling pathway plays a key role in regulating oxidative stress and subsequent nerve injury in CCI-induced neuropathic pain. To the best of our knowledge, this is the first study to describe the neuroprotective effect of omeprazole through inhibiting cytokine release in the CCI-induced model of neuropathic pain in rats. Several other drugs, like minocycline16 and atorvastatin3, are reported to attenuate CCI-induced neuropathic pain through anti-inflammatory and antioxidant actions. Omeprazole has previously been reported as an anti-inflammatory13,17, antioxidant18, carbonic anhydrase and cytokine release inhibitor8,19 and neuroprotective agent14.
Recently, there has been increased interest in the repositioning of proton pump inhibitors as anticancer agents through targeting the thioesterase domain of fatty acid synthase20. In light of these preclinical studies, we evaluated omeprazole for its antineuropathic efficacy in the CCI-induced neuropathy model in rats and in the ROS-induced model in the U-87 human neuronal cell line. We selected the oral dose of 50 mg/kg of omeprazole for evaluation of the antineuropathic efficacy. At this dose it significantly inhibited carrageenan-induced acute paw inflammation. Such efficacy of omeprazole was reported earlier by El-Nezhawy10. The oral dose of 50 mg/kg/day of gabapentin was similarly selected based on a prior report on its antineuropathic effect in the CCI rat model21. CCI of the sciatic nerve is the most commonly used model to induce neuropathic pain in rats, and it mimics the pathophysiology of neuropathic pain in humans22. In the CCI model, the loose ligation compresses the nerve fibres and induces nerve damage, with a consequent release of various inflammatory mediators from mast cells, neutrophils and macrophages. CCI induces a persistent neuropathic pain state characterized by spontaneous pain, allodynia and hyperalgesia3,22. The causative factors of neuroinflammation induced by CCI include pro-inflammatory cytokines, prostaglandins and reactive oxygen species. In addition to the inflammatory mediators, an increased concentration of extracellular H+ ions at the site of injury lowers the pH of the extracellular fluid. This acidic pH activates TRPV1 (transient receptor potential vanilloid receptor 1, an acid-sensitive ion channel) which serves as a sole sensor for protons23. Activated TRPV1 alters calcium levels in the sensory neurons and sensitizes the capsaicin-sensitive neurons24.
The peripheral injury induced by CCI also leads to the release of tachykinins and neurotransmitters such as glutamate, calcitonin gene-related peptide and γ-amino butyric acid. Prolonged release and binding of these substances to neural receptors activates the N-methyl-D-aspartate receptors, which causes an increase in the intracellular calcium levels25. Increased intracellular calcium through N-type calcium channels plays an important role in the maintenance of the pain sensation in the central nervous system26. The other remarkable observation obtained in the present study was the ability of omeprazole to reduce the cytokine levels in CCI injury in rats. The pathogenesis of CCI-induced neuropathic pain and hyperalgesia involves a role of the immune cells that infiltrate the damaged nerve27 and the inflammatory immune mediators like IL-628, IL-1β29 and TNF-α30. Damage to the nerve activates neurons and glial cells and releases pro-inflammatory mediators like TNF-α, IL-1β and IL-6. In the present study, it was found that CCI-control animals showed a significant rise in the levels of these cytokines, whereas omeprazole significantly decreased the elevated levels of cytokines. Other studies suggest that omeprazole possesses anti-inflammatory activity; it has been reported to reduce vascular permeability and experimentally induced colitis by suppressing elevated levels of inflammatory mediators such as neutrophils, IL-1β, TNF-α and IFN-γ14,28. An in vitro investigation on stimulated mouse macrophages has shown that omeprazole reduces the release of inflammatory cytokines. Omeprazole and other proton pump inhibitors are also known to inhibit STAT6 activation in IL-4 and IL-13 signalling and thereby exert anti-inflammatory effects13. LPS induced ROS in U-87 human glioma cells, which elevated pro-inflammatory cytokines (TNF-α, IL-6 and IL-1β).
Omeprazole suppressed the elevated inflammatory cytokines, restoring the normal profile, which revealed the antineuropathic efficacy of omeprazole via reduction of ROS in vitro. Hence, the antineuropathic efficacy of omeprazole may involve its inhibitory effects on the release of these inflammatory cytokines. Omeprazole reduces the chemotaxis of neutrophils11, which might also contribute to the decreased amounts of cytokines in the nerve homogenates. Omeprazole is also known to protect astrocytes from IFN-γ-induced toxicity. An in vitro study has demonstrated its neuroprotective effects against monocytic and microglial damage14. In addition to reducing the chemotaxis of neutrophils, omeprazole reduces oxidative stress by scavenging hydroxyl radicals and also inhibits oxidative stress-induced, DNA damage-related apoptosis18. The CCI-induced ischemic hypoxia causes disturbance in the secondary metabolites in the afflicted nerve fibres and induces oxidative stress. Further, the decreased nerve energy and degeneration of the nerve fibres due to neuronal ischemia reduce the MNCV31,32. In the present study, omeprazole inhibited the decrease in the MNCV. It was also observed that there was a significant increase in the PWT and PWL in the mechanical and thermal allodynia tests in omeprazole-treated rats. This effect may be attributed to the antioxidant and anti-inflammatory properties of omeprazole. Omeprazole reduced the oxidative stress induced by CCI, as evident from the reduced malondialdehyde and the restoration of the depleted GSH, catalase and SOD. Thus, the antioxidative effect of omeprazole may be considered one of the mechanisms of its antineuropathic effect. Considering the time-tested safety profile of omeprazole and its antineuropathic efficacy observed in this study, it is suggested that omeprazole may be considered for further evaluation in preclinical and clinical studies against painful neuropathic conditions.
Further investigation in this direction may lead to the repositioning of omeprazole in the treatment of inflammatory and painful disease conditions. Whether the present conclusions can be extrapolated to a clinical scenario remains to be determined in clinical studies.

## Methods

### Animals

Adult male Wistar rats (180–250 g) were procured from the laboratory animal facility of our Institute. They were housed under standard temperature/humidity conditions and environment (12 h light/dark cycle). All animals were provided a standard pellet diet and water ad libitum at all times except during the estimation of the behavioural parameters. The animals were maintained in conformity with the regulations laid down by the Committee for the Purpose of Control and Supervision of Experiments on Animals (CPCSEA) constituted under the Prevention of Cruelty to Animals Act, 1960, Ministry of Environment and Forests, Government of India. The experimental protocols were approved by the Institutional Animal Ethics Committee of R. C. Patel Institute of Pharmaceutical Education and Research, Shirpur, Dist-Dhule, Maharashtra, India (Protocol approval # IAEC/RCPIPER/2014-15/09). All the tests complied with the recommendations of the International Association for the Study of Pain.

### Chemicals

Omeprazole was obtained from Pharmachem R&D Laboratories, India. Gabapentin was gifted by Mylan Laboratories, India. Cytokine ELISA Ready-SET-Go kits for mouse IL-1β (Cat: 887013-22; Batch No. E09323-1645), IL-6 (Cat: 837064-22; Batch No. E09358-1645) and TNF-α (Cat: 837324-22; Batch No. E09479-1645) were procured from e-Biosciences Incorporation, USA. Freshly prepared drug suspensions in 0.5% carboxymethyl cellulose were used for oral administration to rats. LPS (Cat: L2880; Lot No. 025M4040V) was purchased from Sigma-Aldrich, St. Louis, Missouri, USA.

### Induction of CCI in rats

The CCI surgery was performed as described by Aswar et al.22.
The rats were anaesthetized with pentobarbital sodium (60 mg/kg, intraperitoneal). The common sciatic nerve of the right hind limb was exposed at the middle of the thigh by blunt dissection through the biceps femoris. Proximal to the trifurcation of the sciatic nerve, about 5–7 mm of the nerve was freed of adhering tissue and four loose ligatures (4.0 silk) were placed around it approximately 1 mm apart. After performing the nerve ligation, the muscular and skin layers were immediately sutured and povidone-iodine solution was applied externally. After the surgery, the rats were kept in individual cages and allowed to recover. The respective drug treatments were initiated on the next day after the surgery.

### Cell culture and treatment

The human glioma cell line U-87 was grown and cultured in DMEM containing 10% FBS, 100 U/ml penicillin, 100 μg/ml streptomycin and 1.5 mM L-glutamine in a humidified atmosphere of 5% CO2 at 37 °C. After confirming that the cells had attained 80% confluence, the media was replaced with fresh media containing 500 ng/ml LPS for 20 min for ROS induction. Various concentrations of omeprazole were added to the LPS pre-treated cells for another 24 h prior to performing the other experiments. A fixed concentration (10 μM) of H2O2 applied for 30 min was used to produce ROS as a positive control.

### MTT cell viability assay

To check the cytotoxicity of omeprazole in LPS-mediated, ROS-induced U-87 cells, we performed an MTT cell viability assay according to the protocol described earlier33. Briefly, 8,000–10,000 cells were seeded in triplicate in a 96-well plate and grown to 80% confluence. Then, the cells were treated with 500 ng/ml LPS for 20 min for ROS induction. The LPS-containing media was aspirated and the cells were further treated with varied concentrations of omeprazole for another 24 h. Then, 0.05% MTT reagent was added to each well and incubated overnight for the formation of formazan crystals.
The colour intensity was measured spectrophotometrically (Berthold, Germany) at 570 nm after dissolving the formazan crystals in DMSO. Data were calculated and represented as percent viability against omeprazole concentrations.

### Experimental design

The anti-inflammatory activity of orally administered omeprazole at 10, 30 and 50 mg/kg doses was measured in the rat model of carrageenan-induced paw edema (data not shown). At the dose of 50 mg/kg, omeprazole significantly inhibited the induction of carrageenan-induced paw edema. Therefore, the dose of 50 mg/kg was chosen to further investigate the efficacy of omeprazole against CCI-induced neuropathy, and it was compared with a well-reported antineuropathic dose of gabapentin (50 mg/kg p.o.). After CCI induction, the rats were allowed to habituate for 3 days. Treatment with omeprazole was initiated on the next day after CCI surgery. The thermal and mechanical allodynia were measured as described by Demir2 on the 3rd, 7th, 11th and 14th days after surgery. The paw withdrawal latency (PWL) was observed with a maximum cut-off time of 20 sec. For the determination of thermal allodynia, the right paw of each rat, up to the ankle joint, was immersed in warm water (40 ± 1 °C) and cold water (12 ± 1 °C). The mechanical allodynia was measured using the electronic von Frey apparatus with super-tip probes (2390 series, IITC Life Sciences Incorporation), and the paw withdrawal threshold (PWT) was observed with a cut-off pressure of 30 g. On the 14th day post-surgery, the rats were anesthetized with pentobarbital sodium (60 mg/kg, intraperitoneal) in a temperature-controlled atmosphere (25 °C). Sciatic–tibial MNCV was measured by stimulating proximally at the sciatic notch and distally at the knee via bipolar needle electrodes (Power Lab/ML856; AD Instruments, Australia; frequency 0.10 Hz, duration 0.1 ms, amplitude 1.5 V).
After a single stimulus, the compound muscle action potential was recorded from the first interosseous muscle of the hind paw by unipolar pin electrodes. The recording was a typical biphasic response with an initial M-wave, which is a direct motor response due to stimulation of motor fibres. The MNCV was calculated as the ratio of the distance (mm) between both sites of stimulation divided by the difference between proximal and distal latencies measured in ms31. After recording of the MNCV, the rats were sacrificed using an overdose of pentobarbital sodium (intraperitoneal) and the injured right sciatic nerve was isolated along with 1 cm segments on the proximal and distal sides of the CCI injury. A 5 mm central portion of the isolated nerve segment was further processed for histological examination. Sections of 4 μm thickness were obtained and stained with haematoxylin and eosin. The stained sections were examined under the light microscope for structural alterations, including fibre derangement, swelling of nerve fibres and the presence of activated satellite cells and Schwann cells. A 10% homogenate of the remaining segments of the sciatic nerve from each rat was prepared in ice-chilled phosphate buffer (50 mM, pH 7.4). The homogenate was centrifuged at 2000 g for 20 min at 4 °C and aliquots of the supernatant were used to estimate the content of lipid peroxidation, measured as malondialdehyde (MDA), reduced glutathione (GSH), catalase and superoxide dismutase (SOD), as follows. Lipid peroxidation in the nerve tissue was determined by measuring the MDA content as described by Ohkawa et al.34. Briefly, 0.2 ml of the tissue homogenate was mixed with 0.2 ml of 8.1% sodium dodecyl sulphate, 1.5 ml of 30% acetic acid (pH 3.5) and 1.5 ml of 0.8% thiobarbituric acid. The reaction mixture was heated for 60 min at 95 °C and then cooled on ice. After cooling, 1.0 ml of distilled water and 5.0 ml of n-butanol:pyridine (15:1 v/v) solution were added and the mixture was centrifuged at 5000 rpm for 20 min.
The absorbance of the generated pink colour in the organic layer was measured at 532 nm. The reagent 1,1,3,3-tetraethoxypropane (Sigma Chemicals, USA) was used as the MDA standard and the levels were expressed as μg/mg of protein. The neuronal GSH was estimated by the method of Moron et al.35. Briefly, 100 μl of tissue homogenate was mixed with 100 μl of 10% trichloroacetic acid and vortexed. The contents were then centrifuged at 5000 rpm for 10 min. Subsequently, 0.05 ml of supernatant was mixed with a reaction mixture containing 3.0 ml of 0.3 M phosphate buffer (pH 8.4) and 0.5 ml of DTNB. Within 10 min, the absorbance was measured spectrophotometrically at 412 nm. The concentration of GSH was determined from a standard curve produced using commercially available standard GSH (Sigma Chemicals, USA). The levels of GSH were expressed as μg/mg of protein. Catalase activity was estimated by the method described by Aebi36. Briefly, to 50 μl of tissue supernatant, a cocktail of 1.0 ml of 50 mM phosphate buffer (pH 7) and 0.1 ml of 30 mM hydrogen peroxide was added. The absorbance, read as a reduction in optical density, was measured spectrophotometrically every 5 sec for 30 sec at 240 nm. The activity of catalase was expressed as U/mg protein. SOD activity was determined by the method described by Marklund and Marklund37. Briefly, to 25 μl of tissue supernatant, a cocktail of 100 μl of 500 mM Na2CO3, 100 μl of 1 mM EDTA, 100 μl of 240 μM NBT, 640 μl of distilled water, 10 μl of 0.3% Triton X-100 and 25 μl of 10 mM hydroxylamine was added. The readings were recorded spectrophotometrically in kinetic mode at intervals of 1 min up to 3 min at 560 nm. The enzyme activity was expressed as U/mg protein. As the IC50 of omeprazole in LPS-stimulated U-87 cells was 100 μM, with no detectable cytotoxicity in the non-induced cells, for the other in vitro experiments the LPS-mediated, ROS-induced U-87 cells were treated with 50, 100 and 200 μM of omeprazole.
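As a generic illustration of the percent-viability calculation used in MTT assays (not the authors' exact analysis; the absorbance readings and the helper function names below are invented for the sketch), percent viability and a simple linear-interpolation IC50 estimate can be computed as:

```python
# Percent viability from MTT absorbances, plus a linear-interpolation IC50 estimate.
def percent_viability(a_treated, a_control):
    """Viability of a treated well relative to the untreated control."""
    return 100.0 * a_treated / a_control

def ic50_linear(concs, viabilities):
    """Interpolate the concentration at which viability crosses 50%."""
    points = list(zip(concs, viabilities))
    for (c0, v0), (c1, v1) in zip(points, points[1:]):
        if v0 >= 50.0 >= v1:
            return c0 + (v0 - 50.0) * (c1 - c0) / (v0 - v1)
    return None  # viability never crossed 50% in the tested range

a_control = 1.20                         # absorbance of untreated wells at 570 nm
absorbances = [1.20, 0.96, 0.60, 0.24]   # invented readings at 0, 50, 100, 200 uM
concs = [0, 50, 100, 200]

viab = [round(percent_viability(a, a_control), 1) for a in absorbances]
ic50 = ic50_linear(concs, viab)
print(viab)  # -> [100.0, 80.0, 50.0, 20.0]
print(ic50)  # -> 100.0
```

In practice curve-fitting software typically fits a four-parameter logistic rather than interpolating linearly; the sketch only shows the arithmetic behind the reported IC50 of 100 μM.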
Total cellular lysate was prepared using modified RIPA lysis buffer and 50 μg of protein was used to determine the activities of SOD and catalase. The quantification of TNF-α, IL-1β and IL-6 was performed both in the homogenate and in the cell culture supernatant using 50 μg of protein with the commercially procured ELISA kits37.

### Statistical analysis

The statistical analysis was performed using GraphPad Prism version 6.0 software, USA. The results are expressed as mean ± S.E.M. The data from the behavioural parameters were analysed by repeated-measures one-way analysis of variance (ANOVA). The data sets of the biochemical parameters were analysed using one-way ANOVA followed by Dunnett's post-hoc test, and Bonferroni's multiple comparison test was used for the cytokine analysis. The results are expressed as F (DFn, DFd). The statistical significance of the difference in the central tendencies of the treatment groups as compared to the CCI group was designated as **p < 0.005 and ***p < 0.001, whereas the significance of the difference of the CCI and LPS groups compared to the control group is shown as ###p < 0.001. The significance of the difference of H2O2 (positive control for ROS in the in vitro experiments) compared to the control was designated as $p < 0.001.
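The one-way ANOVA used for the biochemical comparisons can be sketched in Python with SciPy (a generic illustration, not the authors' GraphPad workflow; the group values below are invented):

```python
from scipy.stats import f_oneway

# Invented paw-withdrawal latencies (sec) for three groups of five rats
sham = [14.0, 15.1, 13.2, 14.6, 13.9]
cci_control = [3.9, 3.2, 4.4, 3.6, 4.1]
cci_omeprazole = [9.4, 9.9, 8.8, 9.1, 9.6]

# One-way ANOVA across the three groups
f_stat, p_value = f_oneway(sham, cci_control, cci_omeprazole)
print(f"F = {f_stat:.1f}, p = {p_value:.2e}")

# A significant omnibus result would then be followed by a post-hoc test
# (e.g. Dunnett's, comparing each treated group against the CCI-control group).
```

With well-separated group means like these, the omnibus p-value falls far below the 0.001 threshold used in the paper.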
Building a Brain - Distributing Machine Learning Models in NoSQL

Author :: Kevin Vecmanis

In this post I walk through an architecture model for building better operational intelligence into VanAurum by distributing and accessing many machine learning models in MongoDB, a popular open source NoSQL database framework. In this article you will learn:

• Terminology differences between NoSQL and SQL
• Pulling structured SQL data using SQLAlchemy
• Training a machine learning model using MLAutomator
• Storing the model in MongoDB with all the details required to load and make predictions
• How to load a model from MongoDB

Introduction

Operational Intelligence (OI) is a term that describes one of the more immediate and practical use cases of machine learning and AI in business. As machine learning gets more and more capable, the possibilities for OI are going to grow with it. Wikipedia has this to say about OI:

The purpose of OI is to monitor business activities and identify and detect situations relating to inefficiencies, opportunities, and threats and provide operational solutions. Some definitions define operational intelligence as an event-centric approach to delivering information that empowers people to make better decisions, based on complete and actual information.

You can define operational intelligence however you like, but I regard it as just varying degrees of analysis automation. Questions that at one time would need to be answered manually by a person are now coming under the purview of OI. Consider the following questions:

• How does my customer purchasing behaviour change on days that it's rainy?
• Who are my customers most likely to convert to a premium service given that they already purchase the standard service?
• What 3 asset classes have the highest a priori probability of rising over the next 2 months?

These are all questions that require you to source data, build a model, and project that model forward into the future.
There is real leg-work required for people to do this, so why should we expect anything different from an AI system delivering OI insights? With any intelligent or semi-intelligent system, I think it's unreasonable to expect it to know something until it knows it.

Consider yourself and everything that you currently know. You're reading the words on this page without any thought or consideration for the individual letters in the words. You're likely not even consciously thinking about individual words - you're just reading, and the words are stimulating "concepts" in your mind that your brain is stringing together from past "models" it has learned from. At one point in your life you didn't know what the letter A was. A capital A has three lines in it, and at one point you didn't even know what a line was. These are all things we learn to identify over time, through exposure and repetition. Learning is the process of piling one abstract concept on top of another:

• Lines make letters
• Letters make up words
• Words make up sentences
• Sentences express concepts
• The concept in a sentence can elicit even higher level concepts - humour, irony, paradox.
• Sentences can comprise paragraphs, pages, and books that can represent an entire life's worth of abstraction and ideas.

The message here is that you don't know something until you do. So if we're going to build an operational intelligence framework, why should the process be any different? In the rest of this post I'm going to walk through the implementation of an OI system I'm building for VanAurum. The code here is as robust as it needs to be for demonstration purposes, but no more. I hope you enjoy it.

Thoughts on System Architecture

Building an operational intelligence framework in the manner I'm going to discuss requires a solid data infrastructure and foundation.
In my opinion, 90-95% of the time spent designing any holistic machine learning system should be spent thinking through, designing, and building the data infrastructure. Its importance can't be overstated - it's like the foundation of a skyscraper. It also requires extensive engagement with the business professionals in your organization. Regardless of how good your data infrastructure is, if you're not collecting the data that's needed to make smart business decisions or add customer value, it's all for nothing. The framework we're looking at here is as follows:
1. Extract data from structured data sources: In this case I'm pulling data from a PostgreSQL database hosted on Heroku. This database contains all the structured data VanAurum uses to generate market analysis.
2. Transform the data into a format applicable to each learning instance, which I'll describe below.
3. Fit an optimal machine learning pipeline to the data.
4. Store the pipeline in a NoSQL database with all of the metadata required to access the same database the model was trained on (so that it can perform predictions in the future on an updated version of the database table).
5. Retrieve the models and their metadata so that they can be used for OI.

So what kind of stuff do I want to do with this OI layer? In particular, I want VanAurum to be able to answer questions like:
• What is the probability that Gold prices will be higher in 5 days? 20 days? 161 days?
• What five assets have the highest probability of trading lower in one month?

How the models are trained, and the extent to which this data can be relied on in relatively non-deterministic systems like financial markets, is a topic for another article.
Hypothetically speaking, these questions would be fairly easy to answer if you had a database table of probabilities with a schema roughly like this:

TABLE: Probabilities
SCHEMA:
    (PRIMARY_KEY) AssetName: TYPE char
    1_Day_Probability: TYPE float
    2_Day_Probability: TYPE float
    3_Day_Probability: TYPE float
    4_Day_Probability: TYPE float
    ...
    N_Day_Probability: TYPE float

You could answer the first question like:

SELECT 5_Day_Probability, 20_Day_Probability, 161_Day_Probability
FROM Probabilities
WHERE (AssetName = "GOLD")

And you could answer the second question like:

SELECT TOP 5 AssetName, 21_Day_Probability
FROM Probabilities
ORDER BY 21_Day_Probability ASC
-- Note: VanAurum predicts the probability of a rise. Probability of a fall = (1 - probability of rise)

Simple enough, right? The goal of the OI system is to make intelligent insights easy enough to access that business professionals using standard analytics software (like Mode or MetaBase) can get them straight from the user interface. This type of simple query could easily be handled by off-the-shelf OI software - and this is our goal. Lots of business professionals don't have the time, desire, or knowledge to perform complex SQL queries - and this is the value that OI analytics software can deliver. But what if our Probabilities table didn't exist? Answering these questions en masse would become difficult. This is where our distributed model cluster (brain) comes into play. The intention of this cluster of models is to push value-added information into a structured dataset with a schema that makes it more easily accessible by analytics software. With a robust underlying data-ingestion framework, a cluster of models can be built on top of it to facilitate any type of OI that the data permits.

Structure of the Distributed Model Cluster (Brain)

NoSQL databases lend themselves well to storing machine learning models.
Some of the reasons are:
• You don't need a predefined schema - not a deal breaker here, but if you don't need one, why make one?
• Very well suited to hierarchical data storage - you'll see why this benefits us when you see VanAurum's structure below.
• Horizontally scalable - which means you can easily cluster NoSQL databases. This is a huge benefit for accumulating machine learning models because you want to be able to quickly and easily store and access models. It wouldn't be much of a "brain" if you couldn't!

Quick note: Going forward, when I mention the following NoSQL terms you can think of the SQL equivalent:
• Collection = Table
• Document = Row/Record

All of the other shortcomings of NoSQL databases - like the fact that they don't support complex relations and queries - aren't really a design concern for us. Why? Because all we want is a quick way to store and load machine learning models in a hierarchical format. Each model will be explicitly called when it needs to be, and there are no relationships between any of the models. We also want it horizontally scalable so that we can add thousands of models without significant degradation in retrieval time. If this isn't an application for NoSQL, I'm not sure what is!

Structure of VanAurum's Model Cluster

You can see the hierarchical and horizontal nature of this model cluster - this will make the cluster extremely scalable if we use the right cloud technology (like MongoDB on Heroku or DynamoDB on AWS). What we want to do is train models on predicting a comprehensive variety of N-day return periods for whatever asset class we want to make predictions on. Assets could be things like: Gold, Copper, the S&P 500 index, Bitcoin, ratios, or individual stocks. In this article I'm going to walk through a demonstration for one asset class - Gold - and one return period - 25 days. I'm using the Python library for MongoDB - PyMongo. Let's dive into the code!
Pulling Structured Data From PostgreSQL

The following function pulls data from a PostgreSQL database hosted on Heroku and transforms it into a training data set with a single return period as a target. Important: There are helper functions used by some of the following methods. All the helper functions are included at the end of this article: List of helper functions. I'll include a # Helper tag beside each method that is a helper. I wanted to stick to the point here and not clutter the code section too much. The next thing we want to do is load this .csv file and split it into feature/target ndarray types that are readable by our machine learning algorithms.

Prepare Data for Machine Learning Algorithms

Note that in this method we return not only X and Y training/target data, we also return features. We're going to use this in our NoSQL document entry so that when we retrieve models we know what columns from our base data table were used in the training process.

Training the Model With MLAutomator

Here we're going to do a few things:
• Declare several variables that get saved in the MongoDB document along with each model:
  • asset: The asset table to pull from our PostgreSQL database.
  • sources: VanAurum's assets are split into their own database tables. If joins are required to reconstruct the training dataset, the complete list of tables is listed here. In this case we're only using a single table.
  • return_period: Used to create our target column. Also gets saved to MongoDB as an identifier for the model (see the diagram shown previously).
• Create the training dataset using our method create_training_dataset().
• Generate X and Y training data, along with features, which gets saved as metadata for our MongoDB model so that the training table can be reconstructed.
• Train our model using MLAutomator - this is my open source library for fast model selection using Bayesian optimization.
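The helper listings themselves didn't survive in this excerpt, so here is a small, dependency-free sketch (mine, not the article's actual code) of the target-construction step described above: label each row 1 if the close is higher return_period rows ahead, else 0. The variable and column names are illustrative.

```python
# Hypothetical sketch of the target-construction step (not the article's code).
# `closes` stands in for the asset's closing-price column pulled from Postgres.

def make_return_target(closes, return_period):
    """1 if the price is higher `return_period` rows ahead, else 0.

    The last `return_period` rows have no forward return and are dropped,
    mirroring the dropna() step a pandas implementation would use.
    """
    return [1 if closes[i + return_period] > closes[i] else 0
            for i in range(len(closes) - return_period)]

closes = [100, 101, 99, 103, 102, 105, 104]
targets = make_return_target(closes, 2)  # 2-day horizon for illustration
```

In the real pipeline the prices would come from the database pull described above, and the 25-day Gold target would be make_return_target(closes, 25).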
See it on Github here: MLAutomator

Saving the Model and Metadata to MongoDB

As mentioned previously, there are a few bread crumbs we want to save as metadata so that our models are going to be useful in the future.
• We want to be able to reconstruct the data the model was trained on so that we can make predictions on the same data in the future.
• MLAutomator saves a complete pipeline (pre-processing, feature selection, and training), so we don't need to worry about those here.
• We do need to save:
  • The schema from the original structured data that was used - features
  • model_name, so that we can reference the model later.
  • A list of all the source tables used to build the data.
  • The connection string to access the database that the training data was pulled from. Why? Because new data we want to make predictions on is going to appear here in the future.

Tying These Methods Together

To tie these last few code segments together, here's what that complete process looks like when we add save_model_to_brain() and load_model_from_brain().

Checkpoint - What have we done so far?

So we have basically built this from our original cluster structure:
• We have one collection, GOLD, and one document, RETURNS25.
• We've built the methods necessary to:
  • Pull structured data
  • Build training data sets
  • Train models
  • Save models into our cluster
  • Load models from our cluster

Filling in the rest of our cluster is just a matter of looping through the asset tables and desired return periods and executing these methods.

Looping back to our original objective…

What was the point of this again? Well, we want to break down as many barriers as possible between business professionals and intelligent data insights. Operational Intelligence systems don't become intelligent on their own - they require robust data infrastructure, purposeful data collection, and a streamlined process for identifying areas where OI clusters like this should be built.
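The save/load code is also missing from this excerpt; the sketch below (my reconstruction, with hypothetical field names) shows the shape of a document carrying the pickled pipeline plus the metadata bread crumbs listed above. With PyMongo you would hand this dict to collection.insert_one() and fetch it back with collection.find_one(); here we only round-trip it in memory.

```python
# Hypothetical document layout for the "brain" - not the article's exact code.
import pickle

def build_model_document(model, model_name, asset, sources, features, conn_string):
    """Bundle a trained pipeline with the metadata needed to reuse it later."""
    return {
        "model_name": model_name,    # e.g. "RETURNS25"
        "asset": asset,              # collection key, e.g. "GOLD"
        "sources": sources,          # tables needed to rebuild the training data
        "features": features,        # columns the model was trained on
        "conn_string": conn_string,  # where fresh prediction data will live
        "model_blob": pickle.dumps(model),
    }

def load_model_from_document(doc):
    """Recover the pipeline and its metadata from a stored document."""
    meta = {k: v for k, v in doc.items() if k != "model_blob"}
    return pickle.loads(doc["model_blob"]), meta

doc = build_model_document({"stand_in": "pipeline"}, "RETURNS25", "GOLD",
                           ["GOLD"], ["close", "volume"], "postgresql://…")
model, meta = load_model_from_document(doc)
```

The {"stand_in": "pipeline"} dict is just a picklable placeholder for the MLAutomator pipeline object.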
A lot can be accomplished from an analytics perspective just by having the right data structured in the right schema - machine learning doesn't need to be applied everywhere. For some insights you do need machine learning - and if you need to cover a wide breadth of questions, NoSQL clusters like this are one way to build it at scale! I hope you enjoyed this post. Below are the helper methods and additional libraries that were used in the main methods above. Feel free to check them out.

Kevin Vecmanis
# zbMATH — the first resource for mathematics

A zonotope associated with graphical degree sequences. (English) Zbl 0737.05057
Applied geometry and discrete mathematics, Festschr. 65th Birthday Victor Klee, DIMACS, Ser. Discret. Math. Theor. Comput. Sci. 4, 555-570 (1991).
[For the entire collection see Zbl 0726.00015.]

Author’s abstract: “Let $$D_n$$ denote the convex hull in $$\mathbb{R}^n$$ of all (ordered) degree sequences of simple $$n$$-vertex graphs. Using the fact that $$D_n$$ is a zonotope, an explicit generating function is found for the number of these degree sequences. The $$f$$-vector of $$D_n$$ is found using Zaslavsky’s theory of signed graph colorings. Finally we give a generalization based on a result of Fulkerson, Hoffman, and MacAndrew.”

##### MSC:
05C30 Enumeration in graph theory
52Bxx Polytopes and polyhedra

##### Keywords:
degree sequences; zonotope
# Publications / IMPAN Journals / Dissertationes Mathematicae

## IMPAN Journals

Articles in PDF format are available to subscribers who have paid for online access, after signing the institutional user licence. Journal issues up to 2009 are freely available.

## On the numerical index with respect to an operator

### Volume 547 / 2020

Dissertationes Mathematicae 547 (2020), 1-58
MSC: Primary 46B04; Secondary 46B20, 46B25, 46L05, 47A12, 47A30.
DOI: 10.4064/dm805-9-2019
Published online: 2 January 2020

#### Abstract

The aim of this paper is to study the numerical index with respect to an operator between Banach spaces. Given Banach spaces $X$ and $Y$, and a norm-one operator $G\in \mathcal{L}(X,Y)$ (the space of all bounded linear operators from $X$ to $Y$), the numerical index with respect to $G$, $n_G(X,Y)$, is the greatest constant $k\geq 0$ such that $$k\|T\|\leq \inf_{\delta \gt 0} \sup\{|y^\ast(Tx)|\colon y^\ast\in Y^\ast,\,x\in X,\,\|y^\ast\|=\|x\|=1,\,\operatorname{Re} y^\ast(Gx) \gt 1-\delta\}$$ for every $T\in \mathcal{L}(X,Y)$. Equivalently, $n_G(X,Y)$ is the greatest constant $k\geq 0$ such that $$\max_{|w|=1}\|G+wT\|\geq 1 + k \|T\|$$ for all $T\in \mathcal{L}(X,Y)$. Here, we first provide some tools to study the numerical index with respect to $G$. Next, we present some results on the set $\mathcal{N}(\mathcal{L}(X,Y))$ of the values of the numerical indices with respect to all norm-one operators in $\mathcal{L}(X,Y)$. For instance, $\mathcal{N}(\mathcal{L}(X,Y))=\{0\}$ when $X$ or $Y$ is a real Hilbert space of dimension greater than 1 and also when $X$ or $Y$ is the space of bounded or compact operators on an infinite-dimensional real Hilbert space. In the real case $$\mathcal{N}(\mathcal{L}(X,\ell_p))\subseteq [0,M_p] \quad \text{and} \quad \mathcal{N}(\mathcal{L}(\ell_p,Y))\subseteq [0,M_p]$$ for $1 \lt p \lt \infty$ and for all real Banach spaces $X$ and $Y$, where $M_p=\sup_{t\in[0,1]}\frac{|t^{p-1}-t|}{1+t^p}$.
For complex Hilbert spaces $H_1$, $H_2$ of dimension greater than 1, $\mathcal{N}(\mathcal{L}(H_1,H_2))\subseteq \{0,1/2\}$ and the value $1/2$ is taken if and only if $H_1$ and $H_2$ are isometrically isomorphic. Moreover, $\mathcal{N}(\mathcal{L}(X,H))\subseteq [0,1/2]$ and $\mathcal{N}(\mathcal{L}(H,Y))\subseteq [0,1/2]$ when $H$ is a complex infinite-dimensional Hilbert space and $X$ and $Y$ are arbitrary complex Banach spaces. Also, $\mathcal{N}(\mathcal{L}(L_1(\mu_1),L_1(\mu_2)))\subseteq \{0,1\}$ and $\mathcal{N}(\mathcal{L}(L_\infty(\mu_1),L_\infty(\mu_2)))\subseteq \{0,1\}$ for arbitrary $\sigma$-finite measures $\mu_1$ and $\mu_2$, in both the real and the complex cases. Also, we show that the Lipschitz numerical range of Lipschitz maps from a Banach space to itself can be viewed as the numerical range of convenient bounded linear operators with respect to a bounded linear operator. Further, we provide some results which show the behaviour of the value of the numerical index when we apply some Banach space operations, such as constructing diagonal operators between $c_0$-, $\ell_1$-, or $\ell_\infty$-sums of Banach spaces, composition operators on some vector-valued function spaces, taking the adjoint of an operator, and composition of operators.

#### Authors

• V. N. Karazin Kharkiv National University, pl. Svobody 4, 61022 Kharkiv, Ukraine
• Departamento de Análisis Matemático
• Departamento de Análisis Matemático
• Antonio Pérez, Instituto de Ciencias Matemáticas (CSIC-UAM-UC3M-UCM), Campus Cantoblanco, C/ Nicolás Cabrera, 13–15
# Coronas of Spaces and the Borsuk Shape Category

I was thinking about some topological ideas, especially those relating to shape theory, and came across some interesting constructions that seem to relate to shape theory, but that I can't quite situate within my own knowledge - so I was wondering if there are any related ideas out there in math. Let $$\beta$$ be the Stone-Cech compactification functor. For any space $$A$$, let us define a new space $$CA$$ as follows: Consider the space $$\beta(A\times [0,1))$$ and the projection $$\beta \pi_2:\beta(A\times [0,1))\rightarrow [0,1]$$ given by extending the projection to the second coordinate. Define $$CA$$ to be the fiber of $$1$$ under $$\beta \pi_2$$ - note that if $$A$$ is itself compact Hausdorff, this is just $$CA=\beta(A\times [0,1))\setminus A\times [0,1)$$. This space somehow "expands" $$A$$ to be large enough that any map on $$A\times [0,1)$$, when thought of as a homotopy missing its endpoint, will in some sense have a limit. More precisely, if $$Q=\prod_J[-1,1]$$ for some indexing set $$J$$ with projections $$\pi_j:Q\rightarrow[-1,1]$$ for each $$j\in J$$ and $$B$$ is some compact subset of $$Q$$, one may explicitly describe the maps $$CA\rightarrow B$$: they are continuous functions $$f:A\times [0,1)\rightarrow Q$$ such that for any open set $$N$$ containing $$B$$, there is some $$\alpha$$ such that if $$t>\alpha$$ then $$f(a,t)\in N$$, taken under the equivalence relation that $$f\sim g$$ if for every $$j\in J$$ and $$\varepsilon>0$$ there is some $$\alpha$$ such that if $$t>\alpha$$ then $$|\pi_j(f(a,t))-\pi_j(g(a,t))|<\varepsilon$$. Note that every compact Hausdorff space $$B$$ can be embedded in some such $$Q$$.
This characterization seems very similar to the definition of morphisms in Borsuk's Shape Category, except that morphisms in that category are taken up to homotopy, whereas these continuous functions $$CA\rightarrow B$$ are genuine continuous maps - I haven't worked out all the details, but I think that under an appropriate notion of equivalence (e.g. some sort of homotopy in the corona of $$A\times [0,1)\times [0,1]$$, using the two embeddings of $$CA$$ into this with last coordinate $$0$$ and $$1$$), one can get "morphisms" in the shape category as equivalence classes of maps $$CA\rightarrow B$$. This leaves me with a natural question: can one use this construction to handle some of the nasty spaces (e.g. the Warsaw circle) that shape theory handles, without working in a category defined only up to homotopy? More explicitly, suppose we have some set of reasonably nice spaces (maybe something like compact Hausdorff) - can we define a category whose morphisms from $$A$$ to $$B$$ are continuous maps $$CA\rightarrow B$$? The stumbling block is that there's no obvious composition law to use - but maybe there is some clever composition law, or perhaps $$C$$ can be given the structure of a comonad via some map $$CA\rightarrow CCA$$ (although no such maps come to mind!), or maybe the explicit description of the maps $$CA\rightarrow B$$ can somehow be used (although it seems to become difficult to show well-definedness when one tries to do this). Is there a way to make a category of "shapes" not up to homotopy along these lines? Is there any literature related to this idea?
1. Jun 27, 2004

### 1+1=1

I have two questions to ask y'all. First, I need to prove every even perfect number is a triangular number. The formula is t(n) = 1 + 2 + ... + n = (n(n+1))/2. I know that a is a perfect number when sigma(a) = 2a; for example, sigma(6) = 1+2+3+6 = 12. This is as far as I can get. Can anyone shed some light on this for me?

Second: find the least residue of (n-1)! mod n for several values of n and find a general rule. I know that by least residue it basically means the remainder: it is in the form a = bq + r, where r is the least residue. Again, can anyone show me what I'm missing here for this problem? Even if you are just viewing this post, please say anything about what you are thinking about the problem...

Last edited: Jun 27, 2004

2. Jun 28, 2004

### Gokul43201

Staff Emeritus
2. Look at the residues for prime n. Then look at today's thread titled 'Prime Factorial Conjecture' in this subforum.

3. Jun 28, 2004

### AKG

You've asked these questions in another thread, where I've responded. In case you missed it, check [post=244396]my post[/post] along with the thread Gokul43201 suggested.

4. Jun 29, 2004

### robert Ihnot

For the first question, regarding triangular numbers and perfect numbers, the two facts we need to know are the form of the even perfect numbers (the only kind ever found) and a way of relating a triangular number to a perfect number. The form of the perfect number is: (2^(p-1))((2^p) - 1). Here we must have that (2^p) - 1 is prime, which implies that p is also prime. Now all that is necessary is to find an if-and-only-if relationship between a triangular number and something like a square, and see if that also holds for a perfect number.

Last edited: Jun 29, 2004
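A quick numerical check of both claims (my sketch, not from the thread): every even perfect number 2^(p-1)((2^p) - 1) equals the triangular number t(2^p - 1), since t(2^p - 1) = (2^p - 1)(2^p)/2 = 2^(p-1)((2^p) - 1); and the least residue of (n-1)! mod n is n - 1 when n is prime (Wilson's theorem) and 0 for composite n > 4.

```python
from math import factorial

def triangular(n):
    """The n-th triangular number, t(n) = n(n+1)/2."""
    return n * (n + 1) // 2

# Even perfect numbers: 2**(p-1) * (2**p - 1) with 2**p - 1 a Mersenne prime.
for p in (2, 3, 5, 7):  # gives the perfect numbers 6, 28, 496, 8128
    assert 2**(p - 1) * (2**p - 1) == triangular(2**p - 1)

# Least residue of (n-1)! mod n for small n: n-1 for primes, 0 for composite n > 4.
residues = {n: factorial(n - 1) % n for n in range(2, 12)}
```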
# Which test to use: Chi-squared, Fisher's exact or some other? I asked 200 survey participants the same sequence of eight multiple-choice questions with four answer options (A, B, C and D). Here are the results: • All A's: 58 • All B's: 1 • All C's: 2 • All D's: 0 • Mixtures of A's, B's, C's and D's: 139 I want to work out whether these results are statistically significant, which I take to mean whether the probability that they occurred randomly is less than 0.001. I understand the expected value for each combination of answers – all A's, all B's, all C's, all D's, and each mixture of A's, B's, C's and D's – to be 200/(4^8), which is 0.003051758. So here's the problem. I've read that a Chi-squared test requires all of the expected values to be greater than five, and in this case none of them is greater than five. I've also read that none of the observed values can be zero, and in this case one of them is zero. I've seen something about artificially combining categories to bring all the expected and observed values above five and zero respectively, but I don't understand how this can be done without artificially affecting the p-value. Finally, I've read a few things about Fisher's exact test, but all of them seem to suggest that I'd be allowed only a few rows of values, whereas in this case I have 65,536 (i.e. 4^8). What is the most appropriate method in this circumstance? • By your comment on Fisher's exact test are you referring to the 'norm' of limiting it to 2x2 tables? Otherwise, please provide a link to the reference in question. – Simon Jan 2 '17 at 12:01 • You've caught me out there. What I'm alluding to is the inability of any free online calculators or Excel functions I've found to handle more than six rows. Suffice it to say I haven't the mathematical ability to do the calculations manually. 
– Remster Jan 2 '17 at 13:00

With your concern about using Fisher's exact test: as I understand it, the test can be applied to tables larger than 2x2, but the reason for this norm is that it is computationally expensive otherwise. The greater concern related to Fisher's exact test would be the assumption of fixed totals. An example of this is given here. Quoted: An example [of fixed totals] would be putting 12 female hermit crabs and 9 male hermit crabs in an aquarium with 7 red snail shells and 14 blue snail shells, then counting how many crabs of each sex chose each color (you know that each hermit crab will pick one shell to live in). Your study design does not meet this condition. Other options are:
• A G-test (likelihood ratio $\chi^2$). Some recommend using this when the sample size is small. In R you can use the likelihood.test function in the Deducer package.
• An exact multinomial test. Recommended here and elsewhere. See the function 'xmulti' of the R package XNomial. Read the vignette.

Now that you've reached the limits of Excel, give another software package a go.

• Conditioning on marginal totals has always been controversial. I don't really see it as an issue. Because the exact test has to calculate multinomial probabilities for all tables as extreme or more extreme than the given table, it can be computer-intensive even by today's standards. – Michael R. Chernick Jan 2 '17 at 14:04
• I looked at the reference you (Simon) link about the chi square test. It contains a lot of good information and references, but I did not see anything about the asymptotic nature of it. – Michael R. Chernick Jan 2 '17 at 14:11
• @Michael, I see that I have my own homework to do digesting this among other material. – Simon Jan 2 '17 at 14:32
• Putting aside any judgment about the links you provide, I liked your answer and gave an upvote. If you want to invest time in studying the asymptotic properties of Pearson's chi square, by all means go for it. – Michael R. Chernick Jan 2 '17 at 14:50
• Thanks for your replies. I'll have a look at those links. In the meantime, can you please confirm whether my calculation of expected values is correct? – Remster Jan 2 '17 at 16:23
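One piece of the question can be checked without any special software (a sketch, not a substitute for the exact multinomial test recommended above): treat "all A's" alone as a binomial event with per-respondent probability p = (1/4)^8 and compute the exact upper-tail probability of seeing 58 or more such respondents out of 200, summing in log space to avoid underflow.

```python
# Exact binomial upper tail for the "all A's" count alone (illustrative only;
# it ignores the other extreme cells that a full multinomial test would count).
from math import exp, lgamma, log

def binom_upper_tail(k, n, p):
    """Exact P(X >= k) for X ~ Binomial(n, p), summed in log space."""
    def log_pmf(j):
        return (lgamma(n + 1) - lgamma(j + 1) - lgamma(n - j + 1)
                + j * log(p) + (n - j) * log(1 - p))
    logs = [log_pmf(j) for j in range(k, n + 1)]
    m = max(logs)
    return exp(m) * sum(exp(t - m) for t in logs)

p_all_a = (1 / 4) ** 8          # one respondent answering A to all 8 questions
tail = binom_upper_tail(58, 200, p_all_a)
```

This addresses only the "all A's" count, not the full answer pattern, but it already lands far below the 0.001 threshold in the question.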
# Beta Function (G Dataflow)

Evaluates the beta function and the regularized incomplete beta function.

Beta Function
Evaluates the beta function.

Regularized Incomplete Beta Function
Evaluates the regularized incomplete beta function.
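For reference, a small numerical cross-check in Python (illustrative only, not part of this node's documentation): the complete beta function via the gamma-function identity B(a, b) = Γ(a)Γ(b)/Γ(a + b), and the regularized incomplete beta function I_x(a, b) by midpoint quadrature of its defining integral (adequate for a, b ≥ 1).

```python
from math import gamma

def beta(a, b):
    """Complete beta function, B(a, b) = gamma(a)*gamma(b)/gamma(a+b)."""
    return gamma(a) * gamma(b) / gamma(a + b)

def reg_inc_beta(x, a, b, steps=100_000):
    """Regularized incomplete beta I_x(a, b) by midpoint quadrature (a, b >= 1)."""
    h = x / steps
    total = sum(((i + 0.5) * h) ** (a - 1) * (1 - (i + 0.5) * h) ** (b - 1)
                for i in range(steps))
    return total * h / beta(a, b)
```

For example, B(2, 2) = 1/6, and I_0.5(2, 2) = 0.5 by the symmetry of the integrand.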
# Angular Momentum vs Linear Momentum

1. Dec 4, 2014

### Maged Saeed

1. The problem statement, all variables and given/known data
The following figure shows an overhead view of a thin rod of mass M = 2.0 kg and length L = 2.0 m which can rotate horizontally about a vertical axis through the end A. A particle of mass m = 2.0 kg travelling horizontally with a velocity $$v_i=10 j \space m/s$$ strikes the rod (which was initially at rest) at point B. The particle rebounds with a velocity $$v_f=-6 j\space m/s$$. Find the angular speed of the rod just after the collision.

2. Relevant equations
$$(I\omega)_i=(I\omega)_f$$

3. The attempt at a solution
I have tried to solve this question using the previous equation, but I'm stuck with the momentum of the ball. Should it be linear or angular? I mean, which of the following equations should I use:
1) $$(mvl)_{ball}=(I\omega)_{rod} +(mvl)_{ball}$$
$$(2 \times 2 \times 10j)=\left(\frac{2 \times 2^2}{12}+2 \times 1^2\right)\omega+2\times 2 \times (-6j)$$
The moment of inertia of the rod is:
$$\frac{ml^2}{12}+mh^2$$
where h is the distance from the center of mass to the axis of rotation.
2) $$(mv)_{ball}=(I\omega)_{rod}+(mv)_{ball}$$
$$(2 \times 2)=\left(\frac{2 \times 2^2}{12}+2 \times 1^2\right)\omega+(2 \times -6j)$$
Equation 2 seems to lead to the correct answer, but why should I take the linear momentum instead of the angular momentum?!

2. Dec 4, 2014

### BvU

Which equation has the same dimension for all terms?

3. Dec 4, 2014

### Maged Saeed

Yeah... How come I didn't pay attention to this point..
Thanks..
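For completeness (my sketch, not from the thread): taking angular momentum about the axis at A makes every term an angular momentum, which is the point of BvU's dimensional hint. The strike distance of point B comes from a figure not reproduced here, so d = 2.0 m (the far end of the rod) is an assumption in the numbers below.

```python
# Conservation of angular momentum about the pivot A:
#   m*v_i*d = I_rod*omega + m*v_f*d
# d (point B's distance from A) is assumed to be 2.0 m; the figure is not shown.
M, L = 2.0, 2.0        # rod mass (kg) and length (m)
m = 2.0                # particle mass (kg)
v_i, v_f = 10.0, -6.0  # particle velocity before/after (m/s, +y direction)
d = 2.0                # moment arm of the strike point (assumed)

I_rod = M * L**2 / 3   # rod about one end: M*L^2/12 + M*(L/2)^2 = M*L^2/3
omega = m * (v_i - v_f) * d / I_rod   # angular speed in rad/s
```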
# A relation about invertible and nonsingular matrices

In the lectures I am following, we are trying to show that $AB = I \implies B=A^{-1}$, given that $A$ and $B$ are $n \times n$ square matrices. Of course we don't know if $A$ and $B$ are invertible or nonsingular, etc. First we need to show these. It follows in the lectures that for a $\vec y \in \mathbb{R}^n$, $$A(B\vec y) = \vec y$$ and thus for every $\vec y$ there is a solution $B \vec y$. Thus the system with coefficient matrix $A$ is consistent for every right-hand side. Then the proof says $A$ must be nonsingular. I am lost at this reasoning. How did we jump to the fact that $A$ is nonsingular? How can I know that the solutions $B \vec y$ are unique for all $\vec y$? Maybe the system has infinitely many solutions? For reference, this is from Theodore Shifrin's Math 3500 Lectures on YouTube, Day 33, around time 35:00.

• You may have a look at this question math.stackexchange.com/q/3852/72031 – Paramanand Singh Jul 31 '17 at 17:05
• $\det(AB) = \det(A)\det(B).$ Suppose $A$ is singular: then $\det(A) = 0$ and $\det(AB) = 0.$ Since $\det(I) = 1$, $AB \ne I$, violating the given condition that $AB = I$. So $A$ and $B$ are non-singular. – Doug M Jul 31 '17 at 17:08
• @DougM: $AB \ne I$? – NickD Jul 31 '17 at 17:09
• @Nick I have cleaned up the language and posted it as an answer. – Doug M Jul 31 '17 at 17:16
• Looks good. (filler to satisfy min length). – NickD Jul 31 '17 at 18:42

$AB = I \implies A, B$ are non-singular matrices. Suppose $A$ is singular, so $\det(A) = 0.$ Since $\det(AB) = \det(A)\det(B),$ if $\det(A) = 0$ then $\det(AB) = 0.$ But $\det(I) = 1,$ so $\det(AB) \ne \det(I) \implies AB \ne I.$ This violates the given condition that $AB = I.$ Hence $A$ is non-singular. The same logic can be applied to $B.$

• But I don't have a problem with the claim in the proposition. What I don't understand is how we reach the conclusion that $A$ is nonsingular from the fact that $A (B \vec y) = \vec y$ is consistent. Doesn't this require the $B \vec y$ to be unique for all $\vec y$?
– meguli Jul 31 '17 at 20:00 • If $B$ is non-singular $B\mathbf x = B\mathbf y \implies \mathbf x =\mathbf y$ or $\mathbf u=B\mathbf y$ is a bijective map. – Doug M Jul 31 '17 at 20:04 • But we didn't know B was nonsingular, to begin with. It's easy from there once you are able to show one of A and B is nonsingular. – meguli Jul 31 '17 at 20:09 • Both $A$ and $B$ must be non-singular. If $B$ is singular then there exists a $\mathbf y$ such that $B\mathbf y = \mathbf 0$. If $A$ is singular and $B$ is non-singular then there exists a $\mathbf u$ such that $A\mathbf u = \mathbf 0$ and a $\mathbf y$ such that $B\mathbf y = \mathbf u$ – Doug M Jul 31 '17 at 20:12 • That makes $A ( B \vec y ) = \vec y$ still consistent. Only this time, trivial solution is not the only solution. So B can in fact be singular? – meguli Jul 31 '17 at 20:16
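A concrete numerical illustration of the theorem (mine, not from the thread): for a 2×2 pair with AB = I, the matrix B turns out to be a two-sided inverse, so BA = I as well.

```python
def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 5]]    # det(A) = 5 - 6 = -1, so A is nonsingular
B = [[-5, 2], [3, -1]]  # satisfies AB = I
I = [[1, 0], [0, 1]]
AB = matmul(A, B)
BA = matmul(B, A)
```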
# iarray.arctan2

iarray.arctan2(iarr1: IArray, iarr2: IArray)

Element-wise arc tangent of $$\frac{iarr_1}{iarr_2}$$, choosing the quadrant correctly.

Parameters
iarr1 (IArray), iarr2 (IArray)

Returns
angle – A lazy expression that must be evaluated via out.eval(), which will compute the angles in radians, in the range $$[-\pi, \pi]$$.

Return type
iarray.Expr

References
np.arctan2
Testing gravity with the positions of supermassive black holes # Overview Testing General Relativity on large scales is largely tantamount to searching for new fundamental interactions (‘‘fifth forces’’) between masses, mediated by dynamical fields beyond the metric tensor. An important competitor to General Relativity is galileon gravity, which introduces a new light scalar field with a Lagrangian that is symmetric under Galilean transformations. Historically the galileon has been a leading contender for explaining dark energy, but now it is viewed mainly as an archetype of ‘‘Vainshtein-screened’’ theories where the fifth force from the scalar field vanishes in high-density regions due to second derivative terms in the equation of motion. This behaviour arises in many theories beyond the Standard Model. A key feature of the galileon is that it couples to nonrelativistic matter but not to gravitational binding energy, violating the strong equivalence principle. This means that black holes – the only purely gravitational objects – are entirely unaffected by the galileon, while the stars, gas and dark matter in galaxies feel the full fifth force. As illustrated in Figure 1, this causes the supermassive black holes at the centres of galaxies to lag behind the other galactic components in the direction of an external galileon field. We have used this effect in a recent article to place stringent constraints on the strength of a galileon coupling to matter.1 # CSiBORG: Mapping the large-scale gravitational field To make predictions for black hole positions in galileon gravity, we need to know the fifth-force field on a galaxy-by-galaxy basis. To do this we introduced CSiBORG (Constrained Simulations in BORG), a suite of ~100 RAMSES N-body simulations using initial conditions sampled from the posterior of the BORG-PM algorithm. 
CSiBORG gives an accurate picture of dark matter structures within $$\sim 250$$ Mpc of the Milky Way with a mass resolution of $$4.4 \times 10^9 \text{M}_\odot$$, including full propagation of the uncertainties in the initial conditions. We use CSiBORG to map out the local galileon field in the linear, quasistatic approximation. Combined with a flexible model for halo structure, this allows us to calculate the expected galaxy–black hole offsets as a function of the galileon coupling coefficient and the radius within which the fifth force is suppressed by the Vainshtein mechanism, $$r_V$$. We apply this to $$\sim 2000$$ galaxies in which the offset has been measured by comparing optical images of galaxies to multi-wavelength observations of Active Galactic Nuclei. Marginalising over an empirical model describing astrophysical noise, we then use a Bayesian likelihood framework and MCMC algorithm to constrain the galileon parameters.

# Constraining cosmological galileons

We find no evidence that black holes are offset from the centres of their hosts in the direction or with the relative magnitude expected from galileons. This allows us to place strong constraints on the strength of the galileon fifth force relative to gravity, $$\Delta G/G_N$$. In the left panel of Figure 2 we show this constraint for four observational datasets as a function of $$r_V$$: our final bound, driven by the largest sample, is $$\Delta G/G_N < 0.16$$ at $$1\sigma$$ confidence for $$r_V \lesssim \text{Gpc}$$. In the right panel we translate this result to a constraint on the coupling coefficient $$\alpha$$ as a function of the lengthscale that appears in the galileon action, known as the crossover scale $$r_c$$. Figure 2 also shows previous constraints from Lunar Laser Ranging and the black hole in M87, as well as the expected relation between $$\alpha$$ and $$r_c$$ in DGP, a higher-dimensional modified gravity model that introduces galileons.
Enabled by BORG, ours is the first work to model a large-scale galileon field point-by-point in space. It is therefore the first to probe crossover scales as large as the observable universe, and the first to achieve statistically rigorous constraints. By supplementing our model with numerical solutions of the galileon equation of motion in the nonlinear regime it will be possible to push our bound to smaller $$r_c$$, superseding the Lunar Laser Ranging result and ruling out the self-accelerating branch of DGP. More generally, a Monte Carlo-based forward-modelling approach calibrated against simulations and marginalised over noise holds great promise for precision tests of fundamental physics with galaxy survey datasets.

Left: $$1\sigma$$ constraint on $$\Delta G/G_N$$ as a function of average Vainshtein radius, $$r_V$$, from four observational datasets. $$L_{eq}$$ is the length scale at which the matter power spectrum turns over. Right: Constraint on the coupling of a cubic galileon to matter, $$\alpha$$, as a function of the crossover scale, $$r_c$$, from lunar laser ranging (LLR), the black hole at the centre of M87, and our work. Our test probes larger-$$r_c$$ galileons than others because it models the full galileon field from large-scale structure.

1. D. J. Bartlett, H. Desmond & P. G. Ferreira, 2020, ‘‘Constraints on galileons from the positions of supermassive black holes’’, Phys. Rev. D submitted, arXiv:2010.05811

Authored by H. Desmond
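The statistical logic of the test can be illustrated with a toy model. This is emphatically not the authors' pipeline: the numbers, noise model and galaxy count below are all invented, and the real analysis marginalises over a far richer noise model. The sketch simply shows how, when measured offsets contain no signal, a flat-prior posterior over a non-negative coupling amplitude yields a one-sided 68% (1σ) upper bound.

```python
import math, random

# Toy version of the idea (all values invented): each galaxy i has a predicted
# black-hole offset response g_i to the fifth force, and the measured offset is
# r_i = (dG/G) * g_i + noise. We simulate signal-free data and derive a
# 1-sigma upper bound on dG/G from a grid posterior with a flat prior on [0, 0.5].
random.seed(0)

n_gal = 500
g = [random.uniform(0.5, 2.0) for _ in range(n_gal)]   # predicted responses (arbitrary units)
sigma = 1.0                                            # assumed astrophysical noise scale
r = [random.gauss(0.0, sigma) for _ in range(n_gal)]   # measured offsets: pure noise

def log_like(dg):
    return sum(-0.5 * ((ri - dg * gi) / sigma) ** 2 for ri, gi in zip(r, g))

grid = [i * 0.001 for i in range(501)]                 # dG/G in [0, 0.5]
logL = [log_like(a) for a in grid]
peak = max(logL)
w = [math.exp(l - peak) for l in logL]                 # unnormalised posterior
total = sum(w)

# smallest dG/G containing 68% of the posterior mass
acc = 0.0
for a, wi in zip(grid, w):
    acc += wi
    if acc >= 0.68 * total:
        bound = a
        break
print(f"toy 1-sigma upper bound on dG/G: {bound:.3f}")
```

With no injected signal the bound scales like the noise divided by the root of the effective sample size, which is why large offset catalogues drive the constraint.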
# Simple quick graph question, not a problem

1. Oct 4, 2009

### neutron star

What does this mean on a graph? http://img16.imageshack.us/img16/7046/picture19pc.png

I was given a y=f(x) graph and it has a multiple choice of like f(3) is greater than f(4) etc. But then it says $$f^r(4)$$ is greater than $$f^r(3)$$. What does that mean? I'm going to skip this one for now until I know what those mean.

Last edited by a moderator: May 4, 2017

2. Oct 4, 2009

Hi neutron star. Now it's a slightly ambiguous question, as $$f^r(x)$$ could mean raising the function to a power of r at x, i.e. $$[f(x)]^r$$, and I wouldn't be completely sure without knowing the context the question is asked in. However, generally $$f^r(x)$$ means the rth derivative of f(x) (this is known as Lagrange's notation for differentiation). So these two mean the same thing: $$f^{5}(x) = \frac{d^{5}y}{dx^5}$$ $$f^{r}(x) = \frac{d^{r}y}{dx^r}$$ obviously where y=f(x). So qualitatively $$f^r(a)$$ is the rth derivative of f(x) evaluated at x=a (ie plonk a in to the derivative in place of x :D). Now hopefully that makes sense to you. It might be that the former explanation is actually the correct one, but it should be evident to you now what it could possibly mean.

3. Oct 4, 2009

### neutron star

Ok, I have it in another problem and this problem doesn't make sense. It looks like this. http://img25.imageshack.us/img25/1750/picture20ag.png

Now, this made sense to me until I looked at a practice problem just like it in the book. Instead it said: If f(x)=x^3+4x, estimate $$f^l(3)$$ using a table with values of x near 3, spread by 0.001. The answer shown in the back of the book says $$f^l(3)\approx 31$$. So I plugged in 3 and 2.999 to x^3+4x and got around 39. Why is this? What is that $$x^l$$ thing?

4. Oct 4, 2009

### neutron star

5. Oct 4, 2009

Ah right neutron star, now I think I can give you a bit more help. Right, I think you are reading the notation wrong.
Now you wrote $$f^l(3)$$, however that is not what the question says: it will be f'(3), where that little "dash" in between the f and the (3) is just that, a dash. Read http://en.wikipedia.org/wiki/Notation_for_differentiation#Lagrange.27s_notation about Lagrange's notation (in fact it has a nice blown-up image of what you are confused with, so you will clearly be able to see what it is :D). I think that if you're not understanding the question simply because you're confusing the notation, just go talk with your teacher about it, and they will be able to explain much better any problems you have with notation. In light of this as well, when you said $$f^r(x)$$ in your original post, you again I think actually couldn't see the notation properly, so mistook ' for r, which if it was quite small writing is understandable, but now you should be able to discern what it should be in the context. Now in light of this new information, first work out the first derivative of f(x) = x^3 + 4x. Now put your values in to that. Notice anything :D. Hope that helps Neutron

Last edited by a moderator: Apr 24, 2017

6. Oct 4, 2009

### neutron star

I'm still confused because I don't know what to do with the +4x. Can you explain that? I've been working with f(x)=x^n and f'(x)=nx^(n-1). But I don't see how that would work out. And this problem is different than the other ones. The other problems were either just something like x^3 at x=2, and x^3+5 at x=1. But even in the second one I'm still confused about the +5 in it. Can you help?

7. Oct 4, 2009

Ok sure thing neutron. I was assuming that you had already covered basic calculus skills, but that's cool. So let's first look at the case where it's in the form ax. Here a represents an integer, just like the example you asked about, 4x. Now let's say f(x) = 10x. Now let's look at the ones you have been dealing with: f(x)=x^n and f'(x)=nx^(n-1).
Now what if I then say 10x = 10x^1? We just don't write that 1 in normally, in fact no one ever does because it can't be anything else. Now have another look at that and see if you can solve it :D. Now looking at constants, values that aren't multiplied by x, like the example you gave of 5. I could also write 5 as 5 = 5x^0. Again, using the form that you know already, how can you take this further :D. Have a real think about this now neutron, as the more you can reason yourself the better you'll be at it and the more you'll remember. It may take time, don't think you'll crack this in five minutes, it can take days or weeks or even longer sometimes to get a good grasp of the basics.

8. Oct 4, 2009

### neutron star

Alright, so for this problem: http://img25.imageshack.us/img25/1750/picture20ag.png

It would be f'(5) = 3(5^2) + 7 = 75 + 7 = 82, right?

9. Oct 5, 2009

### Slimsta

look at the function f(x) = x^3 + 7x. the derivative can be expressed in the form of: lim_{h→0} [f(x+h) − f(x)] / h. if you don't know limits yet, do this: f(x) = x^n + nx, f'(x) = nx^(n-1) + n. so you take the power of x down and reduce one from the original power of x; for the constant, nx, you just remove the x. i dunno if that helps, i really have to explain it in person for you to get it..

Last edited by a moderator: May 4, 2017
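The "table of values" estimate the book describes can be sketched in a few lines. This is an illustration of the method, not the book's own table: a central difference with spacing h approximates f'(a), and it reproduces both values discussed in the thread (31 for x^3+4x at x=3, and 82 for x^3+7x at x=5).

```python
# Central-difference derivative estimate, like reading a table of values
# spaced by h on either side of a.
def deriv_estimate(f, a, h=0.001):
    return (f(a + h) - f(a - h)) / (2 * h)

f1 = lambda x: x**3 + 4*x   # book example: f'(3) = 3*3**2 + 4 = 31
f2 = lambda x: x**3 + 7*x   # thread example: f'(5) = 3*5**2 + 7 = 82

print(deriv_estimate(f1, 3))  # ≈ 31
print(deriv_estimate(f2, 5))  # ≈ 82
```

For a cubic the central difference is exact up to an h² term, which is why such a table gives the answer to several decimal places.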
# Can averaged limits of sequences be realized as limits of sequences? (MathOverflow)

**Question** (deltuva, 2011-06-17): Let a summation method take a sequence $(x_n)$ to a net $(y_\alpha)$, where $\alpha$ runs over a partially ordered set, $y_\alpha=\sum c_{\alpha,n}x_n$ ($c_{\alpha,n}\geq 0$, $\sum_n c_{\alpha,n}=1$ for every $\alpha$ and $c_{\alpha,n}\to 0$ in $\alpha$ for every $n$). *Is it possible to find a sequence $(\alpha_m)$ of indices such that convergence of $(y_\alpha)$ implies convergence of $(y_{\alpha_m})$?*

Take the Abel summation method as an example: the set of indices is $(0, 1)$, and convergence of the Abel means $(y_r)$ yields that of $(y_{r_m})$, where $(r_m)$ is an arbitrary sequence tending to 1.

**Answer** (Martin Sleziak, 2011-06-17): If I understand your question correctly, you want the same sequence $(\alpha_m)$ for each sequence from the convergence field of your summation method.

I'll try to show that an ultralimit (http://en.wikipedia.org/wiki/Ultralimit) can be realized as a summability method in the way you described.

Let $\mathcal F$ be any free ultrafilter. Let us define $$D=\{(A,n); n\in A, A\in\mathcal F\}$$ and $$(A,n)\le (B,m) \Leftrightarrow A\supseteq B.$$ Then $(D,\le)$ is a directed set. For $\alpha=(A,n)\in D$ we define $$c_{\alpha,k}= \begin{cases} 1;&k=n,\\ 0;&k\ne n, \end{cases}$$ which means that $y_\alpha=x_n$.

Now $y_\alpha$ converges to $L$ if and only if $L$ is the $\mathcal F$-limit of the sequence $(x_n)$. (This is very similar to the usual correspondence between filters and nets in topological spaces; see e.g. Proposition 6.2 in http://www.math.uga.edu/~pete/convergence.pdf.) This implies that every bounded sequence is summable.

However, if we choose any sequence $\alpha_m=(A_m,n_m)$, then the convergence of $(y_{\alpha_m})$ is in fact the convergence of $(x_{n_m})$. For any given sequence $(n_m)$ it is easy to exhibit an example of a bounded sequence $x$ such that $(x_{n_m})$ is not convergent.

The above example shows that your claim is not true for arbitrary directed sets. However, if you work with a directed set which contains a cofinal subset (http://en.wikipedia.org/wiki/Cofinal_%28mathematics%29) of order type $(\mathbb N,\le)$, then it is obviously true. (Which is the case in the example you suggested.) I suspect that it should work even for directed sets having any countable cofinal subset. There is a post at math.stackexchange related to nets of this type: http://math.stackexchange.com/questions/41634/ordered-sets-that-are-like-sequences/
# Introducing a new operation

Algebra Level 1

Define the star operation as $$a\star b=a+ab+b.$$ Then define the star power operation as $$a^{\star n}=\underbrace{a\star a\star a\star\cdots\star a}_{a\, \mathrm{written} \, n \, \mathrm{times}},$$ with $$a^{\star 1}=a.$$ What is $$7^{\star7}?$$
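One way to see through this problem (a solution sketch, not the site's official one): since $$a\star b = a+ab+b = (1+a)(1+b)-1$$, star powers telescope to $$a^{\star n} = (1+a)^n - 1$$. Brute force agrees with the closed form:

```python
# The star operation and its iterated "star power".
def star(a, b):
    return a + a * b + b          # equals (1+a)*(1+b) - 1

def star_power(a, n):
    result = a                    # a^{*1} = a
    for _ in range(n - 1):
        result = star(result, a)
    return result

print(star_power(7, 7))           # 2097151
print((1 + 7) ** 7 - 1)           # 2097151, via the closed form (1+a)^n - 1
```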
# Test your Integration Skills 102!

Calculus Level 3

$\large \int (2 + \sin^2 x) \cot x \, dx = \, ?$

Clarification: $$C$$ denotes the arbitrary constant of integration.
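A hedged sanity check rather than an official solution: splitting the integrand as $$2\cot x + \sin x\cos x$$ suggests the antiderivative $$F(x) = 2\ln|\sin x| + \tfrac{1}{2}\sin^2 x + C$$, which can be verified numerically by differentiating F.

```python
import math

# Candidate antiderivative (an assumption to be checked, not given in the problem):
#   F(x) = 2*ln|sin x| + (sin^2 x)/2
def F(x):
    return 2 * math.log(abs(math.sin(x))) + math.sin(x) ** 2 / 2

def integrand(x):
    return (2 + math.sin(x) ** 2) * (math.cos(x) / math.sin(x))

# Compare F'(x) (central difference) against the integrand at a sample point.
x, h = 1.0, 1e-6
numeric = (F(x + h) - F(x - h)) / (2 * h)
print(abs(numeric - integrand(x)) < 1e-5)   # True
```

The check works because $$\cot x \cdot \sin^2 x = \sin x \cos x$$, so differentiating F really does return the original integrand wherever $$\sin x \neq 0$$.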
Determinant and trace of matrix (HELP)

Find the determinant of the following matrix:

4, -4, -8
-2, 2, 6
0, 0, -1

Here's my attempt: 4·(2×(−1) − 6×0) − (−4)·((−2)×(−1) − 6×0) + (−8)·((−2)×0 − 2×0), which gives 4·(−2) + 4·(2) − 8·0 = 0. Is this correct?? I'm also asked to find the trace. What is this and how do I find it? Thanks

Yes, this is correct. However, it could have been done more easily if you had taken the third row to calculate the determinant. Then you would see immediately that the determinant is (−1)(4·2 − (−2)(−4)) = 0. The trace is simply the sum of the diagonal elements.

Ah ok, thanks micromass. For the trace I obtained: 4 + 2 − 1 = 5. Which is simple enough. Now I'm asked to show that 6 is an eigenvalue of the matrix. How would I go about doing that? This is a new topic for me so I'm struggling a little. Thanks

What do you know about eigenvalues? Do you know what the characteristic polynomial is?

Not much at the moment I'm afraid. I don't know what the characteristic polynomial is.

Then you'll just have to do it the hard way. Let A be our matrix. You'll need to show that there exists a vector x such that Ax=6x. This is equivalent to saying that (A−6I)x=0. Thus you must show that the system (A−6I)x=0 has a non-zero solution...

Ok, so by I you mean an identity matrix?

Yes, I is the identity matrix!

So I have to basically subtract an identity matrix

6, 0, 0
0, 6, 0
0, 0, 6

(as it is 6I) from my matrix? Am I on the right track?

Yes, subtract those two matrices, and then solve the associated system of equations...

Ok, from that I get the matrix:

-2, -4, -8
-2, -4, 6
0, 0, -7

Using the previous determinant method I obtain: −2(28) + 4(14) − 8(0) = 0. Is this correct?

Yes, this is correct. So, what does a determinant of 0 tell you?

Erm, that there exists a non-zero solution?

Yes, so you have shown that 6 is an eigenvalue!

Brilliant, thanks micromass.
Just another question: say I'm asked to find the eigenvalues of the following matrix:

2, 1
1, 2

Do I simply compute det(A − λI), i.e. the determinant of

2−λ, 1
1, 2−λ

giving me (λ)² − 4(λ) + 3 = 0, so λ = 1 and λ = 3, which are the eigenvalues?

Yes, that is correct. In fact, the polynomial $$\lambda^2-4\lambda+3$$ is called the characteristic polynomial. It seems that you came up with that concept by yourself!

Haha ok, thanks micromass. I have one final question micromass: using the fact that 6 is an eigenvalue and the determinant, how would I find the remaining eigenvalues?
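The thread ends on an open question, and the standard facts it has already established are enough to answer it: for a 3×3 matrix the eigenvalues sum to the trace and multiply to the determinant. Here det = 0, so 0 is an eigenvalue; with 6 known, the third must be trace − 6 − 0 = 5 − 6 = −1. A sketch verifying all three (not part of the original thread):

```python
# The matrix from the thread.
A = [[4, -4, -8],
     [-2, 2, 6],
     [0, 0, -1]]

def det3(M):
    # Cofactor expansion along the first row of a 3x3 matrix.
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def char_poly_at(M, lam):
    # det(A - lam*I): zero exactly when lam is an eigenvalue.
    shifted = [[M[r][c] - (lam if r == c else 0) for c in range(3)]
               for r in range(3)]
    return det3(shifted)

trace = A[0][0] + A[1][1] + A[2][2]
print(det3(A), trace)                 # 0 5
for lam in (6, 0, -1):
    print(lam, char_poly_at(A, lam))  # each char_poly value is 0
```

As a cross-check, 6 + 0 + (−1) = 5 matches the trace computed earlier in the thread.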
# Property of radius of convergence

## Main Question or Discussion Point

I have a question regarding the radius of convergence and hopefully someone can help me with it. Suppose $$\sum n a_n z^{n-1}$$ is given; if its primitive exists, will these two series have the same radius of convergence?

LCKurtz
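The answer is yes, and the reason is visible numerically (an illustration, not a proof): by the Cauchy–Hadamard formula $$R = 1/\limsup |a_n|^{1/n}$$, and since $$n^{1/n}\to 1$$, multiplying coefficients by n (differentiating) or dividing by n+1 (taking a primitive) leaves R unchanged.

```python
# Example coefficients a_n = 2^(-n), for which R = 2.
# All three n-th roots converge to the same limit 0.5 = 1/R.
n = 1000
root_orig = (0.5 ** n) ** (1 / n)              # |a_n|^(1/n)         -> 0.5
root_deriv = (n * 0.5 ** n) ** (1 / n)         # |n * a_n|^(1/n)     -> 0.5
root_prim = ((0.5 ** n) / (n + 1)) ** (1 / n)  # |a_n/(n+1)|^(1/n)   -> 0.5

print(root_orig, root_deriv, root_prim)
```

The extra factors n and 1/(n+1) contribute only $$n^{\pm 1/n}\to 1$$ to the root test, which is why term-by-term differentiation and integration preserve the radius of convergence.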
# Revision history

I can't believe this is working:

Mat templateImage = ("template.jpg");

It won't on my computer.

Mat templateImage = imread("template.jpg");

I assume a copy/paste issue? Don't use cvCreateMat with the C++ interface:

Mat graySourceImage(sourceImage.rows,sourceImage.cols,CV_32FC1);

This is more "C++" like, and a compact way of writing it. Idem for:

sourceImage.copyTo(destinationImage); //Old
destinationImage = sourceImage.clone(); //Becomes

For the rotation, I suggest this way, with warpAffine, adapted from that sample.

Point center = Point( ori.cols/2, ori.rows/2 );
double angle = -50.0;
double scale = 1.;
/// Get the rotation matrix with the specifications above
Mat rot_mat( 2, 3, CV_32FC1 );
rot_mat = getRotationMatrix2D( center, angle, scale );
/// Rotate the warped image
warpAffine( ori, dst, rot_mat, ori.size() );

The more you use OpenCV functions, the more you will enjoy for free the speed-ups available in them. You could also use your GPU for the whole process, and thanks to these functions, you don't need CPU/GPU transfers! After all these fixes, if it still doesn't work, try to view all your intermediate images and be sure they are properly loaded.
# Does it make sense to have a world with a very quickly orbiting moon? I am writing a story in which a planet has a moon that orbits it about once a minute. In the story, the moon is pretty bright too, so the night sky has a little bit of a slow strobe light effect: 30 seconds really bright, 30 seconds dark, etc. The planet itself rotates pretty slowly, so the nighttime is about 48 hours long. The point is that the long nighttime is pretty annoying. My questions: • Is there any reason why this wouldn't be possible? • What would the effects of such a quickly orbiting moon be on this planet? (Another potentially relevant fact: the planet is about half the size of earth, with about half the gravity. It does have water, not clear yet exactly how much). • What would make the moon extra bright? An extra bright sun? Or could it be something else? • In order for something to orbit a planet faster, it must orbit the planet closer. There is a thing called the Roche Limit that is extremely relevant to this question. Basically, if you get too close to a planet, you risk being torn apart by tidal forces (if you're a moon-like thing). As a result, there is a lower bound on how close you can get, and therefore an upper bound on orbital speeds. – MozerShmozer Jun 28 '17 at 22:41 • On top of the problem with the Roche Limit, there is another physical limit with regard to orbital radius: actual size of the planet. For a planet to have half the gravity of the Earth, it will probably have to be somewhere between the size of Mars and Earth. The fastest you can orbit the Earth without having to counteract significant atmospheric drag is about 90 minutes. I don't know about Mars off the top of my head, but I doubt it's hugely different (though it is small and has less atmosphere, it also has less gravity, so orbits tend to be slower). – MozerShmozer Jun 28 '17 at 22:48 • So, with the moon, is that 9,492km? Or something like a 5-6 hour orbit? 
– MichaelHouse Jun 28 '17 at 22:54 • Ok cool, thanks. If I have the masses and radii of the planet and the moon is there a way to calculate the maximum orbital speed outside of the roche limit? – Mike Miller Jun 28 '17 at 23:01 • One other thing to note is that since the moon causes the tides, you would have crazy wave patterns (like having the tide change drastically every minute or two) – The Mattbat999 Jun 29 '17 at 2:07 The formula for orbital speed is $$v = \sqrt{\frac{G\cdot m_P}{r}}$$ Rearranging, the formula for orbital height is $$h = \frac{G \cdot m_P}{v^2} - R_P$$ Where $G$ is a constant. The mass of the planet is $m_P$. The orbital velocity in $\frac{m}{s}$ is $v$. The radius of the planet is $R_P$. And that gives you the height above ground as $h$. But what you actually want here is to calculate the orbital height purely in terms of the orbital period, not the velocity. $$h = \sqrt[3]{\frac{G\cdot m_P \cdot t^2}{4\pi^2}} - R_P$$ You describe the planet as half the mass and radius of earth. You give the orbital period as sixty seconds. So that gives us $$h = -1.6\cdot 10^6 m$$ So this would be about 1600 kilometers below ground. I think that we can safely say that this is not feasible. • So it cannot occur naturally. Could a planet capture a high-speed transiting object, even temporarily? Probably not at the OP's velocity... It would just fly past the planet, wouldn't it? – CaM Jun 29 '17 at 14:57 • @CM_Dayton Natural or not has nothing to do with it, it's simply how gravity works. Spacecraft obey the exact same orbital laws as moons, although, being strong they don't need to worry about the Roche limit. – Loren Pechtel Jun 29 '17 at 17:48 • @Loren being strong has little to do with it. Mainly they are safe because they are small. – Mołot Jun 29 '17 at 20:32 • @Mołot Being small lowers the Roche limit but doesn't protect you. The thing is the Roche limit is for bodies held together only by gravity. 
A satellite is held together by its own structural strength; if part of the satellite is outside its gravitational area it doesn't wander off. – Loren Pechtel Jun 29 '17 at 20:56

As other people have pointed out, you cannot have a moon orbiting that quickly. Nor is there a good reason for a super-bright moon if you think about it as a normal moon. However, you can get a similar effect. Imagine a slowly orbiting moon that was also a parabolic mirror focused on the surface of your planet; it would aim a very bright dot at a very small area of the planet. Now, if that moon wobbled, you would have a very bright dot that moved around on the planet, and you would still have your strobe effect.

# Binary White Dwarf Several Light Weeks Away

I can get you a light brighter than the moon that turns off for a minute once every five minutes, but it won't move in the sky. Put your planet around a normal star that is in a distant orbit with a pair of white dwarfs that orbit very close to each other, such as HM Cancri. Make one of the white dwarfs much older, and thus dimmer, than the other. Whenever the old white dwarf eclipses the young one (i.e., every five minutes) the majority of the light will be blocked. Note that for half the year, this blinking light will be in the daylight sky, and so less obnoxious. You could make the period shorter by putting the dwarfs closer together, probably down to about 2 minutes. You can make the light brighter by bringing the binary star closer to the star hosting your planet, but the X-ray radiation may begin to become a problem. Note that this arrangement puts a time limit on the existence of whatever civilization you are writing about: the blinking will get faster and faster, and in about 300,000 years the stars will merge and wipe out everyone on your planet.
Rearranging Brythan's answer delves more into this problem:

$$h = \sqrt[3]{\frac{G\cdot m_P \cdot t^2}{4\pi^2}} - R_P$$

can be changed into:

$$\frac{h}{R_P} = \sqrt[3]{\frac{G\cdot \rho \cdot t^2}{3\pi}} - 1$$

So long as $$\sqrt[3]{\frac{G\cdot \rho \cdot t^2}{3\pi}} \ge 1$$ an object can have an orbit at this period. This implies:

$$\rho \cdot t^2 \ge \frac{3\cdot\pi}{G} = 1.41214639 \times 10^{11} \frac{\text{kg}\,\text{s}^2}{\text{m}^3}$$

A planet of pure osmium, which would have the highest density while still having a surface humans could live on, would have a density of $$22{,}590 \frac{\text{kg}}{\text{m}^3}$$. This leaves you with a minimum of:

$$t \ge 2500\ \text{s}$$

If you want a denser planet, your planet's going to be radioactive. Handwaving to PTU is fun, but hassium in the island of stability will still keep your orbit to about half an hour.
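The two formulas above are easy to evaluate directly. The sketch below assumes specific planet parameters (half of Earth's mass and radius, one reading of "half the size") and simply demonstrates the two conclusions: a one-minute orbit would lie below the surface, and even an osmium-density planet cannot support a circular orbit shorter than roughly 2500 s.

```python
import math

G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
m_P = 0.5 * 5.972e24          # kg  (assumption: half of Earth's mass)
R_P = 0.5 * 6.371e6           # m   (assumption: half of Earth's radius)

def orbit_height(t):
    """Height above the surface of a circular orbit with period t seconds."""
    return (G * m_P * t ** 2 / (4 * math.pi ** 2)) ** (1 / 3) - R_P

print(orbit_height(60))       # negative: a 1-minute orbit is below ground

# Density bound: rho * t^2 >= 3*pi/G gives the minimum period for any density.
rho_osmium = 22590            # kg/m^3
t_min = math.sqrt(3 * math.pi / (G * rho_osmium))
print(t_min)                  # ≈ 2500 s
```

The exact depth below ground depends on which mass/radius combination you assume for a half-size planet, but the sign of the result (and hence the impossibility of the orbit) does not.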
# If x is the product of the positive integers from 1 to 8, inclusive

Intern (divakarbio7), 20 Jun 2010:

If x is the product of the positive integers from 1 to 8, inclusive, and if i, k, m, and p are positive integers such that $$x = 2^i*3^k*5^m*7^p$$, then i + k + m + p =

(A) 4 (B) 7 (C) 8 (D) 11 (E) 12

OPEN DISCUSSION OF THIS QUESTION IS HERE: https://gmatclub.com/forum/if-x-is-the- ... 46157.html

Math Expert (Bunuel), 21 Jun 2010:

divakarbio7 wrote: "If x is the product of the positive integers from 1 to 8, inclusive, and if i, k, m, and p are positive integers such that x = 2i3k5m7p, then i + k + m + p = ... I am not understanding the reasoning behind this problem."

It should be "if $$x=2^i*3^k*5^m*7^p$$".
$$x=8!=2*3*4*5*6*7*8=2*3*(2^2)*5*(2*3)*7*(2^3)=2^7*3^2*5^1*7^1=2^i*3^k*5^m*7^p$$ --> $$i=7$$, $$k=2$$, $$m=1$$, and $$p=1$$ --> $$i+k+m+p=7+2+1+1=11$$.

Intern (claudio hurtado), 30 Nov 2014:

Here the idea is to express each factor from one to eight, decomposed into its primes: 1 * 2 * 3 * 4 * 5 * 6 * 7 * 8 = 1 * 2 * 3 * (2*2) * 5 * (2*3) * 7 * (2*2*2) = 2^7 * 3^2 * 5^1 * 7^1. Then i = 7, k = 2, m = 1, p = 1, and i + k + m + p = 11.
SVP (PareshGmat), 30 Nov 2014:

Incidentally 2, 3, 5, 7 are the prime factors of x = 8!. Given $$x = 2^i * 3^k * 5^m * 7^p$$, we require (i + k + m + p). There is no need to find the individual values of i, k, m, p. Just write all factors as powers of primes and add the exponents: $$x = 2^1 * 3^1 * (2^2) * 5^1 * (2^1 3^1) * 7^1 * (2^3)$$, so i+k+m+p = 1+1+2+1+1+1+1+3 = 11.

Intern (claudio hurtado), 03 Dec 2014:

Following the logic of PareshGmat, there would be no need to write the products between primes; it is enough to write directly i + k + m + p = 11. That works if you can do it on the CAT, obviously, but people who can't do it directly would require the previous steps, to have crystal clarity and understand how to deal with similar situations — which is the purpose of this kind of learning.
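The prime-exponent bookkeeping in the solutions above can also be done with Legendre's formula, which counts the multiples of p, p², p³, … up to n. A short sketch confirming the answer:

```python
def prime_exponent(n, p):
    """Exponent of prime p in n! (Legendre's formula)."""
    e, q = 0, p
    while q <= n:
        e += n // q   # multiples of q = p, p^2, p^3, ... contribute one factor each
        q *= p
    return e

exponents = {p: prime_exponent(8, p) for p in (2, 3, 5, 7)}
print(exponents)                # {2: 7, 3: 2, 5: 1, 7: 1}
print(sum(exponents.values()))  # 11, answer (D)
```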
Implementation of the Real-time Measurement System of Receiver Sensitivity for a Laser Range Finder

Author: Lee, Young-Ju

Language: Korean

Abstract: We propose a method for measuring the sensitivity of the optical receiver of a long-range laser range finder in real time. The sensitivity of the detector can be calculated from the detected voltage of the reference sensor, the area of the reference sensor, and the transmittance ratio of the neutral density filters. To evaluate the performance of the proposed method, we implemented a system and performed experiments. As a result, the system can measure optical power from 2 nW to 113 μW. With this system, we measured sensitivities of 37 nW and 7 nW for the PIN PD and APD samples, respectively. This system is well suited to performance testing of the optical sensor module in a long-range laser range finder.
definition of $l$-equivalence

In the following paper http://www.math.ucsd.edu/~ronspubs/74_01_van_der_waerden.pdf, just in the first paragraph, the author defines what $l$-equivalence for two $m$-tuples $\in [0,l]^m$ means. Can somebody please give me a more precise definition of what he means? I am not even sure what $[0,l]^m$ stands for, although I expect it's just $\{0,1,\dots,l\}^m$. Thanks

- I think it just means that (a,b,x) and (a,b,y) are 2-equivalent. I've not been able to understand the proof yet. – user50336 Nov 27 '12 at 18:12

According to the first sentence of the paper, $[0,\ell]$ is indeed $\{0,1,\dots,\ell\}$, and it’s clear that $[0,\ell]^m$ is, as usual, the set of $m$-tuples of elements of $[0,\ell]$. Suppose that $\langle x_1,\dots,x_m\rangle$ and $\langle x_1',\dots,x_m'\rangle$ are $m$-tuples in $[0,\ell]^m$. Suppose first that $\ell$ occurs at least once in each of these $m$-tuples. Let $k$ be the largest index such that $x_k=\ell$; $x_k$ is the last occurrence of $\ell$ in $\langle x_1,\dots,x_m\rangle$. Similarly, let $x_{k'}'$ be the last occurrence of $\ell$ in $\langle x_1',\dots,x_m'\rangle$. Then $\langle x_1,\dots,x_m\rangle$ and $\langle x_1',\dots,x_m'\rangle$ are $\ell$-equivalent iff $k=k'$, and $x_i=x_i'$ for $i=1,\dots,k$. If neither $\langle x_1,\dots,x_m\rangle$ nor $\langle x_1',\dots,x_m'\rangle$ contains an $\ell$, the agreement requirement is vacuous, so they are automatically $\ell$-equivalent.
For example, the sequences $\langle 2,1,2,0,1\rangle$ and $\langle 2,1,2,1,0\rangle$ in $[0,2]^5$ are $2$-equivalent, and the $2$-equivalence class of these two sequences also contains $\langle 2,1,2,0,0\rangle$ and $\langle 2,1,2,1,1\rangle$; it is in fact $$\Big\{\langle 2,1,2,a,b\rangle:\langle a,b\rangle\in[0,1]^2\Big\}\;.$$ The $2$-equivalence class of $\langle 1,1,0,1,0\rangle\in[0,2]^5$, which doesn’t contain $2$ at all, is $[0,1]^5$: the $5$-tuples that don’t contain a $2$ are the ones that agree with $\langle 1,1,0,1,0\rangle$ up through the last occurrence of $2$.
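The definition translates directly into code. This is one reading of the notion as stated in the answer above (tuples that differ in whether they contain ℓ at all are treated as inequivalent):

```python
def l_equivalent(s, t, l):
    """Two tuples over [0, l] are l-equivalent iff they agree up to and
    including the last occurrence of l (vacuously, if neither contains l)."""
    def last(seq):
        # index of the last occurrence of l, or -1 if l does not occur
        return max((i for i, v in enumerate(seq) if v == l), default=-1)
    k, k2 = last(s), last(t)
    return k == k2 and s[:k + 1] == t[:k + 1]

# The examples from the answer:
print(l_equivalent((2, 1, 2, 0, 1), (2, 1, 2, 1, 0), 2))  # True
print(l_equivalent((2, 1, 2, 0, 1), (2, 1, 1, 0, 1), 2))  # False (last 2 moves)
print(l_equivalent((1, 1, 0, 1, 0), (0, 0, 0, 0, 0), 2))  # True (no 2 in either)
```

Enumerating all tuples equivalent to (2, 1, 2, 0, 1) recovers the four-element class $\{\langle 2,1,2,a,b\rangle : \langle a,b\rangle \in [0,1]^2\}$ described in the answer.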
# Linear model for repeated-measures regression

I have two outcome variables $$y_{mi},z_{mi}$$ ($$z$$ is measured in fasting, so it is the basal state), measured with two different methods $$m$$ (m=2 is the reference method) in the same subjects $$i$$, with two predictors (age and sex). One regression model is needed for each variable, and the question is how age, sex, and the first-method value influence the differences between methods. What approach would be recommendable?

• A linear model (lm) in which $$y_{1i}-y_{2i}$$ is the dependent variable, and age, sex and $$y_{1i}$$ are predictors. The caveats are: a mixed model is recommended for paired-measures data; the residuals~fitted plot reveals a small amount of heteroscedasticity (a linear model of abs(residuals)~fitted.values has a slope of 0.1). A linear model of $$residuals = f(age+sex+y_{1i})$$ reveals no association between the residuals and the predictors (Gauss–Markov requirement).

• A robust regression (ltsreg/lmrob) with the same model.

• A mixed model (lme4) in which $$y_{1i}-y_{2i}$$ is the dependent variable; age, sex and $$y_{1i}$$ are fixed effects and $$y_{1i}$$ is a random effect. The obtained coefficients are identical to those in the linear model! (I included $$y_{1i}$$ as a fixed effect because it's the most relevant predictor and we need its coefficient.)

The results/coefficients obtained with the different methods are roughly equivalent. My questions are:

• Would using the differences as the dependent variable and one of the values as a predictor violate the Gauss–Markov assumptions? Is this worrying, considering all the models give roughly similar estimates?

• I think I'm going to choose the easily-understandable linear model as the "solution" to this problem. Any suggestion/criticism?

Help is appreciated, thank you.

• If you work with the differences $$y_{1i} - y_{2i}$$ you have univariate data, i.e., a single difference per subject.
Hence, you would not need to work with a mixed model unless I missed something. • A potential benefit of working with a mixed model approach is if you have missing data for some subjects (i.e., you either have $$y_{1i}$$ or $$y_{2i}$$ but not both). In this case, you cannot compute differences for these subjects, but you could use all available data in a mixed model approach. For example, $$y_{mi} = \beta_0 + \beta_1 \texttt{Method}_{mi} + \beta_2 \texttt{Age}_i + \beta_3 \texttt{Sex}_i + b_i + \varepsilon_{mi},$$ where $$\texttt{Method}_{mi}$$ is zero for Method 2 and one for Method 1, and $$b_i$$ is a random intercept for the subjects with mean zero and variance $$\sigma_b^2$$. The coefficient $$\beta_1$$ will denote the difference in the expected outcome $$y_i$$ between the two methods controlled for age and sex.
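The questioner's observation that the difference model and the mixed model return identical coefficients is expected when every subject has both measurements. A minimal pure-Python check of the underlying identity, with simulated data and no covariates for brevity (all names and numbers here are illustrative, not from the question): regressing long-format data on a method dummy plus per-subject intercepts (the fixed-effects analogue of the random intercept $$b_i$$) recovers exactly the mean of the within-subject differences.

```python
import random

random.seed(1)
n = 8                                            # subjects
y1 = [random.gauss(10, 2) for _ in range(n)]     # method 1
y2 = [v - random.gauss(0.8, 0.3) for v in y1]    # method 2 (reference)

# Long-format design: columns = [method dummy, subject dummies...]
rows, ys = [], []
for i in range(n):
    for m, yv in ((1, y1[i]), (0, y2[i])):
        rows.append([m] + [1.0 if j == i else 0.0 for j in range(n)])
        ys.append(yv)

# Ordinary least squares via the normal equations (X'X) b = X'y,
# solved with plain Gauss-Jordan elimination (no external libraries).
p = n + 1
XtX = [[sum(r[a] * r[b] for r in rows) for b in range(p)] for a in range(p)]
Xty = [sum(r[a] * yv for r, yv in zip(rows, ys)) for a in range(p)]
A = [XtX[i] + [Xty[i]] for i in range(p)]
for c in range(p):
    piv = max(range(c, p), key=lambda r: abs(A[r][c]))
    A[c], A[piv] = A[piv], A[c]
    for r in range(p):
        if r != c:
            f = A[r][c] / A[c][c]
            A[r] = [x - f * z for x, z in zip(A[r], A[c])]
beta_method = A[0][p] / A[0][0]          # coefficient on the method dummy

mean_diff = sum(a - b for a, b in zip(y1, y2)) / n
print(abs(beta_method - mean_diff) < 1e-8)   # True: the two analyses agree
```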
# Possible way to plot the solution density of diophantine equations

Well, I'm trying to investigate the density of solutions to Diophantine equations. What is a general method to describe that density? I have two functions: $$\varphi\left(\text{a},\text{b},\text{c}\right)=\theta\left(\text{a},\text{b},\text{c}\right)\tag1$$ I will look for solutions in the following range: $$\text{a}=\left\{\text{a}_0,\dots,\text{a}_\text{n}\right\}$$ and $$\text{b}=\left\{\text{b}_0,\dots,\text{b}_\text{m}\right\}$$. So I will choose $$\text{a}_0$$ as the starting value and let $$\text{b}$$ run from $$\text{b}_0$$ until $$\text{b}_\text{m}$$ and check whether a $$\text{c}\in\mathbb{Z}$$ follows. Now, if I keep $$\text{b}_0$$ and $$\text{b}_\text{m}$$ constant, what should I plot on the x-axis if I want to plot the solution density on the y-axis?

Example and my work: I have the Diophantine equation: $$x^2+\left(\frac{y}{3}\right)^2=z\tag2$$ In order to solve it, $$x$$, $$y$$ and $$z$$ have to be integers. I will look for solutions in the following range: $$x=\left\{5,\dots,7\right\}$$ and $$y=\left\{-10,\dots,10\right\}$$. Now I found $$\text{p}=21$$ solutions. I used Mathematica to check them. So, I should say that the density is given by: $$\rho=\frac{\text{# solutions}}{\text{total possible solutions}}=\frac{21}{\left(1+7-5\right)\left(1+10-\left(-10\right)\right)}=\frac{21}{63}=\frac{1}{3}\tag3$$

Question: the general way of finding the solution density can be written as: $$\rho=\frac{\text{# solutions}}{\left(1+x_\text{n}-x_0\right)\left(1+y_\text{m}-y_0\right)}\tag4$$ If I want to plot the function of $$\rho$$ on the y-axis, what would be a smart choice to put on the x-axis? Assuming that I keep $$y_0$$ and $$y_\text{m}$$ constant. 
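The count in the example can be reproduced by brute force. A short Python sketch over the stated ranges, using the fact that $$z=x^2+(y/3)^2$$ is an integer exactly when $$3$$ divides $$y$$:

```python
from fractions import Fraction

x0, xn = 5, 7
y0, ym = -10, 10

# z = x^2 + (y/3)^2 is an integer exactly when 3 divides y.
solutions = [(x, y, x * x + (y // 3) ** 2)
             for x in range(x0, xn + 1)
             for y in range(y0, ym + 1)
             if y % 3 == 0]

rho = Fraction(len(solutions), (1 + xn - x0) * (1 + ym - y0))
print(len(solutions), rho)   # 21 solutions, density 1/3
```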
## 1 Answer

Clear["Global`*"]

ρ[x0_Integer, xn_Integer, y0_Integer, yn_Integer] :=
 Module[{x, y, z},
  Length[Solve[{x^2 + (y/3)^2 == z, x0 <= x <= xn, y0 <= y <= yn},
     {x, y, z}, Integers]]/((1 + xn - x0) (1 + yn - y0))]

Using symmetric bounds for x and y:

DiscretePlot3D[ρ[-x, x, -y, y], {x, 1, 20}, {y, 1, 20},
 AxesLabel -> (Style[#, 14, Bold] & /@ {"x", "y", "\nρ "}),
 ColorFunction -> Function[{x, y, ρ}, Piecewise[{{Red, y == 10}}, Blue]],
 ColorFunctionScaling -> False]

Fixing y to the interval {-10, 10} (colored Red above):

DiscretePlot[ρ[-x, x, -10, 10], {x, 1, 20},
 AxesLabel -> (Style[#, 14, Bold] & /@ {"x", "ρ"})]

Eliminating the negative values for x roughly cuts the number of solutions in half; however, the possible solution space is correspondingly reduced by the same amount.

DiscretePlot[ρ[0, x, -10, 10], {x, 1, 20},
 AxesLabel -> (Style[#, 14, Bold] & /@ {"x", "ρ"})]
# No Individual Distance Was Found! Geometry Level 5 We are given a regular pentagon $$A_1A_2A_3A_4A_5$$ in the Cartesian plane centered at the origin with $$A_1 = (1,0)$$. Let $$P = (1, \sqrt{3})$$. Denote $$z = \displaystyle \prod_{i=1}^5 m(PA_i)$$, where $$m(AB)$$ denotes the length of segment $$AB$$. Submit your answer as $$z^2$$. ×
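A hedged numeric check in Python: since the pentagon's vertices are the fifth roots of unity in the complex plane, the product of the distances from $$P$$ equals $$|w^5-1|$$ with $$w=1+\sqrt{3}\,i$$, which a few lines of floating-point arithmetic confirm:

```python
import cmath
import math

w = complex(1, math.sqrt(3))                       # the point P as a complex number
vertices = [cmath.exp(2j * math.pi * k / 5) for k in range(5)]

z = 1.0
for a in vertices:
    z *= abs(w - a)                                # product of the five distances

# Identity check: prod_k (w - zeta_k) = w^5 - 1 for the 5th roots of unity
print(round(z * z))                                # z^2 = 993
print(abs(z - abs(w**5 - 1)) < 1e-9)
```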
## [solved] example request , canvas scale - camera - pixel perfect - simplified

General discussion about LÖVE, Lua, game development, puns, and unicorns.

ReFreezed
Party member
Posts: 570
Joined: Sun Oct 25, 2015 11:32 pm
Location: Sweden
Contact:

### Re: example request , canvas scale - camera - pixel perfect - simplified

I didn't intend for my example to show how scaled sprites can look better when using subpixels, but sure, you could use subpixels for that purpose I guess. (See Xeodrifter for somewhat interesting usage of scale.) Movement also becomes smoother if sprite coordinates aren't rounded when drawn (or are rounded, but to the subpixels instead of the "full" pixels). Here's an example and comparison with parallax layers without and with usage of subpixels:

Notice how movements in the "new" version with subpixels are a bit smoother. Your image with the "glitchy" text shows the issue with using nearest filtering, i.e. that some pixels look a lot wider and/or taller than their neighbors, which is what the linear+subpixel combination (which makes pixel borders slightly blurry) is supposed to fix. In the upper version the pixel sizes and lines are all wacky, while in the lower version the pixel sizes and lines look more uniform.

Tools: Hot Particles, LuaPreprocess, InputField, (more) Games: Momento Temporis
"If each mistake being made is a new one, then progress is being made."

gcmartijn
Party member
Posts: 110
Joined: Sat Dec 28, 2019 6:35 pm

### Re: example request , canvas scale - camera - pixel perfect - simplified

ReFreezed wrote: Thu Mar 03, 2022 11:42 am I didn't intend for my example to show how scaled sprites can look better when using subpixels, but sure, you could use subpixels for that purpose I guess.

Short story: can you extend the first example with a moving camera and screenToWorldMouseXY / worldToScreenMouseXY? So I can extract the code and re-use that. I have most of the code now, but that part is still not working. 
Long story: After your example and some weeks later... I finally have implemented your code. Most of the time I had problems with font scaling and getting the correct mouse coordinates when scaling the window or using fullscreen. I don't use a hardware cursor (image), but I draw some things using the mouse coordinates. But that is not the problem anymore, that is working. Now I spent some time creating the moving 'camera', but now I'm confused. Maybe you can solve this last part for me?

canvas.lua

Code: Select all

function Canvas:new(data)
	self.name = data.name
	self.x = data.x or 0
	self.y = data.y or 0
	self.width = data.width or love.graphics.getWidth()
	self.height = data.height or love.graphics.getHeight()
	self.subpixels = data.subpixels
	self.integerScaling = data.integerScaling

	-- the real canvas
	self.canvas = love.graphics.newCanvas(self.width * self.subpixels, self.height * self.subpixels)
	self.canvas:setFilter("nearest", "nearest")

	-- calculated
	self.scale = 1
	self.scaledWidth = nil
	self.scaledHeight = nil
end

function Canvas:setSubpixels(s)
	self.subpixels = s
end

function Canvas:setX(x)
	self.x = x
end

function Canvas:setY(y)
	self.y = y
end

function Canvas:getX()
	return self.x
end

function Canvas:getY()
	return self.y
end

function Canvas:getSubpixels()
	return self.subpixels
end

---
-- This is without the subpixels
function Canvas:getHeight()
	return self.height
end

---
-- This is without the subpixels
function Canvas:getWidth()
	return self.width
end

---
-- This is with the optional subpixels and current render
function Canvas:getRenderHeight()
	return self:getCanvas():getHeight()
end

---
-- This is with the optional subpixels and current render
function Canvas:getRenderWidth()
	return self:getCanvas():getWidth()
end

---
-- The calculated width scaled
function Canvas:getScaledWidth()
	return self.scaledWidth
end

---
-- The calculated height scaled
function Canvas:getScaledHeight()
	return self.scaledHeight
end

---
-- The love canvas object
function Canvas:getCanvas()
	return self.canvas
end

---
-- The calculated scale factor
function Canvas:getScale()
	return self.scale
end

-- return width and height
function Canvas:getDimensions()
	return self.canvas:getDimensions()
end

function Canvas:scaleMath()
	local _, _, windowWidth, windowHeight = love.window.getSafeArea()
	local canvasWidth, canvasHeight = self:getDimensions()

	-- Fill as much of the window as possible with the canvas while preserving the aspect ratio.
	self.scale = math.min(windowWidth / canvasWidth, windowHeight / canvasHeight)
	-- self.scale = windowHeight / canvasHeight -- This would fill the height and possibly cut off the sides.

	if self.integerScaling then
		self.scale = math.floor(self.scale * self:getSubpixels()) / self:getSubpixels()
		self.scale = math.max(self.scale, 1 / self:getSubpixels()) -- Avoid self.scale = 0 if the window is tiny!
	end

	self.scaledWidth = canvasWidth * self.scale
	self.scaledHeight = canvasHeight * self.scale

	-- center canvas
	self:setX(math.floor((windowWidth - self.scaledWidth) / 2))
	self:setY(math.floor((windowHeight - self.scaledHeight) / 2))
end

-- function Canvas:release(args)
-- end

return Canvas

config.lua (snippet)

Code: Select all

self.canvas = {}
self.canvas.width = 320
self.canvas.height = 180
self.canvas.integerScaling = true
self.canvas.subpixels = 4

conf.lua

Code: Select all

-- 1x 320x180
-- 2x 640x360
-- 4x 1280x720 -- < minimal target height
-- 6x 1920x1080 (fullHD)
-- 12x 3840x2160 (4K)
t.window.width = 1280 -- The window width (number)
t.window.height = 720 -- The window height (number)

camera.lua (snippet)

I was trying to implement a follow player x,y like https://github.com/a327ex/STALKER-X But then with less code, without all the functions, to understand what is happening, and I only need the follow player.

Code: Select all

function Camera:follow(x, y)
	self.targetX, self.targetY = x, y
end

function Camera:setBounds(x, y, w, h)
	self.bound = true
	self.bounds_min_x = x
	self.bounds_min_y = y
	self.bounds_max_x = x + w
	self.bounds_max_y 
	= y + h
end

function Camera:attach()
	-- extend inside the main.lua below the canvas things
	love.graphics.translate(math.floor(self.width / 2), math.floor(self.height / 2))
	love.graphics.translate(-math.floor(self.x), -math.floor(self.y))
end

function Camera:update(dt)
	if self.targetX == nil or self.targetY == nil then
		return
	end

	self.x, self.y = self.targetX, self.targetY

	-- if self.bound then
	-- 	self.x = math.min(math.max(self.x, self.bounds_min_x + self.width / 2), self.bounds_max_x - self.width / 2)
	-- 	self.y = math.min(math.max(self.y, self.bounds_min_y + self.height / 2), self.bounds_max_y - self.height / 2)
	-- end
end

main.lua (snippet)

Code: Select all

function draw()
	self.canvas:scaleMath()

	-- Draw to the canvas
	love.graphics.push("all")
	love.graphics.setCanvas(self.canvas:getCanvas())
	love.graphics.clear()
	love.graphics.scale(self.canvas:getSubpixels())
	self.camera:attach() -- at the moment without push/pop
	self.world:draw()
	love.graphics.pop()

	love.graphics.clear(0, 0, 0)
	love.graphics.draw(self.canvas:getCanvas(), self.canvas:getX(), self.canvas:getY(), 0, self.canvas:getScale())
end

function love.mousepressed(xOrg, yOrg, button, istouch, presses)
	local x, y = xOrg, yOrg
	if self.cursor then
		x, y = self.cursor:getScreenToWorldXY(x, y)
	end
	self.world:mousePressedEvent(
		{
			x = x,
			y = y,
			button = button,
			istouch = istouch,
			presses = presses
		}
	)
end

function love.resize(w, h)
	-- calculate so we have the new variables
	self.canvas:scaleMath()

	-- forward to other entities
	self.world:windowResizeEvent(
		{
			w = w,
			h = h
		}
	)
end

cursor.lua (snippet)

Code: Select all

function Cursor:getScreenToWorldXY(x, y)
	return self:getScreenToWorldX(x), self:getScreenToWorldY(y)
end

function Cursor:getScreenToWorldX(x)
	x = x or love.mouse.getX()
	return ((x - self.gameCanvas:getX()) / self.gameCanvas:getSubpixels()) / self.gameCanvas:getScale() -- I have to do something here with the self.camera.x
end

function Cursor:getScreenToWorldY(y)
	y = y or love.mouse.getY()
	return ((y - 
self.gameCanvas:getY()) / self.gameCanvas:getSubpixels()) / self.gameCanvas:getScale() -- I have to do something here with the self.camera.y end

Inside cursor.lua I have to include the camera x and y. I tried several things, but this is what is happening right now:

- I click at the right side
- The player moves to the right side (the camera moves to the left side, looks good for now)
- The cursor position is not correct anymore

I mean that local x, y = self:getScreenToWorldXY() is not correct. First I thought I could just do things like this, but it doesn't work:

Code: Select all

function Cursor:getScreenToWorldX(x)
	x = x or love.mouse.getX()
	x = ((x - self.gameCanvas:getX()) / self.gameCanvas:getSubpixels()) / self.gameCanvas:getScale()
	if self.camera then
		x = x - self.camera.x
	end
end

ReFreezed

### Re: example request , canvas scale - camera - pixel perfect - simplified

You're probably not too far off. I've extended my initial example program with the concept of a world coordinate system, a movable player, and a camera that follows the player (in love.update). Also, screenToWorld and worldToScreen functions.

Attachments:
PixelArtRendering.20220328.love

gcmartijn

### Re: example request , canvas scale - camera - pixel perfect - simplified

Thanks very much! Give me some time to check and process this.

gcmartijn

### Re: example request , canvas scale - camera - pixel perfect - simplified

Yes, everything is working now, it fits perfectly. Going to refactor now. Only added some extra things for the camera because it's something like a point and click. The player is not in the middle. 
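For the screen-to-world question above, the missing piece is that Camera:attach also translates by half the canvas size, not just by -camera.x. Ignoring the floor() rounding, the draw chain in the snippets is: world position, then camera translate by (width/2 - camera.x), then scale by subpixels, then draw the canvas at canvas.x with the window scale. Inverting that chain gives screenToWorld. A one-axis Python sketch of just the arithmetic (the names mirror the Lua snippets, but this is a sketch under those assumptions, not the thread's actual code):

```python
# One axis is enough; y works the same way.
# Assumed draw order: camera translate (width/2 - camera_x), then scale by
# subpixels, then the canvas is drawn at canvas_x with the window scale.

def world_to_screen(wx, camera_x, width, subpixels, scale, canvas_x):
    canvas_px = (wx - camera_x + width / 2) * subpixels
    return canvas_px * scale + canvas_x

def screen_to_world(sx, camera_x, width, subpixels, scale, canvas_x):
    # Invert the chain in reverse order.
    canvas_px = (sx - canvas_x) / scale
    return canvas_px / subpixels - width / 2 + camera_x

args = dict(camera_x=75, width=320, subpixels=4, scale=3.25, canvas_x=40)
wx = 123.0
sx = world_to_screen(wx, **args)
print(sx, abs(screen_to_world(sx, **args) - wx) < 1e-9)  # round trip holds
```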
Using the camera example code from someone else, with your code, gives me this fix:

Code: Select all

function Camera:update(dt)
	if self.targetX == nil or self.targetY == nil then
		return
	end

	self.x = damp(self.x, self.targetX, self.followSpeed, dt) -- Move towards player smoothly.
	self.y = damp(self.y, self.targetY, self.followSpeed, dt) -- Do camera.x=player.x etc. for instant snap.

	if self.bound then
		self.x = math.min(math.max(self.x, self.bounds_min_x + self.width / 2), self.bounds_max_x - self.width / 2)
		self.y = math.min(math.max(self.y, self.bounds_min_y + self.height / 2), self.bounds_max_y - self.height / 2)
	end
end

function Camera:setBounds(x, y, w, h)
	self.bound = true
	self.bounds_min_x = x
	self.bounds_min_y = y
	self.bounds_max_x = x + w
	self.bounds_max_y = y + h
end

function Camera:follow(x, y)
	self.targetX, self.targetY = x, y
end

init snippet

Code: Select all

Camera(
	{
		x = 0,
		y = 0,
		width = config.canvas.width,
		height = config.canvas.height
	}
)

world snippet

Code: Select all

camera:setBounds(0, 0, scene.width / 4, self.canvas.height) -- 180 height

player snippet

Code: Select all

camera:follow(player.x, player.y)

So now the camera stops when the end of the scene is reached. Going to refactor now, before I forget things. Really cool!

gcmartijn

### Re: example request , canvas scale - camera - pixel perfect - simplified

ReFreezed wrote: Mon Mar 28, 2022 5:00 pm You're probably not too far off.

@ReFreezed I did some other work, and now I'm at a point where the player is moving to the right/left and the camera is following. Everything works, but what I see is a little jitter. When I download your example again and use the same settings as I use inside the pixel art game:

Code: Select all

integerScaling = true
subpixels = 4
cameraspeed = 5

And move Mega Man to the right and left, then I can see a very small jitter in the background image. 
Inside my game I use a static framerate with this code https://github.com/bjornbytes/tick

Code: Select all

frameTick.framerate = 60
frameTick.rate = 1 / 60

Is it possible that your code always gives some jitter? Can it be optimised because I'm using the tick code above? Or will there always be some images that jitter (like in your example code)? I already use (as far as I know, need to check this again) on each drawing:

Code: Select all

function floor(n, subpixels)
	return math.floor(n * (subpixels or 1) + .5) / (subpixels or 1)
end

But maybe you say, "oh, if you use those frameTick framerate settings, then do this inside the camera code, to make it more smooth."

ReFreezed

### Re: [solved] example request , canvas scale - camera - pixel perfect - simplified

I don't see any jitter in the background in my example with the settings you posted, but jitter can be caused by numerous things. (See my reply in that other thread.) If you force dt to be 1/60 in love.update, does it make movement smoother (in the example)?

gcmartijn

### Re: [solved] example request , canvas scale - camera - pixel perfect - simplified

Nope, that doesn't fix it. I 'fix' it for the moment by using the following things:

- don't update the camera.y, because I'm only following moving objects from left <> right
- don't use the damp function

With this it's more smooth. Going to work with this for now, and test it later on other hardware. 
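One common source of this kind of jitter, when a fixed 60 Hz logic rate (as with tick) meets a different render rate, is that frames get drawn between logic steps. The usual remedy is a fixed-timestep accumulator that interpolates positions for rendering. A small Python sketch of the pattern (illustrative names, not the thread's code; tick implements the update side of this for LÖVE):

```python
FIXED_DT = 1 / 60  # logic runs at a fixed 60 Hz

class Loop:
    def __init__(self):
        self.accum = 0.0
        self.prev_x = 0.0   # position at the previous fixed update
        self.x = 0.0        # position at the latest fixed update

    def update_fixed(self):
        self.prev_x = self.x
        self.x += 60.0 * FIXED_DT   # move at 60 px/s

    def frame(self, dt):
        """Run pending fixed updates, return an interpolated render position."""
        self.accum += dt
        while self.accum >= FIXED_DT:
            self.update_fixed()
            self.accum -= FIXED_DT
        alpha = self.accum / FIXED_DT            # 0..1 between logic steps
        return self.prev_x + (self.x - self.prev_x) * alpha

loop = Loop()
positions = [loop.frame(0.011) for _ in range(10)]  # ~90 fps render rate
# Rendered positions advance every frame even though logic steps are discrete.
print(all(b > a for a, b in zip(positions, positions[1:])))
```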
ReFreezed

### Re: [solved] example request , canvas scale - camera - pixel perfect - simplified

You'll have to explain what exactly you mean by jitter, or show a video or something of what's happening, before I can provide any better help. Ditching the damp function is probably a good idea as it's not very clever about the movement. In my games I've always just fixed the camera on the player, so there's never any "jitter" specifically for the player sprite on the screen.

gcmartijn

### Re: [solved] example request , canvas scale - camera - pixel perfect - simplified

@ReFreezed Can you PM me your email address? I can't PM the game, there is no upload option, and I don't want to upload the game here. Or maybe I can upload it somewhere like WeTransfer and PM you the link. Is that okay? Maybe you can see the problem. I did change the code, so the camera now moves from left to right by +1 px, and every time it stutters at some point.
# Sampling a Population of Unknown Size

How do you sample a population whose size is not known? For example, I have a day's population from which I need to select 1000 items. Each hour (for one day) some number of new items (can be any number, even 0) enter a box (at any time, not necessarily the start of the hour) from which I can choose any n items for my sample. However, at the start of each new hour, the box is emptied. The items I choose go into my sample box, which cannot hold more than 1000 items and which I cannot change once I commit an item to it. So, I have to make my decision of how many items to choose during that hour's time period. I want my final sample to be as representative of the day's total population (all items that have been in that box at some point during that day) as possible. Are there any formulas or algorithms out there for this sort of idea? How would my answer change if I could put back items that I originally selected for my sample? Thank you.

EDIT: I can access my samples from the previous days if necessary, but only to look at, not change. Each item comes with a timestamp for when it entered the box.

Use past data to compute the average number of items that arrive per hour during each one of the 24 hours. Suppose the average numbers of items are $(n_1, n_2, \ldots, n_{24})$. Then in hour $i$ you should expect that about a fraction $\frac{n_i}{n}$ of the day's items arrive, where $n=\sum_in_i$. Therefore, if $t_i$ items arrive in hour $i$, you should choose about $1000\,\frac{n_i}{n}$ of them at random (equivalently, keep each arriving item with probability roughly $1000/n$).
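The proportional allocation above can be sketched in Python. The hourly averages here are made-up numbers for illustration, and the quotas are scaled so they sum to roughly the 1000-item capacity:

```python
import random

random.seed(42)
SAMPLE_SIZE = 1000

# Historical hourly averages (hypothetical numbers for illustration).
avg_per_hour = [20, 10, 5, 5, 5, 10, 40, 90, 120, 130, 120, 110,
                100, 100, 110, 120, 130, 120, 90, 60, 40, 30, 25, 20]
n = sum(avg_per_hour)

# Quota for hour i: its expected share of the day's population.
quotas = [round(SAMPLE_SIZE * a / n) for a in avg_per_hour]

def sample_hour(items_in_box, quota):
    """Pick at most `quota` items uniformly at random from this hour's box."""
    k = min(quota, len(items_in_box))
    return random.sample(items_in_box, k)

# Example: 95 items arrive in hour 8 (0-indexed); the quota caps how many we keep.
hour = 8
arrivals = [f"item-{i}" for i in range(95)]
chosen = sample_hour(arrivals, quotas[hour])
print(len(chosen) == min(quotas[hour], 95))
```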
# Shortest code that raises a SIGSEGV Write the shortest code that raises a Segmentation Fault (SIGSEGV) in any programming language. • Wow. Possibly the shortest successful question. – Matthew Roh Feb 9 '17 at 11:42 # C, 5 characters main; It's a variable declaration - int type is implied (feature copied from B language) and 0 is default value. When executed this tries to execute a number (numbers aren't executable), and causes SIGSEGV. Try it online! • @Macmade: Actually, it is 0. static variables start as 0, and main; is static, as I declared it outside function. c-faq.com/decl/initval.html – Konrad Borowski Aug 16 '13 at 8:20 • last time i played with this thing, i figured out that there's a different reason for the segfault. First of all by calling main you jump to the location of main, not the value, another thing is main is an int, it's located in .bss, usually functions are located in .text, when the kernel loads the elf program it creates an executable page for .text and non-executable for .bss, so by calling main, you jump to a non-executable page, and execution something on a such page is a protection fault. – mniip Dec 6 '13 at 17:55 • Yep, segfaults in C are pretty much the default :P – Paul Draper May 24 '14 at 23:23 • main __attribute__((section(".text#")))=0xc3; FTFY (at least it seems to return without crashing on my x86). – jozxyqk Jul 27 '18 at 4:57 • @jozxyqk Or shorter, const main=195;. As interesting it is that it's working, the goal of this code golfing challenge was to make the code segfault, not work :). – Konrad Borowski Jul 27 '18 at 6:48 kill -11 $$• Signal 11 in 11 characters. Seems legit. – nyuszika7h Dec 31 '13 at 12:39 • @nyuszika7h I was going to upvote your comment, but you have 11 upvotes right now, so I'm going to leave it at that. :P – HyperNeutrino Nov 25 '16 at 16:22 • @AlexL. other people seem to have spoiled that :( – theonlygusti Jan 21 '17 at 10:51 • @theonlygusti Yeah... That's too bad. 
:( Oh well, then I can upvote it now. – HyperNeutrino Jan 21 '17 at 14:27 • Up to 42 upvotes, no touchee! – seadoggie01 Sep 3 '18 at 15:24 # Assembly (Linux, x86-64), 1 byte RET This code segfaults. • As an MSDOS .com file, it runs and terminates without error. – J B Dec 26 '11 at 19:10 • My point being: just specifying “assembly” isn't enough to make it segfault. – J B Dec 26 '11 at 19:56 • @JB: On MS DOS, no program will ever produce a segmentation fault. That's because MS DOS runs in real mode where memory protection is nonexistent. – celtschk Feb 4 '12 at 17:25 • @celtschk IIRC NTVDM will eke out on nonexistent addresses, and those not allocated to MS-DOS. – nanofarad Jan 31 '13 at 1:32 • @celtschk: You can segfault it anyway like so: mov bx, 1000h ; shr ebx, 4 ; mov eax, [ebx] -> CPU raises the underlying SEGV (AFAIK there's nobody to handle it though). – Joshua Nov 20 '16 at 17:18 # Python 2, 13 exec'()'*7**6 Windows reports an error code of c00000fd (Stack Overflow) which I would assume is a subtype of segmentation fault. Thanks to Alex A. and Mego, it is confirmed to cause segmentation faults on Mac and Linux systems as well. Python is the language of choice for portably crashing your programs. • Segmentation fault: 11 on Mac – Alex A. Nov 2 '15 at 1:29 • Segmentation fault (core dumped) on Linux – user45941 Nov 2 '15 at 1:30 • Does this hang up first? – Mega Man Aug 13 '16 at 9:20 • @MegaMan As in take a long time to finish? No, 7**6 is only about 100K so there's no perceptible delay. – feersum Aug 14 '16 at 2:46 • @MaxGasner Try reading the programming language again :) – feersum Aug 6 '19 at 4:03 # pdfTeX (51) \def~#1{\meaning}\write0{\expandafter~\string}\bye This is actually probably a bug, but it is not present in the original TeX, written by Knuth: compiling the code with tex filename.tex instead of pdftex filename.tex does not produce a segfault. # LOLCODE, 4 bytes OBTW Does not work online, only in the C interpreter. 
• LOL FANCY CODE M8 8/8 KTHXBYE – Addison Crump Nov 2 '15 at 15:52 ## Python, 33 characters >>> import ctypes;ctypes.string_at(0) Segmentation fault ## Python, 60 characters >>> import sys;sys.setrecursionlimit(1<<30);f=lambda f:f(f);f(f) Segmentation fault This is the Python version I'm testing on: Python 2.6.1 (r261:67515, Jun 24 2010, 21:47:49) [GCC 4.2.1 (Apple Inc. build 5646)] on darwin In general the Python interpreter is hard to crash, but the above is selective abusiveness... # Forth - 3 characters 0 @ (@ is a fetch) • Shortest one so far that will work on modern systems. – Demi Mar 6 '15 at 6:06 • Which Forth? Gforth just says "Invalid memory address" – cat Mar 5 '16 at 1:25 # W32 .com executable - 0 bytes This will seem weird, but on 32 bit Windows systems, creating and executing an empty .com file may cause a segfault, depending on... something. DOS just accepts it (the 8086 having no memory management, there are no meaningful segments to fault), and 64 bit Windows refuses to run it (x86-64 having no v86 mode to run a .com file in). ## C, 18 main(){raise(11);} • do you need to add #include <signal.h> in the code listing? – Florian Castellane Nov 28 '16 at 8:16 • @FlorianCastellane: in C90 and lower, for any function call done without a visible declaration, the compiler implicitly declares it as int func(). i.e. a function returning int, taking unspecified parameters. In this case raise is a function returning int, taking an int argument, so this works out (even if the compiler complains). – Hasturkun Nov 28 '16 at 13:26 • @Hasturkun main(){main();} – Sapphire_Brick Mar 9 at 1:16 # Perl ( < 5.14 ), 9 chars /(?{??})/ In 5.14 the regex engine was made reentrant so that it could not be crashed in this way, but 5.12 and earlier will segfault if you try this. • I can reproduce this on Perl 5.14 (Debian) and 5.18 (Arch Linux). 
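Several of these answers (like the ctypes one above) can be verified from a parent process instead of crashing your own session: on POSIX, Python's subprocess reports a child killed by a signal as a negative return code. A sketch assuming Linux/macOS:

```python
import signal
import subprocess
import sys

# The ctypes answer from above: dereference a NULL pointer inside CPython.
crasher = "import ctypes; ctypes.string_at(0)"

proc = subprocess.run([sys.executable, "-c", crasher])
# A signal-killed child reports -signum as its return code on POSIX.
print(proc.returncode == -signal.SIGSEGV)
```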
sprunge.us/RKHT – nyuszika7h Jan 21 '14 at 21:51 • Reproduced with Perl v5.20.2 (windows) – mihaipopescu Mar 11 '16 at 13:38 • What about /(?R)/ on older Perl versions? – Sapphire_Brick Sep 2 at 13:37 ## brainfuck (2) <. Yes, this is implementation-dependent. SIGSEGV is the likely result from a good compiler. • How is a compiler that segfaults on that "good"? That < should either have no effect or wrap around. – nyuszika7h Jul 4 '14 at 17:38 • Immediately producing a runtime error on bounds violation is best because it lets the programmer find and fix the bug as fast as possible. Letting the buggy program run for a while and corrupt memory haphazardly before crashing just makes the problem harder to diagnose. Preventing the crash entirely, as you suggest, is worst; the programmer may get the program "working" and then be publicly humiliated when it crashes on standard compilers and interpreters. – Daniel Cristofani Oct 12 '14 at 23:17 • Conversely, catching bounds violations before runtime is not possible in general, nor especially useful in the cases where it is possible. Producing a more descriptive runtime error would be okay, but having the operating system catch it as a segfault is great because it doesn't have any speed cost. (In case it's not clear, the compiler itself doesn't segfault--it produces executables that segfault as soon as they try to access memory out of bounds.) – Daniel Cristofani Oct 12 '14 at 23:35 • Can you provide an implementation that produces this behavior and was created before this challenge was posted? If not, this answer is invalid. – user45941 Apr 25 '16 at 20:10 • Bounds checks are implementation specific, so I'm sure there are some that would error on it. Would any SIGSEGV though? I doubt it. There are a large number of programs that depend on the array wrapping to the left though. It can be rather convenient having growable storage on both sides. 
– captncraig Dec 15 '16 at 20:01 ## Haskell, 31 foreign import ccall main::IO() This produces a segfault when compiled with GHC and run. No extension flags are needed, as the Foreign Function Interface is in the Haskell 2010 standard. # Bash, 4 bytes Golfed . 0 Recursively include the script into itself. Explained Recursive "source" (.) operation causes a stack overflow eventually, and as Bash does not integrate with libsigsegv, this results in a SIGSEGV. Note that this is not a bug, but an expected behavior, as discussed here. Test ./bang Segmentation fault (core dumped) Try It Online ! ## C - 11(19)7(15)6(14) 1 chars, AT&T x86 assembler - 8(24) chars C version is: *(int*)0=0; The whole program (not quite ISO-compliant, let's assume it's K&R C) is 19 chars long: main(){*(int*)0=0;} Assembler variant: orl 0,0 The whole program is 24 chars long (just for evaluation, since it's not actually assembler): main(){asm("orl 0,0");} EDIT: A couple of C variants. The first one uses zero-initialization of global pointer variable: *p;main(){*p=0;} The second one uses infinite recursion: main(){main();} The last variant is the shortest one - 7(15) characters. EDIT 2: Invented one more variant which is shorter than any of above - 6(14) chars. It assumes that literal strings are put into a read-only segment. main(){*""=0;} EDIT 3: And my last try - 1 character long: P Just compile it like that: cc -o segv -DP="main(){main();}" segv.c • in C isn't main; only 5 charecters – Arya Dec 26 '11 at 10:50 • :Linker doesn't check whether main is function or not .It just pass it to the loader and return sigsegv – Arya Dec 26 '11 at 12:24 • @FUZxxl In this case main is a zero-initialized global int variable, so what we get is a result of trying to execute some zero bytes. In x86 it'd be something like add %al,(%rax) which is a perfectly valid instruction which tries to reach memory at address stored in %rax. Chances of having a good address there are minimal. 
– Alexander Bakulin Dec 27 '11 at 20:35 • Of course the last entry can be used for everything, you just have to supply the right compiler arguments. Which should make it the automatic winner of any code golf contest. :-) – celtschk Feb 4 '12 at 17:20 • Usually compiler flags other than ones that choose the language version to use are counted towards the total. – Jerry Jeremiah Dec 11 '14 at 4:32 # Python 33 import os os.kill(os.getpid(),11) Sending signal 11 (SIGSEGV) in python. • Also 33 characters: from os import* and kill(getpid(),11) – Timtech Jan 8 '14 at 15:45 # dc - 7 chars [dx0]dx causes a stack overflow • Is works, but can you elaborate? Why does it behave that way? – Stéphane Gourichon Nov 21 '16 at 12:17 • [dx0] stores dx0 on the stack, then d duplicates the top stack element, then x pops the top stack element (dx0) and executes it. Which duplicates the top stack element, and starts executing it... the 0 needs to be there to prevent this being a tail call, so they all build up. – Ben Millwood Nov 26 '16 at 7:58 ## Perl, 10 / 12 chars A slightly cheatish solution is to shave one char off Joey Adams' bash trick: kill 11,$$ However, to get a real segfault in Perl, unpack p is the obvious solution: unpack p,1x8 Technically, this isn't guaranteed to segfault, since the address 0x31313131 (or 0x3131313131313131 on 64-bit systems) just might point to valid address space by chance. But the odds are against it. Also, if perl is ever ported to platforms where pointers are longer than 64 bits, the x8 will need to be increased. • What is this 1x8 thing? – Hannes Karppila Dec 15 '16 at 10:19 • @HannesKarppila It's a short way to write "11111111". – Ilmari Karonen Dec 15 '16 at 12:18 # PicoLisp - 4 characters \$ pil : ('0) Segmentation fault This is intended behaviour. 
As described on their website: If some programming languages claim to be the "Swiss Army Knife of Programming", then PicoLisp may well be called the "Scalpel of Programming": Sharp, accurate, small and lightweight, but also dangerous in the hand of the inexperienced. # OCaml, 13 bytes Obj.magic 0 0 This uses the function Obj.magic, which unsafely coerces any two types. In this case, it coerces 0 (stored as the immediate value 1, due to the tag bit used by the GC) to a function type (stored as a pointer). Thus, it tries to dereference the address 1, and that will of course segfault. • it coerces 0 (stored as the immediate value 1) - why is 0 stored as 1? – Skyler Nov 3 '15 at 14:15 • @Skyler see edit – Demi Nov 4 '15 at 19:02 • Obj.magic()0 is one char shorter :) – Ben Millwood Nov 26 '16 at 7:53 # Actually, 17 16 11 10 9 bytes ⌠[]+⌡9!*. Try it online! If the above doesn't crash, try increasing the number (multi-digit numbers are specified in Actually with a leading colon) Crashes the interpreter by exploiting a bug in python involving deeply nested itertools.chain objects, which actually uses to implement the + operator. # F90 - 39 bytes real,pointer::p(:)=>null() p(1)=0. end Compilation: gfortran segv.f90 -o segv Execution: ./segv Program received signal SIGSEGV: Segmentation fault - invalid memory reference. Backtrace for this error: #0 0x7FF85FCAE777 #1 0x7FF85FCAED7E #2 0x7FF85F906D3F #3 0x40068F in MAIN__ at segv.f90:? Erreur de segmentation (core dumped) Materials: gfortran --version GNU Fortran (Ubuntu 4.8.4-2ubuntu1~14.04.1) 4.8.4 • Nice first post. – Rɪᴋᴇʀ Mar 25 '16 at 18:03 # C# /unsafe, 23 bytes unsafe{int i=*(int*)0;} For some reason I don't understand, *(int*)0=0 just throws a NullReferenceException, while this version gives the proper access violation. • The int i=*(int*)0; returns a NullReferenceException for me. – Peter Olson Dec 30 '11 at 7:43 • You can try to access a negative location, like *(int*)-1=0 and get an access violation. 
– Peter Olson Dec 30 '11 at 7:46

• The particular exception is just what the CLR wraps it in, and is insignificant. The OS itself actually gives the seg fault in all these cases. – captncraig Jan 20 '12 at 17:41
• The reason why `*(int*)0=0` throws an exception is likely due to optimization. Specifically, to avoid the cost of checking for null, the optimizer may remove null checks, but when a segfault occurs it may rethrow it as a proper NullReferenceException. – Konrad Borowski Sep 15 '18 at 20:55

## 19 characters in C

```
main(a){*(&a-1)=1;}
```

It corrupts the return address of the main function, so it gets a SIGSEGV on return from main.

• It depends on the stack frame layout, so on some architectures it can possibly not fail. – Alexander Bakulin Dec 27 '11 at 19:41

## Cython, 14

This often comes in handy for debugging purposes.

```
a=(<int*>0)[0]
```

## J (6)

```
memf 1
```

`memf` means free memory; 1 is interpreted as a pointer.

# Pyth, 3 characters

```
j1Z
```

This would be the part where I explain how I came up with this answer, except I legitimately have no clue. If anyone could explain this for me, I'd be grateful. Here it is in an online interpreter.

## Explanation

j squares the base and calls itself recursively until the base is at least as large as the number. Since the base is 0, that never happens. With a sufficiently high recursion limit, you get a segfault. - Dennis ♦

• Figured something out! From browsing Pyth's source, I found that this code does j on 1 and 0, which tries to convert 1 into base 0. Why that segfaults, I have no idea... – NoOneIsHere Dec 23 '16 at 1:24
• See here. j squares the base and calls itself recursively until the base is at least as large as the number. Since the base is 0, that never happens. With a sufficiently high recursion limit, you get a segfault. – Dennis Dec 23 '16 at 1:30
• @Dennis IDEone – NoOneIsHere Dec 23 '16 at 1:35
• @SeeRhino The Pyth interpreter sets the recursion limit to 100,000. At least on TIO, that's enough for a segfault.
– Dennis Dec 23 '16 at 1:38

## Unix PDP-11 assembly, 18 bytes binary, 7 bytes source

(this is becoming a theme with me, maybe because it's the only language I sort of know that no-one else here does.)

```
inc(r0)
```

Increments the single byte addressed by the initial value of r0 [which happens to be 05162 according to the simh debugger] as of program start.

```
0000000 000407 000002 000000 000000 000000 000000 000000 000000
0000020 005210 000000
```

And, as always, the extraneous bytes at the end can be removed with strip. I made a few attempts to get the source shorter, but always ended up getting either a syntax error or SIGBUS.

# Matlab - Yes it is possible!

In a response to a question of mine, Amro came up with this quirk:

```
S = struct();
S = setfield(S, {}, 'g', {}, 0)
```

• Please give Matlab version -- R2015B (and 2016B also) just throws an error: Error using setfield (line 56) At least one index is required. – Florian Castellane Nov 28 '16 at 8:24
• @FlorianCastellane Not able to try all versions now, but it has been confirmed to give a segfault in quite some versions, the latest being 2014b and the earliest 2012a. – Dennis Jaheruddin Nov 29 '16 at 16:13

# C - 14 chars

Be sure to compile an empty file with

```
cc -nostartfiles c.c
```

### Explanation:

What went wrong is that we treated _start as if it were a C function, and tried to return from it. In reality, it's not a function at all. It's just a symbol in the object file which the linker uses to locate the program's entry point. When our program is invoked, it's invoked directly. If we were to look, we would see that the value on the top of the stack was the number 1, which is certainly very un-address-like. In fact, what is on the stack is our program's argc value. After this comes the elements of the argv array, including the terminating NULL element, followed by the elements of envp. And that's all. There is no return address on the stack.
• I'm pretty sure you have to score with the additional args – Blue Dec 16 '16 at 12:36
• You have to add 14 bytes for the special flag. – Erik the Outgolfer Dec 16 '16 at 12:38
• @ErikGolferエリックゴルファー `-nostartfiles` is actually 13 bytes long :) – Charles Paulet Dec 16 '16 at 13:34
• @CharlesPaulet I think you have to count the space too. – Erik the Outgolfer Dec 16 '16 at 13:37
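The runaway recursion described in the Pyth answer above can be sketched in plain Python (the helper name is hypothetical; this is an illustration, not the interpreter's actual code). Under Python's default recursion limit the loop is caught as a RecursionError; the Pyth interpreter raises the limit to 100,000 with `sys.setrecursionlimit`, deep enough to exhaust the C stack and segfault instead:

```python
def find_power(base, n):
    # Square the base until it is at least as large as n -- the step the
    # Pyth explanation describes. With base 0 (or 1), base * base never
    # grows, so the recursion can never terminate.
    if base >= n:
        return base
    return find_power(base * base, n)

print(find_power(2, 10))  # a sane base terminates: prints 16

try:
    find_power(0, 1)  # "convert 1 to base 0": recurses forever
except RecursionError:
    print("caught RecursionError at the default limit")
```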
# Steenrod duality An isomorphism between the $p$-dimensional homology group of a compact subset $A$ of the sphere $S^n$ and the $(n-p-1)$-dimensional cohomology group of the complement (the homology and cohomology groups are the reduced ones). The problem was examined by N. Steenrod [1]. When $A$ is an open or closed subpolyhedron, the same isomorphism is known as Alexander duality, and for any open subset $A$ as Pontryagin duality. The isomorphism $$H_p^c(A;G)=H^{n-p-1}(S^n\setminus A;G)$$ also holds for an arbitrary subset $A$ (Sitnikov duality); here the $H_p^c$ are the Steenrod–Sitnikov homology groups with compact supports, and the $H^q$ are the Aleksandrov–Čech cohomology groups. Alexander–Pontryagin–Steenrod–Sitnikov duality is a simple consequence of Poincaré–Lefschetz duality and of the exact sequence of a pair. It is correct not only for $S^n$, but also for any manifold which is acyclic in dimensions $p$ and $p+1$.
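The derivation can be sketched as follows (an outline only, for degrees $0<p<n$, with reduced groups and coefficients in $G$ throughout). Duality for the compact set $A\subset S^n$ gives
$$H_p^c(A;G)\cong H^{n-p}(S^n,S^n\setminus A;G),$$
and in the exact cohomology sequence of the pair $(S^n,S^n\setminus A)$,
$$H^{n-p-1}(S^n;G)\to H^{n-p-1}(S^n\setminus A;G)\to H^{n-p}(S^n,S^n\setminus A;G)\to H^{n-p}(S^n;G),$$
both outer terms vanish, since the reduced cohomology of $S^n$ is concentrated in degree $n$ and $n-p-1<n-p<n$ when $p>0$. Combining the two isomorphisms yields the duality stated above.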
Centrioles are highly structured organelles whose size is remarkably consistent within any given cell type. New centrioles are born when Polo-like kinase 4 (Plk4) recruits Ana2/STIL and Sas-6 to the side of an existing “mother” centriole. These two proteins then assemble into a cartwheel, which grows outwards to form the structural core of a new daughter. Here, we show that in early Drosophila melanogaster embryos, daughter centrioles grow at a linear rate during early S-phase and abruptly stop growing when they reach their correct size in mid- to late S-phase. Unexpectedly, the cartwheel grows from its proximal end, and Plk4 determines both the rate and period of centriole growth: the more active the centriolar Plk4, the faster centrioles grow, but the faster centriolar Plk4 is inactivated and growth ceases. Thus, Plk4 functions as a homeostatic clock, establishing an inverse relationship between growth rate and period to ensure that daughter centrioles grow to the correct size. How organelles grow to the right size is a fundamental problem in cell biology (Marshall, 2016). For many organelles, however, this question is difficult to address: the number and distribution of an organelle within a cell can vary, and it can also be difficult to determine whether an organelle’s surface area, volume, or perhaps the amount of a limiting component, best defines its size. Centrioles are highly structured organelles that form centrosomes and cilia (Nigg and Raff, 2009). Their length can vary by an order of magnitude between different species and tissues but is very consistent within a given cell type. Centrioles are potentially an attractive system with which to study organelle size control (Goehring and Hyman, 2012; Marshall, 2016), as their numbers are precisely regulated: most cells are born with a single centriole pair that is duplicated once per cell cycle, when a single daughter centriole grows outwards from each mother centriole during S-phase. 
Moreover, the highly ordered structure of the centriole means that the complex 3D question of organelle size control can be simplified to a 1D question of daughter centriole length control. Much progress has been made recently in understanding the molecular mechanisms of centriole duplication (Fırat-Karalar and Stearns, 2014; Arquint and Nigg, 2016; Banterle and Gönczy, 2017). Polo-like kinase 4 (Plk4) initiates duplication and is first recruited in a ring surrounding the mother centriole; this ring ultimately resolves into a single “dot” that marks the site of daughter centriole assembly (Sonnen et al., 2012; Kim et al., 2013; Ohta et al., 2014). Plk4 recruits and phosphorylates Ana2/STIL, which helps recruit Sas-6 to initiate the assembly of the ninefold-symmetric cartwheel that forms the structural backbone of the growing daughter centriole (Kitagawa et al., 2011; van Breugel et al., 2011; Dzhindzhev et al., 2014; Ohta et al., 2014; Kratz et al., 2015). How Plk4 is ultimately localized to a single site on the side of the mother is unclear, but Plk4 can dimerize and autophosphorylate itself in trans to trigger its own destruction (Rogers et al., 2009; Guderian et al., 2010; Holland et al., 2010; Cunha-Ferreira et al., 2013). In addition, binding to Ana2/STIL activates Plk4’s kinase activity (Moyer et al., 2015) and also appears to stabilize Plk4 (Ohta et al., 2014; Arquint et al., 2015). Thus, the binding of Plk4 to Ana2/STIL at a single site on the side of the mother could activate and protect the kinase at this site, whereas the remaining Plk4 around the mother centriole is degraded (Ohta et al., 2014; Moyer et al., 2015; Arquint and Nigg, 2016; Banterle and Gönczy, 2017). Although these studies provide important insight into how mother centrioles grow only a single daughter, the question of how daughter centrioles subsequently grow to the correct length has been difficult to address (Winey and O’Toole, 2014). 
This is in part because centrioles are small structures (usually 100–500 nm in length), making it hard to directly monitor the kinetics of centriole growth. Also, cells usually only assemble two daughter centrioles per cell cycle, and this makes it difficult to measure centriole growth in a quantitative manner. The early Drosophila melanogaster embryo is an established model for studying centriole and centrosome assembly (Lattao et al., 2017), and it is potentially an attractive system for measuring the kinetics of daughter centriole growth. First, it is a multinucleated single cell (a syncytium) that undergoes 13 rounds of nearly synchronous, rapid nuclear divisions (Foe and Alberts, 1983). During nuclear cycles 10–14, the majority of nuclei (and their associated centrioles) form a monolayer at the cortex, allowing the simultaneous observation of many centrioles as they rapidly and synchronously progress through repeated rounds of S-phase and mitoses without intervening gap phases (Foe and Alberts, 1983). Second, centrioles in flies are structurally simpler than those in vertebrates (Callaini and Riparbelli, 1990; Moritz et al., 1995; Callaini et al., 1997). All centrioles start to assemble around the cartwheel in S-phase, but vertebrate centrioles often exhibit a second phase of growth during G2/M, when the centriolar microtubules (MTs) extend past the cartwheel (Kuriyama and Borisy, 1981; Chrétien et al., 1997). Fly centrioles usually do not exhibit this second phase of growth, so the centrioles are relatively short, and the cartwheel extends throughout the length of the daughter centriole (González et al., 1998; Lattao et al., 2017). We reasoned, therefore, that the fluorescence incorporation of the cartwheel components Sas-6-GFP or Ana2-GFP could potentially be used as a proxy to measure daughter centriole length in D. melanogaster embryos. 
We show here that this is the case, and we provide the first quantitative description of the kinetics of daughter centriole growth in a living cell. Our findings reveal an unexpected inverse relationship between the centriole growth rate and growth period: in embryos where daughter centrioles tend to grow slowly, they tend to grow for a longer period. Surprisingly, Plk4 influences both the centriole growth rate and growth period and helps coordinate the inverse relationship between them. Thus, Plk4 functions as a homeostatic clock that helps to ensure daughter centrioles grow to the correct size in fly embryos.

### The cartwheel protein Sas-6 is incorporated irreversibly into growing daughter centrioles

To establish an assay to monitor centriole growth kinetics, we tested whether Sas-6 and/or Ana2 are stably incorporated into growing daughter centrioles. We generated transgenic lines expressing either Sas-6-GFP or Ana2-GFP under the control of their own promoters and in their respective mutant backgrounds (Fig. S1, A and B). These fusion proteins significantly rescued the severe uncoordinated phenotype (caused by the lack of centrioles) in their respective mutant backgrounds (Fig. S1, C and D; and Videos 1 and 2). 3D structured illumination microscopy (SIM) FRAP (Conduit et al., 2015) revealed that Sas-6-GFP and Ana2-GFP were both incorporated into growing daughter centrioles during S-phase; but, whereas Ana2-GFP turned over at the mother centrioles, Sas-6-GFP did not (Fig. 1). Thus, during the time course of these experiments, Sas-6-GFP is incorporated exclusively and irreversibly into the growing daughter centriole, making it a suitable marker to monitor daughter centriole growth kinetics.
### Centrioles grow approximately linearly during early S-phase but abruptly stop growing in mid-to-late S-phase

We analyzed the kinetics of Sas-6-GFP incorporation into daughter centrioles in individual Sas-6 mutant embryos during nuclear cycle 12, using spinning-disk confocal microscopy to track Sas-6-GFP foci (comprising the mother and growing daughter centriole, which cannot be resolved on this microscope system) from early S-phase, when the two separating mother centrioles first become visible as distinct entities (Fig. 2 A and Video 3). When measured at a single centriole pair, Sas-6-GFP levels tended to increase over time, but the data were noisy and it was difficult to discern a consistent pattern (Fig. 2 B). When measurements from >100 centriole pairs from the same embryo were averaged, however, a clear pattern emerged (Fig. 2 C). We used regression analysis to fit the mean data from each embryo to several different growth models (Figs. S2 and S3; see Materials and methods): centriole growth was usually best described by a period of linear growth in early S-phase that plateaued in mid-to-late S-phase, presumably when the growing centrioles had reached their correct size, and this plateau continued into mitosis. A similar pattern of "linear growth in S-phase followed by a plateau in mitosis" has been reported previously for GFP:SAS-6 in the early worm embryo, suggesting that the dynamics of Sas-6 incorporation are likely to be conserved (Dammermann et al., 2008). From the fitted data, we extracted several growth parameters (Fig. 2 D) and used these to generate an average centriole growth profile derived from ∼1,100 individual centrioles tracked throughout S-phase in 15 individual embryos (Figs. 3 and S3).
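The "linear growth then plateau" fit described above can be illustrated with a minimal sketch (not the authors' analysis code; only the model form follows the text): a continuous linear-plateau curve whose breakpoint is grid-searched over the sampled time points, with the slope and offset obtained by ordinary least squares at each candidate breakpoint.

```python
def fit_linear_plateau(t, y):
    """Fit y ~= a*min(t, T) + b: linear growth at rate a until the
    breakpoint T (the growth period), then a plateau at level a*T + b."""
    best = None
    for T in t[1:-1]:  # candidate breakpoints at the sampled times
        x = [min(ti, T) for ti in t]
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        if sxx == 0:
            continue
        a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
        b = my - a * mx
        sse = sum((a * xi + b - yi) ** 2 for xi, yi in zip(x, y))
        if best is None or sse < best[0]:
            best = (sse, a, b, T)
    _, a, b, T = best
    return a, b, T  # growth rate, offset, growth period

# Synthetic growth curve (arbitrary units): rate 2.0 until t = 5, then flat
ts = [i * 0.5 for i in range(21)]
ys = [2.0 * min(ti, 5.0) for ti in ts]
rate, offset, period = fit_linear_plateau(ts, ys)
print(rate, period)  # recovers 2.0 and 5.0
```

On noisy averaged data, one would compare this model's residuals against the alternative growth models mentioned in the text before extracting the parameters.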
### Centriole length is regulated by a homeostatic inverse relationship between growth rate and period

We wondered whether the parameters of centriole growth would change as S-phase length gradually increased during embryogenesis (Foe and Alberts, 1983), so we compared the parameters of centriole growth during nuclear cycles 11–13 (Fig. 3 A). As S-phase length increased (Fig. 3 Bi), the centriole growth rate slowed (Fig. 3 Bii), but this was precisely compensated for by an increase in the growth period (Fig. 3 Biii), and thus daughter centrioles ultimately incorporated the same amount of Sas-6-GFP during each nuclear cycle, indicating that they had grown to the same length (Fig. 3 Biv). Interestingly, a similar inverse relationship between growth rate and growth period was observed between embryos that were at the same nuclear cycle (Fig. 3 C). These observations raised the intriguing possibility that centriole growth in these embryos is regulated by a homeostatic mechanism, whereby an inverse relationship between the growth rate and period ensures that daughter centrioles grow to a consistent size.

### The rate and period of centriole growth are not simply determined by S-phase length

Perhaps surprisingly, there was no significant correlation between the length of S-phase and the centriole growth rate or growth period in embryos at the same nuclear cycle (Fig. 4, A and B). To more directly test whether S-phase length influences the parameters of centriole growth, we genetically manipulated S-phase length by either halving the genetic dose of cyclin B, which leads to an increase in S-phase length (Ji et al., 2004), or by halving the genetic dose of the DNA checkpoint protein Chk1 (grapes in flies), which leads to a decrease in S-phase length (Sibon et al., 1997; Fig. 4, C and D). Although there was a ∼24% difference in the length of S-phase between these two sets of embryos, the parameters of centriole growth were not significantly altered (Fig.
4 D), demonstrating that S-phase length does not directly determine the centriole growth rate or period.

### Plk4 influences the rate and period of daughter centriole growth

Plk4 determines the site of daughter centriole assembly (Arquint and Nigg, 2016; Banterle and Gönczy, 2017), so we wondered whether it might also influence the parameters of daughter centriole growth. We generated a likely null allele of Plk4 (Fig. 5 and Video 4), and then monitored Sas-6-GFP incorporation in early embryos in which we halved the genetic dose of Plk4 (hereafter, Plk41/2 embryos; Fig. 6 A). Plk41/2 embryos hatched at normal rates (Fig. S4), and we observed no centriole duplication defects in our analysis of >1,300 duplication events. Compared with WT embryos, however, the daughter centrioles in Plk41/2 embryos grew more slowly, but for a longer period of time; as a result, the centrioles ultimately grew to their normal size (Fig. 6, A and B). These findings support our conclusion that there is a homeostatic inverse relationship between the growth rate and growth period that ensures daughter centrioles grow to a consistent size. Moreover, they suggest that Plk4 influences both the rate and period of daughter centriole growth, and so helps to establish this inverse relationship. We next tested the effect of either doubling the genetic dosage of Plk4 (Plk42X) or expressing a previously described mutated form of Plk4 with reduced kinase activity (Holland et al., 2010; Zitouni et al., 2016; Plk4RKA) in a WT background (Fig. 6, C–F). Both sets of embryos hatched at normal rates (Fig. S4), and we observed no centriole duplication defects in our analysis of >1,400 duplication events in each genotype. Surprisingly, however, both perturbations resulted in a significant decrease in daughter centriole size, but for different reasons: in Plk42X embryos, the centrioles grew at a normal rate but for a shorter period (Fig.
6, C and D), whereas in Plk4RKA embryos, the centrioles grew at a slower rate but for a normal period (Fig. 6, E and F). Thus, Plk4 appears to influence the daughter centriole growth rate and growth period independently, with the extra Plk4 influencing the growth period and reduced Plk4 kinase activity influencing the growth rate.

### A second assay to measure the parameters of daughter centriole growth

We wanted to confirm the kinetics of daughter centriole growth using an assay that was independent of Sas-6-GFP fluorescence incorporation. Our 3D-SIM analysis of the centrioles in early D. melanogaster embryos revealed that mother centrioles are usually oriented end-on to the cortex, so they appear as hollow rings (Fig. 7 A, Fig. S5, and Video 5). Thus, mother centrioles do not freely rotate in the z-axis, but rather adopt a relatively fixed orientation in reference to the cortex. We reasoned, therefore, that we could use the centriole distal-end binding protein GFP-Cep97 (Fig. 7 B) to measure the distance between the center of the mother centriole and the distal end of the growing daughter using Airyscan superresolution microscopy (Fig. 7 C). We acquired superresolution images of embryos expressing GFP-Cep97, determined the center of mass (COM) of the mother and of the distal end of the growing daughter, and measured the distance between them as S-phase progressed (note that all scoring was performed blind; Fig. 7, C and D). This method produced centriole growth curves similar to the Sas-6-GFP incorporation assay in both WT and Plk41/2 embryos, confirming the kinetics of daughter centriole growth in WT embryos, and that daughter centrioles grow more slowly but for a longer period of time in Plk41/2 embryos, and so reach the same size as in WT embryos (Fig. 7 D).
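The COM-to-COM distance measurement can be illustrated with a toy computation (a sketch with made-up pixel patches and an assumed pixel size, not the actual Airyscan image-analysis pipeline): each spot's intensity-weighted centroid is computed, and the mother-daughter separation is the Euclidean distance between the two centroids, converted to nanometers via the pixel size.

```python
import math

PIXEL_NM = 40.0  # assumed pixel size for illustration, not from the paper

def center_of_mass(img):
    """Intensity-weighted centroid (row, col) of a 2D intensity patch."""
    total = sum(v for row in img for v in row)
    r = sum(i * v for i, row in enumerate(img) for v in row) / total
    c = sum(j * v for row in img for j, v in enumerate(row)) / total
    return r, c

def com_distance_nm(spot_a, spot_b):
    ra, ca = center_of_mass(spot_a)
    rb, cb = center_of_mass(spot_b)
    return math.hypot(rb - ra, cb - ca) * PIXEL_NM

# Toy patches: a hollow "mother ring" and a "daughter tip"
mother = [[0, 1, 0],
          [1, 0, 1],
          [0, 1, 0]]   # centroid (1, 1)
daughter = [[0, 0, 0],
            [0, 0, 0],
            [0, 0, 4]]  # centroid (2, 2)
print(com_distance_nm(mother, daughter))  # sqrt(2) px * 40 nm/px
```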
### Sas-6 incorporates into the proximal end of growing daughter centrioles

Our observation that Plk4 influences the parameters of daughter centriole growth led us to ask whether the centriole cartwheel grows by incorporating Sas-6 at its proximal or distal end or isotropically throughout its length (Fig. 8 A). We generated embryos expressing Sas-6-GFP and Asl-mCherry (a marker of the mother centriole; Novak et al., 2014) and allowed daughters to grow to approximately half their full size. We then acquired a superresolution fluorescence image (Fig. 8 A, T1), and measured the distance between the COM of the mother and daughter centrioles (Fig. 8 A, distance d1). We photobleached the Sas-6-GFP (Fig. 8 A, T2)—which unintentionally also bleached the Asl-mCherry, but this fluorescence rapidly recovers (Novak et al., 2014)—allowing us to calculate the COM of the mother centriole at subsequent time points. We allowed the daughter centrioles to grow and incorporate Sas-6-GFP for 1 min before acquiring a third superresolution image (Fig. 8 A, T3) and measuring the distance between the COM of the mother centriole and the newly incorporated Sas-6-GFP (Fig. 8 A, distance d2). This analysis revealed that in nonbleached controls d2 > d1, as expected, because the daughter centriole has grown in the time between T1 and T3. In bleached centrioles, however, d2 < d1, indicating that Sas-6-GFP is incorporated into the proximal end of the growing daughter (Fig. 8 B). As an additional control, we used the same strategy to assess whether the centriole distal-end binding protein GFP-Cep97 incorporated into the distal end of the centriole after bleaching (Fig. 8, C and D); this was the case, confirming that our methods are sensitive enough to distinguish proximal- and distal-end incorporation. It is currently unclear how Sas-6-GFP can incorporate into the proximal end of the growing cartwheel while the daughter centriole maintains an "engaged" connection to its mother.
### Plk4 levels oscillate at the centriole

To better understand how Plk4 might influence the parameters of centriole growth, we quantified its centriolar localization during the embryonic nuclear cycles. Plk4 is usually present at very low levels in cells (Bauer et al., 2016), and neither the localization of the endogenous protein nor its detection by Western blotting has been reported previously in flies. We therefore generated flies expressing Plk4-GFP under the control of the endogenous Plk4 promoter in the Plk4 mutant background, where it rescued the uncoordinated fly phenotype (Fig. S1 E and Video 6) and the centriole duplication defects (Fig. 5, D and E) observed in the Plk4 mutant. Centriolar Plk4-GFP levels oscillated in cycling early embryos: levels were lowest at metaphase, increased during late mitosis and early S-phase, and then abruptly started to decline in early-to-mid S-phase (Fig. 9, A and B; and Video 7). This decline was not caused by photobleaching, as the level of centriolar Plk4-GFP fluorescence started to increase again after metaphase of the next mitosis. As we cannot detect endogenous Plk4 by immunofluorescence, we cannot confirm that this localization reflects the localization of the endogenous Plk4. These mutant embryos that lack endogenous Plk4 but express Plk4-GFP can survive, however, demonstrating that Plk4-GFP can support the very rapid cycles of centriole duplication that are essential for early embryo development (Stevens et al., 2007; Varmark et al., 2007; Rodrigues-Martins et al., 2008). From these data, we infer that daughter centrioles can grow at a constant rate even as centriolar Plk4-GFP levels fluctuate (Fig. 9 B). This suggests that not all of the Plk4-GFP recruited to centrioles during S-phase directly promotes daughter centriole growth (see Discussion). Interestingly, slightly lower levels of Plk4-GFP were recruited to centrioles at each successive nuclear cycle (Fig.
9 C), potentially explaining, at least in part, why daughter centrioles grow more slowly but for a longer period of time at successive nuclear cycles. Moreover, the centriolar levels of Plk4-GFP at the start of S-phase appeared to influence the rate at which Plk4-GFP was recruited to, and then lost from, the centrioles: in nuclear cycle 11, for example, centriolar Plk4 levels were relatively high, and the rate of Plk4 recruitment to, and subsequent loss from, the centrioles was also relatively high; during nuclear cycle 13, centriolar Plk4 levels were relatively low, and the rate of Plk4 recruitment to, and subsequent loss from, the centrioles was also relatively low (Fig. 9 D). These findings suggest that the levels of centriolar Plk4 at the start of S-phase can influence how fast Plk4 is subsequently recruited to, and then lost from, the centrioles. Several models have been proposed to explain how daughter centrioles might grow to the correct size (Goehring and Hyman, 2012; Marshall, 2015, 2016; Banterle and Gönczy, 2017), but none of these have been tested, primarily because of the lack of a quantitative description of centriole growth kinetics. Our observations suggest an unexpected, yet relatively simple, model by which centriolar Plk4 might determine daughter centriole length in flies (Fig. 10). We propose that a small fraction of centriolar Plk4, perhaps the fraction bound to both Asl and Ana2 (Fig. 10, pink stars), influences both the rate of cartwheel growth (by determining the rate of Sas-6 and Ana2 recruitment to the centriole) and the period of cartwheel growth (by determining the rate of Plk4 recruitment to the centriole, and so how quickly centriolar Plk4 accumulates to trigger its own destruction). 
This model is consistent with our observation that daughter centrioles grow at a relatively constant rate even as centriolar levels of Plk4 fluctuate (indicating that the majority of Plk4 located at the centriole during S-phase is not directly promoting daughter centriole growth) and that centriolar Plk4 levels appear to influence the rate at which Plk4 is accumulated at centrioles (suggesting that Plk4 can recruit itself, either directly or indirectly, to centrioles). In this model, Plk4 functions as a homeostatic clock, regulating both the rate and period of daughter centriole growth, and ensuring an inverse relationship between them: the more “active” the Plk4, the faster the daughters grow, but the faster Plk4 is recruited and so inactivated. The activity of this Plk4 fraction is probably a function of both the total amount of Plk4 in this fraction and its kinase activity. We speculate that this activity is determined before the start of S-phase by a complex web of interactions between Plk4, Ana2, Sas-6, and Asl that influence each other’s recruitment and stability and also, directly or indirectly, Plk4’s kinase activity (Ohta et al., 2014; Arquint et al., 2015; Klebba et al., 2015; Moyer et al., 2015). These interactions are likely to be regulated by external factors (such as the basic cell cycle machinery), allowing cells to set centriole growth parameters according to their needs. In cells with a G1 period, for example, Plk4 could be activated as cells progress from mitosis into G1, allowing the mother centriole to recruit an appropriate amount of Sas-6 and Ana2/STIL at this stage, which could then be incorporated into the cartwheel when cells enter S-phase. This could explain why in some somatic cells Plk4 levels appear to be higher during mitosis/G1 than in S-phase (Rogers et al., 2009; Sillibourne et al., 2010; Ohta et al., 2014), and why Plk4 kinase activity appears to be required primarily during G1, rather than S-phase (Zitouni et al., 2016). 
This model can explain why halving the dose of Plk4 leads to a decrease in the growth rate and an increase in the growth period: halving the dose of Plk4 would be predicted to lower both the kinase activity of centriolar Plk4 (so slowing the growth rate) and the amount of centriolar Plk4 (so increasing the growth period). It can also potentially explain why doubling the dose of Plk4 might change the growth period without changing the growth rate: increasing the dose could lead to an increased rate of Plk4 recruitment (because of its increased cytoplasmic concentration), without increasing the amount or kinase activity of the Plk4 fraction bound to Asl or Ana2 (if these were already near saturation). Finally, it could explain why decreasing the kinase activity of Plk4 decreases the rate of growth without changing the growth period: the decrease in Plk4 kinase activity might affect the rate at which it recruits Ana2/Sas-6 without affecting the amount of centriolar Plk4, and so the rate at which Plk4 recruits itself to centrioles (Fig. 10). Importantly, although the cartwheel extends throughout the entire length of the daughter centriole in worms and flies, this is not the case in vertebrates, where centrioles exhibit a second phase of growth during G2/M and the centriolar MTs grow to extend beyond the cartwheel (Kuriyama and Borisy, 1981; Chrétien et al., 1997). We suspect that the homeostatic clock mechanism we describe here may regulate the initial phase of centriole/cartwheel growth in all species, but the subsequent extension of the daughter centriole beyond the cartwheel that occurs in vertebrates will likely require a separate regulatory network. To our knowledge, the concept of a homeostatic clock regulating organelle size has not been proposed previously. 
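The arithmetic at the heart of such a clock can be caricatured in a few lines (an illustrative toy with arbitrary units and made-up parameters, not a fit to the data): the same Plk4 "activity" sets both the elongation rate and the speed at which Plk4 accumulates toward the threshold that triggers its own destruction, so the product of rate and period, and hence the final length, is independent of activity.

```python
def final_length(plk4_activity, k_grow=2.0, k_recruit=0.5, threshold=10.0):
    """Toy homeostatic clock (arbitrary units; parameters made up).
    Growth rate scales with Plk4 activity, and so does the rate at which
    Plk4 accumulates toward the self-destruction threshold that ends
    the growth period."""
    rate = k_grow * plk4_activity                      # length per unit time
    period = threshold / (k_recruit * plk4_activity)   # time to threshold
    return rate * period                               # final daughter length

# Fourfold changes in activity move rate and period in opposite
# directions, but every daughter ends up the same length:
print([final_length(a) for a in (0.5, 1.0, 2.0, 4.0)])  # [40.0, 40.0, 40.0, 40.0]
```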
This mechanism is plausible for Plk4, because it can behave as a "suicide" kinase: the more active it is, the faster it will trigger its own inactivation (Rogers et al., 2009; Guderian et al., 2010; Holland et al., 2010; Cunha-Ferreira et al., 2013). This mechanism relies on delayed negative feedback, a principle that helps set both the circadian clock (Knippschild et al., 2005) and the somite segmentation clock (Lewis and Ozbudak, 2007). A similar mechanism might operate with other kinases that influence organelle biogenesis and whose activity accelerates their own inactivation, such as PKC, which regulates lysosome biogenesis (Feng and Hannun, 1998; Gould and Newton, 2008; Li et al., 2016). It will be interesting to determine whether homeostatic clock mechanisms that rely on delayed negative feedback could regulate organelle size more generally.

### D. melanogaster stocks and husbandry

The specific D. melanogaster stocks used in this study are listed in Table S1, and the lines generated and tested here are listed in Table S2. Sas-6-, Ana2-, and Plk4-GFP constructs were made by cloning the genetic region of Sas-6, ana2, and PLK4, respectively, from 2 kb upstream of the start codon up to (but excluding) the stop codon, into pDONR-Zeo vector (Thermo Fisher Scientific), which was then recombined with the pUAST-GFPCT vector (pTWG-1076; Drosophila Genomics Resource Centre) via Gateway technology (Thermo Fisher Scientific). To generate a Plk4 construct without any fluorescent tag, the endogenous Plk4 stop codon was introduced to the Plk4 pDONR (just described) by site-directed mutagenesis, using the Quikchange II XL mutagenesis kit (Agilent Technologies). To generate a Plk4 construct with reduced kinase activity (Plk4RKA), the L89G substitution (Holland et al., 2010; Zitouni et al., 2016) was introduced into the Plk4 coding sequence by site-directed mutagenesis, using the Quikchange II XL mutagenesis kit (Agilent Technologies).
Both constructs were then recombined with the pUAS-G-empty vector (this study; details available on request). The primer sequences used in the Gateway cloning for each gene are listed in Table S3. The primers used to generate the Plk4-RKA construct were provided by L. Gartenmann (University of Oxford, Oxford, England, UK). Transgenic lines were generated by the Fly Facility in the Department of Genetics, University of Cambridge (Cambridge, England, UK). The Plk4Aa74 allele was generated by inducing an imprecise excision of the P-element P{GSV1}GS3043 (Kyoto Stock Center, Kyoto Institute of Technology) located 996 bp upstream of the PLK4 start codon. The resulting 1,751-bp deletion (Fig. 5 A) removed the first third of the coding sequence, including all sequences encoding the kinase domain (Fig. 5 B). Plk4Aa74 mutant flies were viable but uncoordinated, unlike the hypomorphic Plk4c06612 flies that retained some coordination (Fig. 5 C and Video 4). The uncoordinated phenotype of Plk4Aa74 mutant flies was rescued by the expression of GFP-Plk4 under the control of the Ubq promoter and by the expression of Plk4-GFP under the control of its own endogenous promoter (Fig. 5 C and Video 6). Plk4Aa74 mutant third-instar larval brains expressing GFP-PACT had no detectable centrioles, a phenotype that was rescued by the expression of both GFP-Plk4 (Fig. 5, D and E) and Plk4-GFP (not depicted) under the control of the Ubq promoter or of its endogenous promoter, respectively, whereas the hypomorphic Plk4c06612 allele had reduced centriole numbers. Thus, the Plk4Aa74 allele appears to be a null or a very strong hypomorph. Flies were maintained at 18°C or 25°C on Drosophila culture medium (0.77% agar, 6.9% maize, 0.8% soya, 1.4% yeast, 6.9% malt, 1.9% molasses, 0.5% propionic acid, 0.03% ortho-phosphoric acid, and 0.3% nipagin) in vials or bottles. For embryo collections, 25% cranberry-raspberry juice plates (2% sucrose and 1.8% agar with a drop of yeast suspension) were used.
Embryos studied in our imaging experiments were 0–1-h collections at 25°C, which were then aged at 25°C for 45–60 min. Before imaging, embryos were dechorionated by hand, mounted on a strip of glue painted on a 35-mm glass-bottom Petri dish with 14 mm micro-well (MatTek), and were left to desiccate for 1 min at 25°C. After desiccation, the embryos were covered with Voltalef grade H10S oil (Arkema).

### Behavioral assays

#### Hatching experiments

To examine the quality of the embryonic development for various fly strains generated in this study, 0–1-h collected embryos were aged for 24 h, and the percentage of the embryos that had hatched out of their chorion was calculated. For this assay, six technical repeats were performed over several days.

#### Negative gravitaxis experiments

A standard negative gravitaxis assay (Ma and Jarman, 2011; Pratt et al., 2016) was used to assess the climbing reflexes of Sas-6, ana2, and Plk4 null mutant flies rescued, respectively, with Sas-6-GFP, Ana2-GFP, and Plk4-GFP. In brief, in each of three technical repeats, 1–4-d-old adult male flies (n = ∼15 per repeat) were sharply tapped to the bottom of a cylinder, and the maximum distance climbed by individual flies within the first 10 s after tapping was measured. The distances were calculated using Fiji (ImageJ).

### Immunoblotting

0–3-h embryos were collected at 25°C. They were chemically dechorionated and fixed with methanol as previously described (Stevens et al., 2010). Fixed embryos were preserved overnight at 4°C and rehydrated by washing in PBT (PBS and 0.1% Triton X-100) three times for 15 min. Using a bright-field dissecting microscope (with 50× magnification), 40 embryos were selected in a volume of 20 µl PBT, and this was mixed with 20 µl of 2× SDS loading buffer. The sample was lysed at 95°C for 10 min, and 10 µl suspension/sample was loaded onto 3–8% Tris-acetate precast gels for SDS-PAGE (Thermo Fisher Scientific).
Proteins were then transferred to nitrocellulose membrane for Western blotting. The membrane was blocked using 4% milk powder in PBS and 0.1% Tween 20 for 1 h. Primary antibodies used in this study were as follows: anti–Sas-6 (rabbit; Peel et al., 2007), anti-Ana2 (rabbit; Stevens et al., 2010), anti-Cnn (rabbit; Dobbelaere et al., 2008), and anti-GFP (mouse; Roche), all used at 1:500 in blocking solution. The incubation period for primary antibodies was 1 h. Secondary antibodies used for immunoblotting were HRPO-linked anti-rabbit IgG and HRPO-linked anti-mouse IgG (both GE Healthcare), diluted 1:3,000 in blocking solution. The incubation period for secondary antibodies was 45 min, after which membranes were washed with three changes of TBST (TBS and 0.1% Tween 20) for 45 min. Membranes were incubated in SuperSignal West Femto Maximum Sensitivity Substrate (Thermo Fisher Scientific; 1:1 mix, diluted 1:2, 1:6, 1:10, and 1:5 in water for Sas-6, Ana2, Cnn, and GFP, respectively). Finally, membranes were exposed to film using exposure times that ranged from <1 to 30 s.

### Immunofluorescence

#### Immunostaining on late pupal testes and larval brains

Late pupal testes from the Ubq-GFP-CG3980 (GFP-Cep97) strain and third-instar larval brains from the GFP-PACT strain were dissected in PBS on a silicon dish and fixed by incubation in 4% formaldehyde in PBS for 20 min (for brains) or 30 min (for testes). As an additional step, fixed brains were transferred to a drop of 45% acetic acid for 15 s and then to a drop of 60% acetic acid for 3 min. This was followed by squashing the testes in between two coverslips of thicknesses 1 and 1.5 (silicon-dipped), which had been placed between two pieces of Whatman filter paper (GE Healthcare). In the case of brains, the squashing step was done using a 1.5-thickness coverslip and a slide. After squashing, the sample was snap-frozen in liquid nitrogen for 10 min.
Once taken out of the liquid nitrogen, the 1-thickness coverslip was carefully removed from either the 1.5-thickness coverslip (for testes) or the slide (for brains) using a razor blade. The samples were incubated in 100% cold methanol at −20°C for 5 min (for brains) or 100% cold ethanol at −20°C for 15 min (for testes) and were transferred to a Petri dish containing PBST (1% Tween-20 [Sigma-Aldrich] in PBS) to be incubated for 10 min. For testes, this was followed by three washes in PBS for 5 min and by incubation in PBST containing 1:500 guinea pig polyclonal anti-Asl primary antibody (Roque et al., 2012) overnight at 4°C in a humid chamber. The sample was then washed three times with PBS for 5 min and incubated in PBST containing 1:500 anti–guinea pig IgG Alexa Fluor 568 secondary antibody (Thermo Fisher Scientific) and 1:500 GFP-booster coupled to Atto 488 (ChromoTek) at room temperature for 3 h. For brains, the sample was stained for 10 min in Hoechst 33258 (Thermo Fisher Scientific). After washing three times in PBS for 15 min, the sample was mounted in mounting medium.

#### Immunostaining on embryos

For immunostaining on embryos, 0–2-h-old Oregon-R WT samples were collected and dechorionated in 60% bleach for 2 min. This was followed by a wash in 0.05% Triton X-100 in distilled water. Embryos were then washed into small glass ampules containing 100% heptane and were shaken gently with 3% 0.5 M EGTA in pure methanol. Samples were stored in methanol at 4°C. Refrigerated samples were rehydrated by a wash in PBT (0.1% Triton X-100 in PBS) and blocked in 5% BSA (in PBS) for 1 h at room temperature. Blocking was followed by incubating the samples in 5% BSA (in PBS) containing 1:500 guinea pig anti-Asl (Roque et al., 2012) or 1:500 rabbit anti-PLP (Martinez-Campos et al., 2004) overnight at 4°C.
Samples were then washed with PBT before a 3-h incubation in 5% BSA (in PBS) containing 1:1,000 anti-guinea pig IgG Alexa Fluor 488 (Thermo Fisher Scientific) or 1:1,000 anti-rabbit IgG Alexa Fluor 594 (Thermo Fisher Scientific) secondary antibodies. After washing three times in PBT, each for 15 min, samples were mounted using Vectashield mounting medium containing DAPI (Vector Laboratories) onto microscopy slides with high-precision glass coverslips (CellPath).

### Image acquisition, processing, and analysis

#### 3D SIM

Living embryos were imaged at 21°C using a DeltaVision OMX V3 Blaze microscope (GE Healthcare). The system was equipped with a 60×/1.42-NA oil UPlanSApo objective (Olympus Corp.), 488- and 593-nm diode lasers, and Edge 5.5 sCMOS cameras (PCO). Spherical aberration was reduced by matching the refractive index of the immersion oil (1.514) to that of the embryos. 3D-SIM image stacks consisting of six slices at 0.125-µm intervals were acquired in five phases and from three angles per slice. The raw acquisition was reconstructed using softWoRx 6.1 (GE Healthcare) with a Wiener filter setting of 0.006 and channel-specific optical transfer functions. For two-color 3D-SIM, images from the green and red channels were registered with the alignment coordination information obtained from calibrations using 0.2-µm-diameter TetraSpeck beads (Thermo Fisher Scientific) in the OMX Editor software. The SIMcheck plug-in in ImageJ (National Institutes of Health) was used to assess the quality of the SIM reconstructions (Ball et al., 2015). To carry out the FRAP experiments in 3D-SIM, the software development kit (GE Healthcare) was used. Settings for the sequence of events needed for FRAP experiments were as follows: (1) acquisition of a single z-stack in 3D-SIM (Fig. 1, pre-bleach); (2) multispot photobleaching (by the OMX galvo scanner TIRF/photo kinetics module); (3) acquisition of the photobleached image (Fig.
1, bleach); and (4) acquisition of the time-lapse images in 3D-SIM (Fig. 1, post-bleach). Assessment of the mother centriole orientation in early embryos (Fig. S5) was performed by visually scoring the percentage of mother centrioles that formed clear hollow rings when stained with anti-Asl or anti-PLP antibodies.

#### Spinning disk confocal microscopy

Living embryos were imaged at 21°C using a PerkinElmer ERS Spinning Disk confocal system on a Zeiss Axiovert 200M microscope. The system was equipped with a Plan-Apochromat 63×/1.4-NA oil DIC lens. 488- and 568-nm lasers were used to excite GFP and mCherry, respectively (using fast-sequential mode for GFP only, and emission discrimination mode for GFP and mCherry together). Confocal sections of 13 slices with 0.5-µm intervals were collected every 30 s. Focus was occasionally readjusted in between the 30-s intervals. In embryos expressing Jupiter-mCherry and Plk4-GFP in a Plk4Aa74 homozygous background (data presented in Fig. 9 A), the fluorescent signal was too faint to be properly visualized on this system. We therefore used a system equipped with an EM-CCD Andor iXon+ camera on a Nikon Eclipse TE200-E microscope with a Plan-Apochromat 60×/1.42-NA oil DIC lens, controlled with Andor IQ2 software. Confocal sections of 17 slices with 0.5-µm intervals were collected every 30 s at 21°C. Postacquisition image processing was performed using Fiji (National Institutes of Health). Maximum-intensity projections of the images were first bleach-corrected with Fiji’s exponential fit algorithm, and then the backgrounds were subtracted using the subtract background function with a rolling ball radius of 10 pixels. Centrioles (Sas-6-GFP foci) were tracked using TrackMate (Tinevez et al., 2017), a plug-in of Fiji, with the following analysis settings: track spot diameter of 1.1 µm, initial threshold of >0.02, and quality of >0.07.
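The bleach-correction step described above divides each intensity trace by a fitted exponential decay. A minimal Python sketch of this kind of correction is given below; the actual processing was done with Fiji's built-in tools, so the function name and synthetic data here are purely illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def bleach_correct(trace, times):
    """Divide an intensity trace by a fitted exponential decay,
    mirroring the idea behind Fiji's 'exponential fit' bleach correction."""
    def expdecay(t, a, k, c):
        return a * np.exp(-k * t) + c
    popt, _ = curve_fit(expdecay, times, trace,
                        p0=(trace[0], 0.1, 0.0), maxfev=10000)
    fit = expdecay(times, *popt)
    # rescale so the intensity at t = 0 is preserved
    return trace * (fit[0] / fit)

# synthetic example: a constant true signal bleached exponentially
t = np.arange(0, 30, 0.5)
observed = 100.0 * np.exp(-0.05 * t)   # true signal is 100 throughout
corrected = bleach_correct(observed, t)
```

After correction, the trace is flat again, so downstream intensity comparisons across time points are not biased by photobleaching.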
Centriole growth regression curves were made using Prism 7 (GraphPad Software), and the mathematical modeling was done using the nonlinear regression (curve fit) analysis function. Growth curves in S-phase and mitosis were modeled discontinuously, as we reasoned that these two phases of the cell cycle are two separate entities (when the data were modeled continuously, there was no statistically significant difference between the two ways of modeling). For S-phase, the data were initially fitted against three different functions to assess the most suitable model: linearity (or linear growth followed by a plateau), one-phase association (parabola), and sigmoidal (Fig. S2); among these models, linearity (or linear growth followed by a plateau) best fit the data. Thus, the data were modeled using two separate functions for S-phase: (1) linear growth and (2) linear growth followed by a plateau (Fig. 2 C). The latter function is an in-house algorithm in which a linear fit is tested against having a point of inflection at any point in S-phase, after which the growth is constant. The equation is described as follows, where b represents the intercept of the line that leads to the point of inflection (x0, y1) after x amount of time with a slope of m:

$$y_1 = m \cdot x + b$$
$$y_{x_0} = m \cdot x_0 + b$$
$$y_2 = y_{x_0}$$
$$y = \begin{cases} y_1, & x < x_0 \\ y_2, & x \ge x_0 \end{cases}$$

The only constraint applied to this equation was that m and x0 must be greater than 0. For mitosis, the data were fitted against two separate functions: (1) linear growth and (2) constant (Fig. 2 D). In both the S-phase and mitosis analyses, centrioles that come from a single embryo were treated as internal replicates, and thus the fitting was done considering only the mean y value of each time point (in contrast to considering each replicate y value as an individual point). To judge and control the quality and precision of regression (goodness of fit), we used the R2 and absolute sum-of-squares values as well as applying the runs test.
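To illustrate the linear versus linear-plus-plateau comparison, the sketch below fits both nested models with SciPy and compares them with an extra sum-of-squares F test. This is a hedged sketch, not the Prism implementation: the function names, starting values, and synthetic growth curve are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import f as f_dist

def linear(x, m, b):
    return m * x + b

def linear_plateau(x, m, b, x0):
    # grows as m*x + b until the inflection point x0, then stays constant
    return np.where(x < x0, m * x + b, m * x0 + b)

def extra_ss_f_test(y, yhat_simple, p_simple, yhat_complex, p_complex):
    """Extra sum-of-squares F test between nested models."""
    ss1 = np.sum((y - yhat_simple) ** 2)
    ss2 = np.sum((y - yhat_complex) ** 2)
    df1 = len(y) - p_simple
    df2 = len(y) - p_complex
    F = ((ss1 - ss2) / (df1 - df2)) / (ss2 / df2)
    return F, f_dist.sf(F, df1 - df2, df2)

# synthetic growth curve: linear rise that plateaus at x0 = 5
x = np.linspace(0, 10, 41)
y = linear_plateau(x, 2.0, 0.0, 5.0) + np.random.default_rng(0).normal(0, 0.1, x.size)

pl, _ = curve_fit(linear, x, y)
# constrain m > 0 and x0 > 0, as in the text
pp, _ = curve_fit(linear_plateau, x, y, p0=(1, 0, 4),
                  bounds=([0, -np.inf, 0], np.inf))
F, p = extra_ss_f_test(y, linear(x, *pl), 2, linear_plateau(x, *pp), 3)
```

Choosing the plateau model only when p < 0.05 mirrors the rule of keeping the simpler model unless the more complex one fits significantly better.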
To compare the fits, the extra sum-of-squares F test was applied, and the appropriate fit was chosen by selecting the simpler model unless the p-value was <0.05. To create the final regression model curves of Sas-6-GFP, the best-fit parameters (means of growth rate, growth period [m and x0, respectively, in the equations], and S-phase length; see Fig. 2 D for the visual definition of these parameters) were calculated and origin-adjusted to the time point ∼1–1.5 min after the centrosomes from the previous cycle had separated (this was the time point at which TrackMate’s tracking algorithm could detect the centrosome separation with the threshold parameters). Plk4-GFP data were too noisy to predict a meaningful regression model; however, regression analysis using linear increase and decrease functions in Prism 7 (taking the peak intensity value in Plk4-GFP’s growth curves as the point of inflection) allowed us to compare the differences between the rates of incorporation and decay over successive cell cycles (Fig. 9). To create the final nonregression model curve of Plk4-GFP, the raw curves from multiple embryos were averaged and plotted (Fig. 9 B, bold brown). The mean intensity values and the final model curves for all the proteins were normalized by bringing the initial mean intensity values (just after centrosome separation) down to zero and normalizing the rest of the data accordingly. In all of these imaging experiments, the timing of nuclear envelope breakdown (NEB) in individual embryos was easily inferred from the sum intensity projections of z-slices, except in the case of Plk4-GFP, where we had to use the simultaneous expression of Jupiter-mCherry to determine the timing of NEB (Fig. 9 A and Video 7).

#### Airyscan superresolution microscopy

Living embryos in nuclear cycle 12 or cycle 14 were imaged at 21°C using an inverted Zeiss 880 microscope fitted with an Airyscan detector.
The system was equipped with a Plan-Apochromat 63×/1.4-NA oil lens. 488-nm argon and 561-nm diode lasers were used to excite GFP and mCherry, respectively (sequential excitation of each wavelength was switched per line to ensure that the green and red channels were aligned). Sections of five slices with 0.2-µm intervals were collected every 1 min with a zoom value of 24.6 pixels/µm. Focus was readjusted between the 1-min intervals. Images were Airy-processed in 3D with a strength value of “auto” (∼6) or 6.5. To measure the distance between GFP-Cep97 foci at the distal ends of mother and daughter centrioles, three serial stacks were rapidly acquired (over a period of ∼10 s) at 1-min intervals through nuclear cycle 12. Multiple stacks were acquired at each time point to allow us to select the image that gave the best resolution of the two GFP-Cep97 foci at each centriole pair at each time point (as this varied, perhaps because of “wobbling” of the centriole pair). After acquisition, images were analyzed using their maximum-intensity projections in Fiji (see Fig. 7 C for an example). The GFP foci were thresholded such that the threshold encapsulated the entire intensity mass of each focus and nothing else. After thresholding, the center of mass (COM) of each GFP-Cep97 focus was calculated, and then the distance between the COMs of the foci on the daughter and mother centrioles was measured. The regression curves for GFP-Cep97 distancing were made and quality-controlled using the same methodology described in Spinning disk confocal microscopy. Although this methodology produces a daughter centriole growth profile similar to that seen with our Sas-6-GFP incorporation assay, we note that the data are very noisy, and this methodology may not be sensitive enough to detect subtle changes in centriole length in different conditions. We also note that the data from WT and half-dose embryos were scored blind.
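The threshold-then-center-of-mass measurement above can be sketched in a few lines of Python with SciPy. This is an illustrative sketch only (the analysis was done in Fiji); the function name, threshold, and synthetic two-focus image are hypothetical, and the pixel size would come from the acquisition zoom (here, 24.6 pixels/µm).

```python
import numpy as np
from scipy.ndimage import center_of_mass, label

def foci_com_distance(img, threshold, pixel_size_um=1.0):
    """Threshold an image, keep the two brightest labeled foci,
    and return the distance between their centers of mass (COM)."""
    mask = img > threshold
    labels, n = label(mask)
    if n < 2:
        raise ValueError("need at least two foci above threshold")
    # rank foci by integrated intensity and keep the top two
    sums = [img[labels == i].sum() for i in range(1, n + 1)]
    top2 = np.argsort(sums)[-2:] + 1
    c1 = np.array(center_of_mass(img, labels, int(top2[0])))
    c2 = np.array(center_of_mass(img, labels, int(top2[1])))
    return np.linalg.norm(c1 - c2) * pixel_size_um

# synthetic image with two point-like foci 10 pixels apart
img = np.zeros((32, 32))
img[10, 10] = 5.0
img[10, 20] = 5.0
d = foci_com_distance(img, 1.0)          # distance in pixels
d_um = foci_com_distance(img, 1.0, pixel_size_um=1 / 24.6)  # in µm at 24.6 px/µm
```

Using intensity-weighted COMs, rather than the brightest pixel of each focus, makes the measured separation less sensitive to pixel-level noise.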
We also used the Airyscan superresolution microscope to probe the site of incorporation of Sas-6-GFP and GFP-Cep97 into daughter centrioles (Fig. 8). Embryos were identified as they exited mitosis of nuclear cycle 13, and daughter centrioles were allowed to grow for 5 min into cycle 14 (where daughter centriole growth takes 10.7 ± 0.8 min [n = 4 embryos] in S-phase). This allowed centrioles to grow to roughly half of their final size before the centrioles were photobleached. The sequence of events used for these FRAP experiments was as follows: (1) acquisition of a single z-stack using Airyscan (Fig. 8, T1); (2) multispot serial photobleaching (Fig. 8, T2); and (3) acquisition of the photobleached and time-lapse images in Airyscan (Fig. 8, T3). To measure the distance between the COM of the mother centriole and the incorporated Sas-6-GFP or GFP-Cep97 (Fig. 8, A and C, distances d1 and d2), the center of the Asl-mCherry focus was taken as the reference center of the mother centriole for both pre- and postbleaching time points (Fig. 8, T1 and T3, respectively).

### Quantification and statistical analysis

All the details for quantification, statistical tests, n, definitions of center, and dispersion and precision measures are described in the main text, relevant figure legends, or Materials and methods. Significance in statistical tests was defined by P < 0.05. To determine whether the data values came from a Gaussian distribution, the D’Agostino–Pearson omnibus normality test was applied. Prism 7 was used for all the modeling and statistical analyses.

### Online supplemental material

Fig. S1 compares the embryonic expression levels of Sas-6- and Ana2-GFP to their endogenous counterparts and quantifies the ability of Sas-6-GFP, Ana2-GFP, and Plk4-GFP transgenes to rescue the uncoordinated phenotype of their respective mutants. Fig. S2 illustrates the regression models tested to find the best-fit regression type for Sas-6-GFP dynamics. Fig.
S3 shows the Sas-6-GFP model curves fitted to data from 15 different embryos in cycle 12. Fig. S4 compares the hatching rates of embryos in which the genetic dosage or the activity of Plk4 was altered. Fig. S5 shows immunofluorescence data and quantification illustrating how mother centrioles are preferentially oriented end-on to the cortex in early fly embryos. Videos 1, 2, 4, and 6 compare the behavior of Sas-6, Ana2, or Plk4 mutant flies to mutant flies rescued with their respective GFP-fusions. Video 3 shows an example of a time-lapse video of an embryo in nuclear cycle 12 expressing Sas-6-GFP and the tracks generated by TrackMate that were used to quantify Sas-6-GFP fluorescence levels through the cycle. Video 5 is a time-lapse 3D-SIM movie of Asl-GFP in early fly embryos, illustrating how mother centrioles remain oriented end-on with respect to the cortex through time. Video 7 illustrates the oscillating behavior of Plk4-GFP in cycle 12 of early embryogenesis. Table S1 lists the source of the D. melanogaster strains used in this study. Table S2 lists the D. melanogaster strains generated in this study. Table S3 lists the sequences of the oligonucleotides used in this study. We thank Drs. J. Richard McIntosh and Guangshuo Ou and the members of the Raff Laboratory for critically reading the manuscript, Dr. Omer Dushek for his advice on curve fitting, and Lisa Gartenmann for designing and sharing the primers used to generate the Plk4RKA construct. Superresolution microscopy was performed at the Micron Oxford Advanced Bioimaging Unit, funded by a Strategic Award from the Wellcome Trust (107457). The research was funded by a Wellcome Trust Senior Investigator Award (104575; A. Wainman, S. Saurya, T.L. Steinacker, A. Caballe, Z.A. Novak, N.M., and J.W. Raff) and an Edward Penley Abraham Scholarship (to M.G. Aydogan and J. Baumbach). The authors declare no competing financial interests. Author contributions: M.G. Aydogan, A. Wainman, and J.W.
Raff formulated the theory of this study and wrote and revised the article. M.G. Aydogan, A. Wainman, S. Saurya, and J.W. Raff contributed to experimental conception and design. M.G. Aydogan, A. Wainman, S. Saurya, T.L. Steinacker, A. Caballe, and J. Baumbach acquired and analyzed the data. M.G. Aydogan, A. Wainman, Z.A. Novak, J. Baumbach, and N. Muschalik created and provided critical reagents/materials.

## References

Arquint, C., and E.A. Nigg. 2016. The PLK4-STIL-SAS-6 module at the core of centriole duplication. Biochem. Soc. Trans. 44:1253–1263.
Arquint, C., A.-M. Gabryjonczyk, S. Imseng, R. Böhm, E. Sauer, S. Hiller, E.A. Nigg, and T. Maier. 2015. STIL binding to Polo-box 3 of PLK4 regulates centriole duplication. eLife. 4.
Ball, G., J. Demmerle, R. Kaufmann, I. Davis, I.M. Dobbie, and L. Schermelleh. 2015. SIMcheck: A toolbox for successful super-resolution structured illumination microscopy. Sci. Rep. 5:15915.
Banterle, N., and P. Gönczy. 2017. Centriole biogenesis: From identifying the characters to understanding the plot. Annu. Rev. Cell Dev. Biol. 33:23–49.
Basto, R., J. Lau, T., A. Gardiol, C.G. Woods, A. Khodjakov, and J.W. Raff. 2006. Flies without centrioles. Cell. 125:1375–1386.
Bauer, M., F. Cubizolles, A. Schmidt, and E.A. Nigg. 2016. Quantitative analysis of human centrosome architecture by targeted proteomics and fluorescence imaging. EMBO J. 35:2152–2166.
Bettencourt-Dias, M., A. Rodrigues-Martins, L. Carpenter, M. Riparbelli, L. Lehmann, M. Gatt, N. Carmo, F. Balloux, G. Callaini, and D. Glover. 2005. SAK/PLK4 is required for centriole duplication and flagella development. Curr. Biol. 15:2199–2207.
Callaini, G., and M.G. Riparbelli. 1990. Centriole and centrosome cycle in the early Drosophila embryo. J. Cell Sci. 97:539–543.
Callaini, G., W.G. Whitfield, and M.G. Riparbelli. 1997.
Centriole and centrosome dynamics during the embryonic cell cycles that follow the formation of the cellular blastoderm in Drosophila. Exp. Cell Res. 234:183–190.
Chrétien, D., B. Buendia, S.D. Fuller, and E. Karsenti. 1997. Reconstruction of the centrosome cycle from cryoelectron micrographs. J. Struct. Biol. 120:117–133.
Conduit, P.T., A. Wainman, Z.A. Novak, T.T. Weil, and J.W. Raff. 2015. Re-examining the role of Drosophila Sas-4 in centrosome assembly using two-colour-3D-SIM FRAP. eLife. 4:1032.
Cunha-Ferreira, I., I. Bento, A. Pimenta-Marques, S.C. Jana, M. Lince-Faria, P. Duarte, J. Borrego-Pinto, S. Gilberto, T., D. Brito, et al. 2013. Regulation of autophosphorylation controls PLK4 self-destruction and centriole number. Curr. Biol. 23:2245–2254.
Dammermann, A., P.S., A. Desai, and K. Oegema. 2008. SAS-4 is recruited to a dynamic structure in newly forming centrioles that is stabilized by the γ-tubulin–mediated addition of centriolar microtubules. J. Cell Biol. 180:771–785.
Dobbelaere, J., F. Josué, S. Suijkerbuijk, B. Baum, N. Tapon, and J. Raff. 2008. A genome-wide RNAi screen to dissect centriole duplication and centrosome maturation in Drosophila. PLoS Biol. 6:e224.
Dzhindzhev, N.S., G. Tzolovsky, Z. Lipinszki, S. Schneider, R. Lattao, J. Fu, J. Debski, M., and D.M. Glover. 2014. Plk4 phosphorylates Ana2 to trigger Sas6 recruitment and procentriole formation. Curr. Biol. 24:2526–2532.
Feng, X., and Y.A. Hannun. 1998. An essential role for autophosphorylation in the dissociation of activated protein kinase C from the plasma membrane. J. Biol. Chem. 273:26870–26874.
Fırat-Karalar, E.N., and T. Stearns. 2014. The centriole duplication cycle. Philos. Trans. R. Soc. Lond. B Biol. Sci. 369:20130460.
Foe, V.E., and B.M. Alberts. 1983. Studies of nuclear and cytoplasmic behaviour during the five mitotic cycles that precede gastrulation in Drosophila embryogenesis.
J. Cell Sci. 61:31–70.
Goehring, N.W., and A.A. Hyman. 2012. Organelle growth control through limiting pools of cytoplasmic components. Curr. Biol. 22:R330–R339.
González, C., G. Tavosanis, and C. Mollinari. 1998. Centrosomes and microtubule organisation during Drosophila development. J. Cell Sci. 111:2697–2706.
Gould, C.M., and A.C. Newton. 2008. The life and death of protein kinase C. Curr. Drug Targets. 9:614–625.
Guderian, G., J. Westendorf, A. Uldschmid, and E.A.E. Nigg. 2010. Plk4 trans-autophosphorylation regulates centriole number by controlling betaTrCP-mediated degradation. J. Cell Sci. 123:2163–2169.
Holland, A.J., W. Lan, S. Niessen, H. Hoover, and D.W. Cleveland. 2010. Polo-like kinase 4 kinase activity limits centrosome overduplication by autoregulating its own stability. J. Cell Biol. 188:191–198.
Ji, J.-Y., J.M. Squirrell, and G. Schubiger. 2004. Both cyclin B levels and DNA-replication checkpoint control the early embryonic mitoses in Drosophila. Development. 131:401–411.
Kim, T.-S., J.-E. Park, A. Shukla, S. Choi, R.N. Murugan, J.H. Lee, M. Ahn, K. Rhee, J.K. Bang, B.Y. Kim, et al. 2013. Hierarchical recruitment of Plk4 and regulation of centriole biogenesis by two centrosomal scaffolds, Cep192 and Cep152. 110:E4849–E4857.
Kitagawa, D., I. Vakonakis, N. Olieric, M. Hilbert, D. Keller, V. Olieric, M. Bortfeld, M.C. Erat, I. Flückiger, P. Gönczy, and M.O. Steinmetz. 2011. Structural basis of the 9-fold symmetry of centrioles. Cell. 144:364–375.
Klebba, J.E., B.J. Galletta, J. Nye, K.M. Plevock, D.W. Buster, N.A. Hollingsworth, K.C. Slep, N.M. Rusan, and G.C. Rogers. 2015. Two Polo-like kinase 4 binding domains in Asterless perform distinct roles in regulating kinase stability. J. Cell Biol. 208:401–414.
Knippschild, U., A. Gocht, S. Wolff, N. Huber, J. Löhler, and M. Stöter. 2005.
The casein kinase 1 family: Participation in multiple cellular processes in eukaryotes. Cell. Signal. 17:675–689.
Kratz, A.-S., F. Bärenz, K.T. Richter, and I. Hoffmann. 2015. Plk4-dependent phosphorylation of STIL is required for centriole duplication. Biol. Open. 4:370–377.
Kuriyama, R., and G.G. Borisy. 1981. Centriole cycle in Chinese hamster ovary cells as determined by whole-mount electron microscopy. J. Cell Biol. 91:814–821.
Lattao, R., L. Kovács, and D.M. Glover. 2017. The centrioles, centrosomes, basal bodies, and cilia of Drosophila melanogaster. Genetics. 206:33–53.
Lewis, J., and E.M. Ozbudak. 2007. Deciphering the somite segmentation clock: Beyond mutants and morphants. Dev. Dyn. 236:1410–1415.
Li, Y., M. Xu, X. Ding, C. Yan, Z. Song, L. Chen, X. Huang, X. Wang, Y. Jian, G. Tang, et al. 2016. Protein kinase C controls lysosome biogenesis independently of mTORC1. Nat. Cell Biol. 18:1065–1077.
Ma, L., and A.P. Jarman. 2011. Dilatory is a Drosophila protein related to AZI1 (CEP131) that is located at the ciliary base and required for cilium formation. J. Cell Sci. 124:2622–2630.
Marshall, W.F. 2015. How cells measure length on subcellular scales. Trends Cell Biol. 25:760–768.
Marshall, W.F. 2016. Cell geometry: How cells count and measure size. Annu. Rev. Biophys. 45:49–64.
Martinez-Campos, M., R. Basto, J. Baker, M. Kernan, and J.W. Raff. 2004. The Drosophila pericentrin-like protein is essential for cilia/flagella function, but appears to be dispensable for mitosis. J. Cell Biol. 165:673–683.
Moritz, M., M.B. Braunfeld, J.C. Fung, J.W. Sedat, B.M. Alberts, and D.A. Agard. 1995. Three-dimensional structural characterization of centrosomes from early Drosophila embryos. J. Cell Biol. 130:1149–1159.
Moyer, T.C., K.M. Clutario, B.G. Lambrus, V. Daggubati, and A.J. Holland. 2015.
Binding of STIL to Plk4 activates kinase activity to promote centriole assembly. J. Cell Biol. 209:863–878.
Nigg, E.A., and J.W. Raff. 2009. Centrioles, centrosomes, and cilia in health and disease. Cell. 139:663–678.
Novak, Z.A., P.T. Conduit, A. Wainman, and J.W. Raff. 2014. Asterless licenses daughter centrioles to duplicate for the first time in Drosophila embryos. Curr. Biol. 24:1276–1282.
Novak, Z.A., A. Wainman, L. Gartenmann, and J.W. Raff. 2016. Cdk1 phosphorylates Drosophila Sas-4 to recruit polo to daughter centrioles and convert them to centrosomes. Dev. Cell. 37:545–557.
Ohta, M., T. Ashikawa, Y. Nozaki, H. Kozuka-Hata, H. Goto, M. Inagaki, M. Oyama, and D. Kitagawa. 2014. Direct interaction of Plk4 with STIL ensures formation of a single procentriole per parental centriole. Nat. Commun. 5:5267.
Peel, N., N.R. Stevens, R. Basto, and J.W. Raff. 2007. Overexpressing centriole-replication proteins in vivo induces centriole overduplication and de novo formation. Curr. Biol. 17:834–843.
Pratt, M.B., J.S. Titlow, I. Davis, A.R. Barker, H.R. Dawe, J.W. Raff, and H. Roque. 2016. Drosophila sensory cilia lacking MKS proteins exhibit striking defects in development but only subtle defects in adults. J. Cell Sci. 129:3732–3743.
Rodrigues-Martins, A., M. Riparbelli, G. Callaini, D.M. Glover, and M. Bettencourt-Dias. 2008. From centriole biogenesis to cellular function: Centrioles are essential for cell division at critical developmental stages. Cell Cycle. 7:11–16.
Rogers, G.C., N.M. Rusan, D.M. Roberts, M. Peifer, and S.L. Rogers. 2009. The SCF Slimb ubiquitin ligase regulates Plk4/Sak levels to block centriole reduplication. J. Cell Biol. 184:225–239.
Roque, H., A. Wainman, J. Richens, K. Kozyrska, A. Franz, and J.W. Raff. 2012. Drosophila Cep135/Bld10 maintains proper centriole structure but is dispensable for cartwheel formation. J. Cell Sci. 125:5881–5886.
Sibon, O.C., V.A. Stevenson, and W.E. Theurkauf. 1997. DNA-replication checkpoint control at the Drosophila midblastula transition. Nature. 388:93–97.
Sillibourne, J.E., F. Tack, N. Vloemans, A. Boeckx, S. Thambirajah, P. Bonnet, F.C.S. Ramaekers, M. Bornens, and T. Grand-Perret. 2010. Autophosphorylation of polo-like kinase 4 and its role in centriole duplication. Mol. Biol. Cell. 21:547–561.
Sonnen, K.F., L. Schermelleh, H. Leonhardt, and E.A. Nigg. 2012. 3D-structured illumination microscopy provides novel insight into architecture of human centrosomes. Biol. Open. 1:965–976.
Stevens, N.R., A.A.S.F. Raposo, R. Basto, D. St Johnston, and J.W. Raff. 2007. From stem cell to embryo without centrioles. Curr. Biol. 17:1498–1503.
Stevens, N.R., J. Dobbelaere, K. Brunk, A. Franz, and J.W. Raff. 2010. Drosophila Ana2 is a conserved centriole duplication factor. J. Cell Biol. 188:313–323.
Tinevez, J.-Y., N. Perry, J. Schindelin, G.M. Hoopes, G.D. Reynolds, E. Laplantine, S.Y. Bednarek, S.L. Shorte, and K.W. Eliceiri. 2017. TrackMate: An open and extensible platform for single-particle tracking. Methods. 115:80–90.
van Breugel, M., M. Hirono, A. Andreeva, H.-A. Yanagisawa, S. Yamaguchi, Y. Nakazawa, N. Morgner, M. Petrovich, I.-O. Ebong, C.V. Robinson, et al. 2011. Structures of SAS-6 suggest its organization in centrioles. Science. 331:1196–1199.
Varmark, H., S. Llamazares, E. Rebollo, B. Lange, J. Reina, H. Schwarz, and C. González. 2007. Asterless is a centriolar protein required for centrosome function and embryo development in Drosophila. Curr. Biol. 17:1735–1745.
Winey, M., and E. O’Toole. 2014. Centriole structure. Philos. Trans. R. Soc. Lond. B Biol. Sci. 369:20130457.
Zitouni, S., M.E. Francia, F. Leal, S. Montenegro Gouveia, C. Nabais, P. Duarte, S. Gilberto, D. Brito, T. Moyer, S. Kandels-Lewis, et al. 2016.
CDK1 prevents unscheduled PLK4-STIL complex assembly in centriole biogenesis. Curr. Biol. 26:1127–1137.

## Author notes

J. Baumbach’s and N. Muschalik’s present address is Medical Research Council Laboratory of Molecular Biology, University of Cambridge, Cambridge, England, UK.
HAL: in2p3-00131859, version 1
arXiv: nucl-ex/0104015
Physical Review Letters 87 (2001) 052301

Measurement of the mid-rapidity transverse energy distribution from $\sqrt{s_{NN}}=130$ GeV Au+Au collisions at RHIC
PHENIX Collaboration (2001)

The first measurement of energy produced transverse to the beam direction at RHIC is presented. The mid-rapidity transverse energy density per participating nucleon rises steadily with the number of participants, closely paralleling the rise in charged-particle density, such that E_T / N_ch remains relatively constant as a function of centrality. The energy density calculated via Bjorken's prescription for the 2% most central Au+Au collisions at sqrt(s_NN) = 130 GeV is at least epsilon_Bj = 4.6 GeV/fm^3, which is a factor of 1.6 larger than found at sqrt(s_NN) = 17.2 GeV (Pb+Pb at CERN).

Subjects: Physics / High Energy Physics - Experiment; Physics / Experimental Nuclear Physics
Full text: http://fr.arXiv.org/abs/nucl-ex/0104015
Record: http://hal.in2p3.fr/in2p3-00131859
Contributor: Dominique Girod. Submitted: Monday, 19 February 2007, 15:47:27. Last modified: Monday, 19 February 2007, 15:48:09.
# Sensitivity analysis

### Stand-alone vs coupled simulation

A modern house located in Upper Austria is considered for the sensitivity analysis of construction materials. The building to be simulated is a modern two-story house with a cellar and a volume of approximately 761 m³. The house is located at Hagenberg in Upper Austria. The walls are made of 25 cm thick bricks without insulation, except for the cellar. The windows and glass doors are standard double glazed with an intermediate layer of air.

We have used EnergyPlus for simulating the house model. For building our simulation framework we have used the software tool Building Controls Virtual Test Bed (BCVTB). We can, for example, define a heating control for an EnergyPlus building model with the control logic implemented in MATLAB.

Definition of sensitivity analysis: simulation analysis in which key quantitative assumptions and computations (underlying a decision, estimate, or project) are changed systematically to assess their effect on the final outcome.
Use of spreadsheets to analyze an income-producing property or a development project, and then changing key assumptions to see how the results change, is a common application of sensitivity analysis.

```python
np.savetxt("param_values.txt", param_values)
```

Each line in param_values.txt is one input to the model. The output from the model should be saved to another file with a similar format: one output on each line. The outputs can then be loaded with np.loadtxt.

### Variance-based sensitivity analysis

1. The first step is to import the necessary libraries. In SALib, the sample and analyze functions are stored in separate Python modules. For example, below we import the saltelli sample function and the sobol analyze function. We also import the Ishigami function, which is provided as a test function within SALib. Lastly, we import numpy, as it is used by SALib to store the model inputs and outputs in a matrix.
2. Determining how independent variable values will impact a particular dependent variable under a given set of assumptions is the essence of sensitivity analysis. Its usage will depend on one or more input variables within specific boundaries, such as the effect that changes in interest rates will have on a bond's price.

### Example 1: Simulation of dwelling

Sensitivity analysis is the study of how the uncertainty in the output of a mathematical model or system (numerical or otherwise) can be divided and allocated to different sources of uncertainty in its inputs.

The average price of a packet of Christmas decorations is $20.
During the previous year's holiday season, HOLIDAY CO sold 500 packs of Christmas decorations, resulting in total sales of $10,000. For example, if the revenue growth assumption in a model is 10% year-over-year (YoY), then the revenue formula is = (last year's revenue) x (1 + 10%). In the direct approach, we substitute different numbers for the growth rate (for example, 0%, 5%, 15%, and 20%) and see what the resulting revenue dollars are.

Sensitivity analysis (or post-optimality analysis) is used to determine how changes in an LP's parameters (input data) affect the optimal solution. It allows a manager to ask certain what-if questions about the problem.

Definition: The Sensitivity Analysis or What-If Analysis means determining the viability of the project if some variables deviate from their expected values. Hence, sensitivity analysis is calculated in terms of NPV: firstly, the base-case scenario is developed, wherein the NPV is calculated for the project based on the assumptions believed to be the most accurate.

```python
from SALib.sample import saltelli
from SALib.analyze import sobol
from SALib.test_functions import Ishigami
import numpy as np
```

Defining the model inputs: next, we must define the model inputs. The Ishigami function has three inputs, $$x_1, x_2, x_3$$ where $$x_i \in [-\pi, \pi]$$. In SALib, we define a dict giving the number of inputs, the names of the inputs, and the bounds on each input. In this example, we will perform a Sobol' sensitivity analysis of the Ishigami function, which is commonly used to test uncertainty and sensitivity analysis methods because it exhibits strong nonlinearity and nonmonotonicity.
A Financial Sensitivity Analysis, also known as a What-If analysis or a What-If simulation exercise, is most commonly used by financial analysts to predict the outcome of a specific action when performed under certain conditions.

```python
print(Si['ST'])
# [ 0.56013728  0.4387225   0.24284474]
```

If the total-order indices are substantially larger than the first-order indices, then there are likely higher-order interactions occurring. We can look at the second-order indices to see these higher-order interactions.

Sensitivity analysis is a technique that determines the impact of independent variables on the dependent variables of a business under different circumstances.

EnergyPlus is normally used as a stand-alone command-line application or together with one of many free or commercial GUIs. However, EnergyPlus can be linked with other applications to simulate more advanced numerical models. One method is BCVTB (Building Controls Virtual Test Bed), which allows users to couple different simulation programs for co-simulation, and to couple simulation programs with actual hardware. For example, the BCVTB can simulate a building in EnergyPlus and the HVAC and control system in Modelica, exchanging data between them as they simulate. Programs that can be linked to BCVTB include EnergyPlus, Modelica (OpenModelica or Dymola), Functional Mock-up Units, MATLAB, Simulink, ray tracing, ESP-r, TRNSYS, and the BACnet stack.
Sensitivity Analysis is used to understand the effect of a set of independent variables on some dependent variable under certain specific conditions. For example, a financial analyst wants to find out the effect of a company's net working capital on its profit margin. The analysis will involve all the variables that have an impact on the company's profit margin, such as the cost of goods sold, workers' wages, managers' wages, etc. The analysis will isolate each of these fixed and variable costs and record all the possible outcomes.

John is in charge of sales for HOLIDAY CO, a business that sells Christmas decorations at a shopping mall. John knows that the holiday season is approaching and that the mall will be crowded. He wants to find out whether an increase in customer traffic at the mall will raise the total sales revenue of HOLIDAY CO and, if so, then by how much.
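John's what-if question can be sketched in a few lines. The numbers ($20 average price, 500 packs sold) come from the example above; the linear link between mall traffic and packs sold is a hypothetical assumption made only for illustration.

```python
PRICE = 20.0      # average price per pack (from the example)
BASE_PACKS = 500  # packs sold last season -> $10,000 revenue

def revenue(traffic_change):
    # Hypothetical assumption for this sketch: packs sold scale
    # linearly with mall traffic (10% more traffic -> 10% more packs).
    return PRICE * BASE_PACKS * (1.0 + traffic_change)

# One-at-a-time sensitivity table for the traffic assumption.
for change in (-0.10, 0.0, 0.10, 0.25):
    print(f"traffic {change:+.0%}: revenue ${revenue(change):,.0f}")
```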
• Sensitivity analysis is an analysis technique that works on the basis of what-if analysis, examining how independent factors can affect the dependent factor, and is used to predict the outcome when the inputs are changed.
• Both the target (dependent) and input (independent) variables are fully analyzed when sensitivity analysis is conducted. The person doing the analysis looks at how the variables move as well as how the target is affected by the input variable.
### Overview of Sensitivity Analysis

• What to observe: (a) the value of the objective as per the strategy; (b) the value of the decision variables; (c) the value of the objective function between two strategies adopted.
• The analyst examines a certain scenario, such as a stock market crash or a change in industry regulation, and then changes the variables within the model to align with that scenario. Put together, the analyst has a comprehensive picture: the full range of outcomes given all extremes, and an understanding of what the outcomes would be for a specific set of variables defined by real-life scenarios.

Conducting sensitivity analysis provides a number of benefits for decision-makers. First, it acts as an in-depth study of all the variables; because it is more in-depth, the predictions may be far more reliable. Second, it allows decision-makers to identify where they can make improvements in the future. Finally, it allows for the ability to make sound decisions about companies, the economy, or their investments.

Mathematically, the sensitivity of the cost function with respect to certain parameters is equal to the partial derivative of the cost function with respect to those parameters.
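That partial-derivative view can be approximated numerically with a central finite difference. The cash-flow series below is hypothetical; the sketch only illustrates estimating the local sensitivity of an NPV-style cost function to its discount rate.

```python
def npv(rate, cashflows):
    # Net present value: discount each cash flow at the given rate.
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def sensitivity(f, x, h=1e-6):
    # Local sensitivity as a partial derivative, estimated with a
    # central finite difference.
    return (f(x + h) - f(x - h)) / (2.0 * h)

flows = [-1000.0, 400.0, 400.0, 400.0]  # hypothetical project cash flows
dnpv_drate = sensitivity(lambda r: npv(r, flows), 0.08)
print(dnpv_drate)  # negative: the NPV falls as the discount rate rises
```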
### What does sensitivity analysis mean

• Investors can use sensitivity analysis to determine the effects different variables have on their investment returns.
• This method is very subjective in nature and suffers from certain limitations. Sensitivity analysis shows the change in NPV due to the change in a variable, but does not say how likely that change is. Also, under this method it is assumed that one variable changes at a time, but in reality variables tend to move together.
• Whether to accept or reject the proposed project depends on its net present value (NPV). Hence, sensitivity analysis is calculated in terms of NPV. Firstly, the base-case scenario is developed, wherein the NPV is calculated for the project based on the assumptions believed to be the most accurate. Then make some changes in the initial assumptions based on the other potential assumptions, and recalculate the NPV. Once the new NPV is calculated, analyze its sensitivity in terms of the changes made in the initial assumptions.

### What is the difference between sensitivity and scenario analysis

• Optimization and sensitivity analysis are key aspects of successful process design.
• A sensitivity analysis determines how target variables are affected based on changes in other variables known as input variables. This model is also referred to as what-if or simulation analysis. It is a way to predict the outcome of a decision given a certain range of variables.
By creating a given set of variables, an analyst can determine how changes in one variable affect the outcome.

## Sensitivity analysis: financial definition

Sensitivity analysis is performed with assumptions that differ from those used in the primary analysis, and can be performed for a host of reasons, including Good Clinical Practice (GCP).

```python
Y = Ishigami.evaluate(param_values)
```

Perform analysis: with the model outputs loaded into Python, we can finally compute the sensitivity indices. In this example, we use sobol.analyze, which will compute first, second, and total-order indices.
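SALib's Ishigami.evaluate implements the Ishigami test function; for reference, a plain-Python version (using the commonly quoted constants a = 7, b = 0.1) might look like:

```python
import math

def ishigami(x1, x2, x3, a=7.0, b=0.1):
    # Standard Ishigami test function (a = 7, b = 0.1 are the commonly
    # used constants); strongly nonlinear and nonmonotonic in its inputs.
    return math.sin(x1) + a * math.sin(x2) ** 2 + b * x3 ** 4 * math.sin(x1)

print(ishigami(0.0, 0.0, 0.0))          # all sine terms vanish -> 0.0
print(ishigami(math.pi / 2, 0.0, 0.0))  # -> 1.0
```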
Sensitivity analysis is very useful for a firm in that it shows the robustness and the vulnerability of the project due to changes in the values of the underlying variables. It indicates whether the project is worth carrying forward with the help of the NPV value. If the NPV is highly sensitive to changes in variables, the firm can explore the variability of that critical factor.

```python
Y = np.loadtxt("outputs.txt", float)
```

In this example, we are using the Ishigami function provided by SALib; these test functions are evaluated with Ishigami.evaluate, as shown earlier.

EnergyPlus is a whole-building energy simulation program that engineers, architects, and researchers use to model both energy consumption (for heating, cooling, ventilation, lighting, and process and plug loads) and water use in buildings. Its development is funded by the U.S. Department of Energy Building Technologies Office. EnergyPlus is a console-based program that reads input and writes output to text files. Several comprehensive graphical interfaces for EnergyPlus are also available.
Define Sensitivity Analysis: sensitivity analysis means an evaluation of the amount of error an output holds when it is generated from other data that may itself contain errors or inaccuracies.

In finance, a sensitivity analysis is created to understand the impact a range of variables has on a given outcome. It is important to note that a sensitivity analysis is not the same as a scenario analysis. As an example, assume an equity analyst wants to do a sensitivity analysis and a scenario analysis around the impact of earnings per share (EPS) on a company's relative valuation by using the price-to-earnings (P/E) multiple.

## Sensitivity Analysis: Purpose, Procedure, and Result

Layout, structure, and planning are all important for good sensitivity analysis in Excel.
If a model is not well organized, then both the creator and the users of the model will be confused and the analysis will be prone to error.

Local sensitivity analysis is a one-at-a-time (OAT) technique that analyzes the impact of one parameter on the cost function at a time, keeping the other parameters fixed.

Sensitivity analysis should be undertaken using two approaches: scenario analysis and switching values. Scenario analysis involves testing "what if" cases.

Sensitivity analysis can be used to help make predictions about the share prices of public companies. Some of the variables that affect stock prices include company earnings, the number of shares outstanding, the debt-to-equity ratio (D/E), and the number of competitors in the industry. The analysis of future stock prices can be refined by making different assumptions or adding different variables. This model can also be used to determine the effect that changes in interest rates have on bond prices. In this case, the interest rates are the independent variable, while bond prices are the dependent variable.
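The bond-price example can be sketched directly: reprice a bond at a few yields and watch the price move opposite to the rate. The bond's terms below (5-year, 5% annual coupon, face value 100) are assumptions chosen for illustration, not taken from the text.

```python
def bond_price(ytm, face=100.0, coupon=0.05, years=5):
    # Price a hypothetical annual-coupon bond by discounting each
    # cash flow at the yield to maturity (the independent variable).
    price = sum(face * coupon / (1 + ytm) ** t for t in range(1, years + 1))
    return price + face / (1 + ytm) ** years

# Sensitivity of price (dependent variable) to the interest rate.
for ytm in (0.04, 0.05, 0.06):
    print(f"yield {ytm:.0%}: price {bond_price(ytm):7.2f}")
```

When the yield equals the coupon rate the bond prices at par, which makes a handy sanity check.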
Sensitivity Analysis is a tool used in financial modeling (financial modeling is performed in Excel to forecast a company's financial performance). A sensitivity analysis determines how different values of an independent variable affect a particular dependent variable under a given set of assumptions. In other words, sensitivity analyses study how the various sources of uncertainty in a model's inputs contribute to the uncertainty in its output.

For the calculation of a sensitivity analysis in Excel, go to the Data tab and then select the What-If Analysis option.

```python
Si = sobol.analyze(problem, Y)
```

Si is a Python dict with the keys "S1", "S2", "ST", "S1_conf", "S2_conf", and "ST_conf". The _conf keys store the corresponding confidence intervals, typically with a confidence level of 95%. Use the keyword argument print_to_console=True to print all indices, or print individual values from Si directly.

SALib is an open source library written in Python for performing sensitivity analysis. SALib provides a decoupled workflow, meaning it does not directly interface with the mathematical or computational model. Instead, SALib is responsible for generating the model inputs, using one of the sample functions, and computing the sensitivity indices from the model outputs, using one of the analyze functions. A typical sensitivity analysis using SALib follows four steps: determine the model inputs and their sample ranges, run a sample function to generate the model inputs, evaluate the model at each generated input and save the outputs, and run an analyze function on the outputs to compute the sensitivity indices.

This process of testing sensitivity for another input (say, the cash flow growth rate) while keeping the rest of the inputs constant is repeated until the sensitivity figure for each of the inputs is obtained.
The conclusion would be that the higher the sensitivity figure, the more sensitive the output is to any change in that input, and vice versa.

In terms of data analytics, sensitivity analysis refers to changing the value of a single data point or assumption and observing the result. Both scenario and sensitivity analysis can be important components in determining whether or not to proceed.

Variance-based sensitivity analysis (often referred to as the Sobol method or Sobol indices, after Ilya M. Sobol) is a form of global sensitivity analysis. Working within a probabilistic framework, it decomposes the variance of the output of the model or system into fractions which can be attributed to inputs or sets of inputs.

Definition: The Sensitivity Analysis or What-If Analysis means determining the viability of the project if some variables deviate from their expected values, such as investments or sales. In other words, since the future is uncertain and the entrepreneur wants to know the feasibility of the project in terms of its variable assumptions, viz. investments or sales, sensitivity analysis can be applied.
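The variance decomposition behind the Sobol method can be illustrated without any library by brute-force Monte Carlo on a toy additive model whose first-order indices are known exactly (S1 = 0.9, S2 = 0.1 for independent U(0,1) inputs). This is only a sketch of the idea, not how SALib estimates the indices.

```python
import random
random.seed(1)

def model(x1, x2):
    # Additive linear test model: Var(Y) = 9/12 + 1/12, so the exact
    # first-order Sobol indices are S1 = 9/10 and S2 = 1/10.
    return 3.0 * x1 + x2

N, M = 2000, 200

# Total output variance by plain Monte Carlo.
ys = [model(random.random(), random.random()) for _ in range(N * 4)]
mean_y = sum(ys) / len(ys)
var_y = sum((y - mean_y) ** 2 for y in ys) / len(ys)

# Variance over x1 of E[Y | x1]: freeze x1, average over x2.
cond_means = []
for _ in range(N):
    x1 = random.random()
    cond_means.append(sum(model(x1, random.random()) for _ in range(M)) / M)
m = sum(cond_means) / N
s1 = (sum((c - m) ** 2 for c in cond_means) / N) / var_y
print(round(s1, 2))  # close to the exact value 0.9
```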
Math Help - If A is an invertible matrix, then A+A^T is skew-symmetric. (Proof). 1. If A is an invertible matrix, then A+A^T is skew-symmetric. (Proof). My teacher wrote: (A+A^T)^T = A^T + (A^T)^T = A^T + A While I get this algebraically, I don't see how it proves skew-symmetry. Like how does this relate to A = -A^T?! Also, I just wanted to confirm if the reason why the stuff (=bolded part) in (stuff)^T is A+A^T because we want to force them to be square matrices since you can only add matrices that have the same dimensions. Any input would be greatly appreciated! 2. Originally Posted by s3a My teacher wrote: (A+A^T)^T = A^T + (A^T)^T = A^T + A While I get this algebraically, I don't see how it proves skew-symmetry. Like how does this relate to A = -A^T?! Also, I just wanted to confirm if the reason why the stuff (=bolded part) in (stuff)^T is A+A^T because we want to force them to be square matrices since you can only add matrices that have the same dimensions. Any input would be greatly appreciated! I would guess they meant $\left(A-A^{\top}\right)^{\top}=A^{\top}-A=-\left(A-A^{\top}\right)$ 3. So the way it should have been phrased was?: "If A is an invertible matrix, then A-A^T is skew-symmetric." 4. Originally Posted by s3a So the way it should have been phrased was?: "If A is an invertible matrix, then A-A^T is skew-symmetric." That's my guess
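The guessed correction is easy to sanity-check numerically: for any square matrix A (invertibility plays no role here), A - A^T satisfies M^T = -M. A small plain-Python check:

```python
def transpose(M):
    return [list(row) for row in zip(*M)]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def is_skew_symmetric(M):
    # M is skew-symmetric iff M^T == -M, i.e. M[i][j] == -M[j][i].
    T = transpose(M)
    n = len(M)
    return all(T[i][j] == -M[i][j] for i in range(n) for j in range(n))

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]          # any square matrix works here
S = sub(A, transpose(A))  # A - A^T
print(is_skew_symmetric(S))  # True
```

Note that squareness is all that is needed for A + A^T or A - A^T to be defined, which is why the transpose forces matching dimensions.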
1D Gaussian in Python. Smoothing splines. This is an in-depth tutorial designed to introduce you to a simple, yet powerful classification algorithm called K-Nearest-Neighbors (KNN). SciPy is a collection of Python libraries for scientific and numerical computing.

In this post, we will see how we can use Python to low-pass filter the 10-year-long daily fluctuations of a GPS time series. We need to use the "scipy" package of Python.

Our goal is to find the values of A and B that best fit our data. First, we need to write a Python function for the Gaussian function equation. The function should accept as inputs the independent variable (the x-values) and all the parameters to be fit.

scipy.interpolate.interp2d. In the following example, we calculate the function

$$z(x, y) = \sin(\pi x / 2)\, e^{y/2}$$

on a grid of points (x, y) which is not evenly spaced in the y-direction. We then use scipy.interpolate.interp2d to interpolate these values onto a finer, evenly-spaced (x, y) grid.
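The 1-D Gaussian smoothing mentioned above is usually done with scipy.ndimage.gaussian_filter1d or np.convolve; purely to show the mechanics, here is a dependency-free sketch with a normalized kernel and edge clamping (the clamping policy is an arbitrary choice for this sketch):

```python
import math

def gaussian_kernel(sigma, radius=None):
    # Discrete, normalized 1-D Gaussian kernel.
    radius = radius if radius is not None else int(3 * sigma)
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def smooth(signal, sigma=1.0):
    # Convolve with the Gaussian kernel, clamping indices at the edges.
    kernel = gaussian_kernel(sigma)
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

noisy = [0, 0, 0, 10, 0, 0, 0]
print(smooth(noisy, sigma=1.0))  # the spike is spread over its neighbours
```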
When we write NumPy/SciPy code for image processing, we typically represent an intensity image as a 2D array f whose elements f[y, x] are indexed by a row index y and a column index x.

Beside the astropy convolution functions convolve and convolve_fft, it is also possible to use the kernels with NumPy or SciPy convolution by passing the array attribute. This will be faster in most cases than the astropy convolution, but will not work properly if NaN values are present in the data.

```python
>>> smoothed = np.convolve(data_1D, box_kernel.array)
```

Linear interpolation is the process of estimating an unknown value of a function between two known values. Given two known values (x1, y1) and (x2, y2), we can estimate the y-value for some point x by using the following formula:

$$y = y_1 + (x - x_1)\frac{y_2 - y_1}{x_2 - x_1}$$

We can use the following basic syntax to perform linear interpolation in Python:

```python
import scipy.interpolate
y_interp = scipy.interpolate.interp1d(x, y)
```

I found and copied this code to get the FWHM (finding the full width at half maximum of a peak, from the second-to-last answer). My code below uses my own data. The generated plot looks wrong.
```python
import matplotlib.pyplot as plt
import scipy.stats
import numpy as np

x_min = 0.0
x_max = 16.0
mean = 8.0
std = 2.0

x = np.linspace(x_min, x_max, 100)
y = scipy.stats.norm.pdf(x, mean, std)
plt.plot(x, y)
```

avg_pool1d — applies a 1D average pooling over an input signal composed of several input planes. avg_pool2d — applies a 2D average-pooling operation in $kH \times kW$ regions by step size $sH \times sW$ steps. avg_pool3d — likewise in 3D.

Parameters:
- hrdata (1d array or list) — array or list containing heart rate data to be analysed
- sample_rate (int or float) — the sample rate with which the heart rate data is sampled
- windowsize (int or float) — the window size in seconds to use in the calculation of the moving average. Calculated as windowsize * sample_rate; default: 0.75
- report_time (bool) — whether to report total

Using a smooth, builtin colormap. If you have a parametric curve to display, and want to represent the parameter using color:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection

t = np.linspace(0, 10, 200)
x = np.cos(np.pi * t)
y = np.sin(t)
# Create a set of line segments so that we can color them individually
```

bezier.curve module — helper for Bézier curves. See Curve-Curve Intersection for examples using the Curve class to find intersections. class bezier.curve.Curve(nodes, degree, *, copy=True, verify=True). Bases: bezier._base.Base. Represents a Bézier curve. We take the traditional definition: a Bézier curve is a mapping from $s \in \left[0, 1\right]$ to convex combinations of points.

Gaussian smoothening of 1D signal in C++ (GaussSmoothen.h).
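Average pooling and the moving-average window mentioned above are both box-kernel convolutions. A minimal sketch with plain NumPy (the window length is an arbitrary choice for illustration):

```python
import numpy as np

def moving_average(signal, window=5):
    """Smooth a 1D signal with a uniform (box) kernel of the given width."""
    kernel = np.ones(window) / window
    # mode="same" keeps the output the same length as the input;
    # values near the edges are computed as if the signal were zero-padded.
    return np.convolve(signal, kernel, mode="same")

noisy = np.array([1.0, 2.0, 1.0, 2.0, 1.0, 2.0, 1.0])
smooth = moving_average(noisy, window=3)
print(smooth)
```

For a Gaussian rather than a box kernel, the same `np.convolve` pattern works with `kernel = np.exp(-0.5 * (k / sigma) ** 2)` normalized to sum to one.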
wfdb.processing parameters:
- sig — 1d numpy array of the signal.
- radius (int) — the radius in which to search for defining local maxima.
- Returns (ndarray) — the locations of all of the local peaks of the input signal.

wfdb.processing.find_peaks(sig) — find hard peaks and soft peaks in a signal.

Python provides a framework on which numerical and scientific data processing can be built. As part of our short course on Python for Physics and Astronomy we will look at the capabilities of the NumPy, SciPy and SciKits packages. This is a brief overview with a few examples drawn primarily from the excellent but short introductory book SciPy and NumPy by Eli Bressert.

However, the time needed in this process is still unknown. The period for a pendulum also uses an approximate expression. In this note, I will try to solve the time evolution for a ball sliding down from a smooth semi-circle numerically via Python. I will compare the oscillator approximation and the accurate result in the same animated figure.

Two-dimensional interpolation with scipy.interpolate.RectBivariateSpline. In the following code, the function

$z(x, y) = e^{-4x^2} e^{-y^2/4}$

is calculated on a regular, coarse grid and then interpolated onto a finer one.
```python
import numpy as np
from scipy.interpolate import RectBivariateSpline
import matplotlib.pyplot as plt
```

Doing so has greatly improved the convergence, as well as made the adaptive integration much quicker, as the laplacian was previously not smooth at the boundaries. Otherwise you could just drop your last mesh point. The following figure shows your mesh (in thick blue), and the "ghost" meshes used for the periodicity.

I have an array to which I want to apply a 1D Gaussian filter using SciPy's gaussian_filter1d, without changing the edge values:

from scipy.ndimage.filters import gaussian_filter1d
arr =

- 1-sample t-test: testing the value of a population mean.
- 2-sample t-test: testing for difference across populations.
- Paired tests: repeated measurements on the same individuals.
- Linear models, multiple factors, and analysis of variance; "formulas" to specify statistical models in Python; a simple linear regression.

The radial basis function module in the scipy sandbox can also be used to interpolate/smooth scattered data in n dimensions. See ["Cookbook/RadialBasisFunctions"] for details.
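Regarding the gaussian_filter1d edge question above: the filter's `mode` argument controls how values beyond the boundary are assumed, and only affects samples within roughly `truncate * sigma` of each edge. A small sketch (sigma and the modes shown are arbitrary choices; the modern import path `scipy.ndimage` replaces the deprecated `scipy.ndimage.filters`):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

ramp = np.arange(21, dtype=float)

# A symmetric Gaussian kernel leaves a straight line unchanged in the
# interior; the `mode` argument only changes behaviour near the edges.
for mode in ("reflect", "nearest", "mirror"):
    smoothed = gaussian_filter1d(ramp, sigma=2, mode=mode)
    print(mode, smoothed[:3], smoothed[8:13])
```

No mode keeps the edge samples exactly untouched in general; if the edges must be preserved verbatim, one option is to copy them back from the input after filtering.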
Example 3: a less robust but perhaps more intuitive method is presented in the code below. This function takes three 1D arrays, namely two independent data arrays and

Kernel density estimation parameters:
- Method for determining the smoothing bandwidth to use; passed to scipy.stats.gaussian_kde.
- bw_adjust (number, optional) — factor that multiplicatively scales the value chosen using bw_method. Increasing will make the curve smoother. See Notes.
- log_scale (bool or number, or pair of bools or numbers) — set axis scale(s) to log.

Python3:

```python
def gauss(x, H, A, x0, sigma):
    return H + A * np.exp(-(x - x0) ** 2 / (2 * sigma ** 2))
```

We will use the function curve_fit from the Python module scipy.optimize to fit our data. It uses non-linear least squares to fit data to a functional form. You can learn more about curve_fit by using the help function within the Jupyter notebook.

This is what I currently use (it does not contain parameters and works for 1d, 2d and 3d data):

```python
import math
import numbers
import torch
from torch import nn
from torch.nn import functional as F

class GaussianSmoothing(nn.Module):
    """Apply gaussian smoothing on a 1d, 2d or 3d tensor."""
```

One of the easiest ways to get rid of noise is to smooth the data with a simple uniform kernel, also called a rolling average. The title image shows data and their smoothed version. The data is the second discrete derivative from the recording of a neuronal action potential. Derivatives are notoriously noisy. We can get the result shown in the

random — smooth 1D Gaussian field generation. The core module rft1d.prob translates, simplifies and accelerates existing MATLAB (The MathWorks, Inc 2014) implementations of 3D RFT (SPM8
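A minimal end-to-end use of curve_fit with a Gaussian model like the gauss function above might look as follows. This is a sketch on synthetic data; all parameter values and the starting guesses in p0 are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, H, A, x0, sigma):
    return H + A * np.exp(-(x - x0) ** 2 / (2 * sigma ** 2))

# Synthetic data: a Gaussian with known parameters plus a little noise
rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 200)
y = gauss(x, H=1.0, A=3.0, x0=0.5, sigma=1.2) + rng.normal(0, 0.05, x.size)

# p0 gives rough starting guesses for the non-linear least-squares fit
popt, pcov = curve_fit(gauss, x, y, p0=[0.0, 1.0, 0.0, 1.0])
print(popt)
```

The diagonal of `pcov` gives the variance of each fitted parameter, so `np.sqrt(np.diag(pcov))` is a quick estimate of the one-sigma uncertainties.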
The shape of a gaussian curve is sometimes referred to as a "bell curve." This is the type of curve we are going to plot with Matplotlib. Create a new Python script called normal_curve.py. At the top of the script, import NumPy, Matplotlib, and SciPy's norm function. If using a Jupyter notebook, include the line %matplotlib inline.

Chapter 1. Elegant NumPy: The Foundation of Scientific Python. [NumPy] is everywhere. It is all around us. Even now, in this very room. You can see it when you look out your window or when you turn on your television. You can feel it when you go to work… when you go to church… when you pay your taxes. — Morpheus, The Matrix

sudo dnf install python3-numpy python3-scipy python3-matplotlib python3-ipython python3-pandas python3-sympy python3-pytest

Mac doesn't have a preinstalled package manager, but there are a couple of popular package managers you can install. Homebrew has an incomplete coverage of the SciPy ecosystem, but does install these packages:

scipy.ndimage.gaussian_filter1d(input, sigma, axis=-1, order=0, output=None, mode='reflect', cval=0.0, truncate=4.0) — 1-D Gaussian filter.

Parameters:
- input (array_like) — the input array.
- sigma (scalar) — standard deviation for the Gaussian kernel.
- axis (int, optional) — the axis of input along which to calculate. Default is -1.
- order (int, optional) —

With this knowledge, we can use scipy stft to transform the 1D time domain data into a 2D tensor of frequency domain features.
That being said, the overall length of the data is still going to amount to $800e5$ datapoints. ... I save them into smaller TFRecords that will allow for smooth data streaming during CNN training in TensorFlow.
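A sketch of that time-domain to frequency-domain step with scipy.signal.stft; the sampling rate, the 50 Hz test tone, and nperseg are all illustrative stand-ins, not values from the pipeline described above:

```python
import numpy as np
from scipy.signal import stft

fs = 1000.0                        # sampling rate in Hz (illustrative)
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 50 * t)     # 50 Hz tone as a stand-in for real data

# nperseg controls the time/frequency resolution trade-off
f, frames, Zxx = stft(x, fs=fs, nperseg=256)
print(Zxx.shape)                   # 2D tensor: (frequency bins, time frames)

# The dominant frequency bin should sit near 50 Hz
peak = f[np.argmax(np.abs(Zxx).mean(axis=1))]
print(peak)
```

`np.abs(Zxx)` (or its square) is the 2D magnitude "image" that would be written to TFRecords for the CNN.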
### Theory:

Consider two numbers, \(15\) and \(20\). The LCM of \(15\) and \(20\) is \(60\); that is, \(LCM(15,20) = 60\). The GCD of \(15\) and \(20\) is \(5\); that is, \(GCD(15,20) = 5\).

Now, \(LCM(15,20) \times GCD(15,20) = 60 \times 5 = 300\), and \(15 \times 20 = 300\), so \(LCM(15,20) \times GCD(15,20) = 15 \times 20\).

The same relationship holds for polynomials: "the product of any two polynomials is equal to the product of their LCM and GCD". That is, \(f(x) \times g(x) = LCM[f(x),g(x)] \times GCD[f(x),g(x)]\).

Let us understand the concept using an example.

Example: Let \(f(x) = 21(x^4 - x^2)\) and \(g(x) = 16(x^2 + 3x)^2\). Let us verify \(f(x) \times g(x) = LCM[f(x),g(x)] \times GCD[f(x),g(x)]\).

Solution:

\(f(x) = 21(x^4 - x^2) = 3 \times 7 \times x^2 \times (x^2 - 1) = 3 \times 7 \times x^2 \times (x + 1)(x - 1)\)

\(g(x) = 16(x^2 + 3x)^2 = 2^4 \times (x^4 + 6x^3 + 9x^2) = 2^4 \times x^2 \times (x^2 + 6x + 9) = 2^4 \times x^2 \times (x + 3)(x + 3)\)

Now,

\(LCM[f(x),g(x)] = 3 \times 7 \times 2^4 \times x^2 \times (x + 1)(x - 1) \times (x + 3)(x + 3) = 336\, x^2(x^2 - 1)(x + 3)^2\)

\(GCD[f(x),g(x)] = x^2\)

Consider the \(LHS = f(x) \times g(x)\):

\(f(x) \times g(x) = 21(x^4 - x^2) \times 16(x^2 + 3x)^2 = 336(x^4 - x^2)(x^2 + 3x)^2\) ---- (\(1\))

Consider the \(RHS = LCM[f(x),g(x)] \times GCD[f(x),g(x)]\):

\(LCM[f(x),g(x)] \times GCD[f(x),g(x)] = 336\, x^2(x^2 - 1)(x + 3)^2 \times x^2 = 336\, x^2(x^2 - 1) \times x^2(x^2 + 6x + 9) = 336(x^4 - x^2)(x^4 + 6x^3 + 9x^2) = 336(x^4 - x^2)(x^2 + 3x)^2\) ---- (\(2\))

From equations (\(1\)) and (\(2\)), we have \(f(x) \times g(x) = LCM[f(x),g(x)] \times GCD[f(x),g(x)]\). Hence, we proved.
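The identity can be spot-checked numerically at a few integer sample points (a sanity check of the algebra above, not a proof):

```python
# f, g and the LCM/GCD factorisations worked out above
def f(x):
    return 21 * (x**4 - x**2)

def g(x):
    return 16 * (x**2 + 3 * x) ** 2

def lcm_fg(x):
    return 336 * x**2 * (x**2 - 1) * (x + 3) ** 2

def gcd_fg(x):
    return x**2

# Integer inputs keep the arithmetic exact
for x in (-2, 3, 5, 10):
    assert f(x) * g(x) == lcm_fg(x) * gcd_fg(x)
print("f(x) * g(x) == LCM * GCD at all sampled points")
```

Since both sides are polynomials of degree 8, agreement at more than 8 points would in fact force them to be identical.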
# Long-term observations of the pulsars in 47 Tucanae - II. Proper motions, accelerations and jerks

@article{Freire2017LongtermOO, title={Long-term observations of the pulsars in 47 Tucanae - II. Proper motions, accelerations and jerks}, author={Paulo C. C. Freire and Alessandro Ridolfi and Michael Kramer and Christine A. Jordan and Richard N. Manchester and Pablo Torne and John M. Sarkissian and Craig O. Heinke and Nichi D'amico and F. Camilo and Duncan R. Lorimer and Andrew G. Lyne}, journal={Monthly Notices of the Royal Astronomical Society}, year={2017}, volume={471}, pages={857-876} }

• P. Freire, +9 authors A. Lyne • Published 15 June 2017 • Physics • Monthly Notices of the Royal Astronomical Society

This paper is the second in a series where we report the results of the long-term timing of the millisecond pulsars (MSPs) in 47 Tucanae with the Parkes 64-m radio telescope. We obtain improved timing parameters that provide additional information for studies of the cluster dynamics: a) the pulsar proper motions yield an estimate of the proper motion of the cluster as a whole ($\mu_{\alpha}\, = \, 5.00\, \pm \, 0.14\, \rm mas \, yr^{-1}$, $\mu_{\delta}\, = \, -2.84\, \pm \, 0.12\, \rm mas \, yr^{-1}$) …

Long-term observations of pulsars in the globular clusters 47 Tucanae and M15
• A. Ridolfi, +15 authors N. Wex • Physics • Proceedings of the International Astronomical Union • 2017

Abstract Multi-decade observing campaigns of the globular clusters 47 Tucanae and M15 have led to an outstanding number of discoveries. Here, we report on the latest results of the long-term …

A Dense Companion to the Short-period Millisecond Pulsar Binary PSR J0636+5128
• Physics • The Astrophysical Journal • 2018

PSR J0636+5128 is a millisecond pulsar in one of the most compact pulsar binaries known, with a 96 min orbital period.
The pulsar mass function suggests a very low-mass companion, similar to that …

Using long-term millisecond pulsar timing to obtain physical characteristics of the bulge globular cluster Terzan 5

Over the past decade the discovery of three unique stellar populations and a large number of confirmed pulsars within the globular cluster Terzan 5 has raised questions over its classification. Using …

Discovery and Timing of Pulsars in the Globular Cluster M13 with FAST

We report the discovery of a binary millisecond pulsar (namely PSR J1641+3627F or M13F) in the globular cluster M13 (NGC 6205) and timing solutions of M13A to F using observations made with the …

Discovery of Millisecond Pulsars in the Globular Cluster Omega Centauri

The globular cluster Omega Centauri is the most massive and luminous cluster in the Galaxy. The $\gamma$-ray source FL8Y J1326.7$-$4729 is coincident with the core of the cluster, leading to …

Upgraded Giant Metrewave Radio Telescope timing of NGC 1851A: a possible millisecond pulsar - neutron star system.
• Medicine, Physics • Monthly notices of the Royal Astronomical Society • 2019

1 yr of upgraded Giant Metrewave Radio Telescope timing measurements of PSR J0514-4002A, a 4.99-ms pulsar in a 18.8-d eccentric orbit with a massive companion located in the globular cluster NGC 1851, raise the possibility that the companion is also a neutron star.

High-precision pulsar timing and spin frequency second derivatives
• Physics • 2018

We investigate the impact of intrinsic, kinematic and gravitational effects on high precision pulsar timing.
We present an analytical derivation and a numerical computation of the impact of these …

On the vanishing orbital X-ray variability of the eclipsing binary millisecond pulsar 47 Tuc W
• Physics • 2020

Redback millisecond pulsars (MSPs) typically show pronounced orbital variability in their X-ray emission due to our changing view of the intrabinary shock (IBS) between the pulsar wind and stellar …

The dynamics of Galactic centre pulsars: constraining pulsar distances and intrinsic spin-down

Through high-precision radio timing observations, we show that five recycled pulsars in the direction of the Galactic Centre (GC) have anomalous spin period time derivative ($\dot P$) measurements …

An Extremely Low-mass He White Dwarf Orbiting the Millisecond Pulsar J1342+2822B in the Globular Cluster M3
• Physics • The Astrophysical Journal • 2019

We report on the discovery of the companion star to the millisecond pulsar J1342+2822B in the globular cluster M3. We exploited a combination of near-ultraviolet and optical observations acquired …
Math Help - System of Quadratic Inequalities.

Sketch the intersection of the given inequalities:

(1) y ≥ x^2 and (2) y ≤ -x^2 + 2x + 4

I can do this. You get a region of overlap, which is where the solution lies. My question is: this is a bounded region, but would we still say there are infinitely many solutions, since, for instance, we can take x = 0, x = 0.1, x = 0.01, x = 0.001, x = 0.0001, etc.? That is, we can just move to another point in the region that is only a tiny distance away. Or, since the region is bounded, is the solution set finite?

Also, can you get a system of quadratic inequalities with all real numbers as the solution set? I know you can if the inequalities are just multiples of one another, but is there another way?

2. Re: System of Quadratic Inequalities.

Sure, the solution set is not finite: as you know, any interval (a, b) with b > a is not finite.

Regarding "can you get a system of quadratic inequalities with all real numbers as the solution set?": I think you are talking about x-values. If so, you can take y < x^2 and y > -x^2 - 1.
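A quick numerical look (an illustration, not a proof) supports both answers: the bounded overlap region contains as many grid points as you care to sample, and the suggested second pair of inequalities leaves room for y at every x:

```python
import numpy as np

# Count grid points in the overlap of y >= x^2 and y <= -x^2 + 2x + 4
xs, ys = np.meshgrid(np.linspace(-3, 4, 400), np.linspace(-3, 8, 400))
inside = (ys >= xs**2) & (ys <= -(xs**2) + 2 * xs + 4)
print(inside.sum())  # thousands of points; refining the grid only finds more

# Second question: y < x^2 and y > -x^2 - 1 is solvable for every x,
# because -x^2 - 1 < x^2 holds for all real x.
x = np.linspace(-100, 100, 1001)
assert np.all(-(x**2) - 1 < x**2)
```

Setting x^2 ≤ -x^2 + 2x + 4 and solving gives (x - 2)(x + 1) ≤ 0, so the overlap region lives over x ∈ [-1, 2], which the grid above covers.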
## A Remark on the Assumptions of Bayes' Theorem

### Abstract

We formulate simple equivalent conditions for the validity of Bayes' formula for conditional densities. We show that for any random variables X and Y (with values in arbitrary measurable spaces), the following are equivalent:

1. X and Y have a joint density w.r.t. a product measure \mu \times \nu,
2. P_{X,Y} << P_X \times P_Y (here P_{.} denotes the distribution of {.}),
3. X has a conditional density p(x | y) w.r.t. a sigma-finite measure \mu,
4. X has a conditional distribution P_{X|Y} such that P_{X|y} << P_X for all y,
5. X has a conditional distribution P_{X|Y} and a marginal density p(x) w.r.t. a measure \mu such that P_{X|y} << \mu for all y.

Furthermore, given random variables X and Y with a conditional density p(y | x) w.r.t. \nu and a marginal density p(x) w.r.t. \mu, we show that Bayes' formula

p(x | y) = p(y | x)p(x) / \int p(y | x)p(x) d\mu(x)

yields a conditional density p(x | y) w.r.t. \mu if and only if X and Y satisfy the above conditions. Counterexamples illustrating the nontriviality of the results are given, and implications for sequential adaptive estimation are considered.

Comment: 10 pages

Topics: Mathematics - Statistics Theory, 60A05, 60A10
Year: 2011
OAI identifier: oai:arXiv.org:1103.6136
# Pitfalls of open maps

Suppose you have an open map $$p$$ between topological spaces, and a subset $$A$$ of $$p$$'s domain such that $$p(A)$$ is open. Can you then conclude that $$A$$ is open? Nope! Consider the spaces $$X=\{x_1,x_2\}$$ and $$Y=\{y_1,y_2\}$$ with topologies $$\tau_X=\{\varnothing, X, \{x_1\}\}$$ and $$\tau_Y=\{\varnothing,Y,\{y_1\}\}$$, respectively, and let $$p: X\times Y\to X$$ be the projection onto the first factor. This is an open map. If we consider $$A=X\times\{y_2\}$$, we see that $$A$$ is not open in $$X\times Y$$, but $$p(A)=p(X\times\{y_2\})= X$$, which is trivially open in $$X$$.

# Is this quotient space connected?

I came across this little problem recently: if $$X$$ is a topological space with exactly two components, and given an equivalence relation $$\sim$$, what can we say about its quotient space $$X/{\sim}$$? It turns out that $$X/{\sim}$$ is connected if and only if there exist $$x,y\in X$$, with $$x$$ and $$y$$ in separate components, such that $$x\sim y$$.

Suppose first that such $$x,y\in X$$ with $$x\sim y$$ exist. Let $$C_1$$ and $$C_2$$ be the two components of $$X$$ and let $$p: X \to X/{\sim}$$ be the natural projection. Since $$p$$ is a quotient map it is continuous, and since the image of a connected space under a continuous function is connected, both $$p(C_1)$$ and $$p(C_2)$$ are connected. But since $$x\sim y$$ we have $$p(C_1)\cap p(C_2)\neq \varnothing$$, so $$X/{\sim}$$ consists of a single component, because $p(C_1)\cup p(C_2) = p(C_1\cup C_2)=p(X)=X/{\sim},$ as wanted.

To show the reverse implication, we use the contrapositive of the statement and show: if for no $$x\in C_1$$ and $$y\in C_2$$ we have $$x\sim y$$, then $$X/{\sim}$$ is not connected. Assume the hypothesis and note that $$p(C_1)$$ and $$p(C_2)$$ are then disjoint connected subspaces whose union equals all of $$X/{\sim}$$ (since $$p$$ is surjective).
But then the images of $$C_1$$ and $$C_2$$ under $$p$$ are the two components of $$X/{\sim}$$ (each $$p(C_i)$$ is open, since $$C_i$$ is open and saturated), showing that $$X/{\sim}$$ is not connected. As wanted.
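Since the counterexample in the first post is finite, it can be checked exhaustively in a few lines of Python. This is my own sketch of the verification, with points and topologies encoded as plain sets:

```python
from itertools import product, combinations

X = {"x1", "x2"}
Y = {"y1", "y2"}
tau_X = [set(), {"x1"}, set(X)]
tau_Y = [set(), {"y1"}, set(Y)]

points = set(product(X, Y))

def subsets(s):
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# S is open in the product topology iff every point of S lies in some
# basic open box U x V that is contained in S.
basis = [set(product(U, V)) for U in tau_X for V in tau_Y]
opens = [S for S in subsets(points)
         if all(any(pt in B and B <= S for B in basis) for pt in S)]

def proj(S):
    return {pt[0] for pt in S}

# The projection onto the first factor is an open map ...
assert all(proj(S) in tau_X for S in opens)

# ... and p(A) = X is open, yet A = X x {y2} itself is not open.
A = set(product(X, {"y2"}))
assert proj(A) == X
assert A not in opens
print("p is open, p(A) is open, but A is not open")
```

The brute force is cheap here: the product has only four points, hence sixteen candidate subsets to classify as open or not.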
# Thread: [SOLVED] Confusing first order condition

1. ## [SOLVED] Confusing first order condition

The Lagrangian is $L=\sum^{\infty}_{t=0} \beta^tu(c_t)+\sum^{\infty}_{t=0} \lambda_t[f(k_t) +(1-\delta)k_t-c_t-k_{t+1}]$

And apparently one of the FOCs is $\frac{\partial L}{\partial k_{t+1}} = -\lambda_t + [f_1(k_{t+1})+1-\delta]\lambda_{t+1} = 0$

What's the deal here?

2. Originally Posted by garymarkhov

The Lagrangian is $L=\sum^{\infty}_{t=0} \beta^tu(c_t)+\sum^{\infty}_{t=0} \lambda_t[f(k_t) +(1-\delta)k_t-c_t-k_{t+1}]$

And apparently one of the FOCs is $\frac{\partial L}{\partial k_{t+1}} = -\lambda_t + [f_1(k_{t+1})+1-\delta]\lambda_{t+1} = 0$

What's the deal here?

What is the problem? That partial derivative looks right.

CB
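The subtlety is that $k_{t+1}$ appears in two consecutive terms of the sum: the period-$t$ constraint contributes $-\lambda_t$, and the period-$(t+1)$ constraint contributes $[f_1(k_{t+1})+1-\delta]\lambda_{t+1}$. A finite-difference check, with an illustrative production function $f(k)=k^{0.3}$ and made-up numbers for the multipliers:

```python
# Only two terms of the Lagrangian depend on k_{t+1}:
#   t-term:     lam_t * (f(k_t) + (1-delta)*k_t - c_t - k_{t+1})  -> derivative -lam_t
#   (t+1)-term: lam_tp1 * (f(k_{t+1}) + (1-delta)*k_{t+1} - c_{t+1} - k_{t+2})
#               -> derivative lam_tp1 * (f'(k_{t+1}) + 1 - delta)

def f(k):
    return k ** 0.3          # illustrative production function

def fprime(k):
    return 0.3 * k ** (-0.7)

lam_t, lam_tp1, delta, k_tp1 = 0.9, 0.85, 0.1, 2.0  # made-up values

def relevant_terms(k):
    # the parts of L that vary with k_{t+1}, all other arguments held fixed
    return lam_t * (-k) + lam_tp1 * (f(k) + (1 - delta) * k)

h = 1e-6
numeric = (relevant_terms(k_tp1 + h) - relevant_terms(k_tp1 - h)) / (2 * h)
analytic = -lam_t + lam_tp1 * (fprime(k_tp1) + 1 - delta)
print(numeric, analytic)
```

The central difference and the analytic expression agree, which is exactly the FOC quoted in the thread before setting it to zero.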
# Long compile times on Windows 10 status

Hi folks,

Although a dyed in the wool Linux user, I am experimenting with Windows 10. At 2.19.48 I am seeing the long compile times that have been mentioned, for every run. With 2.19.39 the compilation is normal and quite fast. [I picked this previous release at random, to simply get back somewhere before the current problem arose.]

Running Windows 10 14393.321.

What is the current status on this, and is there a workaround?

Andrew

_______________________________________________
lilypond-user mailing list
[hidden email]
https://lists.gnu.org/mailman/listinfo/lilypond-user

## RE: Long compile times on Windows 10 status

I've found that whenever I install a new font on my computer, I wind up in this state. That being said, I can recover back to normal speeds by deleting the C:\Users\\.lilypond-fonts.cache-2 folder. The first build after doing this will be slow (because it's rebuilding the font cache), but after that, speeds are back to normal.

--Steven
## Re: Long compile times on Windows 10 status

Hi Steven,

I get the slow compile on every run. I believe this is a current development issue.

Andrew

On 17 October 2016 at 10:45, Steven Weber wrote:
> I've found that whenever I install a new font on my computer, I wind up in this state. That being said, I can recover back to normal speeds by deleting the C:\Users\\.lilypond-fonts.cache-2 folder. The first build after doing this will be slow (because it's rebuilding the font cache), but after that, speeds are back to normal.
# Data Management – SRA Submission

Oly GBS Batch Submission Fail

He noticed that the SRA no longer wants "raw data dumps" (i.e. the type of submission I made before). So, that means I had to prepare the demultiplexed files provided by BGI to actually submit to the SRA.

Last week, I uploaded all 192 of the files via FTP. It took over 10 hrs.

FTP tips:
- Use ftp -i to initiate FTP.
- Then use open ftp.address.IP to connect.
- Then you can use mput with regular expressions to upload multiple files.

Today, I created a batch BioSample submission. Initiated the submission process (Ummm, this looks like it's going to take awhile…). Aaaaaand, it failed.

It seems like the FTP failed at some point, as there's nothing about those seven files that would separate them from the remaining 185 files. Additional support for FTP failure is that the 1SN_1A_1.fq.gz error message makes it sound like only part of the file got transferred.

I'll retrieve those files from our UW Google Drive (since their original home on Owl is still down) and get them transferred to the SRA.
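The same batch upload can be scripted with Python's ftplib instead of the interactive ftp client, which makes partial transfers easier to spot and retry. Everything here (host, credentials, file pattern) is a placeholder, not the actual NCBI details:

```python
# Hypothetical batch upload; host, user, and pattern are placeholders.
import ftplib
import glob

def upload_batch(host, user, password, pattern, remote_dir="."):
    """Upload every local file matching `pattern` over FTP."""
    with ftplib.FTP(host) as ftp:
        ftp.login(user=user, passwd=password)
        ftp.cwd(remote_dir)
        for path in sorted(glob.glob(pattern)):
            with open(path, "rb") as fh:
                # storbinary with a STOR command mirrors what mput
                # does for each file in the CLI client
                ftp.storbinary("STOR " + path, fh)

# Example (placeholder host):
# upload_batch("ftp.example.org", "anonymous", "", "*.fq.gz")
```

After uploading, comparing `ftp.size(name)` against the local file size is a cheap way to catch the kind of truncated transfer described above.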
# zbMATH — the first resource for mathematics

Neighbor sum distinguishing total choosability of 1-planar graphs with maximum degree at least 24. (English) Zbl 1455.05018

Summary: For a simple graph $$G$$, a neighbor sum distinguishing total $$k$$-coloring of $$G$$ is a mapping $$\phi$$: $$V (G) \cup E (G) \to \{1, 2, \ldots, k\}$$ such that no two adjacent or incident elements in $$V (G) \cup E (G)$$ receive the same color and $$w_\phi (u) \neq w_\phi (v)$$ for each edge $$u v \in E (G)$$, where $$w_\phi (v)$$ (or $$w_\phi (u))$$ denotes the sum of the color of $$v$$ (or $$u)$$ and the colors of all edges incident with $$v$$ (or $$u)$$. For each element $$x \in V (G) \cup E (G)$$, let $$L(x)$$ be a list of integer numbers. If, whenever we give a list assignment $$L = \{L (x) \mid | L (x) | = k, x \in V (G) \cup E (G)\}$$, there exists a neighbor sum distinguishing total $$k$$-coloring $$\phi$$ such that $$\phi (x) \in L (x)$$ for each element $$x \in V (G) \cup E (G)$$, then we say that $$\phi$$ is a list neighbor sum distinguishing total $$k$$-coloring. The smallest $$k$$ for which such a coloring exists is called the neighbor sum distinguishing total choosability of $$G$$, denoted by $$\operatorname{ch}_{\Sigma}^{\prime\prime} (G)$$. A graph is 1-planar if it can be drawn on the plane so that each edge is crossed by at most one other edge. There is almost no result yet about $$\operatorname{ch}_{\Sigma}^{\prime\prime} (G)$$ if $$G$$ is a 1-planar graph. We prove that $$\operatorname{ch}_{\Sigma}^{\prime\prime} (G) \leq \Delta + 3$$ for every 1-planar graph $$G$$ with maximum degree $$\Delta \geq 24$$.

##### MSC:

05C10 Planar graphs; geometric and topological aspects of graph theory
05C15 Coloring of graphs and hypergraphs
05C07 Vertex degrees
05C35 Extremal problems in graph theory
Math., 237, 109-115 (2018) · Zbl 1380.05076 [14] Pilśniak, M.; Woźniak, M., On the total-neighbor-distinguishing index by sums, Graphs Combin., 31, 3, 771-782 (2015) · Zbl 1312.05054 [15] Przybyło, J., Irregularity strength of regular graphs, Electron. J. Combin., 15, 1, 82 (2008) · Zbl 1163.05329 [16] Przybyło, J.; Woźniak, M., Total weight choosability of graphs, Electron. J. Combin., 18, 112 (2011) · Zbl 1217.05202 [17] Qu, C. Q.; Wang, G. H.; Yan, G. Y.; Yu, X. W., Neighbor sum distinguishing total choosability of planar graphs, J. Comb. Optim., 32, 3, 906-916 (2016) · Zbl 1348.05082 [18] Ringel, G., Ein sechsfarbenproblem auf der kugel, Abh. Math. Semin. Univ. Hambg., 29, 107-117 (1965) · Zbl 0132.20701 [19] Seamone, B., The 1-2-3 conjecture and related problems: a survey (2012), arXiv:1211.5122 [20] Song, W. Y.; Miao, L. Y.; Li, J. B.; Zhao, Y. Y.; Pang, J. R., Neighbor sum distinguishing total coloring of sparse IC-planar graphs, Discrete Appl. Math., 239, 183-192 (2018) · Zbl 1382.05019 [21] Wang, J. H.; Cai, J. S.; Liu, B. J., Neighbor sum distinguishing total choosability of planar graphs without adjacent triangles, Theoret. Comput. Sci., 661, 1-7 (2017) · Zbl 1357.05027 [22] Wang, J. H.; Cai, J. S.; Ma, Q. L., Neighbor sum distinguishing total choosability of planar graphs without 4-cycles, Discrete Appl. Math., 206, 215-219 (2016) · Zbl 1335.05051 [23] Wong, T.; Zhu, X., Total weight choosability of graphs, J. Graph Theory, 66, 198-212 (2011) · Zbl 1228.05161 [24] Wong, T.; Zhu, X., Antimagic labelling of vertex weighted graphs, J. Graph Theory, 3, 70, 348-359 (2012) · Zbl 1244.05192 [25] Yang, D. L.; Sun, L.; Yu, X. W.; Wu, J. L.; Zhou, S., Neighbor sum distinguishing total chromatic number of planar graphs with maximum degree $$10$$, Appl. Math. Comput., 314, 456-468 (2017) · Zbl 1426.05051 [26] Zhang, Z.; Cheng, X.; Li, J.; Yao, B.; Lu, X.; Wang, J., On adjacent-vertex-distinguishing total coloring of graphs, Sci. China Ser. 
A, 48, 3, 289-299 (2005) · Zbl 1080.05036 [27] Zhang, X.; Wu, J. L., On edge colorings of 1-planar graphs, Inform. Process. Lett., 111, 124-128 (2011) · Zbl 1259.05050 This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
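To make the definition concrete, the condition can be checked mechanically on a small example. The following Python sketch is purely illustrative (the function and the toy graph are my own construction, not taken from the paper): it tests whether a given total coloring is proper and whether the induced weights $$w(v)$$ distinguish the endpoints of every edge.

```python
def is_nsd_total_coloring(vertices, edges, color):
    """Return True iff `color` (a dict keyed by vertices and frozenset edges)
    is a proper total coloring whose weights w(v) = color(v) + sum of the
    colors of the edges incident with v differ across every edge uv."""
    edges = [frozenset(e) for e in edges]
    for e in edges:
        u, v = tuple(e)
        # proper: an edge's endpoints get distinct colors, and the edge's
        # color differs from both endpoint colors
        if color[u] == color[v] or color[e] in (color[u], color[v]):
            return False
        # proper: edges sharing an endpoint get distinct colors
        for f in edges:
            if f != e and f & e and color[f] == color[e]:
                return False
    # neighbor sums must distinguish adjacent vertices
    w = {v: color[v] + sum(color[e] for e in edges if v in e) for v in vertices}
    return all(w[u] != w[v] for e in edges for u, v in [tuple(e)])

# A path on three vertices, colored with values from {1, ..., 4}:
V = [1, 2, 3]
E = [frozenset({1, 2}), frozenset({2, 3})]
col = {1: 1, 2: 2, 3: 1, frozenset({1, 2}): 3, frozenset({2, 3}): 4}
# w(1) = 1 + 3 = 4, w(2) = 2 + 3 + 4 = 9, w(3) = 1 + 4 = 5 -- all sums differ
```

Searching over all such colorings of $$\{1,\ldots,k\}$$ would recover the smallest usable $$k$$ for tiny graphs; the paper's bound $$\Delta + 3$$ is of course obtained by proof, not enumeration.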
This article presents the different data types in R. To learn about the different variable types from a statistical point of view, read “Variable types and examples”.

# What data types exist in R?

There are five data types in R:

1. Numeric
2. Integer
3. Complex
4. Character
5. Logical

Datasets in R are often a combination of these 5 data types. Below we explore each data type in more detail, one by one, except “complex”: we focus on the main types, and complex data is rarely used in practice.

# Numeric

The most common data type in R is numeric. A variable or a series is stored as numeric data if the values are numbers or contain decimals. For example, the following two series are stored as numeric by default:

```r
# numeric series without decimals
num_data <- c(3, 7, 2)
num_data
## [1] 3 7 2
class(num_data)
## [1] "numeric"

# numeric series with decimals
num_data_dec <- c(3.4, 7.1, 2.9)
num_data_dec
## [1] 3.4 7.1 2.9
class(num_data_dec)
## [1] "numeric"

# also possible to check the class thanks to str()
str(num_data_dec)
## num [1:3] 3.4 7.1 2.9
```

In other words, if you assign one or several numbers to an object in R, it is stored as numeric (numbers with decimals) by default, unless specified otherwise.

# Integer

The integer data type is actually a special case of numeric data. Integers are numeric data without decimals. This type can be used if you are sure that the numbers you store will never contain decimals. For example, let’s say you are interested in the number of children in a sample of 10 families. This variable is a discrete variable (see a reminder on the variable types if you do not remember what a discrete variable is) and will never have decimals.
Therefore, it can be stored as integer data thanks to the as.integer() command:

```r
children <- c(1, 3, 2, 2, 4, 4, 1, 1, 1, 4)
children
## [1] 1 3 2 2 4 4 1 1 1 4
children <- as.integer(children)
class(children)
## [1] "integer"
```

Note that even if a variable holds only whole numbers, R still stores it as numeric by default (as the first example above shows): to obtain integers you must convert explicitly with as.integer() or enter the values with the L suffix (e.g., 1L).

# Character

The data type character is used when storing text, known as strings in R. The simplest way to store data under the character format is to use "" around the piece of text:

```r
char <- "some text"
char
## [1] "some text"
class(char)
## [1] "character"
```

If you want to force any kind of data to be stored as character, you can do it by using the command as.character():

```r
char2 <- as.character(children)
char2
## [1] "1" "3" "2" "2" "4" "4" "1" "1" "1" "4"
class(char2)
## [1] "character"
```

Note that everything inside "" will be considered as character, even if it looks like a number. For example:

```r
chars <- c("7.42")
chars
## [1] "7.42"
class(chars)
## [1] "character"
```

Furthermore, as soon as there is at least one character value inside a variable or vector, the whole variable or vector will be considered as character:

```r
char_num <- c("text", 1, 3.72, 4)
char_num
## [1] "text" "1" "3.72" "4"
class(char_num)
## [1] "character"
```

Last but not least, although spaces do not matter in numeric data, they do matter in character data:

```r
num_space <- c(1 )
num_nospace <- c(1)
# is num_space equal to num_nospace?
num_space == num_nospace
## [1] TRUE

char_space <- "text "
char_nospace <- "text"
# is char_space equal to char_nospace?
char_space == char_nospace
## [1] FALSE
```

As you can see from the results above, a space within character data (i.e., within "") makes it a different string in R!

# Logical

A logical variable is a variable with only two possible values: TRUE or FALSE.

```r
value1 <- 7
value2 <- 9
# is value1 greater than value2?
greater <- value1 > value2
greater
## [1] FALSE
class(greater)
## [1] "logical"
```
```r
# is value1 less than or equal to value2?
less <- value1 <= value2
less
## [1] TRUE
class(less)
## [1] "logical"
```

It is also possible to transform logical data into numeric data: after conversion with the as.numeric() command, FALSE values become 0 and TRUE values become 1:

```r
greater_num <- as.numeric(greater)
greater_num
## [1] 0
less_num <- as.numeric(less)
less_num
## [1] 1
```

Conversely, numeric data can be converted to logical data, with FALSE for all values equal to 0 and TRUE for all other values:

```r
x <- 0
as.logical(x)
## [1] FALSE

y <- 5
as.logical(y)
## [1] TRUE
```

Thanks for reading. I hope this article helped you to understand the basic data types in R and their particularities. If you would like to learn more about the different variable types from a statistical point of view, read “Variable types and examples”.

As always, if you find a mistake or bug, or if you have any questions, do not hesitate to let me know in the comment section below, raise an issue on GitHub, or contact me. Get updates every time a new article is published by subscribing to this blog.
An isosceles triangle has two congruent sides and two congruent angles: the two equal sides join the base at the same angle, so the base angles are equal in measure. If the vertex angle is β, each base angle is α = (180° − β)/2.

Worked example (finding the angles). In an isosceles triangle, two of the angles are equal in measure. If the third angle is 21° less than three times either of the other angles, find the measure of all three angles. Let each of the two equal angles be x. Since the angles of a triangle sum to 180°, we have x + x + (3x − 21°) = 180°, so 5x = 201° and x = 40.2°. The three angles therefore measure 40.2°, 40.2° and 99.6°, and indeed 40.2° + 40.2° + 99.6° = 180°.

Worked example (finding the base). Suppose an isosceles triangle has two sides of length 13, and the altitude dropped from the vertex between them to the base has length 12; we want to solve for the base length x. The key realization is that this altitude meets the base at a right angle and splits the triangle into two right triangles that are congruent: each has a hypotenuse of 13 and a leg of 12. The altitude therefore bisects the base, so each of the two remaining legs has length x/2. Applying the Pythagorean theorem to one of the right triangles,

(x/2)² + 12² = 13².

On the left-hand side, x²/4 = 169 − 144 = 25. Multiplying both sides by 4 to isolate x² gives x² = 100, and taking the positive value, x = 10. As a check: half the base is 5, and 5² + 12² = 25 + 144 = 169 = 13².
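The computation above follows one simple pattern: half the base, the altitude, and an equal side form a right triangle. A minimal Python sketch of that step (the function name is my own choice, not from any of the quoted sources):

```python
import math

def isosceles_base(leg, altitude):
    """Base length of an isosceles triangle, given the length of one of the
    two equal sides (leg) and the altitude dropped onto the base."""
    # the altitude bisects the base, so (base/2)^2 + altitude^2 = leg^2
    half_base = math.sqrt(leg**2 - altitude**2)
    return 2 * half_base

print(isosceles_base(13, 12))  # -> 10.0, the base length found above
```

The same one-liner works for any 13-12-style configuration, as long as the leg is longer than the altitude.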
Coordinate geometry gives another way to work with isosceles triangles. The easiest way to prove that a triangle given by its vertex coordinates is isosceles is to use the sides: compute the three side lengths with the distance formula and check that two of them are equal. (For an equilateral triangle with vertices A, B and C, all three must agree: AB² = BC² = CA².) A quick perimeter example in the same spirit: an isosceles triangle with two arms of length 73 cm and a base of 48 cm has perimeter 73 + 73 + 48 = 194 cm.

The reverse problem, finding the third vertex, comes up often. Given the endpoints A(x₁, y₁) and B(x₂, y₂) of the base of an isosceles triangle, the third vertex C(x, y) must be equidistant from A and B, so it lies on the perpendicular bisector of AB. First calculate the midpoint D of the base by averaging the x and y coordinates of A and B; the perpendicular bisector is the line through D at right angles to AB, and one further constraint (the leg length, the altitude, or the area) pins C down to two mirror-image positions, one on each side of the base. For example, if J(−6, 2) and K(3, 2) are the endpoints of the base, every candidate third vertex lies on the vertical line through the midpoint (−1.5, 2). The same ambiguity appears in three dimensions: even if you fix the plane the triangle lies on, there are still two possible solutions for the third vertex.
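The perpendicular-bisector construction described above can be written out as a short Python function. This is an illustrative sketch only (the function name and the leg length 7.5 are my own choices; the base endpoints J(−6, 2) and K(3, 2) come from the example):

```python
import math

def third_vertex(a, b, leg):
    """Return the two possible apexes of an isosceles triangle whose base is
    the segment a-b and whose two equal sides have length `leg`."""
    (ax, ay), (bx, by) = a, b
    mx, my = (ax + bx) / 2, (ay + by) / 2    # midpoint of the base
    dx, dy = bx - ax, by - ay
    base = math.hypot(dx, dy)
    if leg <= base / 2:
        raise ValueError("leg must be longer than half the base")
    h = math.sqrt(leg**2 - (base / 2)**2)    # altitude from the apex
    ux, uy = -dy / base, dx / base           # unit vector perpendicular to the base
    return [(mx + h * ux, my + h * uy), (mx - h * ux, my - h * uy)]

# Base J(-6, 2), K(3, 2) with equal sides of length 7.5:
apexes = third_vertex((-6, 2), (3, 2), 7.5)
# -> two mirror-image apexes, (-1.5, 8.0) and (-1.5, -4.0)
```

Both returned points lie on the perpendicular bisector x = −1.5, one on each side of the base, which is exactly the two-solution ambiguity noted in the discussion.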