Time complexity in data structure | StudyMite Time complexity in data structure What are Asymptotic Notations? The Time complexity of Algorithms Space Complexity of an Algorithm The Time complexity of Algorithms As we know, the analysis of algorithms is required to find the most efficient algorithm for a given task, and two factors help us determine efficiency: the time and space complexity of the algorithm. We will discuss the first factor here, i.e. time complexity. Time complexity measures the amount of time taken by an algorithm as a function of the length of its input. Five common time complexities are: • Constant time complexity O(1) • Linear time complexity O(n) • Logarithmic time complexity O(log n) • Quadratic time complexity O(n²) • Exponential time complexity O(2ⁿ) To get more comfortable with the term "time complexity", let us understand it through a simple search problem using two very basic algorithms, linear search and binary search. For both searches we take the same mock problem: find the element '9' in a given array. Array = {1, 2, 3, 4, 5, 6, 7, 8, 9} Linear search compares every element in turn with the target, which in this case is '9'. The algorithm performs 9 comparisons before the target is found and true is returned. This is the worst-case scenario for linear search, as the algorithm runs for the longest possible time before finding the target. Now let's search using binary search. Here the target is first compared with the middle element of the sorted array; the search then continues in the left half if the target is less than the middle element, or in the right half if the target is greater than the middle element. In this case only about 3 such comparisons are needed until the target '9' is found. This is also a worst case for binary search, since the interval keeps shrinking until a single element remains to be matched against the target.
The search starts here (mid-point) We can conclude from the given example that binary search has logarithmic time complexity: the number of operations is log₂(9) ≈ 3. For an array of size n, binary search takes O(log n) operations. In the end, we can see that the number of operations is greatly reduced when we switch our search algorithm. This may not seem like a big difference here, but when huge amounts of data are processed in real time, the time complexity of an algorithm plays a very important role.
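The two searches described above can be sketched in Java as follows (the class and method names are my own, for illustration, not from the article):

```java
// Illustration of the two searches from the article: linear search is O(n)
// in the worst case, binary search on a sorted array is O(log n).
public class SearchDemo {

    // Linear search: compare every element in turn with the target.
    public static int linearSearch(int[] a, int target) {
        for (int i = 0; i < a.length; i++) {
            if (a[i] == target) {
                return i; // index of the target
            }
        }
        return -1; // not found
    }

    // Binary search: halve the search interval at every step.
    public static int binarySearch(int[] a, int target) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (a[mid] == target) {
                return mid;
            } else if (a[mid] < target) {
                lo = mid + 1; // target is in the right half
            } else {
                hi = mid - 1; // target is in the left half
            }
        }
        return -1; // not found
    }

    public static void main(String[] args) {
        int[] array = {1, 2, 3, 4, 5, 6, 7, 8, 9};
        System.out.println(linearSearch(array, 9)); // 8, after 9 comparisons
        System.out.println(binarySearch(array, 9)); // 8, after a few halvings
    }
}
```

For the array above, linear search inspects all 9 elements before finding '9', while binary search only inspects a handful of midpoints, which is the gap the article is pointing at.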
{"url":"https://www.studymite.com/data-structure/time-complexity-of-algorithms","timestamp":"2024-11-04T08:53:39Z","content_type":"text/html","content_length":"50170","record_id":"<urn:uuid:5de59bb5-6253-4425-ac0f-9e87080944c6>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00585.warc.gz"}
Question #4ab7b | Socratic Question #4ab7b 1 Answer The important thing to always keep in mind when dealing with dilution factors is that the dilution factor depends on two things: • the volume of the initial solution, i.e. the concentrated solution • the total volume of the final solution, i.e. the diluted solution More specifically, the dilution factor is calculated as DF = V_final / V_initial, where V_final is the final volume of the solution and V_initial is the initial volume of the solution. In your case, you make a solution by dissolving 0.4772 g of solute in 100 mL of water. You then take 1 mL of this solution and add it to another 100 mL of water. This means that in your case you have V_initial = 1 mL (you start with this sample of concentrated solution) and V_final = 1 mL + 100 mL = 101 mL (you add the concentrated sample to another 100 mL of water). The dilution factor will thus be DF = 101 mL / 1 mL = 101. In order to have a dilution factor of 100, you must take the 1 mL sample and add enough water to bring the total volume to 100 mL. This would then get you DF = 100 mL / 1 mL = 100. As a final note, a dilution factor equal to 101 means that your initial solution was 101 times more concentrated than the diluted solution. Impact of this question 1892 views around the world
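The dilution-factor arithmetic above can be double-checked with a tiny Java helper (the class and method names are mine, for illustration):

```java
// Hypothetical helper for the dilution-factor formula DF = V_final / V_initial.
public class DilutionDemo {

    public static double dilutionFactor(double vInitialMl, double vFinalMl) {
        return vFinalMl / vInitialMl;
    }

    public static void main(String[] args) {
        // 1 mL of concentrated solution added to 100 mL of water: total 101 mL
        System.out.println(dilutionFactor(1.0, 1.0 + 100.0)); // 101.0
        // 1 mL diluted up to a total volume of 100 mL
        System.out.println(dilutionFactor(1.0, 100.0));       // 100.0
    }
}
```

The two calls make the distinction in the answer concrete: adding 100 mL *to* the 1 mL sample gives DF = 101, while diluting the sample *up to* 100 mL total gives DF = 100.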
{"url":"https://api-project-1022638073839.appspot.com/questions/57c0b449b72cff0d7704ab7b","timestamp":"2024-11-06T08:09:29Z","content_type":"text/html","content_length":"37641","record_id":"<urn:uuid:7066c391-fbbd-4d75-9c75-386705ee7061>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00118.warc.gz"}
Tight Binding parameter extract from MLWF Date: 2011/05/20 15:55 Name: KF Dear Sirs, I also want to obtain tight-binding (TB) parameters, so I use the MLWF module in OpenMX. I found that the hopping integrals have a real and an imaginary part. To my knowledge, a hopping integral (or TB parameter) is a single number (an energy, in eV). Why do the hopping integrals have real and imaginary parts? In other words, what is the meaning of the real and imaginary parts of the hopping integrals? Best Regards,
{"url":"https://www.openmx-square.org/forum/patio.cgi?mode=view&no=1253","timestamp":"2024-11-04T11:39:41Z","content_type":"text/html","content_length":"3439","record_id":"<urn:uuid:dd989a6c-1515-4e60-9f10-c3f617eed972>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00752.warc.gz"}
How To Find The Equation Of A Tangent Line When Given Point - Tessshebaylo
{"url":"https://www.tessshebaylo.com/how-to-find-the-equation-of-a-tangent-line-when-given-point/","timestamp":"2024-11-12T15:47:26Z","content_type":"text/html","content_length":"59938","record_id":"<urn:uuid:44eb74be-3553-4894-a680-b360bc9febad>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00188.warc.gz"}
Frequently Asked Java Program 01: Java Program to Check If a Given Number is Palindrome Hello Folks, As part of Frequently Asked Java Programs In Interviews For Freshers And Experienced, in this post we will see a Java program to check if a given number is a palindrome. The palindrome program is a frequently asked programming interview question for both freshers and experienced candidates. Let's start with the basics: we must first know what a palindrome is. What is a palindrome? We read a word or group of words from left to right. If it reads the same from right to left as it does from left to right, it is called a palindrome. Examples: "refer", "level", "madam", "nurses run" and 1991 are palindromes. As the last example shows, a number can also be a palindrome. In this post we will learn about palindrome numbers; in the next post we will learn about string palindromes. A palindromic number (or numeral palindrome) is a number that remains the same when its digits are reversed, for example 16461. Logic to find a numeral palindrome: 1. Extract the last digit from the input number and make it the first digit of a new number. 2. Extract the second-last digit and put it in the next position of the new number. 3. We can achieve this in multiple ways, e.g. reverse traversal or charAt methods; here we will use some arithmetic on the number itself. 4. The last digit (remainder) of a number is obtained by dividing the number by 10. For example: a. 13 % 10 = 3, which is the last digit of 13. b. 100 % 10 = 0, which is the last digit of 100. 5. The new number, which is the reversed number, is built with the formula "reverseNumber = (reverseNumber * 10) + remainder". 6. Now divide the original number by 10 to remove its last digit. 7. Repeat the above steps until the number becomes zero.
Java Program:

package NumberSeries;

import java.util.Scanner;

public class PalindromeNumber {

    public static void main(String[] args) {
        // Taking input from user
        Scanner sc = new Scanner(System.in);
        System.out.println("Please input the number to find if it is palindrome or not:");
        int inputByUser = sc.nextInt();
        System.out.println("Input Number to be checked for palindrome: " + inputByUser);
        // Closing input stream
        sc.close();
        // Copy input number to a temporary variable to keep original value intact
        int temp = inputByUser;
        int revNumber = 0;
        // checking if number is negative
        if (inputByUser < 0) {
            System.out.println("Negative number. Enter positive number.");
        }
        // checking if number is single digit only
        else if (inputByUser >= 0 && inputByUser <= 9) {
            System.out.println(inputByUser + " is palindrome as it is single digit number.");
        } else {
            while (temp > 0) {
                // extracting last digit of number
                int rem = temp % 10;
                // forming the reversed number
                revNumber = revNumber * 10 + rem;
                // removing last digit from number
                temp = temp / 10;
            }
            System.out.println("Input By User:" + inputByUser);
            System.out.println("Reverse number:" + revNumber);
            // Comparing if input number and reversed number are same
            if (inputByUser == revNumber) {
                System.out.println(inputByUser + " is a Palindrome Number");
            } else {
                System.out.println(inputByUser + " is not a Palindrome Number");
            }
        }
    }
}

Sample output:

Please input the number to find if it is palindrome or not:
Input Number to be checked for palindrome: -10
Negative number. Enter positive number.

Please input the number to find if it is palindrome or not:
Input Number to be checked for palindrome: 5
5 is palindrome as it is single digit number.
Please input the number to find if it is palindrome or not:
Input Number to be checked for palindrome: 16461
Input By User:16461
Reverse number:16461
16461 is a Palindrome Number

Please input the number to find if it is palindrome or not:
Input Number to be checked for palindrome: 87342
Input By User:87342
Reverse number:24378
87342 is not a Palindrome Number

You can run the above program for multiple inputs, and if it fails for any condition, let me know.

4 thoughts on "Frequently Asked Java Program 01: Java Program to Check If a Given Number is Palindrome"
1. You can also add this program: WAP to find the second largest number from an array.
   1. Sure.
2. Thank you so much.
3. I think we could have just kept a check for number == 0, as the single-digit numbers from 1 to 9 are handled correctly by the loop anyway.
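For completeness, the same reversal logic can be packaged as a compact reusable method. This also illustrates the point raised in the last comment: the loop alone handles 0 and single-digit numbers correctly, so no special case is strictly needed (the class and method names here are mine, not from the post):

```java
// Compact variant of the reversal logic from the post (names are mine).
public class PalindromeCheck {

    public static boolean isPalindrome(int n) {
        if (n < 0) {
            return false; // negative numbers are not treated as palindromes
        }
        int temp = n, rev = 0;
        while (temp > 0) {
            rev = rev * 10 + temp % 10; // append last digit of temp to rev
            temp /= 10;                 // drop last digit of temp
        }
        // For n == 0 the loop body never runs, rev stays 0, and 0 == 0 holds.
        return n == rev;
    }

    public static void main(String[] args) {
        System.out.println(isPalindrome(16461)); // true
        System.out.println(isPalindrome(87342)); // false
    }
}
```

Any single-digit n reverses to itself, so the loop covers that case without an explicit check.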
{"url":"http://makeseleniumeasy.com/2017/05/31/java-program-to-check-if-a-given-number-is-palindrome/","timestamp":"2024-11-03T19:34:34Z","content_type":"text/html","content_length":"49409","record_id":"<urn:uuid:9d2959f3-c3f9-4006-aa8b-b49a94532272>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00071.warc.gz"}
Some publications [2024] [2023] [2022] [2021] [2020] [2019] [2018] [2017] [2016] [2015] [2014] [2013] [2012] [2011] [2010] [2009] [2008] [2007] [2006] [2005] [2004] [2003] [2002] [2001] [2000] [1999] [1998] [1997] [1996] [1994] [1993] [1991] Note: the years below correspond to the first preprint, not to the final publication dates. Accuracy of Complex Mathematical Operations and Functions in Single and Double Precision, with Paul Caprioli and Vincenzo Innocente, 8 pages, September 2024. The IEEE Standard for Floating-Point Arithmetic requires correct rounding for basic arithmetic operations (addition, subtraction, multiplication, division, and square root) on real floating-point numbers, but there is no such requirement for the corresponding operations on complex floating-point numbers. Furthermore, while the accuracy of mathematical functions has been studied in various libraries for different IEEE real floating-point formats, we are unaware of similar studies for complex-valued functions. Quadratic Short Division, with Juraj Sukop, May 2024. In Modern Computer Arithmetic, the authors describe a quadratic division with remainder, and mention that a factor-of-two speedup can be obtained when only the quotient is needed. We give an explicit quadratic algorithm that computes an approximate quotient. Note on the Veltkamp/Dekker Algorithms with Directed Roundings, preprint, February 2024. The Veltkamp/Dekker algorithms are very useful for double-double arithmetic when no fused multiply-add is available in hardware. Their analysis is well known for rounding to nearest-even. We study how they behave with directed roundings in radix 2. Correctly-rounded evaluation of a function: why, how, and at what cost?, with Nicolas Brisebarre, Guillaume Hanrot and Jean-Michel Muller, preprint, 30 pages, 2024.
The goal of this paper is to convince the reader that a future standard for floating-point arithmetic should require the availability of a correctly-rounded version of a well-chosen core set of elementary functions. We discuss the interest and feasibility of this requirement. We also give answers to common objections we have received over the last 10 years. Note: the hardness to round upper bounds from Table 3 can be reproduced with this SageMath program. Accuracy of Mathematical Functions in Single, Double, Extended Double and Quadruple Precision, with Brian Gladman, Vincenzo Innocente, John Mather, 26 pages, August 2024. [previous versions from February 2024, September 2023, February 2023, August 2022, February 2022, September 2021, February 2021, December 2020, September 17, 2020, September 15, 2020, August 28, 2020, August 25, 2020, February 2020] [HAL] • For single, double, and quadruple precision (IEEE-754 binary32, binary64 and binary128 formats), we give the largest known error in ulps (units in last place) of mathematical functions for several mathematical libraries. For single precision and univariate functions, these values were obtained by exhaustive search over the full binary32 range, and are thus rigorous upper bounds. The input values that exhibit these largest known errors are also given. Note: the material for this paper (LaTeX source and programs) is available here. Errata: AMD LibM does not provide the Bessel functions (j0, j1, y0, y1) and erf/erfc, both in single and double precision. The corresponding entries in all versions up to August 2022 should thus read "NA" (Not Available). In version from February 2024, replace "We also get large errors for expm1f in AMD LibM, and for cospif, tanpif, powf in FreeBSD" by "We also get large errors for expm1f in AMD LibM". Cipher key used by king Charles IX to his ambassador in Spain, Fourquevaux, reconstructed on November 9, 2023. 
Towards a correctly-rounded and fast power function in binary64 arithmetic, with Tom Hubrecht and Claude-Pierre Jeannerod, extended version (with proofs) of an article published in the proceedings of Arith 2023, 23 pages, July 2023 [extended version with full proofs] • We design algorithms for the correct rounding of the power function x^y in the binary64 IEEE 754 format, for all rounding modes, modulo the knowledge of hardest-to-round cases. Our implementation of these algorithms largely outperforms previous correctly-rounded implementations and is not far from the efficiency of current mathematical libraries, which are not correctly-rounded. Still, we expect our algorithms can be further improved for speed. The proofs of correctness are fully detailed, with the goal to enable a formal proof of these algorithms. We hope this work will motivate the next IEEE 754 revision committee to require correct rounding for mathematical functions. Deciphering Charles Quint (A diplomatic letter from 1547), with Cécile Pierrot, Camille Desenclos and Pierrick Gaudry, Proceedings of Histocrypt 2023, pages 148-159, 2023. • An unknown and almost fully encrypted letter written in 1547 by Emperor Charles V to his ambassador at the French Court, Jean de Saint-Mauris, was identified in a public library, the Bibliothèque Stanislas (Nancy, France). As no decryption of this letter was previously published or even known, a team of cryptographers and historians gathered together to study the letter and its encryption system. First, multiple approaches and methods were tested in order to decipher the letter without any other specimen. Then, the letter has now been inserted within the whole correspondence between Charles and Saint-Mauris, and the key has been consolidated thanks to previous key reconstructions. Finally, the decryption effort enabled us to uncover the content of the letter and investigate more deeply both cryptanalysis challenges and encryption methods. 
Note on FastTwoSum with Directed Rounding, preprint, 3 pages, 2023, revised July 2024 with Sélène Corbineau. • In [1], Graillat and Jézéquel prove a bound on the maximal error for the FastTwoSum algorithm with directed roundings. We improve that bound by a factor 2, and show that FastTwoSum is exact when the exponent difference of the inputs does not exceed the current precision. Déchiffrement de la lettre de Stanislas au comte d'Heudicourt datée du 23 décembre 1724, avec Clément Dallé, 2022 [in french]. • This letter is located at the Bibliothèque Stanislas in Nancy. The cipher used is a variant of Caesar's cipher, but with a shift of 2 instead of 3, and an encoding of letters and digrams by 1- and 2-digit numbers. The CORE-MATH Project, with Alexei Sibidanov and Stéphane Glondu, Proceedings of the 29th IEEE Symposium on Computer Arithmetic (ARITH 2022), 2022. Best paper award. • The CORE-MATH project aims at providing open-source mathematical functions with correct rounding that can be integrated into current mathematical libraries. This article demonstrates the CORE-MATH methodology on two functions: the binary32 power function (powf) and the binary64 cube root function (cbrt). CORE-MATH already provides a full set of correctly rounded C99 functions for single precision (binary32). These functions provide similar or in some cases up to threefold speedups with respect to the GNU libc, which is not correctly rounded. This work offers a prospect of the mandatory requirement of correct rounding for mathematical functions in the next revision of the IEEE-754 standard. The State of the Art in Integer Factoring and Breaking Public-Key Cryptography, with Fabrice Boudot, Pierrick Gaudry, Aurore Guillevic, Nadia Heninger, Emmanuel Thomé, IEEE Security and Privacy, volume 20, number 2, pages 80-86, 2022.
[HAL] • In this column, we will review the current state of the art of cryptanalysis for three number-theoretic problems using classical (nonquantum) computers, including, in particular, our most recent computational records for integer factoring and prime-field discrete logarithms. Three Cousins of Recamán's Sequence, with Max A. Alekseyev, Joseph Samuel Myers, Richard Schroeppel, S. R. Shannon, and N. J. A. Sloane, The Fibonacci Quarterly, volume 60, number 3, pages 201-219, August 2022. [HAL] • Although 10^230 terms of Recamán's sequence have been computed, it remains a mystery. Here three distant cousins of that sequence are described, one of which is also mysterious. (i) {A(n), n ≥ 3} is defined as follows. Start with n, and add n+1, n+2, n+3, ..., stopping after adding n+k if the sum n + (n+1) + ... + (n+k) is divisible by n+k+1. Then A(n)=k. We determine A(n) and show that A(n) ≤ n^2 - 2n - 1. (ii) {B(n), n ≥ 1} is a multiplicative analog of {A(n)}. Start with n, and successively multiply by n+1, n+2, ..., stopping after multiplying by n+k if the product n(n+1) ... (n+k) is divisible by n+k+1. Then B(n)=k. We conjecture that log^2 B(n) = (1/2+o(1)) log n log log n. (iii) The third sequence, {C(n), n ≥ 1}, is the most interesting, because the most mysterious. Concatenate the decimal digits of n, n+1, n+2, ... until the concatenation n || n+1 || ... || n+k is divisible by n+k+1. Then C(n)=k. If no such k exists we set C(n) = -1. We have found C(n) for all n ≤ 1000 except for two cases. Some of the numbers involved are quite large. For example, C(92) = 218128159460, and the concatenation 92 || 93 || ... || (92+C(92)) is a number with about 2 * 10^12 digits. We have only a probabilistic argument that such a k exists for all n. Nouveaux records de factorisation et de calcul de logarithme discret, with F. Boudot, P. Gaudry, A. Guillevic, N. Heninger and E. Thomé, Techniques de l'ingénieur, 17 pages, 2021 [in french].
[HAL] • This article describes two new records set at the end of 2019: an integer factorization record with the factorization of the number RSA-240, and a discrete logarithm record of the same size. Both records correspond to 795-bit numbers, i.e. 240 decimal digits, and were set with the same free software (CADO-NFS) on the same type of processors. These records serve as a reference for key-size recommendations for cryptographic protocols. Parallel Structured Gaussian Elimination for the Number Field Sieve, with Charles Bouillaguet, Mathematical Cryptology, volume 0, number 1, pages 22-39, 2020. • This article describes a parallel algorithm for the Structured Gaussian Elimination step of the Number Field Sieve (NFS). NFS is the best known method for factoring large integers and computing discrete logarithms. State-of-the-art algorithms for this kind of partial sparse elimination, as implemented in the CADO-NFS software tool, were not amenable to parallel implementation. We therefore designed a new algorithm from scratch with this objective and implemented it using OpenMP. The result is not only faster sequentially, but scales reasonably well: using 32 cores, the time needed to process two landmark instances went down from 38 minutes to 20 seconds and from 6.7 hours to 2.3 minutes, respectively. CORE-MATH, research project submitted as Advanced Grant Proposal to the European Research Council, August 2020. This project was judged ``too narrowly focused and [that it] would have limited impact'' by the ERC evaluation panel. Comparing the difficulty of factorization and discrete logarithm: a 240-digit experiment, with F. Boudot, P. Gaudry, A. Guillevic, N. Heninger and E. Thomé, proceedings of Crypto 2020, LNCS 12171, 30 pages. [HAL] • We report on two new records: the factorization of RSA-240, a 795-bit number, and a discrete logarithm computation over a 795-bit prime field.
Previous records were the factorization of RSA-768 in 2009 and a 768-bit discrete logarithm computation in 2016. Our two computations at the 795-bit level were done using the same hardware and software, and show that computing a discrete logarithm is not much harder than a factorization of the same size. Moreover, thanks to algorithmic variants and well-chosen parameters, our computations were significantly less expensive than anticipated based on previous records. This work was partially supported by a PRACE grant, and was published in the PRACE success stories, and in the Results of the Gauss Center for Supercomputing. Recovering Hidden SNFS Polynomials, note, 2 pages, October 2019. • Given an integer N constructed with an SNFS trapdoor, i.e., such that N = |Res(f,g)| with f = a[d]x^d + a[d-1]x^(d-1) + ... + a[0] having small coefficients a[i] = O(B), and g = lx - m, we can recover f and g in O(B F(l)) arithmetic operations, assuming B^2 l^2 ≪ a[d] m, where F(l) is the number of arithmetic operations to find a prime factor l. This partially answers an open problem from [1]. A New Ranking Function for Polynomial Selection in the Number Field Sieve, with Nicolas David, Contemporary Mathematics, volume 754, pages 315-325, special issue "75 Years of Mathematics of Computation", Susanne C. Brenner, Igor Shparlinski, Chi-Wang Shu, Daniel B. Szyld, eds., American Mathematical Society, 2020. [DOI] • This article explains why the classical Murphy-E ranking function might fail to correctly rank polynomial pairs in the Number Field Sieve, and proposes a new ranking function. Imperfect Forward Secrecy: How Diffie-Hellman Fails in Practice, with David Adrian, Karthikeyan Bhargavan, Zakir Durumeric, Pierrick Gaudry, Matthew Green, J. Alex Halderman, Nadia Heninger, Drew Springall, Emmanuel Thomé, Luke Valenta, Benjamin VanderSloot, Eric Wustrow, and Santiago Zanella-Béguelin, Research Highlights of Communications of the ACM, volume 62, number 1, pages 106-114, January 2019.
[CACM page] [HAL] • We investigate the security of Diffie-Hellman key exchange as used in popular Internet protocols and find it to be less secure than widely believed. First, we present Logjam, a novel flaw in TLS that lets a man-in-the-middle downgrade connections to "export-grade" Diffie-Hellman. To carry out this attack, we implement the number field sieve discrete logarithm algorithm. After a week-long precomputation for a specified 512-bit group, we can compute arbitrary discrete logarithms in that group in about a minute. We find that 82% of vulnerable servers use a single 512-bit group, and that 8.4% of Alexa Top Million HTTPS sites are vulnerable to the attack. In response, major browsers have changed to reject short groups. We go on to consider Diffie-Hellman with 768- and 1024-bit groups. We estimate that even in the 1024-bit case, the computations are plausible given nation-state resources. A small number of fixed or standardized groups are used by millions of servers; performing precomputation for a single 1024-bit group would allow passive eavesdropping on 18% of popular HTTPS sites, and a second group would allow decryption of traffic to 66% of IPsec VPNs and 26% of SSH servers. A close reading of published NSA leaks shows that the agency's attacks on VPNs are consistent with having achieved such a break. We conclude that moving to stronger key exchange methods should be a priority for the Internet community. On various ways to split a floating-point number, with Claude-Pierre Jeannerod and Jean-Michel Muller, proceedings of ARITH'25, June 2018. [HAL] Computational Mathematics with SageMath, with Alexandre Casamayou, Nathann Cohen, Guillaume Connan, Thierry Dumont, Laurent Fousse, François Maltey, Matthias Meulien, Marc Mezzarobba, Clément Pernet, Nicolas M. Thiéry, Erik Bray, John Cremona, Marcelo Forets, Alexandru Ghitza, Hugh Thomas, SIAM textbook, 2018.
[HAL] • This is the english translation of the book "Calcul mathématique avec Sage" which was published in 2013. The examples were updated from Sage 5.9 to Sage 8.3. FFT extension for algebraic-group factorization algorithms, with Richard P. Brent and Alexander Kruppa, chapter of the book Topics in Computational Number Theory Inspired by Peter L. Montgomery, Cambridge University Press, 2017 [HAL]. Other chapters of the book are available online on this page. Optimized Binary64 and Binary128 Arithmetic with GNU MPFR, with Vincent Lefèvre, proceedings of the 24th IEEE Symposium on Computer Arithmetic (ARITH 24), London, UK, July 24-26, 2017 [HAL]. • We describe algorithms used to optimize the GNU MPFR library when the operands fit into one or two words. On modern processors, this gives a speedup for a correctly rounded addition, subtraction, multiplication, division or square root in the standard binary64 format (resp. binary128) between 1.8 and 3.5 (resp. between 1.6 and 3.2). We also introduce a new faithful rounding mode, which enables even faster computations. Those optimizations will be available in version 4 of MPFR. Computing the ρ constant, with Jérémie Detrey and Pierre-Jean Spaenlehauer, preprint, 3 pages, October 2016. Factorisation of RSA-220 with CADO-NFS, with Shi Bai, Pierrick Gaudry, Alexander Kruppa and Emmanuel Thomé, 3 pages, May 2016. [HAL]. RSA-220 is part of the RSA Factoring Challenge. • We report on the factorization of RSA-220 (220 decimal digits), which is the 3rd largest integer factorization with the General Number Field Sieve (GNFS), after the factorization of RSA-768 (232 digits) in December 2009, and that of 3^697+1 (221 digits) in February 2015 by NFS@home. Twelve New Primitive Binary Trinomials, with Richard P. Brent, preprint, 2 pages, 2016 [arxiv, HAL] • We exhibit twelve new primitive trinomials over GF(2) of record degrees 42,643,801, 43,112,609, and 74,207,281. 
In addition we report the first Mersenne exponent not ruled out by Swan's theorem --- namely 57,885,161 --- for which no primitive trinomial exists. This completes the search for the currently known Mersenne prime exponents. Imperfect Forward Secrecy: How Diffie-Hellman Fails in Practice, with David Adrian, Karthikeyan Bhargavan, Zakir Durumeric, Pierrick Gaudry, Matthew Green, J. Alex Halderman, Nadia Heninger, Drew Springall, Emmanuel Thomé, Luke Valenta, Benjamin VanderSloot, Eric Wustrow, Santiago Zanella-Beguelin, May 2015, to appear in the proceedings of CCS 2015. [HAL] [talk slides]. Best paper award. • We investigate the security of Diffie-Hellman key exchange as used in popular Internet protocols and find it to be less secure than widely believed. First, we present a novel flaw in TLS that allows a man-in-the-middle to downgrade connections to export-grade Diffie-Hellman. To carry out this attack, we implement the number field sieve discrete log algorithm. After a week-long precomputation for a specified 512-bit group, we can compute arbitrary discrete logs in this group in minutes. We find that 82% of vulnerable servers use a single 512-bit group, allowing us to compromise connections to 7% of Alexa Top Million HTTPS sites. In response, major browsers are being changed to reject short groups. More details here. See also the CNRS press releases in English and French. Note added on June 21st, 2016: our estimation for DH-768 in Table 2 was pessimistic, since Thorsten Kleinjung did such a computation with 4000 core-years of sieving (instead of 8000), 900 core-years for linear algebra (instead of 28,500) for a matrix of 24M rows (instead of 150M). The DH-1024 estimation is probably pessimistic too. Automatic Analysis, 5 pages. This is a preliminary version of a chapter for a book about collected works of Philippe Flajolet. I wrote that chapter in March to July 2012. While the book is not yet published, I make this text available here.
Magic Squares of Squares, with Paul Pierrat and François Thiriet, 2015. • We give modular properties and new classes of potential solutions for magic squares of squares and similar problems. Beyond Double Precision, research project submitted as Advanced Grant Proposal to the European Research Council, October 2014. This project was judged ``of high quality but not sufficient to pass to Step 2 of the evaluation''. Better Polynomials for GNFS, with Shi Bai, Cyril Bouvier, and Alexander Kruppa, Mathematics of Computation, volume 85, pages 861-873, 2016 (preprint from September 2014). [HAL] • The general number field sieve (GNFS) is the most efficient algorithm known for factoring large integers. It consists of several stages, the first one being polynomial selection. The quality of the selected polynomials can be modelled in terms of size and root properties. We propose a new kind of polynomials for GNFS: with a new degree of freedom, we further improve the size property. We demonstrate the efficiency of our algorithm by exhibiting a better polynomial than the one used for the factorization of RSA-768, and a polynomial for RSA-1024 that outperforms the best published one. Note: the size-optimized RSA-1024 polynomial B[1024] can be reproduced using CADO-NFS using the command sopt -n 135...563 -f rsa1024.poly1 -d 6 with that rsa1024.poly1 file. Calcul mathématique avec Sage [in french] with Alexandre Casamayou, Guillaume Connan, Thierry Dumont, Laurent Fousse, François Maltey, Matthias Meulien, Marc Mezzarobba, Clément Pernet and Nicolas M. Thiéry, 2010 [HAL]. Division-Free Binary-to-Decimal Conversion, with Cyril Bouvier, IEEE Transactions on Computers, volume 63, number 8, pages 1895-1901, August 2014. [HAL] • This article presents algorithms that convert multiple precision integer or floating-point numbers from radix 2 to radix 10 (or to any radix b > 2). 
Those algorithms, based on the ``scaled remainder tree'' technique, use multiplications instead of divisions in their critical part. Both quadratic and subquadratic algorithms are detailed, with proofs of correctness. Experimental results show that our implementation of those algorithms outperforms the GMP library by up to 50% (using the same low-level routines). Erratum: in Algorithm 3, the formula for g at step 20 is wrong. It should be g = max(k[t], ceil(log2((k-3)/(k[t]-3)))) (reported by Juraj Sukop, April 8, 2024). Discrete logarithm in GF(2^809) with FFS, with Razvan Barbulescu, Cyril Bouvier, Jérémie Detrey, Pierrick Gaudry, Hamza Jeljeli, Emmanuel Thomé and Marion Videau, proceedings of PKC 2014, Lecture Notes in Computer Science Volume 8383, pages 221-238, 2014. [HAL] • We give details on solving the discrete logarithm problem in the 202-bit prime order subgroup of GF(2^809) using the Function Field Sieve algorithm (FFS). Factorization of RSA-704 with CADO-NFS, with Shi Bai and Emmanuel Thomé, preprint, July 2012. [HAL] • We give details of the factorization of RSA-704 with CADO-NFS. This is a record computation with publicly available software tools. The aim of this experiment was to stress CADO-NFS --- which was originally designed for 512-bit factorizations --- for larger inputs, and to identify possible room for improvement. Size Optimization of Sextic Polynomials in the Number Field Sieve, with Shi Bai, preprint, March 2012, revised June 2013. [HAL] • The general number field sieve (GNFS) is the most efficient algorithm known for factoring large integers. It consists of several stages, the first one being polynomial selection. The quality of the chosen polynomials in polynomial selection can be modelled in terms of size and root properties. In this paper, we describe some methods to optimize the size properties of sextic polynomials. To reproduce the example on page 12, see this Sage file.
Note added January 23, 2015: part of the results of this preprint are given in Better Polynomials for GNFS (see above). Avoiding adjustments in modular computations, preprint, March 2012. • We consider a sequence of operations (additions, subtractions, multiplications) modulo a fixed integer N, where only the final value is needed, therefore intermediate computations might use any representation. This kind of computation appears for example in number theoretic transforms (NTT), in stage 1 of the elliptic curve method for integer factorization, in modular exponentiation, ... Our aim is to avoid, as much as possible, the adjustment steps that consist of adding or subtracting N, since those steps are useless in the mathematical sense. Note added July 17, 2012: the fact of using residues larger than N in Montgomery multiplication is well known. See for example the article "Software Implementation of Modular Exponentiation Using Advanced Vector Instructions Architectures" by Shay Gueron and Vlad Krasnov in the proceedings of WAIFI 2012, where it is called "Non Reduced Montgomery Multiplication" (NRMM). Note added June 26, 2018: for Montgomery multiplication, earlier papers are by H. Orup ("Simplifying quotient determination in high-radix modular multiplication", Arith 12, 1999) and C. D. Walter ("Montgomery exponentiation needs no final subtraction", Electronics Letters, 1999). Finding Optimal Formulae for Bilinear Maps, with Razvan Barbulescu, Jérémie Detrey and Nicolas Estibals, proceedings of WAIFI 2012, Bochum, Germany, July 16-19, LNCS 7369, pages 168-186, 2012. Maximal Determinants and Saturated D-optimal Designs of Orders 19 and 37, with Richard P. Brent, William Orrick, and Judy-anne Osborn, 28 pages. • A saturated D-optimal design is a {+1,-1} square matrix of given order with maximal determinant. We search for saturated D-optimal designs of orders 19 and 37, and find that known matrices due to Smith, Cohn, Orrick and Solomon are optimal.
For order 19 we find all inequivalent saturated D-optimal designs with maximal determinant, 2^30 * 7^2 * 17, and confirm that the three known designs comprise a complete set. For order 37 we prove that the maximal determinant is 2^39 * 3^36, and find a sample of inequivalent saturated D-optimal designs. Our method is an extension of that used by Orrick to resolve the previously smallest unknown order of 15; and by Chadjipantelis, Kounias and Moyssiadis to resolve orders 17 and 21. The method is a two-step computation which first searches for candidate Gram matrices and then attempts to decompose them. Using a similar method, we also find the complete spectrum of determinant values for {+1,-1} matrices of order 13. Will Orrick compiles known results about The Hadamard maximal determinant problem. Related integer sequences are A003432 and A003433. Note: Richard Brent wrote a paper showing how to generate many Hadamard equivalence classes of solutions from a given Gram matrix. Numerical Approximation of the Masser-Gramain Constant to Four Decimal Digits: delta=1.819..., with Guillaume Melquiond and W. Georg Nowak, Mathematics of Computation, volume 82, number 282, pages 1235-1246, 2013. [HAL entry] • We prove that the constant studied by Masser, Gramain, and Weber, satisfies 1.819776 < delta < 1.819833, and disprove a conjecture of Gramain. This constant is a two-dimensional analogue of the Euler-Mascheroni constant; it is obtained by computing the radius r[k] of the smallest disk of the plane containing k Gaussian integers. While we have used the original algorithm for smaller values of k, the bounds above come from methods we developed to obtain guaranteed enclosures for larger values of k. 
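The maximal-determinant searches above rely on a two-step Gram-matrix method to reach orders 19 and 37. For intuition only, here is a brute-force sketch (my own, not the authors' code) that recovers the known maximal determinant 4 at order 3, the largest order where exhaustive search over all 2^(n^2) sign matrices is instant:

```python
# Exhaustive search for the maximal determinant of an n x n {+1,-1} matrix.
# Feasible only for tiny n; the paper's candidate-Gram-matrix method is what
# makes orders like 19 and 37 reachable.
from itertools import product

def det3(m):
    # Determinant of a 3x3 matrix by cofactor expansion along the first row.
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def max_det_order3():
    best = 0
    for entries in product((1, -1), repeat=9):   # all 512 sign matrices
        m = (entries[0:3], entries[3:6], entries[6:9])
        best = max(best, det3(m))
    return best
```

The value 4 agrees with the start of sequence A003432 cited above (1, 2, 4, 16, ... for orders 1, 2, 3, 4).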
Ballot stuffing in a postal voting system, with Véronique Cortier, Jérémie Detrey, Pierrick Gaudry, Frédéric Sur, Emmanuel Thomé and Mathieu Turuani, proceedings of REVOTE 2011, International Workshop on Requirements Engineering for Electronic Voting Systems, Trento, Italy, August 29, 2011, pages 27-36. • We review a postal voting system used in spring 2011 by the French research institute CNRS and designed by a French company (Tagg Informatique). We explain how the structure of the material can be easily understood from a few samples of voting material (distributed to the voters), without any prior knowledge of the system. Taking advantage of some flaws in the design of the system, we show how to perform major ballot stuffing, making it possible to change the outcome of the election. Our attack has been tested and confirmed by the CNRS. A fixed postal voting system has been quickly proposed by Tagg Informatique in collaboration with the CNRS, preventing this attack for the next elections. Short Division of Long Integers, with David Harvey, proceedings of the 20th IEEE Symposium on Computer Arithmetic (ARITH 20), Tuebingen, July 25-27, 2011, pages 7-14. [HAL entry, DOI] • We consider the problem of short division --- i.e., approximate quotient --- of multiple-precision integers. We present ready-to-implement algorithms that yield an approximation of the quotient, with tight and rigorous error bounds. We exhibit speedups of up to 30% with respect to GMP division with remainder, and up to 10% with respect to GMP short division, with room for further improvements. This work enables one to implement fast correctly rounded division routines in multiple-precision software tools. Note added July 27, 2011: the algorithm used by GMP 5 (implemented in the mpn_div_q function) was presented by Torbjörn Granlund in his invited talk at the ICMS 2006 conference.
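The short-division entry above is about approximate quotients with rigorous error bounds. As a rough sketch of the idea only (my own code with a deliberately crude bound, not the paper's algorithm): truncating both operands to their high bits yields a quotient whose relative error shrinks with the number of bits kept.

```python
# Approximate ("short") quotient from truncated operands: dropping the t low
# bits of both operands gives a quotient with relative error about 2^(t-n),
# where n is the bit length of the divisor.  This is only the idea of short
# division; the paper gives rigorous ulp-level bounds for GMP integers.
import random

def short_quotient(a, b, t):
    """Approximate a // b using only the bits of a and b above position t."""
    return (a >> t) // (b >> t)

def demo(bits=256, t=128, seed=42):
    rng = random.Random(seed)
    a = rng.getrandbits(2 * bits)
    b = rng.getrandbits(bits) | (1 << (bits - 1))   # force full bit length
    q = a // b
    q_approx = short_quotient(a, b, t)
    # Crude error bound for b >= 2^(bits-1): |q_approx - q| <= q*2^(t-bits+2) + 2.
    assert abs(q_approx - q) <= (q >> (bits - t - 2)) + 2
    return q, q_approx
```

Here keeping the top 128 bits of 256-bit operands leaves a relative error around 2^-127; the paper's contribution is making such bounds tight enough for correctly rounded division.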
Non-Linear Polynomial Selection for the Number Field Sieve [doi], with Thomas Prest, Journal of Symbolic Computation, special issue in the honour of Joachim von zur Gathen, volume 47, number 4, pages 401-409, 2012. [HAL] • We present an algorithm to find two non-linear polynomials for the Number Field Sieve integer factorization method. This algorithm extends Montgomery's "two quadratics" method; for degree 3, it gives two skewed polynomials with resultant O(N^5/4), which improves on Williams' O(N^4/3) result. Note added in June 2011: Namhun Koo, Gooc Hwa Jo and Soonhak Kwon extended our algorithm in this preprint. Note added on September 30, 2011: Nicholas Coxon extended and analysed our algorithm in this preprint. Note added on April 5, 2018: as noticed by Georgina Canny, the polynomials in the example for Montgomery's two quadratics method should be reversed. However, the reversed polynomials also work, since they have as common root 1/m modulo N. Modern Computer Arithmetic, with Richard Brent, Cambridge University Press, 2010, our page of the book, [HAL entry]. • This book collects in the same document all state-of-the-art algorithms in multiple precision arithmetic (integers, integers modulo n, floating-point numbers). The best current reference on that topic is volume 2 of Knuth's The art of computer programming, which misses some important newer algorithms (divide and conquer division, other variants of FFT multiplication, floating-point algorithms, ...) Our aim is to give detailed algorithms: □ for all operations (not just multiplication, as in many textbooks), □ for all size ranges (not just schoolbook methods or FFT-based methods), □ and including all details (for example how to properly deal with carries for integer algorithms, or a rigorous analysis of roundoff errors for floating-point algorithms).
The book would be useful for graduate students in computer science and mathematics (perhaps too specialized for most undergraduates, at least in its present state), researchers in discrete mathematics, computer algebra, number theory, cryptography, and developers of multiple-precision libraries. Reliable Computing with GNU MPFR, proceedings of the 3rd International Congress on Mathematical Software (ICMS 2010), June 2010, pages 42-45, LNCS 6327, Springer. The original publication is (or will be) available on www.springerlink.com. • This article presents a few applications where reliable computations are obtained using the GNU MPFR library. Why and how to use arbitrary precision, with Kaveh R. Ghazi, Vincent Lefèvre and Philippe Théveny, March 2010, Computing in Science and Engineering, volume 12, number 3, pages 62-65, 2010 (© IEEE). • Most floating-point computations nowadays are done in double precision, i.e., with a significand (or mantissa) of 53 bits. However, some applications require more precision: double-extended (64 bits or more), quadruple precision (113 bits) or even more. In an article published in The Astronomical Journal in 2001, Toshio Fukushima says: In the days of powerful computers, the errors of numerical integration are the main limitation in the research of complex dynamical systems, such as the long-term stability of our solar system and of some exoplanets [...] and gives an example where using double precision leads to an accumulated round-off error of more than 1 radian for the solar system! Another example where arbitrary precision is useful is static analysis of floating-point programs running in electronic control units of aircraft or in nuclear reactors.
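Fukushima's example above is about accumulated round-off. The effect is easy to reproduce without MPFR; this sketch (illustrative only, using Python's decimal module in place of a C multiple-precision library) contrasts binary double precision with 50-digit decimal arithmetic on a sum where 0.1 is exactly representable in radix 10 but not in radix 2:

```python
# Accumulated round-off: summing 0.1 ten thousand times in binary double
# precision drifts away from 1000, while 50-digit decimal arithmetic is exact
# here because 0.1 is representable in radix 10.  MPFR plays the analogous
# role in C, with correct rounding at any binary precision.
from decimal import Decimal, getcontext

def drift():
    s_double = 0.0
    for _ in range(10_000):
        s_double += 0.1          # each addition rounds to 53 bits

    getcontext().prec = 50       # 50 decimal digits of working precision
    s_decimal = Decimal(0)
    for _ in range(10_000):
        s_decimal += Decimal("0.1")

    return s_double, s_decimal
```

The double-precision sum is close to, but not equal to, 1000; in a long-running numerical integration such drifts compound, which is exactly the argument of the article.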
An O(M(n) log n) algorithm for the Jacobi symbol, with Richard Brent, January 2010, Proceedings of the Ninth Algorithmic Number Theory Symposium (ANTS-IX), Nancy, France, July 19-23, 2010, LNCS 6197, pages 83-95, Springer Verlag [the original publication is or will be available at www.springerlink.com]. • The best known algorithm to compute the Jacobi symbol of two n-bit integers runs in time O(M(n) log n), using Schönhage's fast continued fraction algorithm combined with an identity due to Gauss. We give a different O(M(n) log n) algorithm based on the binary recursive gcd algorithm of Stehlé and Zimmermann. Our implementation -- which to our knowledge is the first to run in time O(M(n) log n) -- is faster than GMP's quadratic implementation for inputs larger than about 10000 decimal digits. Note: the subquadratic code mentioned in the paper is available here. Note added July 19, 2019: Niels Möller published on arxiv a description of the algorithm he implemented in GMP in 2010. Factorization of a 768-bit RSA modulus, with Thorsten Kleinjung, Kazumaro Aoki, Jens Franke, Arjen K. Lenstra, Emmanuel Thomé, Joppe W. Bos, Pierrick Gaudry, Alexander Kruppa, Peter L. Montgomery, Dag Arne Osvik, Herman te Riele and Andrey Timofeev, Proceedings of Crypto'2010, Santa Barbara, USA, LNCS 6223, pages 333-350, 2010 [technical announcement]. • This paper reports on the factorization of the 768-bit number RSA-768 by the number field sieve factoring method and discusses some implications for RSA [more details here] Note added 21 September 2012: in Section 2.3 (Sieving) we report 47 762 243 404 unique relations (including free relations). It appears the correct number should be about 10^9 less, i.e., 46 762 246 508 including free relations, and 46 705 023 046 without free relations. 
Note added 09 October 2012: in Section 2.3 (Sieving) the numbers of remaining prime ideals we report during the filtering (initially 35.3G, then 14.5G and 10G) are most probably underestimated by about 5G. The Great Trinomial Hunt, with Richard Brent, Notices of the American Mathematical Society, volume 58, number 2, pages 233-239, February 2011. The glibc bug #10709, September 2009. [bugzilla entry] • On computers without double-extended precision, the GNU libc 2.10.1 incorrectly rounds the sine of (the double-precision closest to) 0.2522464. This is a bug in IBM's Accurate Mathematical Library, which claims correct rounding, as recommended by IEEE 754-2008. We analyze this bug and propose a fix. Calcul formel : mode d'emploi. Exemples en Maple, with Philippe Dumas, Claude Gomez, Bruno Salvy, March 2009 (in french) [HAL entry]. • This book is a free version of the book of the same name published by Masson in 1995. The examples use an obsolete version of Maple (V.3), but most of the text still applies to Maple and other modern computer algebra systems. Computing predecessor and successor in rounding to nearest, with Siegfried Rump, Sylvie Boldo and Guillaume Melquiond, BIT Numerical Mathematics, volume 49, number 2, pages 419-431, 2009. • We give simple and efficient methods to compute and/or estimate the predecessor and successor of a floating-point number using only floating-point operations in rounding to nearest. This may be used to simulate interval operations, in which case the quality in terms of the diameter of the result is significantly improved compared to existing approaches. Note added January 31, 2018: Jean-Michel Muller found an error in Remark 1 following Theorem 2.2, where we say that in the range [1/2,2]*eta/u, Algorithm 1 returns csup = succ(succ(c)). This is wrong. For example, in the IEEE 754 binary32 format, this range is [2^-126,2^-124]. For c = 2^-125 - 2^-149, Algorithm 1 returns csup = succ(c).
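The predecessor/successor paper above deliberately restricts itself to floating-point operations in rounding to nearest. The usual cross-check is a bit-level successor; this sketch (my own, requiring math.nextafter from Python 3.9) exploits the fact that positive finite IEEE 754 doubles are ordered like their 64-bit patterns:

```python
# Successor of a positive finite double by reinterpreting its bits as an
# integer and adding 1 -- valid because IEEE 754 orders positive floats like
# their bit patterns.  The paper instead achieves succ/pred with floating-point
# operations only, which is the interesting part on restricted hardware.
import math
import struct

def succ_bits(x):
    """Next double after positive finite x, via bit manipulation."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", bits + 1))
    return y

# Cross-check against the library successor on a few magnitudes,
# including a subnormal.
for x in (1.0, 0.25, 1e300, 5e-324):
    assert succ_bits(x) == math.nextafter(x, math.inf)
```

For example succ_bits(1.0) is 1 + 2^-52, one ulp above 1; the paper's algorithms reach the same neighbors without ever leaving floating-point arithmetic.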
Worst Cases for the Exponential Function in the IEEE 754r decimal64 Format, with Vincent Lefèvre and Damien Stehlé, LNCS volume 5045, pages 114-126, special LNCS issue following the Dagstuhl seminar 06021: Reliable Implementation of Real Number Algorithms: Theory and Practice, August 2008. • We searched for the worst cases for correct rounding of the exponential function in the IEEE 754r decimal64 format, and computed all the bad cases whose distance from a breakpoint (for all rounding modes) is less than 10^-15 ulp, and we give the worst ones. In particular, the worst case for |x| ≥ 3 * 10^-11 is exp(9.407822313572878 * 10^-2) = 1.09864568206633850000000000000000278... This work can be extended to other elementary functions in the decimal64 format and allows the design of reasonably fast routines that will evaluate these functions with correct rounding, at least in some domains. [Complete lists of worst cases for the exponential are available for the IEEE 754r decimal32 and decimal64 formats.] Ten New Primitive Binary Trinomials, with Richard Brent, Mathematics of Computation 78 (2009), pages 1197-1199 [Brent's web page]. • We exhibit ten new primitive trinomials over GF(2) of record degrees 24036583, 25964951, 30402457, and 32582657. This completes the search for the currently known Mersenne prime exponents. Implementation of the reciprocal square root in MPFR, March 2008 (extended abstract), Dagstuhl Seminar Proceedings following Dagstuhl seminar 08021 (Numerical validation in current hardware architectures), January 06-11, 2008. • We describe the implementation of the reciprocal square root --- also called inverse square root --- as a native function in the MPFR library. The difficulty is to implement Newton's iteration for the reciprocal square root on top of GNU MP's mpn layer, while guaranteeing a rigorous 1/2 ulp bound on the roundoff error.
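The MPFR reciprocal square root above is built on Newton's iteration. A sketch over exact rationals (my own; it shows the quadratic convergence but none of the paper's mpn-level work or 1/2 ulp error accounting):

```python
# Newton's iteration for the reciprocal square root:
#   x_{k+1} = x_k * (3 - a * x_k^2) / 2
# converges quadratically to 1/sqrt(a).  Running over exact rationals lets the
# error be observed roughly squaring at each step; the MPFR version instead
# doubles the working precision per step with a proven 1/2 ulp bound.
from fractions import Fraction

def rsqrt_newton(a, steps):
    x = Fraction(1, 2)          # crude starting value, adequate for a in [1, 4)
    for _ in range(steps):
        x = x * (3 - a * x * x) / 2
    return x

a = Fraction(2)
approx = rsqrt_newton(a, 6)
# After 6 steps, a * x^2 is extremely close to 1, i.e. x is close to 1/sqrt(2).
assert abs(a * approx * approx - 1) < Fraction(1, 10**10)
```

Six iterations from the crude start already give far more than double-precision accuracy; in MPFR the same iteration is run at geometrically increasing precision so that the total cost stays proportional to the final multiplication cost.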
Landau's function for one million billions, with Marc Deléglise and Jean-Louis Nicolas, Journal de Théorie des Nombres de Bordeaux, volume 20, number 3, pages 625-671, 2008. A Maple program implementing the algorithm described in this paper is available from Jean-Louis Nicolas' web page. • Let S[n] denote the symmetric group with n letters, and g(n) the maximal order of an element of S[n]. If the standard factorization of M into primes is M=q[1]^a[1] q[2]^a[2] ... q[k]^a[k], we define l(M) to be q[1]^a[1] + q[2]^a[2] + ... + q[k]^a[k]; one century ago, E. Landau proved that g(n)=max[l(M) ≤ n] M and that, when n goes to infinity, log g(n) ~ sqrt(n log(n)). There exists a basic algorithm to compute g(n) for 1 ≤ n ≤ N; its running time is O(N^3/2 / sqrt(log N)) and the needed memory is O(N); it allows computing g(n) up to, say, one million. We describe an algorithm to calculate g(n) for n up to 10^15. The main idea is to use the so-called l-superchampion numbers. Similar numbers, the superior highly composite numbers, were introduced by S. Ramanujan to study large values of the divisor function tau(n)=sum[d divides n] 1. Faster Multiplication in GF(2)[x], with Richard P. Brent, Pierrick Gaudry and Emmanuel Thomé, Proceedings of the Eighth Algorithmic Number Theory Symposium (ANTS-VIII), May 17-22, 2008, Banff Centre, Banff, Alberta (Canada), A. J. van der Poorten and A. Stein, editors, pages 153--166, LNCS 5011, 2008. A preliminary version appeared as INRIA Research Report, November 2007. • In this paper, we discuss an implementation of various algorithms for multiplying polynomials in GF(2)[x]: variants of the window methods, Karatsuba's, Toom-Cook's, Schönhage's and Cantor's algorithms. For most of them, we propose improvements that lead to practical speedups. The code that we developed for this paper is contained in the gf2x package, available under the GNU General Public License from https://gitlab.inria.fr/gf2x/gf2x.
Note (added 20 May 2008): part of this paper will soon be obsolete, namely the base case section, with the PCLMULQDQ instruction from Intel's new AVX instruction-set (compiler intrinsics, simulator, compiler support). Note (added 26 Jan 2009): Gao and Mateer have found a theoretical speedup in Cantor multiplication, see http://cr.yp.to/f2mult.html. Note (added 29 Nov 2011): Su and Fan analyze the use of PCLMULQDQ in http://eprint.iacr.org/2011/589. Arithmétique entière, cours aux JNCF 2007, [in french]. A Multi-level Blocking Distinct Degree Factorization Algorithm, INRIA Research Report 6331, with Richard P. Brent, 16 pages, October 2007. This paper describes in detail the algorithm presented at the 8th International Conference on Finite Fields and Applications (Fq8), July 9-13, 2007, Melbourne, Australia [extended abstract], [Richard's slides]. A revised version appeared in a special issue of Contemporary Mathematics, volume 461, pages 47-58, 2008. • We give a new algorithm for performing the distinct-degree factorization of a polynomial P(x) over GF(2), using a multi-level blocking strategy. The coarsest level of blocking replaces GCD computations by multiplications, as suggested by Pollard (1975), von zur Gathen and Shoup (1992), and others. The novelty of our approach is that a finer level of blocking replaces multiplications by squarings, which speeds up the computation in GF(2)[x]/P(x) of certain interval polynomials when P(x) is sparse. As an application we give a fast algorithm to search for all irreducible trinomials x^r + x^s + 1 of degree r over GF(2), while producing a certificate that can be checked in less time than the full search. Naive algorithms cost O(r^2) per trinomial, thus O(r^3) to search over all trinomials of given degree r. Under a plausible assumption about the distribution of factors of trinomials, the new algorithm has complexity O(r^2 (log r)^3/2 (log log r)^1/2) for the search over all trinomials of degree r.
Our implementation achieves a speedup of more than a factor of 560 over the naive algorithm in the case r = 24036583 (a Mersenne exponent). Using our program, we have found two new primitive trinomials of degree 24036583 over GF(2) (the previous record degree was 6972593). A GMP-based implementation of Schönhage-Strassen's large integer multiplication algorithm, with Pierrick Gaudry and Alexander Kruppa, Proceedings of the International Symposium on Symbolic and Algebraic Computation (ISSAC 2007), Waterloo, Ontario, Canada, pages 167-174, editor C.W.Brown, 2007. • Schönhage-Strassen's algorithm is one of the best known algorithms for multiplying large integers. Implementing it efficiently is of utmost importance, since many other algorithms rely on it as a subroutine. We present here an improved implementation, based on the one distributed within the GMP library. The following ideas and techniques were used or tried: faster arithmetic modulo 2^n+1, improved cache locality, Mersenne transforms, Chinese Remainder Reconstruction, the sqrt(2) trick, Harley's and Granlund's tricks, improved tuning. We also discuss some ideas we plan to try in the future. Note: this paper was motivated by Allan Steel, and the corresponding code is available from http://www.loria.fr/~zimmerma/software/. Andrew Sutherland used our FFT code to set a new record for elliptic curve point counting. Note added on April 29, 2011: Tsz-Wo Sze multiplied integers of 2^40 bits in about 2000 seconds using a cluster of 1350 cores: preprint. Time- and Space-Efficient Evaluation of Some Hypergeometric Constants, with Howard Cheng, Guillaume Hanrot, Emmanuel Thomé and Eugene Zima, Proceedings of the International Symposium on Symbolic and Algebraic Computation (ISSAC 2007), Waterloo, Ontario, Canada, pages 85-91, editor C.W.Brown, 2007.
• The currently best known algorithms for the numerical evaluation of hypergeometric constants such as zeta(3) to d decimal digits have time complexity O(M(d) log^2 d) and space complexity of O(d log d) or O(d). Following work from Cheng, Gergel, Kim and Zima, we present a new algorithm with the same asymptotic complexity, but more efficient in practice. Our implementation of this algorithm improves over existing programs for the computation of Pi, and we announce a new record of 2 billion digits for zeta(3). Worst Cases of a Periodic Function for Large Arguments, with Guillaume Hanrot, Vincent Lefèvre and Damien Stehlé, Proceedings of the 18th IEEE Symposium on Computer Arithmetic (ARITH'18), pages 133-140, Montpellier, France, 2007. A preliminary version appeared as INRIA Research Report 6106, January 2007. • One considers the problem of finding hard-to-round cases of a periodic function for large floating-point inputs, more precisely when the function cannot be efficiently approximated by a polynomial. This is one of the last few issues that prevents one from guaranteeing an efficient computation of correctly rounded transcendentals for the whole IEEE-754 double precision format. The first non-naive algorithm for that problem is presented, with a heuristic complexity of O(2^0.676 p) for a precision of p bits. The efficiency of the algorithm is shown on the largest IEEE-754 double precision binade for the sine function, and some corresponding bad cases are given. We can hope that all the worst cases of the trigonometric functions in their whole domain will be found within a few years, a task that was considered out of reach until now. Asymptotically Fast Division for GMP, October 2005, revised August 2006, October 2006 and February 2015. • Until version 4.2.1, GNU MP (GMP for short) division has complexity O(M(n) log(n)), which is not asymptotically optimal.
We propose here some division algorithms that achieve O(M(n)) with small constants, with corresponding GMP code. Code is available too: invert.c computes an approximate inverse within 1 ulp in 3M(n) [revised in February and May 2015]. A patch for the 2-limb case was proposed by Marco Bodrato: invert.diff. Error Bounds on Complex Floating-Point Multiplication, with Richard Brent and Colin Percival, Mathematics of Computation volume 76 (2007), pages 1469-1481. Some technical details are given in INRIA Research Report 6068, December 2006. [DOI] • Given floating-point arithmetic with t-digit base-β significands in which all arithmetic operations are performed as if calculated to infinite precision and rounded to a nearest representable value, we prove that the product of complex values z[0] and z[1] can be computed with maximum absolute error |z[0]| |z[1]| (1/2) β^(1-t) sqrt(5). In particular, this provides relative error bounds of 2^-24 sqrt(5) and 2^-53 sqrt(5) for IEEE 754 single and double precision arithmetic respectively, provided that overflow, underflow, and denormals do not occur. We also provide the numerical worst cases for IEEE 754 single and double precision arithmetic. 20 years of ECM (© Springer-Verlag), with Bruce Dodson, Proceedings of ANTS VII, July 2006. A preliminary version appeared as INRIA Research Report 5834, February 2006. • The Elliptic Curve Method for integer factorization (ECM) was invented by H. W. Lenstra, Jr., in 1985 [Lenstra87]. In the past 20 years, many improvements of ECM were proposed on the mathematical, algorithmic, and implementation sides. This paper summarizes the current state-of-the-art, as implemented in the GMP-ECM software. Erratum: on page 541 we write ``Computer experiments indicate that these curves have, on average, 3.49 powers of 2 and 0.78 powers of 3, while Suyama's family has 3.46 powers of 2 and 1.45 powers of 3''.
As noticed by Romain Cosset, those experiments done by the first author are wrong, since he ran 1000 random curves with the same prime input p=10^10+19, and the results differ according to the congruence of p mod powers of 2 and 3. With 10000 random curves on the 10000 primes just above 10^20, we get an average of 2^3.34*3^1.68 = 63.9 for Suyama's family, and 2^3.36*3^0.67 = 21.5 for curves of the form (16d + 18) y^2 = x^3 + (4d + 2)x^2 + x. Erratum: on page 536, section 3.5, one should read j = pi/d2 mod d1 instead of j = -pi/d2 mod d1 (reported by Alberto Zanoni, 3 Oct 2023). MPFR: A Multiple-Precision Binary Floating-Point Library With Correct Rounding, with Laurent Fousse, Guillaume Hanrot, Vincent Lefèvre, Patrick Pélissier, INRIA Research Report RR-5753, November 2005. A revised version appeared in ACM TOMS (Transactions on Mathematical Software), volume 33, number 2, article 13, 2007. • This paper presents a multiple-precision binary floating-point library, written in the ISO C language, and based on the GNU MP library. Its particularity is to extend ideas from the IEEE-754 standard to arbitrary precision, by providing correct rounding and exceptions. We demonstrate how these strong semantics are achieved --- with no significant slowdown with respect to other tools --- and discuss a few applications where such a library can be useful. Techniques algorithmiques et méthodes de programmation (in french), 11 pages, July 2005, appeared in Encyclopédie de l'informatique et des systèmes d'information, pages 929-935, Vuibert, 2006. 5,341,321, June 2005. • This short note shows the nasty effects of patents for the development of free software, even for patents that were not written with software applications in mind. The Elliptic Curve Method, November 2002, revised April 2003 and September 2010, appeared in the Encyclopedia of Cryptography and Security, Springer, 2005 (old link).
• Describes in two pages the history of ECM, how it works at high level, improvements to the method, and some applications. MPFR : vers un calcul flottant correct ? (in french), Interstices, 2005. • Obtaining a single result for a given computation: at first sight this seems obvious; it is in fact a vast research topic to which researchers gradually contribute. A new step has now been taken thanks to MPFR, a multiple-precision library for floating-point numbers. A primitive trinomial of degree 6972593, with Richard Brent and Samuli Larvala, Mathematics of Computation, volume 74, number 250, pages 1001-1002, 2005. • The only primitive trinomials of degree 6972593 over GF(2) are x^6972593 + x^3037958 + 1 and its reciprocal. An elementary digital plane recognition algorithm, with Yan Gerard and Isabelle Debled-Rennesson, appeared in Discrete Applied Mathematics, volume 151, issue 1-3, pages 169-183, 2005. • A naive digital plane is a subset of points (x,y,z) in Z^3 verifying h ≤ ax+by+cz < h+max{|a|,|b|,|c|} where (a,b,c,h) in Z^4. Given a finite unstructured subset of Z^3, deciding whether there exists a naive digital plane containing it is called digital plane recognition. This question is rather classical in the field of digital geometry (also called discrete geometry). We suggest in this paper a new algorithm to solve it. Its asymptotic complexity is bounded by O(n^7) but its behavior seems to be linear in practice. It uses an original strategy of optimization in a set of triangular facets (triangles). The code is short and elementary (less than 300 lines) and available on http://www.loria.fr/~debled/plane and here. Searching Worst Cases of a One-Variable Function Using Lattice Reduction, with Damien Stehlé and Vincent Lefèvre, IEEE Transactions on Computers, volume 54, number 3, pages 340-346, 2005. A preliminary version appeared as INRIA Research Report 4586.
Some results for the 2^x function in double-extended precision are available here. • We propose a new algorithm to find worst cases for the correct rounding of a mathematical function of one variable. We first reduce this problem to the real small value problem--i.e., for polynomials with real coefficients. Then, we show that this second problem can be solved efficiently by extending Coppersmith's work on the integer small value problem--for polynomials with integer coefficients--using lattice reduction. For floating-point numbers with a mantissa less than N and a polynomial approximation of degree d, our algorithm finds all worst cases at distance less than N^(-d^2/(2d+1)) from a machine number in time O(N^((d+1)/(2d+1)+epsilon)). For d=2, a detailed study improves on the O(N^(2/3+epsilon)) complexity from Lefèvre's algorithm to O(N^(4/7+epsilon)). For larger d, our algorithm can be used to check that there exist no worst cases at distance less than N^(-k) in time O(N^(1/2+epsilon)). Note added on February 10, 2020: in the IEEE TC version, page 344, Section 5.1, line 6 of the matrix: in column 1, the coefficient should read Ca instead of C; the coefficient Cb should be in the second column, and the coefficient Cc should be in the third column. Gal's Accurate Tables Method Revisited, with Damien Stehlé, INRIA Research Report RR-5359, October 2004. An improved version appeared in the Proceedings of Arith'17. Those ideas are demonstrated by an implementation of the exp2 function in double precision. Erratum in the final version of the paper: in Section 4, the simultaneous worst case for sin and cos is t0=1f09c0c6cde5e3 and not t0=31a93fddd45e3. See also my coauthor page. • Gal's accurate tables algorithm aims at providing an efficient implementation of elementary functions with correct rounding as often as possible. This method requires an expensive pre-computation of a table made of the values taken by the function or by several related functions at some distinguished points.
Our improvements of Gal's method are two-fold: on the one hand we describe what is arguably the best set of distinguished values and how it improves the efficiency and correctness of the implementation of the function, and on the other hand we give an algorithm which drastically decreases the cost of the pre-computation. These improvements are related to the worst cases for the correct rounding of mathematical functions and to the algorithms for finding them. We show that the whole method can be turned into practice by giving complete tables for 2^x and sin(x) for x in [1/2,1[, in double precision. Newton iteration revisited, with Guillaume Hanrot, March 2004. • On March 10, 2004, Dan Bernstein announced a revised draft of his paper Removing redundancy in high-precision Newton iteration, with algorithms that compute a reciprocal of order n over C[[x]] 1.5+o(1) times longer than a product; a quotient or logarithm 2.16666...+o(1) times longer; a square root 1.83333...+o(1) times longer; an exponential 2.83333...+o(1) times longer. We give better constants for several of these operations. Note added on March 24, 2004: the 1.5+o(1) reciprocal algorithm was already published by Schönhage (Information Processing Letters 74, 2000, p. 41-46). Note added on July 24, 2006: in a preprint Newton's method and FFT trading, Joris van der Hoeven announces better constants for the exponential (2.333...) and the quotient (1.666...). [October 27, 2009: those constants have yet to be confirmed.] Note added on April 20, 2009: as noticed by David Harvey, in Section 3, in the Divide algorithm, Step 4 should read q <- q[0] + g[0] (h[1] - ε) x^n, where h = h[0] + x^n h[1]. Indeed, after Step 3 we have q[0] f = h[0] + ε x^n + O(x^2n), i.e., q[0] = h[0]/f + ε/f x^n + O(x^2n). Thus h/f = q[0] + (h[1] - ε)/f x^n + O(x^2n) = q[0] + g[0] (h[1] - ε) x^n + O(x^2n). Note added on September 11, 2009: as noticed by David Harvey, the 1.91666...M(n) cost for the square root (Section 4) is incorrect; it should be 1.8333...M(n) instead.
Indeed, Step 3 of Algorithm SquareRoot costs only M(n)/3 to compute f[0]^2 mod x^n-1 instead of M(n)/2, since we only need two FFT transforms of size n. David Harvey has improved the reciprocal to 1.444...M(n) and the square root to 1.333...M(n) [preprint]. Arithmétique flottante, with Vincent Lefèvre, INRIA Research Report RR-5105, February 2004 (in French). • This document collects the notes of a course given in 2003 in the Algorithmique Numérique et Symbolique track of the DEA d'Informatique of Université Henri Poincaré Nancy 1. These notes are largely based on the book Elementary Functions. Algorithms and Implementation by Jean-Michel Muller. Also available on LibreCours. A Formal Proof of Demmel and Hida's Accurate Summation Algorithm, with Laurent Fousse, January 2004. • A new proof of the "accurate summation" algorithm proposed by Demmel and Hida is presented. The main part of that proof has been written in the Coq language and verified by the Coq proof assistant. The Middle Product Algorithm, I. Speeding up the division and square root of power series, with Guillaume Hanrot and Michel Quercia, AAECC, volume 14, number 6, pages 415-438, 2004. A preliminary version appeared as INRIA Research Report 3973. • We present new algorithms for the inverse, division, and square root of power series. The key trick is a new algorithm --- MiddleProduct or, for short, MP --- computing the n middle coefficients of a (2n-1) * n full product in the same number of multiplications as a full n * n product. This improves previous work of Brent, Mulders, Karp and Markstein, Burnikel and Ziegler. These results apply both to series and polynomials. Note added June 10, 2009: Part II of this work was planned to deal with integer entries, but David Harvey was faster than us, see The Karatsuba middle product for integers.
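As an illustration of the definition only (not of the paper's fast algorithm), the middle product can be written down naively in a few lines of Python; the names below are ours. Note that the direct formula already uses exactly n*n coefficient multiplications, the count that MP matches at the Karatsuba and FFT level:

```python
def full_product(a, b):
    """Schoolbook polynomial product; a and b are coefficient lists."""
    c = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            c[i + j] += x * y
    return c

def middle_product(a, b):
    """Middle n coefficients (of x^(n-1) .. x^(2n-2)) of the product of
    a (2n-1 coefficients) by b (n coefficients): n*n multiplications."""
    n = len(b)
    assert len(a) == 2 * n - 1
    return [sum(a[n - 1 + k - j] * b[j] for j in range(n)) for k in range(n)]
```

For instance, middle_product([1, 2, 3, 4, 5], [1, 1, 1]) picks the coefficients of x^2, x^3, x^4 out of the full product, namely [6, 9, 12].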
Proposal for a Standardization of Mathematical Function Implementation in Floating-Point Arithmetic, with David Defour, Guillaume Hanrot, Vincent Lefèvre, Jean-Michel Muller and Nathalie Revol, January 2003, Numerical Algorithms, volume 37, number 1-4, pages 367-375, 2004. Extended version appeared as INRIA Research Report RR-5406. • Some aspects of what a standard for the implementation of the elementary functions could be are presented. Firstly, the need for such a standard is motivated. Then the proposed standard is given. The question of roundings constitutes an important part of this paper: three levels are proposed, ranging from a level relatively easy to attain (with fixed maximal relative error) up to the best quality one, with correct rounding on the whole range of every function. We do not claim that we always suggest the right choices, or that we have thought about all relevant issues. The mere goal of this paper is to raise questions and to launch the discussion towards a standard. A long note on Mulders' short product, with Guillaume Hanrot, INRIA Research Report RR-4654, November 2002. A revised version appeared in the Journal of Symbolic Computation, volume 37, pages 391-401, 2004. A corrigendum appeared in 2014 (pdf). • The short product of two power series is the meaningful part of the product of these objects, i.e., sum(a[i] b[j] x^(i+j), i+j < n). In [Mulders00], Mulders gives an algorithm to compute a short product faster than the full product in the case of Karatsuba's multiplication [KaOf62]. This algorithm works by selecting a cutoff point k and performing a full k x k product and two (n-k) x (n-k) short products recursively. Mulders also gives a heuristically optimal cutoff point beta n. In this paper, we determine the optimal cutoff point in Mulders' algorithm. We also give a slightly more general description of Mulders' method.
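Mulders' recursion can be sketched in Python (our own naming; schoolbook base case, the heuristic cutoff beta ≈ 0.694, inputs assumed to have at least n coefficients). Taking k ≥ n/2 guarantees that no term a[i]·b[j] is counted twice, since i ≥ k and j ≥ k together would give i+j ≥ n:

```python
def full_product(a, b):
    """Schoolbook polynomial product of two coefficient lists."""
    c = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            c[i + j] += x * y
    return c

def short_product(a, b, n):
    """First n coefficients of a*b (i.e. a*b mod x^n): one full k x k
    product plus two recursive (n-k) x (n-k) short products."""
    a, b = a[:n], b[:n]
    if n <= 4:                                 # schoolbook base case
        return (full_product(a, b) + [0] * n)[:n]
    k = max((n + 1) // 2, int(0.694 * n))      # Mulders' heuristic cutoff
    c = (full_product(a[:k], b[:k]) + [0] * n)[:n]
    for hi, lo in ((a, b), (b, a)):            # cross terms with i or j >= k
        for i, v in enumerate(short_product(hi[k:], lo, n - k)):
            c[k + i] += v
    return c
```

For example, short_product([1]*6, [1]*6, 6) returns [1, 2, 3, 4, 5, 6], the truncation of (1+x+...+x^5)^2 mod x^6.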
Note added November 24, 2011: Murat Cenk and Ferruh Ozbudak published a paper "Multiplication of polynomials modulo x^n" in Theoretical Computer Science, vol. 412, pages 3451-3462, 2011. The main result of this paper (Theorem 3.1) is a direct consequence of Mulders' short product, which is not cited in this paper. Blame to the authors, to the anonymous reviewers and to the editor in charge, Victor Y. Pan. Moreover the results in Table 1 of this paper are not optimal: M̂(14) <= M(10) + 2*M̂(4) <= 39+2*8 = 55 (instead of 56); M̂(16) <= M(12) + 2*M̂(4) <= 51+2*8 = 67 (instead of 70); M̂(17) <= M(12) + 2*M̂(5) <= 51+2*11 = 73 (instead of 76); M̂(18) <= M(12) + 2*M̂(6) <= 51+2*15 = 81 (instead of 85). Note added February 10, 2014: Karim Belabas pointed out an error in Algorithm ShortProduct (and in Theorem 2). Indeed, the result is not necessarily reduced modulo x^n, since for example with f=2*x+1 and g=3*x+4 with n=2, we have l=1*4=4, h=2*3=6, m=(1+2)*(4+3)-4-6=11, thus the result is 4 + 11*x + 6*x^2. To fix this, it suffices to zero out the coefficient of x^n in the final result. Random number generators with period divisible by a Mersenne prime, with Richard Brent, Proceedings of Computational Science and its Applications (ICCSA), LNCS 2667, pages 1-10, 2003. [HAL] • Pseudo-random numbers with long periods and good statistical properties are often required for applications in computational finance. We consider the requirements for good uniform random number generators, and describe a class of generators whose period is a Mersenne prime or a small multiple of a Mersenne prime. These generators are based on "almost primitive" trinomials, that is, trinomials having a large primitive factor. They have very fast vector/parallel implementations and excellent statistical properties. A Binary Recursive Gcd Algorithm, with Damien Stehlé, INRIA Research Report RR-5050, December 2003.
A revised version (© Springer-Verlag) is published in the Proceedings of the Algorithmic Number Theory Symposium (ANTS VI). [Damien's page with erratum] [implementation in GMP] • The binary algorithm is a variant of the Euclidean algorithm that performs well in practice. We present a quasi-linear time recursive algorithm that computes the greatest common divisor of two integers by simulating a slightly modified version of the binary algorithm. The structure of the recursive algorithm is very close to that of the well-known Knuth-Schönhage fast gcd algorithm, but the description and the proof of correctness are significantly simpler in our case. This leads to a simplification of the implementation and to better running times. Note (added 14 Oct 2009): since that work, Niels Möller has designed a classical left-to-right/MSB fast gcd algorithm, cf. his paper On Schönhage's algorithm and subquadratic integer gcd computation (Mathematics of Computation, volume 77, number 261, pages 589--607, 2008) and his slides. Note (added 1st April 2010): Robert Harley had published in his ECDL code a similar extended binary gcd algorithm (however only in the quadratic case). Note (added 26 February 2019): as reported by Bo-Yin Yang and Dan Bernstein, the sequence G[n] = 0,1,-1,5,-9,29,-65,181,-441,1165,..., which is the worst-case for Theorem 2, is not necessarily the worst-case in practice (as claimed in Section 6.2). See their paper. Algorithms for finding almost irreducible and almost primitive trinomials, with Richard Brent, April 2003, Proceedings of a Conference in Honour of Professor H. C. Williams, Banff, Canada (May 2003), The Fields Institute, Toronto. [HAL] [arXiv] • Consider polynomials over GF(2).
We describe efficient algorithms for finding trinomials with large irreducible (and possibly primitive) factors, and give examples of trinomials having a primitive factor of degree r for all Mersenne exponents r = ±3 mod 8 in the range 5 < r < 2976221, although there is no irreducible trinomial of degree r. We also give trinomials with a primitive factor of degree r = 2^k for 3 ≤ k ≤ 12. These trinomials enable efficient representations of the finite field GF(2^r). We show how trinomials with large primitive factors can be used efficiently in applications where primitive trinomials would normally be used. Note added April 22, 2009: this paper is mentioned in Divisibility of Trinomials by Irreducible Polynomials over F[2], by Ryul Kim and Wolfram Koepf, International Journal of Algebra, Vol. 3, 2009, no. 4, 189-197 (arxiv). Accurate Summation: Towards a Simpler and Formal Proof, with Laurent Fousse, March 2003, in Proc. of RNC'5, pages 97-108. • This paper provides a simpler proof of the "accurate summation" algorithm proposed by Demmel and Hida in DeHi02. It also gives improved bounds in some cases, and examples showing that those new bounds are optimal. This simpler proof will be used to obtain a computer-generated proof of Demmel-Hida's algorithm, using a proof assistant like HOL, PVS or Coq. 10^2098959 [in French], December 2002, appeared in the Gazette du Cines, number 14, January 2003. • This article describes the first results of the search (with Richard Brent and Samuli Larvala) for primitive trinomials of degree 6972593 over GF(2), and points out some amusing consequences of Moore's law. A Fast Algorithm for Testing Reducibility of Trinomials mod 2 and Some New Primitive Trinomials of Degree 3021377, with Richard Brent and Samuli Larvala, Mathematics of Computation, volume 72, number 243, pages 1443-1452, 2003. A preliminary version appeared as Report PRG TR-13-00, Oxford University Computing Laboratory, December 2000.
• The standard algorithm for testing irreducibility of a trinomial of prime degree r over GF(2) requires 2r + O(1) bits of memory and of order r^2 bit-operations. We describe an algorithm which requires only 3r/2 + O(1) bits of memory and fewer bit-operations than the standard algorithm. Using the algorithm, we have found several new irreducible trinomials of high degree. If r is a Mersenne exponent (i.e., 2^r - 1 is a Mersenne prime), then an irreducible trinomial of degree r is necessarily primitive and can be used to give a pseudo-random number generator with period at least 2^r - 1. We give examples of primitive trinomials for r = 756839, 859433, and 3021377. The results for r = 859433 extend and correct some computations of Kumada et al. [Mathematics of Computation 69 (2000), 811-814]. The two results for r = 3021377 are primitive trinomials of the highest known degree. Ten Consecutive Primes In Arithmetic Progression, with Harvey Dubner, Tony Forbes, Nik Lygeros, Michel Mizony and Harry Nelson, Mathematics of Computation, volume 71, number 239, pages 1323-1328, 2002. [HAL] • In 1967 the first set of 6 consecutive primes in arithmetic progression was found. In 1995 the first set of 7 consecutive primes in arithmetic progression was found. Between November, 1997 and March, 1998, we succeeded in finding sets of 8, 9 and 10 consecutive primes in arithmetic progression. This was made possible because of the increase in computer capability and availability, and the ability to obtain computational help via the Internet. Although it is conjectured that there exist arbitrarily long sequences of consecutive primes in arithmetic progression, it is very likely that 10 primes will remain the record for a long time. Worst Cases and Lattice Reduction, with Damien Stehlé and Vincent Lefèvre, INRIA Research Report RR-4586, October 2002. Appeared in the proceedings of the 16th IEEE Symposium on Computer Arithmetic (Arith'16), IEEE Computer Society, pages 142-147, 2003.
• We propose a new algorithm to find worst cases for correct rounding of an analytic function. We first reduce this problem to the real small value problem --- i.e. for polynomials with real coefficients. Then we show that this second problem can be solved efficiently, by extending Coppersmith's work on the integer small value problem --- for polynomials with integer coefficients --- using lattice reduction [Coppersmith96a,Coppersmith96b,Coppersmith01]. For floating-point numbers with a mantissa less than N, and a polynomial approximation of degree d, our algorithm finds all worst cases at distance less than N^(-d^2/(2d+1)) from a machine number in time O(N^((d+1)/(2d+1)+epsilon)). For d=2, this improves on the O(N^(2/3+epsilon)) complexity from Lefèvre's algorithm [Lefevre00,LeMu01] to O(N^(3/5+epsilon)). We exhibit some new worst cases found using our algorithm, for double-extended and quadruple precision. For larger d, our algorithm can be used to check that there exist no worst cases at distance less than N^(-k) in time O(N^(1/2+O(1/k))). A Proof of GMP Square Root, with Yves Bertot and Nicolas Magaud, Journal of Automated Reasoning, volume 29, 2002, pages 225--252, Special Issue on Automating and Mechanising Mathematics: In honour of N.G. de Bruijn. A preliminary version appeared as INRIA Research Report 4475. • We present a formal proof (at the implementation level) of an efficient algorithm proposed by Paul Zimmermann [Zimmermann00] to compute square roots of arbitrarily large integers. This program, which is part of the GNU Multiple Precision Arithmetic Library (GMP), is completely proven within the Coq system. Proofs are developed using the Correctness tool to deal with imperative features of the program. The formalization is rather large (more than 13000 lines) and requires some advanced techniques for proof management and reuse.
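The algorithm proved in this work (the divide-and-conquer square root of [Zimmermann00], used by GMP's mpn_sqrtrem) can be sketched with Python integers standing in for limb arrays; this is an illustrative simplification with our own naming, not the proved C code:

```python
import math

def sqrtrem(n):
    """Return (s, r) with s = floor(sqrt(n)) and r = n - s*s, by the
    divide-and-conquer recursion: take the square root of the high half,
    then correct it with one division."""
    if n < 1 << 32:                      # base case: machine-size input
        s = math.isqrt(n)
        return s, n - s * s
    k = (n.bit_length() + 3) // 4        # split n into four k-bit parts
    a32 = n >> (2 * k)                   # the two high parts together
    a1 = (n >> k) & ((1 << k) - 1)
    a0 = n & ((1 << k) - 1)
    sp, rp = sqrtrem(a32)                # recursive sqrt of the high half
    q, u = divmod((rp << k) + a1, 2 * sp)
    s = (sp << k) + q
    r = (u << k) + a0 - q * q
    while r < 0:                         # correct a slight overshoot of s
        r += 2 * s - 1
        s -= 1
    return s, r
```

Splitting at k = ceil(bits/4) keeps the high part a32 large enough that the final correction loop runs only a few times; the paper's normalization assumption bounds it more tightly.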
Note: Vincent Lefèvre found a potential problem in the GMP implementation, which is fixed by the following patch. This does not contradict our proof: the problem is due to the different C data types (signed or not, different width), whereas our proof assumed a unique type. Aliquot Sequence 3630 Ends After Reaching 100 Digits, with M. Benito, W. Creyaufmüller and J. L. Varona, Experimental Mathematics, volume 11, number 2, pages 201-206. • In this paper we present a new computational record: the aliquot sequence starting at 3630 converges to 1 after reaching a hundred decimal digits. Also, we show the current status of all the aliquot sequences starting with a number smaller than 10,000; we have reached at least 95 digits for all of them. In particular, we have reached at least 112 digits for the so-called "Lehmer five sequences," and 101 digits for the "Godwin twelve sequences." Finally, we give a summary showing the number of aliquot sequences of unknown end starting with a number less than or equal to 10^6. Note added 29 July 2008: in this paper, we say (page 203, middle of right column) "It is curious to note that the driver 2^9 * 3 * 11 * 31 has appeared in no place in any of the sequences given in Table 1"; Clifford Stern notes that this driver appears at index 215 of sequence 165744, which gives 2^9 * 3 * 7 * 11 * 31^2 * 37 * 10594304241173. De l'algorithmique à l'arithmétique via le calcul formel, Habilitation à diriger des recherches, November 2001. (Slides of the defense.) • This document presents my research contributions from 1988 to 2001, performed first at INRIA Rocquencourt within the Algo project (1988 to 1992), then at INRIA Lorraine and LORIA within the projects Euréca (1993-1997), PolKA (1998-2000), and Spaces (2001).
Three main periods can be roughly distinguished: from 1988 to 1992 where my research focused on analysis of algorithms and random generation, from 1993 to 1997 where I worked on computer algebra and related algorithms, finally from 1998 to 2001 where I was interested in arbitrary precision floating-point arithmetic with well-defined semantics. Arithmétique en précision arbitraire, INRIA Research Report 4272, September 2001, appeared in the journal "Réseaux et Systèmes Répartis, Calculateurs parallèles", volume 13, number 4-5, pages 357-386, 2001 [in French]. • This paper surveys the available algorithms for integer or floating-point arbitrary precision calculations. After a brief discussion about possible memory representations, known algorithms for multiplication, division, square root, greatest common divisor, input and output, are presented, together with their complexity and usage. For each operation, we present the naïve algorithm, the asymptotically optimal one, and also intermediate "divide and conquer" algorithms, which often are very useful. For floating-point computations, some general-purpose methods are presented for algebraic, elementary, hypergeometric and special functions. Tuning and Generalizing Van Hoeij's Algorithm, with Karim Belabas and Guillaume Hanrot, INRIA Research Report 4124, February 2001. • Recently, van Hoeij published a new algorithm for factoring polynomials over the rational integers. This algorithm rests on the same principle as Berlekamp-Zassenhaus, but uses lattice basis reduction to improve drastically on the recombination phase. The efficiency of the LLL algorithm is very dependent on fine tuning; in this paper, we present such tuning to achieve better performance. Simultaneously, we describe a generalization of van Hoeij's algorithm to factor polynomials over number fields. Efficient isolation of a polynomial's real roots, with Fabrice Rouillier, INRIA Research Report 4113, February 2001.
Appeared in Journal of Computational and Applied Mathematics, volume 162, number 1, pages 33-50, 2004. • This paper gives new results for the isolation of real roots of a univariate polynomial using Descartes' rule of signs, following work of Vincent, Uspensky, Collins and Akritas, Johnson, Krandick. The first contribution is a generic algorithm which enables one to describe all the existing strategies in a unified framework. Using that framework, a new algorithm is presented, which is optimal in terms of memory usage, while doing no more computations than other algorithms based on Descartes' rule of signs. We show that these critical optimizations have important consequences by proposing a full efficient solution for isolating the real roots of zero-dimensional polynomial systems. Density results on floating-point invertible numbers, with Guillaume Hanrot, Joël Rivat and Gérald Tenenbaum, Theoretical Computer Science, volume 291, number 2, 2003, pages 135-141. (The slides of a related talk I gave in January 2002 at the workshop "Number Theory and Applications" in Luminy are here.) • Let F_k denote the k-bit mantissa floating-point (FP) numbers. We prove a conjecture of J.-M. Muller according to which the proportion of numbers in F_k with no FP-reciprocal (for rounding to the nearest element) approaches 1/2 - 3/2 log(4/3), i.e., about 0.06847689, as k goes to infinity. We investigate a similar question for the inverse square root. Factorization of a 512-bit RSA Modulus, with Stefania Cavallar, Bruce Dodson, Arjen K. Lenstra, Walter Lioen, Peter L. Montgomery, Brian Murphy, Herman te Riele, Karen Aardal, Jeff Gilchrist, Gérard Guillerm, Paul Leyland, Joël Marchand, François Morain, Alec Muffett, Chris Putnam, Craig Putnam, Proceedings of Eurocrypt'2000, LNCS 1807, pages 1-18, 2000. • On August 22, 1999, we completed the factorization of the 512-bit 155-digit number RSA-155 with the help of the Number Field Sieve factoring method (NFS).
This is a new record for factoring general numbers. Moreover, 512-bit RSA keys are frequently used for the protection of electronic commerce -- at least outside the USA -- so this factorization represents a breakthrough in research on RSA-based systems. The previous record, factoring the 140-digit number RSA-140, was established on February 2, 1999, also with the help of NFS, by a subset of the team which factored RSA-155. The amount of computing time spent on RSA-155 was about 8400 MIPS years, roughly four times that needed for RSA-140; this is about half of what could be expected from a straightforward extrapolation of the computing time spent on factoring RSA-140 and about a quarter of what would be expected from a straightforward extrapolation from the computing time spent on RSA-130. The speed-up is due to a new polynomial selection method for NFS of Murphy and Montgomery which was applied for the first time to RSA-140 and now, with improvements, to RSA-155. A proof of GMP fast division and square root implementations, September 2000. • This short note gives a detailed correctness proof of fast (i.e. subquadratic) versions of the GNU MP mpn_bz_divrem_n and mpn_sqrtrem functions, together with complete GMP code. The mpn_bz_divrem_n function divides (with remainder) a number of 2n limbs by a divisor of n limbs in time 2K(n), where K(n) is the time spent in an (n times n) multiplication, using the Moenck-Borodin-Jebelean-Burnikel-Ziegler algorithm. The mpn_sqrtrem function computes the square root and the remainder of a number of 2n limbs (square root and remainder have about n limbs each) in time 3K(n)/2; it uses Karatsuba Square Root. Speeding up the Division and Square Root of Power Series, with Guillaume Hanrot and Michel Quercia, INRIA Research Report 3973, July 2000. • We present new algorithms for the inverse, quotient, or square root of power series.
The key trick is a new algorithm --- RecursiveMiddleProduct or RMP --- computing the n middle coefficients of a (2n * n) product in essentially the same number of operations --- K(n) --- as a full (n * n) product with Karatsuba's method. This improves previous work of Mulders, Karp and Markstein, Burnikel and Ziegler. These results apply to series, polynomials, and multiple-precision floating-point numbers. A Maple implementation is available here, together with slides of a talk given at ENS Paris in January 2004. Factorization in Z[x]: the searching phase, with John Abbott and Victor Shoup, April 2000, Proceedings of ISSAC'2000 [HAL entry]. • In this paper we describe ideas used to accelerate the Searching Phase of the Berlekamp-Zassenhaus algorithm, the algorithm most widely used for computing factorizations in Z[x]. Our ideas do not alter the theoretical worst-case complexity, but they do have a significant effect in practice: especially in those cases where the cost of the Searching Phase completely dominates the rest of the algorithm. A complete implementation of the ideas in this paper is publicly available in the library NTL. We give timings of this implementation on some difficult factorization problems. Karatsuba Square Root, INRIA Research Report 3905, November 1999. • We exhibit an algorithm to compute the square root with remainder of an n-word number in (3/2) K(n) word operations, where K(n) is the number of word operations to multiply two n-word numbers using Karatsuba's algorithm. If the remainder is not needed, the cost can be reduced to K(n) on average. This algorithm can be used for floating-point or polynomial computations too; although not optimal asymptotically, its simplicity gives a wide range of use, from about 50 to 1,000,000 digits, as shown by computer experiments. On Sums of Seven Cubes, with Francois Bertault and Olivier Ramaré, Mathematics of Computation, volume 68, number 227, pages 1303-1310, 1999.
• We show that every integer between 1290741 and 3.375 * 10^12 is a sum of 5 nonnegative cubes, from which we deduce that every integer which is a cubic residue modulo 9 and an invertible cubic residue modulo 37 is a sum of 7 nonnegative cubes. Uniform Random Generation of Decomposable Structures Using Floating-Point Arithmetic, with Alain Denise, Theoretical Computer Science, volume 218, number 2, 219--232, 1999. A preliminary version appeared as INRIA Research Report 3242, September 1997. • The recursive method formalized by Nijenhuis and Wilf [NiWi78] and systematized by Flajolet, Van Cutsem and Zimmermann [FlZiVa94], is extended here to floating-point arithmetic. The resulting ADZ method enables one to generate decomposable data structures --- labelled or unlabelled --- uniformly at random, in expected O(n^(1+ε)) time and space, after a preprocessing phase of O(n^(2+ε)) time, which reduces to O(n^(1+ε)) for context-free grammars. Estimations asymptotiques du nombre de chemins Nord-Est de pente fixée et de largeur bornée, with Isabelle Dutour and Laurent Habsieger, INRIA Research Report RR-3585, December 1998 [in French]. • We study here a quantity related to the number of walks with North and East steps staying under the line of slope d starting from the origin. We give an asymptotic analysis of this quantity with respect to both the width n and the slope d, answering a question asked by Bernard Mourrain. Calcul formel : ce qu'il y a dans la boîte, journées X-UPS, October 1997. Cinq algorithmes de calcul symbolique, INRIA Technical Report RT-0206, lecture notes of a specialization module of the DEA d'informatique, Université Henri Poincaré Nancy 1, 1997 [in French]. • These are lecture notes of a course entitled "Some computer algebra algorithms" given by the author at the University of Nancy 1 in 1997.
Five fundamental algorithms used by computer algebra systems are briefly described: Gosper's algorithm for computing indefinite sums, Zeilberger's algorithm for definite sums, Berlekamp's algorithm for factoring polynomials over finite fields, Zassenhaus' algorithm for factoring polynomials with integer coefficients, and Lenstra's integer factorization algorithm using elliptic curves. All these algorithms were implemented --- or improved --- by the author in the computer algebra system MuPAD. Progress Report on Parallelism in MuPAD, with Christian Heckler and Torsten Metzner, INRIA Research Report 3154, April 1997. • MuPAD is a general purpose computer algebra system with two programming concepts for parallel processing: micro-parallelism for shared-memory machines and macro-parallelism for distributed architectures. This article describes language instructions for both concepts and the current state of implementation, together with some examples. Polynomial Factorization Challenges, with L. Bernardin and M. Monagan, poster presented at the International Symposium on Symbolic and Algebraic Computation (ISSAC), July 1996, 4 pages. • Joachim von zur Gathen has proposed a challenge for factoring univariate polynomials over finite fields to evaluate the practicability of current factorization algorithms ("A Factorization Challenge", SIGSAM Bulletin 26(2):22-24, 1992). More recently, Victor Shoup has proposed an alternate family of polynomials with a similar goal in mind. Our effort is to take these challenges on using the general purpose computer algebra systems Maple and MuPAD. The results of our work are the factorizations of the von zur Gathen polynomials f[n] and of the Shoup polynomials F[n] for n from 1 to 500. We also present the factorization of the degree 1000 von zur Gathen polynomial f[1000].
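Arithmetic in GF(2)[x], which underlies both the factorization challenges and the trinomial searches in the entries above, is pleasantly compact when polynomial coefficients are stored as the bits of an integer. Here is a sketch (our own Python, not the papers' optimized code) of the irreducibility test for a trinomial x^r + x^s + 1 of prime degree r, by r successive squarings modulo the trinomial:

```python
def gf2_square(a):
    # Squaring over GF(2) just spreads the bits: (sum x^i)^2 = sum x^(2i).
    r, i = 0, 0
    while a:
        if a & 1:
            r |= 1 << (2 * i)
        a >>= 1
        i += 1
    return r

def reduce_mod_trinomial(a, r, s):
    # Replace x^k (k >= r) by x^(k-r+s) + x^(k-r), since x^r = x^s + 1.
    while a.bit_length() > r:
        k = a.bit_length() - 1
        a ^= (1 << k) | (1 << (k - r + s)) | (1 << (k - r))
    return a

def trinomial_irreducible(r, s):
    """Test x^r + x^s + 1 over GF(2), assuming 0 < s < r and r prime:
    the trinomial is irreducible iff x^(2^r) = x modulo it (a trinomial
    has no root in GF(2), and any factor degree must then divide r)."""
    t = 0b10                     # the polynomial x
    for _ in range(r):
        t = reduce_mod_trinomial(gf2_square(t), r, s)
    return t == 0b10
```

For example, x^7 + x + 1 is irreducible (indeed primitive, 7 being a Mersenne exponent), while x^7 + x^2 + 1 is reducible, being divisible by x^2 + x + 1.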
GFUN: a Maple package for the manipulation of generating and holonomic functions in one variable, with Bruno Salvy, ACM Transactions on Mathematical Software, volume 20, number 2, 1994. A preliminary version appeared as INRIA Technical Report 143, October 1992. • We describe the GFUN package which contains functions for manipulating sequences, linear recurrences or differential equations and generating functions of various types. This document is intended both as an elementary introduction to the subject and as a reference manual for the package. Random walks, heat equation and distributed algorithms, with Guy Louchard, René Schott and Michael Tolley, Journal of Computational and Applied Mathematics, volume 53, pages 243-274, 1994. • New results are obtained concerning the analysis of the storage allocation algorithm which permits one to maintain two stacks inside a shared (continuous) memory area of fixed size m and of the banker's algorithm (a deadlock avoidance policy). The formulation of these problems is in terms of random walks inside polygonal domains in a two-dimensional lattice space with several reflecting barriers and one absorbing barrier. For the two-stacks problem, the return time to the origin, the time to absorption, the last leaving time from the origin and the number of returns to the origin before absorption are investigated. For the banker's algorithm, the trend-free absorbed random walk is analysed with numerical methods. We finally analyse the average excursion along one axis for the classical random walk: an analytic method enables us to deduce asymptotic results for this average excursion. The n-Queens Problem, with Igor Rivin and Ilan Vardi, American Mathematical Monthly, volume 101, number 7, pages 629-639. • We give several lower and upper bounds for the number Q(n) of ways to put n queens on an n x n chessboard, and the number T(n) of ways to put n queens on a toroidal chessboard (i.e., with n upper diagonals instead of 2n-1).
We also conjecture that (log T(n))/(n log n) and (log Q(n))/(n log n) tend to two positive constants. Note added 11 September 2012: in Remark 1 page 636, the primes p = 2q+1 are called safe primes (and the primes q are called Sophie-Germain primes). Moreover Warren Smith pointed out the paper "The n-queens problem" by Bruen and Dixon (Discrete Mathematics, 1975), which gives a construction yielding (if I am correct) at least 8 * binomial(floor((p-3)/8),2) * p different toroidal solutions for any prime p >= 13. Also on page 635 we say "Remark. Corollary 1 gives the first example of a set of n's for which Q(n) grows faster than a polynomial in n". This is wrong, since T. Kløve gives in "The modular n-queens problem" (Discrete Mathematics, 1977) a construction yielding for n=p*p at least n*(p-3)*p^p-1 (thanks again to Warren Smith for pointing this out to us).

A Calculus of Random Generation, with Philippe Flajolet and Bernard Van Cutsem, Proceedings of the European Symposium on Algorithms (ESA'93), LNCS 726, pages 169-180, 1993.
• A systematic approach to the random generation of labelled combinatorial objects is presented. It applies to structures that are decomposable, i.e., formally specifiable by grammars involving union, product, set, sequence, and cycle constructions. A general strategy is developed for solving the random generation problem with two closely related types of methods: for structures of size n, the boustrophedonic algorithms exhibit a worst-case behaviour of the form O(n log n); the sequential algorithms have worst case O(n^2), while offering good potential for optimizations in the average case. (Both methods appeal to precomputed numerical tables of linear size.) A companion calculus permits one to systematically compute the average-case cost of the sequential generation algorithm associated to a given specification.
Using optimizations dictated by the cost calculus, several random generation algorithms are developed, based on the sequential principle; most of them have expected complexity 1/2 n log n, thus being only slightly superlinear. The approach is exemplified by the random generation of a number of classical combinatorial structures including Cayley trees, hierarchies, the cycle decomposition of permutations, binary trees, functional graphs, surjections, and set partitions.

Epelle : un logiciel de détection de fautes d'orthographe, INRIA Research Report 2030, September 1993.
• This report describes the algorithm used by the epelle program, together with its implementation in the C language. This program is able to check about 30,000 words every second on modern computers, without any error, contrary to the Unix spell program which makes use of hashing methods and could thus accept wrong words. The main principle of epelle is to use digital trees (also called dictionary trees), which in addition reduces the space needed to store the list of words (by a factor of about 5 for the French dictionary). Creating a new digital tree for the French language (about 240,000 words) takes only a dozen seconds. The same program is directly usable for other languages and more generally for any list of alphanumeric keys.

Automatic Average-case Analysis of Algorithms, with Ph. Flajolet and B. Salvy, Theoretical Computer Science, volume 79, number 1, pages 37-109, 1991.
• Many probabilistic properties of elementary discrete combinatorial structures of interest for the average-case analysis of algorithms prove to be decidable. This paper presents a general framework in which such decision procedures can be developed.
It is based on a combination of generating function techniques for counting, and complex analysis techniques for asymptotics. We expose here the theory of exact analysis in terms of generating functions for four different domains: the iterative/recursive and unlabelled/labelled data type domains. We then present some major components of the associated asymptotic theory and exhibit a class of naturally arising functions that can be automatically analyzed. A fair fragment of this theory is also incorporated into a system called Lambda-Upsilon-Omega. In this way, using computer algebra, one can produce automatically non-trivial average-case analyses of algorithms operating over a variety of decomposable combinatorial structures. At a fundamental level, this paper is part of a global attempt at understanding why so many elementary combinatorial problems tend to have elementary asymptotic solutions. In several cases, it proves possible to relate entire classes of elementary combinatorial problems whose structure is well defined with classes of elementary special functions and classes of asymptotic forms relative to counting, probabilities, or average-case complexity.

Séries génératrices et analyse automatique d'algorithmes, PhD thesis (in French), École Polytechnique, Palaiseau, 1991. Also available in postscript.
• This thesis studies systematic methods to determine automatically the average-case cost of an algorithm. Those methods apply generally to descent schemes in decomposable data structures, which enables one to model a large class of problems. More precisely, this thesis focuses on the first stage of the analysis of an algorithm, namely the algebraic analysis, which translates the program into mathematical objects, whereas the second stage extracts from those mathematical objects the desired information about the average-case cost. We define a language to describe decomposable data structures and descent schemes on them.
When one uses generating functions as mathematical objects (counting generating functions for data structures, and cost generating functions for programs), we show that the algorithms described in this language translate directly, using simple rules, into systems of equations for the corresponding generating functions. From those equations, we can then determine in polynomial time the exact average-case cost for a given size of the input data structures. We can also use those equations to compute, via asymptotic analysis, the average-case cost when the input data size tends to infinity, since we know that the asymptotic average cost is directly related to the behaviour of those generating functions around their singularities. Therefore, we show that to a given class of algorithms corresponds a well-defined class of generating functions, and in turn a given class of formulae for the asymptotic average cost. Those algebraic analysis rules were included in a software tool for the average-case analysis of algorithms, called Lambda-Upsilon-Omega, which proved useful for experiments and research.
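The digital-tree (trie) idea behind the epelle report above — exact membership tests with shared storage of common prefixes — can be sketched in a few lines. This is a generic illustration of the data structure, not the author's C implementation, and the word list is made up for the example:

```python
class TrieNode:
    """Node of a digital tree (trie): shared prefixes are stored only once."""
    def __init__(self):
        self.children = {}    # letter -> TrieNode
        self.is_word = False  # True if a dictionary word ends here

def insert(root, word):
    node = root
    for ch in word:
        node = node.children.setdefault(ch, TrieNode())
    node.is_word = True

def contains(root, word):
    node = root
    for ch in word:
        node = node.children.get(ch)
        if node is None:
            return False      # unknown prefix: word rejected, no false positives
    return node.is_word

# Toy word list (illustrative only, not the real dictionary)
root = TrieNode()
for w in ["chat", "chats", "chien", "cheval"]:
    insert(root, w)

print(contains(root, "cheval"))   # True
print(contains(root, "chevaux"))  # False
```

Unlike a hash-based checker, a lookup either follows an existing path to a marked word node or fails, which is why this approach cannot accept wrong words.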
The table of integrals in the back of your textbook contains the following reduction formula (this is integral 60 in the table):

\int u^{n} \sqrt{a+b u}\, d u=\frac{2}{b(2 n+3)}\left[u^{n}(a+b u)^{\frac{3}{2}}-n a \int u^{n-1} \sqrt{a+b u}\, d u\right]

Show all the steps in the derivation of this formula (in other words, show me all the steps it takes to discover this formula). Then use the formula to evaluate \int(2 x+7)^{2} \sqrt{8 x+39}\, d x.
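For reference, the derivation the question asks for is a standard integration by parts, integrating the factor $\sqrt{a+bu}$ and differentiating $u^n$. Writing $I_n = \int u^n\sqrt{a+bu}\,du$, a sketch is:

```latex
\begin{align*}
I_n &= \int u^n\sqrt{a+bu}\,du
     = u^n\cdot\frac{2}{3b}(a+bu)^{3/2}
       - \frac{2n}{3b}\int u^{n-1}(a+bu)^{3/2}\,du \\
    &= \frac{2}{3b}\,u^n(a+bu)^{3/2}
       - \frac{2n}{3b}\int u^{n-1}(a+bu)\sqrt{a+bu}\,du \\
    &= \frac{2}{3b}\,u^n(a+bu)^{3/2}
       - \frac{2n}{3b}\bigl(a\,I_{n-1} + b\,I_n\bigr).
\end{align*}
% Collecting the I_n terms:
\[
\Bigl(1+\tfrac{2n}{3}\Bigr) I_n
  = \frac{2}{3b}\Bigl[u^n(a+bu)^{3/2} - n a\, I_{n-1}\Bigr]
\quad\Longrightarrow\quad
I_n = \frac{2}{b(2n+3)}\Bigl[u^n(a+bu)^{3/2}
      - n a \int u^{n-1}\sqrt{a+bu}\,du\Bigr].
\]
```

The key step is splitting $(a+bu)^{3/2} = (a+bu)\sqrt{a+bu}$, which re-expresses the remaining integral in terms of $I_{n-1}$ and $I_n$ itself.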
Pay What You Pull Raffle Calculator - GEGCalculators

A "Pay What You Pull" raffle is a unique fundraising concept. Participants pay a variable amount for raffle tickets based on their choice. This allows for flexibility and potentially higher contributions from supporters. Prizes are then awarded through a random drawing, with each participant having a chance to win based on their ticket purchase.

Here's a simplified example of a table for a "Pay What You Pull" raffle:

Ticket Number | Contributor's Name | Amount Paid ($)
001           | John Smith         | 20
002           | Sarah Johnson      | 15
003           | Michael Brown      | 25
004           | Emily Davis        | 10
005           | Robert Wilson      | 30

In this table, each row represents a raffle ticket with a unique ticket number. The contributor's name is listed, along with the amount they paid for the ticket. The raffle organizer can then use these tickets for a random drawing to determine the winners. The more someone contributes, the more chances they have to win.

How do you calculate the probability of winning a raffle?
The probability of winning a raffle depends on the number of tickets sold and the number of tickets you purchase. If you buy 1 ticket out of 100 total tickets, your estimated probability of winning is 1/100 or 1%.

What is the best price to charge for raffle tickets?
The ideal ticket price depends on your goals and the value of the prize. Typically, it's best to set a price that covers the cost of the prize and leaves room for fundraising. An estimation could be 2 to 5 times the prize value.

How do you draw a raffle?
Randomly drawing a raffle can be done using a hat, a random number generator, or a computer program. Ensure transparency and fairness in the process.

How do I calculate probability?
Probability is calculated by dividing the favorable outcomes by the total possible outcomes. The formula is: Probability = (Number of Favorable Outcomes) / (Total Possible Outcomes).

How do you calculate expected winnings?
Expected winnings are calculated by multiplying the probability of winning each prize by the prize amount and summing them up. For example, if there's a 1% chance of winning $100 and a 5% chance of winning $10, your expected winnings would be (0.01 * $100) + (0.05 * $10) = $1 + $0.50 = $1.50.

Can I run a raffle for profit in the UK?
In the UK, you can run a raffle for profit, but you must follow specific regulations and obtain the necessary licenses if required.

Do all raffle tickets have to be the same price?
No, raffle tickets do not have to be the same price. Different ticket pricing can be used to encourage larger donations and participation.

What makes a good raffle prize?
A good raffle prize is something appealing to your target audience, such as electronics, gift cards, vacations, or unique experiences.

What is the difference between a raffle and a draw?
A raffle typically involves selling tickets for a chance to win prizes, while a draw can refer to selecting random winners from a pool of entries, which could include raffle tickets.

What is the raffle method?
Raffle methods cover how tickets are sold, how winners are drawn, and how results are determined, following legal and ethical guidelines.

What's the difference between a raffle and a tombola?
A raffle is a game of chance where tickets are sold and winners are drawn, while a tombola is a type of raffle where participants draw tickets from a container to reveal prizes.

What are the 5 rules of probability?
1. Probability is between 0 and 1.
2. The probability of an event not occurring is 1 minus the probability of it occurring.
3. The probability of mutually exclusive events can be summed.
4. The probability of independent events can be multiplied.
5. The sum of all probabilities in a sample space is 1.

What are the 3 types of probability?
1. Classical Probability (based on equally likely outcomes).
2. Empirical Probability (based on observed frequencies).
3. Subjective Probability (based on personal beliefs or opinions).

What is the formula for and/or probability?
The formula for "and" probability (the probability of both events happening) is P(A and B) = P(A) * P(B) for independent events. The formula for "or" probability (the probability of either event happening) is P(A or B) = P(A) + P(B) - P(A and B).

How do you calculate the expected outcome?
To calculate the expected outcome, multiply each possible outcome by its probability and sum them up. It represents the average result over many repetitions.

How do I run a raffle legally in the UK?
To run a legal raffle in the UK, you may need a license from your local authority. Ensure you follow the Gambling Commission's guidelines and adhere to relevant laws.

Are raffles a good way to make money?
Raffles can be a good way to raise funds for charitable or nonprofit organizations, but success depends on various factors, including ticket sales and prize value.

Do I need a license for a raffle in the UK?
You may need a license to run a raffle in the UK, depending on the value of the prizes and other factors. Check with your local authority or the Gambling Commission for guidance.

Do you need a license for a free raffle?
Even free raffles may require a license in the UK if they involve valuable prizes or significant fundraising.

Do I need a gambling license to sell raffle tickets?
In the UK, if you sell raffle tickets for a commercial or large-scale event, you may need a gambling license.

Is it illegal to raffle a car in the UK?
It is not illegal to raffle a car in the UK, but you must follow gambling laws and obtain any necessary licenses.

What is the most popular raffle item?
Popular raffle items often include electronics, vacations, cash prizes, and unique experiences.

How to sell raffle tickets well?
To sell raffle tickets effectively, promote your cause, offer attractive prizes, use social media, engage your community, and provide clear information about the raffle.

What should be included in a raffle ticket?
A raffle ticket should include the event name, date, location, ticket price, contact information, terms and conditions, and a unique ticket number.

Can you give cash prizes in a raffle?
Yes, cash prizes are commonly given in raffles, but you should check local regulations and tax implications.

Are prize draws gambling?
Prize draws can be considered a form of gambling if they involve chance and the possibility of winning a prize.

Is a raffle classed as a lottery?
A raffle can be considered a form of lottery, but they often have different legal and operational characteristics.

What is an example of a raffle?
An example of a raffle is selling tickets for a chance to win a vacation package, with the ticket numbers drawn randomly to determine the winner.

How do you give away prizes at an event?
Prizes at an event can be given away through drawings, raffles, contests, or games, ensuring fairness and transparency.

Is a raffle the same as a lucky draw?
A raffle and a lucky draw are similar concepts, both involving chance and prizes, but they may have different names depending on the region or context.

How do you play the raffle game?
To play a raffle game, you typically purchase one or more tickets and wait for a drawing to determine if you've won a prize.

How much profit does a tombola make?
The profit from a tombola can vary significantly depending on the number of tickets sold, the ticket price, and the value of the prizes. Profits are typically used for fundraising purposes.

What are the four types of probability?
1. Classical Probability.
2. Empirical Probability.
3. Subjective Probability.
4. Axiomatic Probability.

What is the easiest way to understand probability?
The easiest way to understand probability is to think of it as the likelihood of an event happening, expressed as a fraction or percentage.

What is the formula for simple probability?
Simple probability is calculated as Probability = (Number of Favorable Outcomes) / (Total Possible Outcomes).

What are the two types of probability?
Two main types of probability are:
1. Theoretical Probability (based on mathematical models).
2. Experimental Probability (based on observations or experiments).

What is an example of a 1 in 1000 chance?
An example of a 1 in 1000 chance is winning a small lottery prize with a single ticket.

What are the basic rules of probability?
The basic rules of probability include the addition rule, multiplication rule, and complementary rule, among others.

How do you calculate probability with examples?
You calculate probability by dividing the number of favorable outcomes by the total possible outcomes. For example, the probability of rolling a 6 on a fair six-sided die is 1/6 because there is 1 favorable outcome (rolling a 6) out of 6 possible outcomes (rolling any number from 1 to 6).

How do you find the probability of something given something else?
To find conditional probability (the probability of one event given another has occurred), use the formula: P(A|B) = P(A and B) / P(B), where P(A|B) is the probability of A given B.

What does the U mean in probability?
In probability notation, "U" typically represents the union of two sets, indicating the probability of either event A or event B occurring (A ∪ B).

What is the probability of a dice showing a 3 or 6?
If you have a fair six-sided die, the probability of rolling a 3 or a 6 is 2/6 or 1/3 because there are 2 favorable outcomes (rolling a 3 or a 6) out of 6 possible outcomes (rolling any number from 1 to 6).

GEG Calculators is a comprehensive online platform that offers a wide range of calculators to cater to various needs.
With over 300 calculators covering finance, health, science, mathematics, and more, GEG Calculators provides users with accurate and convenient tools for everyday calculations. The website’s user-friendly interface ensures easy navigation and accessibility, making it suitable for people from all walks of life. Whether it’s financial planning, health assessments, or educational purposes, GEG Calculators has a calculator to suit every requirement. With its reliable and up-to-date calculations, GEG Calculators has become a go-to resource for individuals, professionals, and students seeking quick and precise results for their calculations.
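The expected-winnings arithmetic in the FAQ above is easy to check programmatically. A minimal sketch, using the probabilities and prize amounts from the worked example:

```python
# Expected winnings: sum of (probability of prize) * (prize amount) over all prizes.
prizes = [(0.01, 100.0),   # 1% chance of winning $100
          (0.05, 10.0)]    # 5% chance of winning $10

expected = sum(p * amount for p, amount in prizes)
print(f"${expected:.2f}")  # → $1.50

# Probability of winning a single-draw raffle with k tickets out of n sold.
def win_probability(k, n):
    return k / n

print(win_probability(1, 100))  # → 0.01, i.e. 1%
```

This matches the FAQ's (0.01 * $100) + (0.05 * $10) = $1.50 calculation.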
Reaching 3-Connectivity via Edge-edge Additions | GIORDANO DA LOZZO

Given a graph $G$ and a pair $\langle e',e''\rangle$ of distinct edges of $G$, an edge-edge addition on $\langle e',e''\rangle$ is an operation that turns $G$ into a new graph $G'$ by subdividing edges $e'$ and $e''$ with dummy vertices $v'$ and $v''$, respectively, and by adding the edge $(v',v'')$. In this paper, we show that any $2$-connected simple planar graph $G$ with minimum degree $\delta(G) \geq 3$ and maximum degree $\Delta(G)$ can be augmented by means of edge-edge additions to a $3$-connected planar graph $G'$ with $\delta(G') \geq 3$ and $\Delta(G') = \Delta(G)$, where each edge of $G$ participates in at most one edge-edge addition. This result is based on decomposing the input graph into its $3$-connected components via SPQR-trees and on showing the existence of a planar embedding in which edge pairs from a special set share a common face. Our proof is constructive and yields a linear-time algorithm to compute the augmented graph. As a relevant application, we show how to exploit this augmentation technique to extend some classical NP-hardness results for bounded-degree $2$-connected planar graphs to bounded-degree $3$-connected planar graphs.

In 30th International Workshop on Combinatorial Algorithms (IWOCA 2019)
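As a concrete illustration of the operation itself (not of the paper's augmentation algorithm), here is a sketch of a single edge-edge addition on a graph stored as an edge list; the representation and names are ours:

```python
def edge_edge_addition(vertices, edges, e1, e2):
    """Subdivide edges e1 and e2 with dummy vertices v1, v2 and join them."""
    assert e1 in edges and e2 in edges and e1 != e2
    (a, b), (c, d) = e1, e2
    v1, v2 = len(vertices), len(vertices) + 1   # fresh vertex labels
    new_vertices = vertices + [v1, v2]
    new_edges = [e for e in edges if e not in (e1, e2)]
    new_edges += [(a, v1), (v1, b), (c, v2), (v2, d), (v1, v2)]
    return new_vertices, new_edges

# K4 is 3-regular and planar; apply one edge-edge addition to two of its edges.
V = [0, 1, 2, 3]
E = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
V2, E2 = edge_edge_addition(V, E, (0, 1), (2, 3))

deg = {v: 0 for v in V2}
for u, w in E2:
    deg[u] += 1
    deg[w] += 1
print(len(V2), len(E2))   # 6 vertices, 9 edges (two edges replaced by five)
print(deg[4], deg[5])     # both dummy vertices have degree 3
```

Note the properties the paper relies on: the dummy vertices get degree exactly 3, and every original vertex keeps its degree, so the maximum degree is preserved.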
O(√log n) approximation to sparsest cut in Õ(n^2) time

This paper shows how to compute O(√log n)-approximations to the Sparsest Cut and Balanced Separator problems in Õ(n^2) time, thus improving upon the recent algorithm of Arora, Rao, and Vazirani [Proceedings of the 36th Annual ACM Symposium on Theory of Computing, 2004, pp. 222-231]. Their algorithm uses semidefinite programming and requires Õ(n^9.5) time. Our algorithm relies on efficiently finding expander flows in the graph and does not solve semidefinite programs. The existence of expander flows was also established by Arora, Rao, and Vazirani [Proceedings of the 36th Annual ACM Symposium on Theory of Computing, 2004, pp. 222-231].

Keywords: expander flows, graph partitioning, multiplicative weights
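For intuition about the problem these algorithms approximate: the abstract does not define the objective, so we assume here the common uniform-demand normalization, sparsity(S) = |E(S, S̄)| / (|S| · |S̄|). A brute-force sketch on a toy graph (exponential time, purely illustrative):

```python
from itertools import combinations

def sparsity(nodes, edges, S):
    """|E(S, S-bar)| / (|S| * |S-bar|): uniform-demand cut sparsity (assumed)."""
    Sbar = set(nodes) - S
    cross = sum(1 for u, v in edges if (u in S) != (v in S))
    return cross / (len(S) * len(Sbar))

def sparsest_cut_bruteforce(nodes, edges):
    best_val, best_S = float("inf"), None
    for k in range(1, len(nodes) // 2 + 1):   # enough to try |S| <= n/2
        for S in combinations(nodes, k):
            val = sparsity(nodes, edges, set(S))
            if val < best_val:
                best_val, best_S = val, set(S)
    return best_val, best_S

# Two triangles joined by a single bridge edge: the bridge is the sparsest cut.
nodes = [0, 1, 2, 3, 4, 5]
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
val, S = sparsest_cut_bruteforce(nodes, edges)
print(val, S)   # 1/9 ≈ 0.111, with S = {0, 1, 2}
```

The point of the ARV line of work is to get within an O(√log n) factor of this optimum in polynomial (here, near-quadratic) time instead of enumerating all 2^n cuts.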
Physics Update: How Time Works, Mostly

Answers are in questions. Special thanks here. Sometimes, it's the questions which advance the science. Well, in fact, that's what science is all about, asking questions. In response to this question:

~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.

Relativity made Simple

The orbital model of the atom is mainly from chemistry. You have electron orbitals .. the outer electron shell is what exchanges in reactions. The orbital model of the atom is something people can work with. Whether something is right or wrong doesn't matter as much as how we can work with it and understand what we're working with. Yes, electrons have mass. Change in mass is due to speed and inertia, the closer to the speed of light something moves. But electricity travels naturally at the speed of light. It does have a speed limit when magnetic resistance is considered tho. Free flow of electricity at the speed of light is more superconductive... There are no absolutes. There is what we measure and what works. Most folks trying to re-write physics also don't understand simple gravity or Einstein's equivalency. So ... You could say electricity has mass. Mass is therefore Energy.

Electromagnetic Light, Time in Electromagnetic Space

To answer your question about time, you may have to read those new books I included on the recommended reading list (it's up at the top of the list). You're asking the "how does it work" question, whereas my website is geared more to the "we know that it works and what works with it" perspective. The Paul Davies book "The edge of infinity" will help quite a bit. My home page and the warp drive news and warp drive blog pages get heavy into all of that. People liked the articles, some of them, so I guess people liked that it makes you think, particularly about the paradox. It is hard to fathom actually. I used to know the calculus approach to answering the question of why the equation for the volume of a sphere is so peculiar. ...
Relativity is a calculus thing but also geometry. Calculus deals with spherical geometry. But standard geometry, say Euclidean, provides a good explanation. I never was one to retain math, but I taught myself calculus out of the book .. took classes too but .. the logic itself can be understood if the operations are known. There's a lot to memorize tho. The natural way the universe works tho is easier to understand and to be known innately. You can derive the math at any time. But knowing gravity and electromagnetism is simple and MUST be simple for it to be real, according to Einstein. It has to make common sense! But sometimes it is exactly backwards to what common sense tells us. That's because two work with one in this universe. When you see two black holes spin down for a faster frequency of time, you have to assume one black hole by itself creates a slower time frequency surrounding it. That's the simple relation to gravity, mass, and frequency (time), for everything in the universe. Positive and negative also apply, as does electromagnetism. All of it is Light. Now where inertia comes in, well it's all in motion. What can be still though will still have inertia. That's what Newton's third law really means. That's what it is really saying. Time is frequency, of motion. Simply. The fact time speeds up farther from gravity and slows down closer to gravity is all positive electromagnetism. Negative electromagnetism occurs with two (at least), working as one. Everything wants to revert back to a light state. All particles retain that acknowledgement essentially. But density (matter, and mass) behaves in opposite to a pure light state, behaving exactly backwards to light. Time is the density of separation of matter into opposite states.
Frequency of that separation from a zero point (be it a wave on a graph or a curve approaching infinity) into one polarity AND another is the space between the moments of separation, as matter separates further from light into higher densities. We consider higher density as low vibrational states, of more dense matter. That is how Relativity works, from that perspective. If something happens 10 billion miles away, then how fast does light travel? Light will travel at the same speed everywhere if you measure it from where you are. So we exist always within our own spacetime continuum. But, the speed of light does change, which has been measured in labs and across distance. That means that light measures spacetime. And according to science and Einstein, yes it does. Light measures the frequency of spacetime. How the hell else do we know when time speeds up or slows down unless we can see it, and detect electromagnetic waves from distant places in space reaching this point, or from point A to point B? Time speeds up away from gravity, out in deep space, because deep space is less dense. So, light will travel faster in deep space. Light travels about 360 million miles per hour from our current gravitytime position, roughly. It'll travel that fast near and within our local time frequency of density space. It'll go faster in deep space than what we think. But we can calculate how fast if we sit and think about it. All in positive space. When you go into negative space, you're simply dealing with warp drive theory. So light travels. Rather, space travels. Light is everywhere at once. Speed is another factor. That's where special relativity kicks in. That's what special relativity is all about. Speed of matter. Matter that moves fast away from mass, or in parallel with mass, has a special relation with time because of frequency of motion and thus inertia. Inertia is electromagnetic in nature. Frequency deals mainly with time. Does that explain how time works better?
If you want to know why it works, or how it works, it has been mapped out. Orbiting massive bodies relative to each other tend to speed up time. Even though a binary star system has more mass, the massive stars revolving around each other speed up time instead of slowing it down due to a massive presence itself. If the two as one were to become one, such as if the stars went black hole simultaneously, then in that uniform paradoxical geometry of inertial rotation, the collision of the binary stars in paradoxical motion can unify eventually as a black hole. But in the meantime, the frequency of time will speed up, approaching light which is infinite time. So mass has that limit that it cannot be light but, well .. How do I explain that? If the two become a one (a black hole), then it will be positive in gravity energy (a strong gravity force) and will slow down time because it stretches the time frequency so much as gravity pulls on the fabric of space, that time slows down a whole heck of a lot, at a black hole. Eventually at the event horizon of the singularity hole, time will stop.

In a nutshell: As you approach unity (light) then gravity is less, because gravity is more present in lower densities, where there is more separation between matter and matter. Time is the thing that separates, though. Which is why in faster fields of time, there is less gravity, AND speed of light is faster. Gravity decreases in density as light increases in density.

3 Comments

11/16/2016 07:46:00 am
Light has no mass so it's not attracted gravitationally. .. if light isn't bent by gravity then it is space that bends.

12/1/2016 01:40:30 pm
Thank you 33. I'm honored.

11/24/2018 05:20:23 pm
as you approach unity (light) then gravity is less, because gravity is more present in lower densities, where there is more separation between matter and matter. time is the thing that separates, though. Which is why in faster fields of time, there is less gravity, AND speed of light is faster.
gravity decreases in density as light increases in density
Below Is A Table Of Hourly Wages For Receptionists Working For Various Companies. What Is The Average?

The average wage for receptionists in this group is $11.60.

How to calculate the average wage:
It's important to note that "average wage" can be calculated in a few different ways, such as mean, median, and mode. The mean is the sum of all wages divided by the number of individuals, while the median is the middle value in a range of wages. Depending on which method is used, the average wage figure can vary. Given the table of hourly wages for receptionists working for various companies, the average (mean) wage for receptionists in this group is:

= $58 / 5 = $11.60

Step-by-step explanation (this answers a separate question, about circle Q):
The center of a circle is the midpoint of its diameter. The midpoint between two endpoints $(x_1,y_1)$ and $(x_2,y_2)$ is

$\left(\dfrac{x_1+x_2}{2},\dfrac{y_1+y_2}{2}\right)$

Given that the endpoints of the diameter are (0, 3) and (0, -4), substituting them into the midpoint formula gives the center $(0, -\tfrac{1}{2})$. As the x-values of the endpoints of the diameter are the same, the length of the diameter, d, is the absolute value of the difference in y-values of the endpoints: d = |3 - (-4)| = 7. Therefore, the diameter of circle Q is 7 units, and the radius, r, is half the diameter: r = 7/2.

The equation of a circle is $(x-h)^2+(y-k)^2=r^2$, where $(h,k)$ is the center and $r$ is the radius. Substituting the center and radius of circle Q, the equation of circle Q in standard form is:

$x^2+\left(y+\dfrac{1}{2}\right)^2=\dfrac{49}{4}$
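The mean-wage arithmetic can be checked in a couple of lines. The individual wages below are hypothetical — the original table is not reproduced in the question, only the total of $58 across 5 receptionists:

```python
# Hypothetical hourly wages summing to $58, matching the worked answer's total.
wages = [10.0, 12.0, 11.0, 15.0, 10.0]

mean = sum(wages) / len(wages)   # mean = total wages / number of people
print(f"${mean:.2f}")            # → $11.60
```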
Applied Mathematics and Numerical Analysis Seminar

04/07/2024, 15:00 — 16:00 — Room P4.35, Mathematics Building
Paulo Amorim, Escola de Matemática Aplicada, Fundação Getúlio Vargas - FGV EMAp, Rio de Janeiro
Predator-prey and epidemiology models using transport equations

I will present some models in ecology and epidemiology using a transport equation approach, the so-called structured models. The first models are of predator-prey type and include a variable hunger structure. They take the form of nonlocal transport equations coupled to ODEs. Then, we use a similar approach in an epidemiological model including disease awareness and variable susceptibility. We show well-posedness results, asymptotic behavior, and numerical simulations. This is joint work with C. Rebelo, A. Margheri, and P. Lafargeas.

18/06/2024, 15:00 — 16:00 — Room P3.10, Mathematics Building
Ruy M. Ribeiro, Theoretical Biology and Biophysics, Los Alamos National Laboratory
Viral Dynamics: mathematical modeling to study virus biology

Modeling of the non-linear dynamics of virus in vivo, for example during primary infection or following drug treatment, has been used in the last two decades to study the biology of diverse viruses. I will discuss, with examples from HIV and hepatitis C virus (HCV) infection, the principles and approach of this methodology. I will also present recent examples of insights into the biology of these viruses and SARS-CoV-2 gained with viral dynamics.
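The kind of nonlinear in-vivo dynamics described in the viral dynamics abstract above is often built on the standard target-cell-limited model: target cells T, infected cells I, and free virus V. A minimal forward-Euler sketch follows; the parameter values are illustrative only, not taken from the talk:

```python
# Target-cell-limited viral dynamics (illustrative parameters):
#   dT/dt = -beta*T*V        (infection of target cells)
#   dI/dt =  beta*T*V - delta*I   (loss of infected cells)
#   dV/dt =  p*I - c*V       (virion production and clearance)

def simulate(T=1e6, I=0.0, V=1.0,
             beta=2e-7, delta=1.0, p=100.0, c=5.0,
             dt=0.01, days=30.0):
    traj = []
    for _ in range(int(days / dt)):
        dT = -beta * T * V
        dI = beta * T * V - delta * I
        dV = p * I - c * V
        T += dT * dt
        I += dI * dt
        V += dV * dt
        traj.append(V)
    return T, traj

T_end, V_traj = simulate()
peak = max(V_traj)
print(f"peak viral load ~ {peak:.3g}, final ~ {V_traj[-1]:.3g}")
# The viral load grows exponentially, peaks once target cells are
# depleted, then declines -- the basic shape fitted to patient data.
```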
11/04/2024, 15:00 — 16:00 — Room P3.10, Mathematics Building
Alberto Girelli, Department of Mathematics and Physics, Università Cattolica del Sacro Cuore, Brescia, Italy
Multiscale Modelling of Fluid Flow in a Lymph Node

Lymph nodes (LNs) are organs scattered throughout the lymphatic system which play a vital role in our immune response by breaking down bacteria, viruses, and waste; the interstitial fluid, called lymph once inside the lymphatic system, is of fundamental importance in this process as it transports these substances inside the lymph node. The main mechanical features of the lymph node include the presence of a porous bulk region (lymphoid compartment, LC), surrounded by a thin layer (subcapsular sinus, SCS) where the fluid can flow freely. These nodes are vital for filtering and processing lymph, which contains immune cells, antigens, and other molecules. Understanding the fluid dynamics within lymph nodes is essential for elucidating immune mechanisms and developing therapies for lymphatic disorders. Despite its significance, few models in the literature attempt to describe lymph behavior from a mechanical standpoint.

In this talk, we will introduce a mathematical model, derived using the asymptotic homogenization technique, to describe fluid flow within a lymph node, considering its multiscale nature. We will discuss how this model can elucidate flow patterns, pressure distribution, and shear stress within the node.

17/01/2024, 15:00 — 16:00 — Room P3.10, Mathematics Building
Maria Grazia Quarta, Università del Salento, Lecce, Italy
Deep Learning for parameter estimation in reaction-diffusion PDEs for battery modeling

One of the key development areas in battery research is finding ways to use metallic anodes, like Zn and Mg, while avoiding lithium, which is pyrophoric and sourced only in potentially critical geopolitical areas. Unfortunately, the use of post-Li batteries is impaired by poorly understood shape changes, responsible for various failure modes.
Over the past decade, in the framework of RD-PDEs, a powerful mathematical approach has been developed in [1], able to capture the essential features of unstable material growth in electrochemical systems in terms of Turing pattern formation. Recharge instability problems in batteries with metal anodes are a special case of this phenomenon. On the other hand, the difficulty of studying materials in a real-life battery context leads to a methodological gap between theory and experiments. For this reason, parameter identification in the above PDE modelling is crucial for advancement in this direction.

In this research, based on [2], we propose to apply Deep Learning as a new approach for parameter estimation, instead of the more traditional PDE-constrained optimization, as for example in [3]. In the seminar we will discuss the Convolutional Neural Network devised for our goals, trained with the numerical solutions of the morphochemical PDE model that is able to capture the essential features of unstable material growth in electrochemical systems. We will show that the CNN carries out three tasks:

1. automatic partitioning of the parameter space associated to the PDE model, according to the types of patterns generated;
2. classification of simulated and experimental patterns;
3. identification of the model parameters for experimental electrode images.

1. B. Bozzini, D. Lacitignola, I. Sgura. Spatio-temporal organization in alloy electrodeposition: a morphochemical mathematical model and its experimental validation. J. Solid State Electrochem.
2. I. Sgura, L. Mainetti, F. Negro, M. G. Quarta, B. Bozzini. Deep-Learning based parameter identification enables rationalization of battery material evolution in complex electrochemical systems. J. of Computational Science (2023).
28/06/2023, 10:00 — 11:00 — Room P3.10, Mathematics Building
Jan Karel, Department of Technical Mathematics, Faculty of Mechanical Engineering, Czech Technical University in Prague
Numerical simulations of various physical problems on unstructured dynamically adapted grids

The aim of the talk is numerical simulations of various physical problems, specifically streamer propagation and fluid flow, on unstructured dynamically adapted grids. The streamer simulations are aimed at the proper choice of an adaptation criterion, the influence of the width of the computational domain, interactions among streamer filaments, and streamer branching. The fluid flow simulations are devoted to a compressible turbulent flow through a turbine cascade and a restrictor.

27/06/2023, 10:00 — 11:00 — Room P3.10, Mathematics Building
Tomas Neustupa, Department of Technical Mathematics, Faculty of Mechanical Engineering, Czech Technical University in Prague
Existence of a solution of a steady flow through a cascade of profiles and radial turbine with arbitrarily large inflow

The aim of the talk is the existence of a solution to a Navier-Stokes-type problem in a more complicated geometry, specifically a 2D cascade of profiles (a model of a blade machine) and a 2D multiply connected domain, modelling the flow of an incompressible viscous fluid through a rotating radial turbine. The main goal is to consider an artificial boundary condition of the ''do nothing'' type on the outflow part of the domain and to prove existence for an arbitrarily large flux through the inflow part of the boundary.

17/05/2023, 16:00 — 17:00 — Room P3.10, Mathematics Building
Rodrigo Weber dos Santos, Federal University of Juiz de Fora, Brazil
Assessing the risk of cardiac arrhythmia using patient-specific models of the heart

Mathematical models of the heart have emerged as a powerful tool in understanding the mechanisms underlying proper heart function and dysfunction in cardiovascular diseases.
One particular area of interest is the propagation of the action potential wave that precedes and synchronizes the heart's contraction, which is crucial for proper heart function. Disruptions in this process can lead to cardiac arrhythmia, a condition characterized by irregular heartbeats. To better understand the mechanisms underlying cardiac arrhythmia, personalized patient-specific models of the heart can be generated by incorporating both personalized electrophysiological data, such as the electrocardiogram (ECG), and geometric information from imaging techniques such as cardiac MRI. These models accurately capture the individualized anatomy and electrophysiology of the patient's heart.

In this talk, we will highlight the benefits of patient-specific models for assessing the risk of arrhythmia and improving patient outcomes in the management of cardiac arrhythmia.

11/04/2023, 17:30 — 18:30 — Room P3.10, Mathematics Building
Nikolai Vasilievich Chemetov, DCM-FFCLRP, Universidade de São Paulo
The Rigid Body Motion in Cosserat's Fluid with Navier's Slip Boundary Conditions

The aim of the talk is to give a brief presentation of novel results related to the body-fluid interaction problem. The motion is described by a system of coupled differential equations: Newton's second law and Navier-Stokes type equations. We shall discuss the global solvability result for a weak solution of the problem, when slippage is allowed at the boundaries of the rigid body and of the bounded domain occupied by the fluid. This result completely resolves a famous non-contact paradox between the rigid body and the domain boundary.
25/01/2023, 16:00 — 17:00 — Room P3.10, Mathematics Building
Victor Ortega, Departamento de Matemática Aplicada, Universidad de Granada, Spain and CEMAT, Faculdade de Ciências, Universidade de Lisboa, Portugal
Some stability criteria in the periodic prey-predator Lotka-Volterra model

In this talk, we present some stability results in a classical model concerning population dynamics, the nonautonomous prey-predator Lotka-Volterra model, under the assumption that the coefficients are $T$-periodic functions
\begin{equation}\label{sysLV}
\left\lbrace
\begin{array}{l}
\dot{u}= u(a(t) - b(t)\,u - c(t)\,v), \\
\dot{v}= v(d(t) + e(t)\,u - f(t)\,v),
\end{array}
\right.
\end{equation}
where $u\gt 0$, $v\gt 0$. The variables $u$ and $v$ represent the population of a prey and its predator, respectively. Some instances with this kind of dynamics are: snowshoe hare and Lynx canadensis, paramecium and didinium, a fish population and fishermen, etc. The periodicity of this model takes into account changes of the environment in which the predation process takes place, for instance seasonality, or variations of the temperature in laboratory conditions.

In the system \eqref{sysLV} the coefficients $b(t)$, $c(t)$, $e(t)$ and $f(t)$ are positive. The coefficients $c(t)$ and $e(t)$ describe the interaction between $u$ and $v$; $a(t)$ and $b(t)$ describe the growth rate for the prey $u$; $d(t)$ and $f(t)$ represent the analogous quantities for the predator $v$. Solutions of the system \eqref{sysLV} with both components positive are called coexistence states, and the necessary and sufficient conditions for their existence are well understood, see [2]. After reviewing those conditions, we present some results concerning the stability of a special kind of coexistence state: positive $T$-periodic solutions. In [3] the author gave a sufficient condition for the uniqueness and asymptotic stability of the positive $T$-periodic solution.
This criterion is formulated in terms of the $L^1$ norm of the coefficients of a planar linear system associated to \eqref{sysLV}. On the other hand, in [1], assuming that the system \eqref{sysLV} has no sub-harmonic solutions of second order (periodic solutions with minimal period $2T$), the authors proved that there exists at least one asymptotically stable $T$-periodic solution. Here the result is formulated in terms of the $L^\infty$ norm. Our result, in [4], gives an $L^p$ criterion, building a bridge between the two previous results. This is joint work with Carlota Rebelo (Departamento de Matemática and CEMAT, Faculdade de Ciências, Universidade de Lisboa, Portugal).

Acknowledgements: This work was partially supported by the Spanish Ministerio de Universidades and Next Generation Funds of the European Union.

1. Z. Amine, R. Ortega, A periodic prey-predator system, Journal of Mathematical Analysis and Applications, 185(2): 477-489, 1994.
2. J. López-Gómez, R. Ortega and A. Tineo, The periodic predator-prey Lotka-Volterra model, Adv. Differential Equations, 1(3): 403-423, 1996.
3. R. Ortega, Variations on Lyapunov's stability criterion and periodic prey-predator systems, Electronic Research Archive, 29(6): 3995-4008, 2021.
4. V. Ortega, C. Rebelo, A $L^p$ stability criterion in the periodic prey-predator Lotka-Volterra model, In preparation, 2023.

24/11/2022, 15:00 — 16:00 — Room P3.10, Mathematics Building
Euripides J. Sellountos, CEMAT, Instituto Superior Técnico
Boundary Element Methods in flow problems governed by Navier-Stokes equations

In this presentation, recent advances of the Boundary Element Method (BEM) in Computational Fluid Dynamics (CFD) will be discussed. Unlike other methods, BEM is a multi-angle numerical technique that permits approaching a partial differential equation (PDE) in completely different ways.
In the Navier-Stokes equations in particular, many different test functions can be used in the weak form, such as the Laplace, the Stokeslet, the convective parabolic-diffusion, or other convective fundamental solutions. Apart from that, it has been found recently that hypersingular BEM formulations of the Navier-Stokes equations have a broad area of applicability, as they provide the gradients of the field. These gradients can further be applied to numerous tasks, such as improvement of the system's condition number, enforcing continuity, computation of wall quantities such as wall vorticities, strain and stress tensors, and pressure calculation, among others. However, the derivation of such equations is not always simple, since they are accompanied by extra terms, mainly in convection. Another important finding is that hypersingular equations permit the use of constant elements, simplifying immensely the preparation of the computational model.

Another part of the talk will be dedicated to the transformation of the BEM system to a Finite Element (FEM) or Finite Volume (FVM) equivalent in terms of sparsity. A system produced by BEM with domain unknowns cannot be solved efficiently, but with proper transformations it can be changed to a sparse system, which can be solved remarkably faster. Other accelerating techniques, like hypersingular BEM / Fast Multipole (FMM) and the meshless Local Boundary Integral Equation (LBIE) method, will be discussed.

27/10/2022, 15:00 — 16:00 — Room P3.10, Mathematics Building
Yassine Tahraoui, CMA-FCT, Universidade Nova de Lisboa
On the optimal control and the stochastic perturbation of a third grade fluid

Most studies in fluid dynamics have been devoted to Newtonian fluids, which are characterized by the classical Newton's law of viscosity. However, there exist many real fluids with nonlinear viscoelastic behavior that does not obey Newton's law of viscosity. My aim is to discuss two problems related to a class of non-Newtonian fluids of differential type.
Namely, the optimal control of incompressible third-grade fluids in 2D, via Pontryagin's maximum principle, and the strong well-posedness, in the PDE and probabilistic senses, of the 3D stochastic third-grade fluids in the presence of multiplicative noise driven by a Q-Wiener process. The talk is based on recent works with Fernanda Cipriano (CMA, Univ. NOVA de Lisboa).

27/07/2022, 16:00 — 17:00 — Mathematics Building
Thomas Eiter, Weierstrass Institute for Applied Analysis and Stochastics, Berlin, Germany
Resolvent estimates for the flow past a rotating body and existence of time-periodic solutions

23/06/2022, 16:30 — 17:30 — Room P3.10, Mathematics Building
Anna Lancmanová, Faculty of Mechanical Engineering, Czech Technical University in Prague, Czech Republic, and CEMAT
On the development of a numerical model for the simulation of air flow in the human airways

The main motivation for this study is the air flow in the human respiratory system, although similar problems are also common in other areas of biomedical, environmental or industrial fluid mechanics. Detailed experimental studies of the respiratory system in humans and animals are very challenging, and in many cases even impossible, due to various medical, technical or ethical reasons. This leads to the development of more and more realistic mathematical and numerical models of the flow in airways, including the complex geometry of the problem, but also various fluid- and bio-mechanical features. The main difficulties lie not just in the geometrical complexity of the computational domain, with several levels of branching, but also in the need to prescribe mathematically suitable, yet sufficiently realistic, boundary conditions for the computational model. This leads to a complex multiscale problem, whose solution requires a large amount of complicated and time-consuming numerical calculations.
In this work we consider simplified simulations in a two-dimensional rigid channel coupled with a one-dimensional extended flow model derived from a 3D fluid-structure interaction (FSI) model under certain conditions. For this purpose we built a simple test code employing an immersed boundary method and a finite difference discretization. At this stage, the air flow in human airways is considered incompressible, described by the Navier-Stokes equations. This simple code was developed with the aim of testing and improving boundary conditions using reduced-order models. The incompressible model will later be replaced by a compressible one, to be able to evaluate the impact of intensive pressure changes in human airways while using realistic, patient-specific airway geometry. The main idea is to use models of different dimensions, 3D(2D), 1D and 0D, with different levels of complexity and accuracy, and to couple them into a single working model.

In the present talk, first results of the 2D-1D coupled toy model will be presented, focusing on the main features of the computational setup, the coupling strategy and parameter sensitivity. In addition, a longer-term outlook on the more complex 3D-1D(-0D) model will be discussed.

Acknowledgment: Center for Computational and Stochastic Mathematics - CEMAT (UIDP/04621/2022 IST-ID).

08/06/2022, 15:00 — 16:00 — Room P4.35, Mathematics Building
Thi Minh Thao Le, University of Tours, France
Multiple Timescales in Microbial Interactions

The purpose of this work is the theoretical and numerical study of an epidemiological model of multi-strain co-infection. Depending on the situation, the model is written as ordinary differential equations or reaction-advection-diffusion equations. In all cases, the model is written at the host population level on the basis of a classical susceptible-infected-susceptible (SIS) system.
The infecting agent is structured into N strains, which differ according to 5 traits: transmissibility, clearance rate of single infections, clearance rate of double infections, probability of transmission of strains, and co-infection rates. The resulting system is a large system ($N^2 + N + 1$ equations) whose complete theoretical study is generally inaccessible. This work is therefore based on a simplifying assumption of trait similarity - the so-called quasi-neutrality assumption. In this framework, it is then possible to implement Tikhonov-type time scale separation methods. The system is thus decomposed into two simpler subsystems. The first one is a so-called neutral system - i.e., the values of the traits of all the strains are equal - which supports a detailed mathematical analysis and whose dynamics turn out to be quite simple. The second one is a ''replication equation'' type system that describes the frequency dynamics of the strains and contains all the complexity of the interactions between strains induced by the small variations in the trait values.

The first part explicitly determines the slow system in an aspatial framework for N strains using a system of ordinary differential equations, and justifies that this system describes the complete system well. This system is a replication system that can be described using the $N(N-1)$ fitnesses of interaction between the pairs of strains. It is shown that these fitnesses are a weighted average of the perturbations of each trait. The second part consists in using explicit expressions of these fitnesses to describe the dynamics of pairs (i.e. the case $N = 2$) exhaustively. This part is illustrated with many simulations, and applications to vaccination are discussed. The last part consists in using this approach in a spatialized framework. The SIS model is then a reaction-diffusion system in which the coefficients are spatially heterogeneous.
Two limiting cases are considered: the case of an asymptotically small diffusion coefficient and the case of an asymptotically large diffusion coefficient. In the case of slow diffusion, we show that the slow system is a system of ''replication equation'' type, describing again the temporal but also the spatial evolution of the frequencies of the strains. This system is of the reaction-advection-diffusion type, the additional advection term explicitly involving the heterogeneity of the associated neutral system. In the case of fast diffusion, classical methods of aggregation of variables are used to reduce the spatialized SIS problem to a homogenized SIS system to which we can directly apply the previous results.

26/05/2022, 15:00 — 16:00 — Room P3.10, Mathematics Building
Sílvia Barbeiro, CMUC, Department of Mathematics, University of Coimbra
Learning stable nonlinear cross-diffusion models for image restoration

Image restoration is one of the major concerns in image processing, with many interesting applications. In the last decades there has been intensive research around the topic, and hence new approaches are constantly emerging. Partial differential equation based models, namely of non-linear diffusion type, are well known and widely used for image noise removal. In this seminar we will start with a concise introduction to diffusion and cross-diffusion models for image restoration. Then, we will discuss a flexible learning framework for optimizing the parameters of the models, improving the quality of the denoising process. This is based on joint work with Diogo Lobo.
05/05/2022, 17:00 — 18:00 — Room P3.10, Mathematics Building
Arnab Roy, Basque Center of Applied Mathematics, Bilbao, Spain
Existence of strong solutions for a compressible viscous fluid and a wave equation interaction system

In this talk, we consider a fluid-structure interaction system where the fluid is viscous and compressible, and where the structure is a part of the boundary of the fluid domain and is deformable. The reference configuration for the fluid domain is a rectangular cuboid with the elastic structure being the top face. The fluid is governed by the barotropic compressible Navier-Stokes system, whereas the structure displacement is described by a wave equation. We show that the corresponding coupled system admits a unique, locally-in-time strong solution for an initial fluid density and an initial fluid velocity in $H^3$, and for an initial deformation and an initial deformation velocity in $H^4$ and $H^3$ respectively.

05/05/2022, 16:00 — 17:00 — Room P3.10, Mathematics Building
Pierre-Alexandre Bliman, INRIA, Sorbonne Université, Université Paris-Diderot SPC, CNRS, Laboratoire Jacques-Louis Lions, Paris, France
Modelling, analysis, observability and identifiability of epidemic dynamics with reinfections

In order to understand whether counting the number of reinfections may provide supplementary information on the evolution of an epidemic, we consider in this paper a general SEIRS model describing the dynamics of an infectious disease including latency, waning immunity and infection-induced mortality. We derive an infinite system of differential equations that provides an image of the same infection process, but also counting the reinfections. Well-posedness is established in a suitable space of sequence-valued functions, and the asymptotic behavior of the solutions is characterized according to the value of the basic reproduction number. This allows us to determine several mean numbers of reinfections related to the population at endemic equilibrium.
We then show how jointly using measurements of the number of infected individuals and of the number of primo-infected individuals provides observability and identifiability to a simple SIS model, for which neither of these two measures is sufficient on its own to ensure the same properties. This is a joint work with Marcel Fang. More details may be found in the report https://arxiv.org/abs/2011.12202.

02/03/2022, 16:00 — 17:00 — Online
Irene Marín Gayte, Instituto Superior Técnico, CEMAT
Minimal time optimal control problems

This talk is devoted to the theoretical and numerical analysis of some minimal time optimal control problems associated to linear and nonlinear differential equations. We start by studying simple cases concerning linear and nonlinear ODEs. Then, we deal with the heat equation. In all these situations, we analyze the existence of solutions, we deduce optimality results, and we present several algorithms for the computation of optimal controls. Finally, we illustrate the results with several numerical experiments.

10/11/2021, 16:00 — 17:00 — Online
Jesús Bellver Arnau, Laboratoire Jacques-Louis Lions and INRIA, Paris
Dengue outbreak mitigation via instant releases

In the fight against arboviruses, the endosymbiotic bacterium Wolbachia has become in recent years a promising tool, as it has been shown to prevent the transmission of some of these viruses between mosquitoes and humans. This method offers an alternative strategy to the more traditional sterile insect technique, which aims at reducing or suppressing the population entirely instead of replacing it. In this presentation I will introduce an epidemiological model including mosquitoes and humans. I will discuss optimal ways to mitigate a Dengue outbreak using instant releases, comparing the use of mosquitoes carrying Wolbachia and that of sterile mosquitoes.
This is a joint work with Luis Almeida (Laboratoire Jacques-Louis Lions), Yannick Privat (Université de Strasbourg) and Carlota Rebelo (Universidade de Lisboa).

27/10/2021, 16:00 — 17:00 — Room P3.10, Mathematics Building (Online)
Pierre-Alexandre Bliman, INRIA and Laboratoire Jacques-Louis Lions, Paris
Minimizing epidemic final size through social distancing

How should partial or total containment measures be applied during a given finite time interval, in order to minimize the final size of an epidemic - that is, the cumulative number of cases infected during its course? We provide here a complete answer to this question for the SIR epidemic model. Existence and uniqueness of an optimal strategy are proved for the infinite-horizon problem corresponding to control on an interval $[0,T]$, $T\gt 0$ (1st problem), and then on any interval of length $T$ (2nd problem). For both problems, the best policy consists in applying the maximal allowed social distancing effort until the end of the interval $[0,T]$ (1st problem), or during a whole interval of length $T$ (2nd problem), starting at a date that is not systematically the closest date and that may be computed by a simple algorithm. These optimal interventions have to begin before the proportion of susceptible individuals crosses the herd immunity level, and lead to limit values of that proportion smaller than this threshold. More precisely, among all policies that stop at a given distance from the threshold, the optimal policies are the ones that realize this task with the minimal containment duration. Numerical results are presented that provide the best possible performance for a large set of basic reproduction numbers and lockdown durations and intensities. Details and proofs of the results are available in [BDPV, BD]. This is a joint work with Michel Duprez (Inria), Yannick Privat (Université de Strasbourg) and Nicolas Vauchelet (Université Sorbonne Paris Nord).

[BDPV] Bliman, P.-A., Duprez, M., Privat, Y., and Vauchelet, N.
(2020). Optimal immunity control by social distancing for the SIR epidemic model. Journal of Optimization Theory and Applications.

[BD] Bliman, P. A., and Duprez, M. (2021). How best can finite-time social distancing reduce epidemic final size?. Journal of Theoretical Biology 511, 110557. https://www.sciencedirect.com/
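The final-size question in the last abstract can be made concrete with a crude SIR sketch (illustrative numbers only; this is not the optimal policy of [BDPV, BD], just a fixed lockdown window): reduce the transmission rate by a given strength on an interval and compare the cumulative fraction ever infected with the uncontrolled epidemic.

```python
def final_size(r0=2.5, gamma=0.2, lockdown=None, strength=0.6,
               i0=1e-4, dt=0.05, horizon=400.0):
    """Forward-Euler SIR; lockdown=(start, length) in days reduces the
    contact rate by `strength` on that window. Returns 1 - s(horizon),
    the fraction of the population ever infected."""
    beta = r0 * gamma
    s, i = 1.0 - i0, i0
    t = 0.0
    while t < horizon:
        b = beta
        if lockdown and lockdown[0] <= t < lockdown[0] + lockdown[1]:
            b = beta * (1 - strength)   # contacts reduced during lockdown
        ds = -b * s * i
        di = b * s * i - gamma * i
        s += ds * dt
        i += di * dt
        t += dt
    return 1.0 - s

no_control = final_size()
with_control = final_size(lockdown=(20.0, 60.0))  # start day 20, 60 days
print(no_control, with_control)  # social distancing lowers the final size
```

Note the qualitative point from the abstract: a lockdown that starts before susceptibles cross the herd immunity level 1/R0 reduces the overshoot below that threshold, and hence the final size.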
Module x_salsa20_poly1305 Expand description Encryption and decryption using the (secret)box algorithms popularised by Libsodium. Libsodium defines and implements two encryption functions secretbox and box. The former implements shared secret encryption and the latter does the same but with a DH key exchange to generate the shared secret. This has the effect of being able to encrypt data so that only the intended recipient can read it. This is also repudiable so both participants know the data must have been encrypted by the other (because they didn’t encrypt it themselves) but cannot prove this to anybody else (because they could have encrypted it themselves). If repudiability is not something you want, you need to use a different approach. Note that the secrets are located within the secure lair keystore (@todo actually secretbox puts the secret in WASM, but this will be fixed soon) and never touch WASM memory. The WASM must provide either the public key for box or an opaque reference to the secret key so that lair can encrypt or decrypt as required. Note that even though the elliptic curve is the same as is used by ed25519, the keypairs cannot be shared because the curve is mathematically translated in the signing vs. encryption algorithms. In theory the keypairs could also be translated to move between the two algorithms but Holochain doesn’t offer a way to do this (yet?). Create new keypairs for encryption and save the associated public key to your local source chain, and send it to peers you want to interact with. • Generate a new x25519 keypair in lair from entropy. Only the pubkey is returned from lair because the secret key never leaves lair. • Libsodium keypair based authenticated encryption: box_open • Libsodium keypair based authenticated encryption: box. • Libsodium secret-key authenticated encryption: secretbox_open • Libsodium secret-key authenticated encryption: secretbox. 
• Generate a new secure random shared secret suitable for encrypting and decrypting using x_salsa20_poly1305_{en,de}crypt. If key_ref is None an opaque reference will be auto-generated. If key_ref is Some and that key already exists in the store, this function will return an error. If Ok, this function will return the KeyRef by which the shared secret may be accessed. • Using the Libsodium box algorithm, encrypt a shared secret so that it may be forwarded to another specific peer. • Using the Libsodium box algorithm, decrypt a shared secret, storing it in the keystore so that it may be used in x_salsa20_poly1305_decrypt. This method may be co-opted to ingest shared secrets generated by other custom means. Just be careful, as WASM memory is not a very secure environment for cryptographic secrets. If key_ref is None an opaque reference string will be auto-generated. If key_ref is Some and that key already exists in the store, this function will return an error. If Ok, this function will return the KeyRef by which the shared secret may be accessed.
Hyper-Kähler geometry and invariants of three-manifolds

We study a 3-dimensional topological sigma-model whose target space is a hyper-Kähler manifold X. A Feynman diagram calculation of its partition function demonstrates that it is a finite type invariant of 3-manifolds which is similar in structure to those appearing in the perturbative calculation of the Chern-Simons partition function. The sigma-model suggests a new system of weights for finite type invariants of 3-manifolds, described by trivalent graphs. The Riemann curvature of X plays the role of Lie algebra structure constants in Chern-Simons theory, and the Bianchi identity plays the role of the Jacobi identity in guaranteeing the so-called IHX relation among the weights. We argue that, for special choices of X, the partition function of the sigma-model yields the Casson-Walker invariant and its generalizations. We also derive Walker's surgery formula from the SL(2, Z) action on the finite-dimensional Hilbert space obtained by quantizing the sigma-model on a two-dimensional torus.

All Science Journal Classification (ASJC) codes
• General Mathematics
• General Physics and Astronomy

Keywords
• Casson's invariant
• Topological sigma-models
{"url":"https://collaborate.princeton.edu/en/publications/hyper-k%C3%A4hler-geometry-and-invariants-of-three-manifolds","timestamp":"2024-11-13T21:56:18Z","content_type":"text/html","content_length":"49026","record_id":"<urn:uuid:793d3e5f-9540-4a60-a428-dc0b36b1ee2a>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00305.warc.gz"}
Excel Compare Columns - Solve Your Tech
There are plenty of “Excel compare columns” methods that you could employ within Microsoft Excel, and the option that you choose is ultimately going to depend upon what you are attempting to accomplish.
If your “Excel compare columns” search is being triggered in an effort to check one column for instances of a value in another column, then you may want to utilize the VLOOKUP function in a new column within your spreadsheet. Another way to perform this comparison is to utilize an IF statement within an Excel formula, which also allows you to check for a specific value within an entire Excel column.
Conversely, if you simply want to look at the difference between values in different columns but in the same row (e.g. A1, B1, C1, etc.), then you would simply subtract one value from the other in an empty column, which would show the difference between the two values.
If you are working with a large spreadsheet, consider modifying it so that the top row repeats on every page in Excel.
Excel Compare Columns with VLOOKUP
The VLOOKUP function in Excel works with four variables that you use to check the values in one column for similar values in another column. The function looks like –
=VLOOKUP(xxx, yyy, zzz, FALSE)
The different variables are –
xxx = the cell value that you are looking for
yyy = the range of cells in which you want to look for that value
zzz = the number of the column, within that range, whose value should be returned
FALSE = this will trigger the function to display “#N/A” if no match is found. If a match is found, then the matched value will be displayed instead.
If you look at the picture below, you can see all of the data and the formula used to do a simple example of this Excel column comparison.
Use an IF Statement to Compare Excel Columns
The IF statement has a few more options for an Excel compare columns exercise, but the syntax is slightly more difficult to understand.
The formula looks like –
=IF(COUNTIF(xxx, yyy), zzz, 0)
and the different variables are –
xxx = the column of values that you are checking
yyy = the value in the xxx column that you are looking for
zzz = the value to display if a match is found
0 = the value to display if a match is not found
You can check the picture below for an example –
Compare Excel Columns with a Simple Formula
Sometimes, an Excel compare columns activity is as simple as subtracting a value in one column from a value in another column. Once you are familiar with this basic concept, it becomes simpler to compute comparative values between cells throughout your spreadsheet. You can even use this structure to perform other arithmetic operations, such as addition, multiplication, and division. Check the example image below for more information on how you should structure your formula and your data.
As you can see in the image above, a simple subtraction formula looks like “=XX-YY”, where “XX” is the starting value and “YY” is the value you are subtracting from it. In basic expressions like this, you can substitute “+”, “*”, and “/” to perform addition, multiplication, and division, respectively.
Another useful Excel formula is CONCATENATE, which provides a simple way to combine data that exists in multiple cells.
Matthew Burleigh has been writing tech tutorials since 2008. His writing has appeared on dozens of different websites and been read over 50 million times. After receiving his Bachelor’s and Master’s degrees in Computer Science he spent several years working in IT management for small businesses. However, he now works full time writing content online and creating websites. His main writing topics include iPhones, Microsoft Office, Google Apps, Android, and Photoshop, but he has also written about many other tech topics as well.
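To make the lookup behavior concrete, here is a small Python sketch that mimics what an exact-match `=VLOOKUP(..., FALSE)` does. This is an illustration only, not Excel itself; the function name and the sample table are made up.

```python
def vlookup_exact(lookup_value, table, col_index):
    """Mimic =VLOOKUP(value, table, col, FALSE): scan the first column
    of `table` for an exact match and return the cell in column
    `col_index` (1-based, counted within the table) of the matching row."""
    for row in table:
        if row[0] == lookup_value:
            return row[col_index - 1]
    return "#N/A"  # what Excel displays when no match is found

table = [["apple", 3], ["pear", 5], ["plum", 2]]
assert vlookup_exact("pear", table, 2) == 5
assert vlookup_exact("kiwi", table, 2) == "#N/A"
```

Note that, as in Excel with FALSE, only the first column is searched and the first matching row wins.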
{"url":"https://www.solveyourtech.com/excel-compare-columns/","timestamp":"2024-11-04T12:15:00Z","content_type":"text/html","content_length":"249619","record_id":"<urn:uuid:cb6e1df4-0ed9-4db2-8e16-d6ec97a72cf2>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00899.warc.gz"}
The Free lance. (State College, Pa.) 1887-1904, January 01, 1900, Image 3
SHARPLES CREAM SEPARATORS
Centrifugal force varies directly as the diameter. Centrifugal force varies as the square of the revolutions. A 4 inch diameter bowl will need three times the revolutions of a 12 inch diameter bowl to give the same circumferential speed. The increase of three times as many revolutions would give 3 X 3 = 9 times the centrifugal force if the bowls were of the same diameter. But the higher revolution bowl is but 1/3 the diameter of the larger, so the smaller diameter bowl will have but 1/3 of 9 times, or 3 times, the centrifugal force. Thus a bowl 4 inches in diameter running at the same circumferential speed will have three times the centrifugal force of a bowl 12 inches in diameter.
Three men will lift a stone which one man cannot budge. Three times the centrifugal force will recover small cream globules that otherwise cannot be separated.
Some manufacturers use complicated internal devices to increase capacity; we accomplish a better result by the simple scientific method of reducing diameter and increasing revolutions. In this way, we preserve simplicity and durability, make a bowl weighing thirty pounds do more work than any other bowl of three times the weight, and produce a cream unequaled in smoothness and value, because it goes directly through an unobstructed bowl, thus obviating any tendency to break the globules.
The Sharples Tubular is a clean, rapid, superb skimmer of large reserve capacity, and they are sold absolutely on their merits and subject to a rigid guarantee as to results.
P. M. SHARPLES, West Chester, Pa. THE SHARPLES CO., Canal and Washington Streets, CHICAGO, ILL.
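The ad's scaling argument — force proportional to the diameter and to the square of the revolutions — can be checked with a few lines of arithmetic. This is an aside for today's reader, not part of the 1900 page; the function name is ours.

```python
def centrifugal_factor(diameter, revolutions):
    # Per the ad: centrifugal force varies directly as the diameter
    # and as the square of the revolutions, i.e. F ∝ d * n**2.
    return diameter * revolutions ** 2

# A 4-inch bowl needs 3x the revolutions of a 12-inch bowl to match
# its circumferential speed (12 / 4 = 3).
large = centrifugal_factor(12, 1)   # 12-inch bowl at base speed
small = centrifugal_factor(4, 3)    # 4-inch bowl at triple revolutions
assert small / large == 3.0         # three times the centrifugal force
```

The 9x gain from tripling the revolutions is cut to 3x by the one-third diameter, exactly as the copy reasons.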
{"url":"https://panewsarchive.psu.edu/lccn/sn85054901/1900-01-01/ed-1/seq-3/ocr/","timestamp":"2024-11-02T15:06:19Z","content_type":"text/html","content_length":"13533","record_id":"<urn:uuid:a2703f90-b050-4347-9060-d2b3ab03cda1>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00020.warc.gz"}
Causal Dynamic Networks: ODE Network Modeling of fMRI Xi (Rossi) LUO Brown University Department of Biostatistics Center for Statistical Sciences Computation in Brain and Mind Brown Institute for Brain Science Brown Data Science Initiative ABCD Research Group CMStatistics, Pisa Italy December 16, 2018 Funding: NIH R01EB022911, P20GM103645, P01AA019072, P30AI042853; NSF/DMS (BD2K) 1557467 Xuefei Cao Brown Applied Math Björn Sandstede Brown Applied Math fMRI Experiments • Task fMRI: performs tasks under brain scanning • Randomized stop/go task: □ press button if "go"; □ withhold pressing if "stop" • Resting-state: "do nothing" during scanning Goal: infer task-related brain activation and connectivity fMRI data: blood-oxygen-level dependent (BOLD) signals from each cube/voxel (~millimeters), $10^5$ ~ $10^6$ voxels in total.
Conceptual Model with Stimulus Sci Goal: infer intrinsic connections, "Go"-task related connections, "Stop" connections Some Existing Methods and Limitations • Functional (nondirectional) connectivity: □ Correlations □ PCA, independent component analysis (ICA) Calhoun, Guo, and colleagues □ Graphical models (inverse covariance) □ Bayesian methods Bowman, Guindani, Vannucci, Zhang, and colleagues • Effective (directional) connectivity: □ Granger causality, autoregressive models Ding, Hu, Ombao, and colleagues □ Structural equation models • Limitations: biophysical interpretability $\propto$ scalability${}^{-1}$ □ Fail to model task-dependent connections/activations □ Connections unlikely to be causal/neuronal □ Some are hard to scale to large networks Dynamic Causal Modeling (DCM) • Proposed by Friston et al, 2003 (> 3000 citations) • System approach to address previous limitations: □ Latent neuronal states: a network ODE model □ From neuronal states to observed BOLD signals: another ODE □ (Bayesian) priors for model parameters □ Bayes factors for comparing a few candidate models • DCM essentially unchanged for the past 15 yrs Friston et al, 17 DCM: Advantages and Limitations • Advantages: □ Task-dependent, directional connections □ Neuronal/causal connections □ Model brain activations (and non-stationary time series) • Limitations: □ Computationally expensive □ Bayesian model comparison over exponentially many models □ Model performance depends on priors Frassle et al, 15 □ Hard to scale (~10 nodes), some successes for simplified models □ Mostly for hypothesis validation, not data driven Causal Dynamic Networks • A two-level model • 1. DCM neuronal state model (latent $\bm{x}$, stimulus $\bm{u}$): $$\frac{d\bm{x}(t)}{dt}=\bm{A}\bm{x}(t)+\sum_{j}u_j(t)\bm{B_j}\bm{x}(t)+\bm{C}\bm{u}(t)$$ • 2.
BOLD data model (data $\bm{y}$, noise $\bm{\epsilon}$) at discrete $t_i$: $$ \bm{y}(t_i) =\int h(s)\bm{x}(t_i-s) ds + \bm{\epsilon} (t_i) $$ • $h$ hemodynamic response function • $\bm{A}$ intrinsic connection matrix, $\bm{B}$ task-dependent connection tensor, $\bm{C}$ stimulus activation matrix Functional/Dynamic Data Analysis • Usually, observed data model $$ y(t) = x(t) + \epsilon(t) $$ and latent $x(t)$ follows an ODE model of interest • Various approaches for estimating the ODE parameters: nonlinear least squares Xue, Miao, Wu, 10, two-stage smoothing Varah, 82, principal differential analysis Ramsay, 96, Bayesian Girolami, 08, ECoG Zhang et al, 15 • The observed data model not applicable to fMRI • For example, two-stage smoothing approaches not directly applicable to BOLD convolutions: $$ y(t) = \int x(t-u)h(u) du + \epsilon(t) $$ Hemodynamic Response Function (HRF) fMRI responses last long (~30 seconds) after neural activities "Smooth" BOLD far from neuronal activity • An optimization-based approach • Minimize the following \[\begin{multline*} \scriptstyle l(\bm{x},\bm{\theta})=\sum_{t_i} \| \bm{y}(t_i)-h \star \bm{x}(t_i) \|^2 \\ \scriptstyle +\lambda\int \left \| \frac{d \bm{x} (t) } {dt} - (\bm{A} \bm{x}(t)+\sum_{j} u_j(t) \bm{B_j} \bm{x}(t)+ \bm{C}\bm{u}(t)) \right\|^2 dt \end{multline*} \] • Balancing data fitting errors and ODE fitting errors • Plug in basis-expansion of $\bm{x}(t) = \bm{\Gamma} \bm{\Phi}(t)$ • Allows convolution (vs two-stage smooth approach) • Computationally fast to allow Bootstrap inference • Prove conditional convexity of $O(J d^2)$ parameters • Iterative block coordinate descent algorithm • Prove explicit update formulas (no numerical optimization algorithms needed) Special Case: Resting-state fMRI • Set our parameters $\bm{B}$ and $\bm{C}$ to zero • Only fit intrinsic connection $\bm{A}$ • Can fit much larger networks Simulation: vs GCA/VAR Our CDN yields higher network recovery accuracy than Granger Causality Analysis (GCA, aka
vector autoregressive models) Simulation: vs DCM Our CDN yields higher accuracy using only a fraction of the computation time of DCM Uncovering Neuronal States Decent recovery of (latent) neuronal states Task fMRI and Resting-state fMRI Stop/Go fMRI Brain activations and intrinsic connections between regions Task Specific Connections Better understanding of brain mechanisms Resting-state Connections Ours (A) close to DCM (C), different from correlations (B) Real Data: 264 Brain Regions CDN uncovers a large-scale brain network • Joint optimization method for inferring ODE networks • Flexible models for observations from causal ODEs • Computationally efficient for large-scale modeling • PyPI package: cdn-fmri Thank you! Comments? Questions? or BrainDataScience.com
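The behavior of the neuronal-state equation dx/dt = Ax + Σ_j u_j(t) B_j x + C u(t) can be sketched with a forward-Euler simulation. The matrices below are made-up illustrations and this is not the cdn-fmri package — just a minimal two-node example of the model class the slides describe.

```python
def simulate_states(A, B1, C, u, dt=0.01):
    """Forward-Euler integration of a two-node neuronal state model
    dx/dt = (A + u(t) * B1) x + C * u(t), with one scalar stimulus u."""
    x = [0.0, 0.0]
    traj = [x[:]]
    for ut in u:
        # Effective (task-modulated) connectivity at this time step.
        eff = [[A[i][j] + ut * B1[i][j] for j in range(2)] for i in range(2)]
        dx = [eff[i][0] * x[0] + eff[i][1] * x[1] + C[i] * ut
              for i in range(2)]
        x = [x[i] + dt * dx[i] for i in range(2)]
        traj.append(x[:])
    return traj

# Made-up example: stable intrinsic dynamics; a stimulus that switches on
# halfway through, drives region 1, and strengthens the 1 -> 2 link.
A = [[-1.0, 0.2], [0.3, -1.0]]   # intrinsic connections (matrix A)
B1 = [[0.0, 0.0], [0.5, 0.0]]    # task-modulated connection (tensor B)
C = [1.0, 0.0]                   # stimulus activation (matrix C)
u = [0.0] * 100 + [1.0] * 400
traj = simulate_states(A, B1, C, u)
```

With no stimulus the state stays at rest; once `u` switches on, region 1 is driven directly through `C` and region 2 follows through the modulated connection — the qualitative picture behind "task-dependent, directional connections."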
{"url":"https://talks.bigcomplexdata.com/CDN_CMStat_2018.html","timestamp":"2024-11-04T07:52:03Z","content_type":"text/html","content_length":"26796","record_id":"<urn:uuid:bfcb805b-e9c8-44cb-ac66-f2e7490f578b>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00248.warc.gz"}
Time and Work Concept
Most competitive exams, like the 11 plus, have seen the trend of testing students' ability to apply concepts to real problems. This is very evident from questions built on situations from daily life. One such category of questions is based on the time and work concept.
In our daily lives, we come across a lot of work that needs to be completed within a specific time period. Usually, some people are specifically assigned to do this work. What do we do if, after some time, we realize that the work will not be completed in the desired time? We assign more people to help and finish the work in time. This is the very basic concept of time and work: more people finish the work in less time, and fewer people take more time to finish it.
Suppose a wall has to be built. If a person can build the wall in 10 days, then in 1 day he will build 1/10th of that wall. This basic approach can be applied to solve most time and work problems.
Combined formula for Time and Work
Another important concept used in time and work problems is the combined efficiency of two or more persons. In questions on time and work, the rates at which certain persons or machines work alone are usually given, and it is necessary to compute the rate at which they work together (or vice versa).
Let us say, for example, it takes 3 and 6 hours for Jack and Jill, respectively, to break a dam working alone. So, in 1 hour Jack would have broken one-third (1/3, or 33.33%) of the dam and Jill would have broken one-sixth (1/6, or 16.67%) of the dam. In 2 hours, Jack would have destroyed 1/3 × 2 = 2/3 (66.66%) of the dam and Jill would have destroyed 1/6 × 2 = 1/3 (33.33%) of the dam. So if both Jack and Jill work together, they would have destroyed 2/3 + 1/3, or 100%, of the dam in 2 hours. Therefore, if both worked together for 1 hour, they would have destroyed 1/3 + 1/6 = 1/2, or half, of the dam.
Thus in 2 hours, the dam is destroyed. Generalizing, we conclude that in 1 hour, Jack does 1/r of the job, Jill does 1/s of the job, and Jack and Jill together do 1/h of the job; that is, together they can finish the job in 'h' hours, where the work formula comes out as 1/r + 1/s = 1/h.
Time and Work Problems Tricks: LCM Approach
The same concept can be learned with the unit work approach as well, which takes the total work to be done as the LCM (Learn how to calculate LCM) of the number of days taken by each of the persons to complete the work. Let's assume that Nasir can do a piece of work in 20 days working alone and Sophia can do it in 30 days on her own. In this case, let us take the work to consist of the LCM of 20 and 30, i.e. 60 units, to be done by Nasir and Sophia. Since Nasir completes 60 units in 20 days, he completes 60/20 = 3 units of work per day. Similarly, Sophia completes 60 units of the work in 30 days, so she completes 60/30 = 2 units per day. Working together, they do 3 + 2 = 5 units per day. So, 60 units will be done in 60/5 = 12 days.
You should go through the following time and work examples in order to understand the concept better. This is one of the favourite areas of the examiner; you will see aptitude questions on time and work in almost all competitive examinations.
Example 1: Sam can do a job in 30 days. In how many days can he complete 70% of the job?
Sol: As per the question, he can do 100% of the work in 30 days. If he has to do only 70% of the work, he will require 70% of the time. Number of days required = 30 × 70/100 = 21 days.
Example 2: Rozy can do 75% of a job in 45 days. In how many days can she complete the job?
Sol: Every work is 100% in itself. Rozy does 75% of the work in 45 days.
That means she does 1% of the work in 45/75 days, and she will do 100% of the work in 100 × 45/75 = 60 days.
Example 3: John can do a piece of work in 60 days; how much of the work will he do in 40 days?
Sol: In 1 day, John does 1/60th of the work, so in 40 days he will do 40 × 1/60 = 2/3rd of the work.
Example 4: Andy can finish a piece of work in 30 days. What percent of the work will he finish in 15 days?
Sol: In 1 day, he does 1/30th of the work, and in 15 days, he will do 15/30th of the work, which is 100 × 15/30 = 50%.
Example 5: Ria can do a piece of work in 40 days; how many days will she take to finish three-fourths of the work?
Sol: Ria can complete the work in 40 days. She will do 3/4th of the work in 3/4th of the total time, i.e. she will need 40 × 3/4 = 30 days.
Some more examples
Try to solve the following time and work questions using the time and work formula:
Q.1. Raj is twice as efficient as Sally and can finish a piece of work in 25 days less than Sally. Sally can finish this work in how many days?
a) 45 Days b) 30 Days c) 90 Days d) 25 Days e) 50 Days
Answer & Explanation
Sol: Option E
Explanation: Efficiency of Raj : Efficiency of Sally = 2 : 1, so Raj will take 1/2 the time as compared to Sally. Say Sally takes 2x days and Raj takes x days. Therefore, 2x - x = 25 => x = 25. Therefore, Sally takes 25 × 2 = 50 days to do the work.
Q.2. A can do a piece of work in 10 days, and B can do the same work in 20 days. With the help of C, they finished the work in 4 days. C can do the work in how many days, working alone?
a) 10 Days b) 20 Days c) 30 Days d) 40 Days e) 50 Days
Answer & Explanation
Sol: Option A
Explanation: C alone will take 1/4 - 1/10 - 1/20 = 2/20 = 1/10 => 10 days to complete the work.
Q.3. Daisy is thrice as efficient as Mary and together they can finish a piece of work in 60 days. Mary will take how many days to finish this work alone?
a) 80 Days b) 160 Days c) 240 Days d) 320 Days e) 400 Days
Answer & Explanation
Sol: Option C
Explanation: Daisy is thrice as efficient as Mary. Let Mary take 3x days and Daisy take x days to complete the work. So 1/x + 1/(3x) = 1/60 => x = 80. So, Mary will take 80 × 3 = 240 days to complete the work.
Q.4. Mr. David has a sum of money, which is enough to pay Stanley's wages for 30 days and Monika's wages for 60 days. If he employs them together, the money is enough to pay their wages for how many days?
a) 12 Days b) 10 Days c) 30 Days d) 20 Days e) 36 Days
Answer & Explanation
Sol: Option D
Explanation: The concept is the same here as in normal time and work problems. Stanley's one-day wage bill is 1/30 of the total money. Monika's wage bill is 1/60 of the total money. That means together their wage bill is 1/30 + 1/60 = 3/60 = 1/20 of the total money. Thus, the money is enough for their wages for 20 days.
Q.5. X can do a piece of work in 20 days. He worked at it for 5 days and then Y finished it in 15 days. In how many days can X and Y together finish the work?
a) 10 Days b) 15 Days c) 18 Days d) 24 Days e) 32 Days
Answer & Explanation
Sol: Option A
Explanation: X's five days' work = 5/20 = 1/4. Remaining work = 1 - 1/4 = 3/4. This work was done by Y in 15 days. Since Y does 3/4th of the work in 15 days, he will finish the whole work in 15 × 4/3 = 20 days. X and Y together would take 1/20 + 1/20 = 2/20 = 1/10, i.e. 10 days to complete the work.
Q.6. X can do a piece of work in 30 days. Y can do it in 20 days, and Z can do it in 24 days. In how many days will they all do it together?
a) 6 Days b) 5 Days c) 4 Days d) 4 2/3 Days e) 8 Days
Answer & Explanation
Sol: Option E
Explanation: All together they will take 1/30 + 1/20 + 1/24 = 1/8 => 8 days to complete the work.
Q.7. M and N can do a piece of work in 8 days and O can do it in 24 days. In how many days will M, N, and O do it together?
a) 6 Days b) 5 Days c) 4 Days d) 4 2/3 Days e) 3 Days
Answer & Explanation
Sol: Option A
Explanation: They will together take 1/8 + 1/24 = 1/6 => 6 days to finish the work.
Q.8. One man can paint a house in 10 days and another man can do it in 15 days. If they work together, they can do it in 'd' days; the value of 'd' is
a) 6 Days b) 8 Days c) 12 Days d) 10 Days e) 3 Days
Answer & Explanation
Sol: Option A
Explanation: Working together, 1/d = 1/10 + 1/15 = 1/6 => d = 6 days to complete the work.
Q.9. A can do a piece of work in 9 days and B in 18 days. They begin together, but A goes away three days before the work is finished. The work lasts for
a) 6 Days b) 8 Days c) 12 Days d) 10 Days e) 3 Days
Answer & Explanation
Sol: Option B
Explanation: Let the work last for x days. B works for x days and A works for x - 3 days. So (x - 3)/9 + x/18 = 1 => x = 8.
Q.10. It takes 'h' hours to mow a lawn. In one hour, what part of the lawn is mowed?
a) h - 1 b) 1 - h c) 1/h d) h/(1 + h) e) h
Answer & Explanation
Sol: Option C
Explanation: Total time to mow the lawn = h. So in one hour, 1/h of the lawn is mowed.
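The combined-rate formula used throughout these questions, 1/r + 1/s = 1/h, can be verified with exact fraction arithmetic. A quick Python sketch (the helper name is ours):

```python
from fractions import Fraction

def days_together(*days_alone):
    """Combined time: 1/h = 1/r + 1/s + ...  =>  h = 1 / (sum of rates)."""
    rate = sum(Fraction(1, d) for d in days_alone)
    return 1 / rate

# Q.2: A (10 days) and B (20 days) finish with C in 4 days,
# so C's rate is 1/4 - 1/10 - 1/20 = 1/10  =>  C alone takes 10 days.
c_rate = Fraction(1, 4) - Fraction(1, 10) - Fraction(1, 20)
assert 1 / c_rate == 10

# Q.6: X (30 days), Y (20 days), Z (24 days) together take 8 days.
assert days_together(30, 20, 24) == 8
```

Using `Fraction` instead of floats keeps results like 1/30 + 1/20 + 1/24 = 1/8 exact, which is exactly how the worked solutions manipulate them.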
{"url":"https://champslearning.co.uk/app/blog/time-and-work-concept","timestamp":"2024-11-04T11:53:22Z","content_type":"text/html","content_length":"40218","record_id":"<urn:uuid:724431ad-546d-478f-9a02-c632c12e4e8a>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00558.warc.gz"}
psi4-dbgsym 1:0.3-4ubuntu1 (armhf binary) in ubuntu xenial
PSI4 is an ab-initio quantum chemistry program. It is especially designed to accurately compute properties of small to medium molecules using highly correlated techniques. PSI4 is the parallelized successor of PSI3 and includes many state-of-the-art theoretical methods. It can compute energies and gradients for the following methods:
* Restricted, unrestricted and general restricted open shell Hartree-Fock
* Restricted, unrestricted and general restricted open shell Density-Functional Theory, including density-fitting (DF-DFT)
* Density Cumulant Functional Theory (DCFT)
* Closed-shell Density-fitted Moeller-Plesset perturbation theory (DF-MP2)
* Unrestricted Moeller-Plesset perturbation theory (MP2)
* Orbital-Optimized MP2 theory (OMP2)
* Third order Moeller-Plesset perturbation theory (MP3)
* Orbital-Optimized MP3 theory (OMP3)
* Coupled-cluster singles doubles (CCSD)
* Coupled-cluster singles doubles with perturbative triples (CCSD(T)) (only for unrestricted (UHF) reference wavefunctions)
* Equation-of-motion coupled-cluster singles doubles (EOM-CCSD)
Additionally, it can compute energies for the following methods:
* Closed/open shell Moeller-Plesset perturbation theory (MP2)
* Spin-component scaled MP2 theory (SCS-MP2)
* Fourth order Moeller-Plesset perturbation theory (MP4)
* Density-fitted symmetry-adapted perturbation theory (DF-SAPT)
* Multireference configuration-interaction (MRCI)
* Closed-shell Density-fitted coupled-cluster singles doubles (DF-CCSD)
* Closed-shell Density-fitted coupled-cluster singles doubles with perturbative triples (DF-CCSD(T))
* Second/third-order approximate coupled-cluster singles doubles (CC2/CC3)
* Mukherjee Multireference coupled-cluster singles doubles theory (mk-MRCCSD)
* Mukherjee Multireference coupled-cluster singles doubles with perturbative triples theory (mk-MRCCSD(T))
* Second order algebraic-diagrammatic construction theory (ADC(2))
* Quadratic configuration interaction singles doubles (QCISD)
* Quadratic configuration interaction singles doubles with perturbative triples (QCISD(T))
* Density Matrix Renormalization Group SCF (DMRG-SCF) and CI (DMRG-CI)
Further features include:
* Flexible, modular and customizable input format via python
* Excited state calculations with the EOM-CC2/CC3, EOM-CCSD, ADC(2), MRCI and mk-MRCC methods
* Utilization of molecular point-group symmetry to increase efficiency
* Internal coordinate geometry optimizer
* Harmonic frequencies calculations (via finite differences)
* Potential surface scans
* Counterpoise correction
* One-electron properties like dipole/quadrupole moments, transition dipole moments, natural orbital occupations or electrostatic potential
* Composite methods like complete basis set extrapolation or G2/G3
Package version:
{"url":"https://answers.launchpad.net/ubuntu/xenial/armhf/psi4-dbgsym/1%3A0.3-4ubuntu1/+index","timestamp":"2024-11-13T10:58:43Z","content_type":"application/xhtml+xml","content_length":"18366","record_id":"<urn:uuid:29b17d03-06ae-4f2c-9e8b-1ef3de3c4a50>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00582.warc.gz"}
Revision #3 to TR21-009 | 9th November 2021 17:23 One-way Functions and a Conditional Variant of MKTP One-way functions (OWFs) are central objects of study in cryptography and computational complexity theory. In a seminal work, Liu and Pass (FOCS 2020) proved that the average-case hardness of computing time-bounded Kolmogorov complexity is equivalent to the existence of OWFs. It remained an open problem to establish such an equivalence for the average-case hardness of some natural $NP$-complete problem. In this paper, we make progress on this question by studying a conditional variant of the Minimum KT-complexity Problem (MKTP), which we call McKTP, as follows. 1. First, we prove that if McKTP is average-case hard on a polynomial fraction of its instances, then there exist OWFs. 2. Then, we observe that McKTP is $NP$-complete under polynomial-time randomized reductions. 3. Finally, we prove that the existence of OWFs implies the nontrivial average-case hardness of McKTP. Thus the existence of OWFs is inextricably linked to the average-case hardness of this $NP$-complete problem. In fact, building on recent results of Ren and Santhanam (CCC 2021), we show that McKTP is hard-on-average if and only if there are logspace-computable OWFs. Changes to previous version: We edited the timeline of related papers (that appears in the Introduction), to make it more precise. Revision #2 to TR21-009 | 19th October 2021 23:03 One-way Functions and a Conditional Variant of MKTP One-way functions (OWFs) are central objects of study in cryptography and computational complexity theory. In a seminal work, Liu and Pass (FOCS 2020) proved that the average-case hardness of computing time-bounded Kolmogorov complexity is equivalent to the existence of OWFs. It remained an open problem to establish such an equivalence for the average-case hardness of some natural $NP$-complete problem. 
In this paper, we make progress on this question by studying a conditional variant of the Minimum KT-complexity Problem (MKTP), which we call McKTP, as follows. 1. First, we prove that if McKTP is average-case hard on a polynomial fraction of its instances, then there exist OWFs. 2. Then, we observe that McKTP is $NP$-complete under polynomial-time randomized reductions. 3. Finally, we prove that the existence of OWFs implies the nontrivial average-case hardness of McKTP. Thus the existence of OWFs is inextricably linked to the average-case hardness of this $NP$-complete problem. In fact, building on recent results of Ren and Santhanam (CCC 2021), we show that McKTP is hard-on-average if and only if there are logspace-computable OWFs. Changes to previous version: This is an updated version that incorporates feedback from our reviewers. Thank you! One-way functions (OWFs) are central objects of study in cryptography and computational complexity theory. In a seminal work, Liu and Pass (FOCS 2020) proved that the average-case hardness of computing time-bounded Kolmogorov complexity is \emph{equivalent} to the existence of OWFs. It remained an open problem to establish such an equivalence for the average-case hardness of some $\mathsf {NP}$-complete problem. In this paper, we make progress on this question by studying a conditional variant of the Minimum KT-complexity Problem (MKTP), which we call McKTP, as follows. 1. First, we prove that if McKTP is average-case hard on a polynomial fraction of its instances, then there exist OWFs. 2. Then, we observe that McKTP is $\mathsf{NP}$-complete under polynomial-time randomized reductions. That is, there \emph{are} $\mathsf{NP}$-complete problems whose average-case hardness implies the existence of OWFs. 3. Finally, we prove that the existence of OWFs implies the nontrivial average-case hardness of McKTP. Thus the existence of OWFs is inextricably linked to the average-case hardness of this $\mathsf{NP}$-complete problem. 
Changes to previous version: We took care of the bugs found by Mikito Nanashima and Hanlin Ren. Thank you for your help! Just as we were preparing to post this article, we were made aware that Liu and Pass, working independently (and having seen the earlier version of our work, that had errors), now claim that a (slightly different) version of time-bounded conditional Kolmogorov complexity is 1. $\mathsf{NP}$-complete under polynomial-time randomized reductions, and 2. hard on average on a \emph{polynomial} fraction of its instances if and only if OWFs exist. The main points of departure between the work by Liu and Pass and ours, are that Liu and Pass consider conditional $K^t$ complexity, while we consider conditional KT complexity, and that their work claims an \emph{equivalence} between the average-case hardness of an $\mathsf{NP}$-complete problem and the existence of OWFs. TR21-009 | 1st February 2021 23:40 One-way Functions and Partial MCSP One-way functions (OWFs) are central objects of study in cryptography and computational complexity theory. In a seminal work, Liu and Pass (FOCS 2020) proved that the average-case hardness of computing time-bounded Kolmogorov complexity is equivalent to the existence of OWFs. It remained an open problem to establish such an equivalence for the average-case hardness of some NP-complete problem. In this paper, we make progress on this question by studying a polynomially-sparse variant of the Partial Minimum Circuit Size Problem (Partial MCSP), which we call Sparse Partial MCSP, as follows. 1. First, we prove that if Sparse Partial MCSP is zero-error average-case hard on a polynomial fraction of its instances, then there exist OWFs. 2. Then, we observe that Sparse Partial MCSP is NP-complete under polynomial-time deterministic reductions. That is, there are NP-complete problems whose average-case hardness implies the existence of OWFs. 3.
Finally, we prove that the existence of OWFs implies the nontrivial zero-error average-case hardness of Sparse Partial MCSP. Thus the existence of OWFs is inextricably linked to the average-case hardness of this NP-complete problem. Comment #1 to TR21-009 | 6th February 2021 17:16 One-way Functions and Partial MCSP Mikito Nanashima and Hanlin Ren reported to us bugs in the proofs of Lemma 4.3 and Lemma 4.4, respectively. Thank you very much! We are now working on fixing these bugs.
{"url":"https://eccc.weizmann.ac.il/report/2021/009/","timestamp":"2024-11-12T12:26:21Z","content_type":"application/xhtml+xml","content_length":"31490","record_id":"<urn:uuid:d29f294c-5416-41e3-aaae-d7bde7b1f668>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00448.warc.gz"}
Production scientifique

Matière Molle (424)

Articles in journals

Clogging of a single pore by colloidal particles
Author(s): Dersoir Benjamin, Robert de Saint Vincent Matthieu, Abkarian M., Tabuteau Hervé
(Article) Published: Microfluidics And Nanofluidics, vol. 19, p. 953-961 (2015)

The Influence of Long-Range Surface Forces on the Contact Angle of Nanometric Droplets and Bubbles
Author(s): Stocco A.
(Article) Published: Langmuir, vol. 31, p. 11835-11841 (2015) (full text in open access)

Conducting polymer nanofibers with controlled diameters synthesized in hexagonal mesophases
Author(s): Ghosh Srabanti, Ramos L., Remita Samy, Dazzi Alexandre, Deniset-Besseau Ariane, Beaunier Patricia, Goubard Fabrice, Aubert Pierre-Henri, Remita Hynd
(Article) Published: New Journal Of Chemistry, vol. p. 8311-8320 (2015) (full text in open access)

Simultaneous Phase Transfer and Surface Modification of TiO2 Nanoparticles Using Alkylphosphonic Acids: Optimization and Structure of the Organosols
Author(s): Schmitt Pauly Céline, Genix A.-C., Alauzun Johan G., Guerrero Gilles, Appavou Marie-Sousai, Javier Pérez, Oberdisse J., Mutin P. Hubert
(Article) Published: Langmuir, vol. 31, p. 10966-10974 (2015)

Holographic microscopy reconstruction in both object and image half-spaces with an undistorted three-dimensional grid
Author(s): Verrier N., Alexandre D., Tessier Gilles, Gross M.
(Article) Published: Applied Optics, vol. 54, p. 4672-4677 (2015) (full text in open access)

Enhanced active motion of Janus colloids at the water surface
Author(s): Wang X., In M., Blanc C., Nobili M., Stocco A.
(Article) Published: Soft Matter, vol. 11, p. 7376-7384 (2015)

Viscoelasticity of colloidal polycrystals doped with impurities
Author(s): Louhichi A., Tamborini E., Oberdisse J., Cipelletti L., Ramos L.
(Article) Published: Physical Review E: Statistical, Nonlinear, And Soft Matter Physics, vol. 92, p. 032307 (2015) (full text in open access)
Mathematics in Education, Research and Applications (MERAA), ISSN 2453-6881
Math Educ Res Appl, 2015(1), 1
Received: 2014-11-12 | Accepted: 2015-02-06 | Online published: 2015-05-25
DOI: 10.15414/meraa.2015.01.01.18-22

Original paper

The centre of gravity in technical practice

Vladimír Matušek1, Eva Matušeková2
1 Slovak University of Agriculture, Faculty of Economics and Management, Department of Mathematics, Tr. A. Hlinku 2, 949 76 Nitra, Slovak Republic
2 Slovak University of Agriculture, Faculty of Economics and Management, Department of Languages, Tr. A. Hlinku 2, 949 76 Nitra, Slovak Republic

Corresponding author: Vladimír Matušek, Slovak University of Agriculture, Faculty of Economics and Management, Department of Mathematics, Tr. A. Hlinku 2, 949 76 Nitra, Slovak Republic. E-mail: vladimir.matusek@uniag.sk

The aim of this paper is to show the different methods of determining the position of a centre of gravity in education: the derivation of a formula for calculating the centre of gravity of a trapezoid, and the derivation of a formula for calculating the volume of a truncated cylinder using the centre of gravity. The centre of gravity can be determined graphically, by calculation, and experimentally. We use the calculation of the position of the centre of gravity as an application of mathematics in engineering branches.

KEYWORDS: centre of gravity, application, trapezoid, truncated cylinder
JEL CLASSIFICATION: I21, J25

The concept of "a centre of mass" in the form of "a centre of gravity" was first introduced by the ancient Greek physicist, mathematician, and engineer Archimedes of Syracuse. He worked with simplified assumptions about gravity that amount to a uniform field, thus arriving at the mathematical properties of what we now call the centre of mass [1]. Archimedes showed that the torque exerted on a lever by weights resting at various points along the lever is the same as what it would be if all of the weights were moved to a single point, their centre of mass. In work on floating bodies he demonstrated that the orientation of a floating object is the one that makes its centre of mass as low as possible. He developed mathematical techniques for finding the centres of mass of objects of uniform density of various well-defined shapes. Later mathematicians who developed the theory of the centre of mass include Pappus of Alexandria, Guido Ubaldi, Francesco Maurolico, Federico Commandino, Simon Stevin, Luca Valerio, Jean-Charles de la Faille, Paul Guldin, John Wallis, Louis Carré, Pierre Varignon, and Alexis Clairaut. Newton's second law is reformulated with respect to the centre of mass in Euler's first law.

The experimental determination of the centre of mass of a body uses gravity forces on the body and relies on the fact that, in the parallel gravity field near the surface of the earth, the centre of mass is the same as the centre of gravity. The centre of mass of a body with an axis of symmetry and constant density must lie on this axis. Thus, a circular cylinder of constant density has its centre of mass on the axis of the cylinder. In the same way, the centre of mass of a spherically symmetric body of constant density is at the centre of the sphere. In general, for any symmetry of a body, its centre of mass will be a fixed point of that symmetry [2].

The term centre of gravity is first introduced to pupils at elementary schools. They receive further information about the problem at secondary schools and universities.
Table 1 shows different methods of teaching the centre of gravity in connection with the particular type of school.

Table 1: Centre of gravity at different types of schools [table content garbled in extraction]

Calculate the coordinates of the centre of gravity of the isosceles trapezoid shown in Fig. 1. Obviously, the centre of gravity lies on the axis of symmetry of the trapezoid, and the coordinate along that axis can be found out easily by taking midpoints. Next, we concentrate on the calculation of the remaining coordinate. In the calculation we use a double integral; therefore, we must determine the equations of the straight lines bounding the trapezoid, which we create by applying the knowledge of analytic geometry. The sought coordinate is then calculated in accordance with the formula

x_CG = (1/S) ∬_D x dx dy,

where the area S of the trapezoid can be calculated by a definite integral. After adjusting we get

x_CG = d(a + 2c) / (3(a + c)),

the distance from the longer parallel side a, where c is the shorter parallel side and d is the height of the trapezoid.

Fig. 1: The coordinates of the centre of gravity of an isosceles trapezoid

The above consideration can be generalized: the centre of area (the centre of mass for a uniform lamina) lies along the line joining the midpoints of the parallel sides, at a perpendicular distance x from the longer side a. The situation is illustrated in Fig. 2.

Fig. 2: Trapezoid - the coordinate x (Source: our own)

In terms of gravity and its applications, the task of calculating the volume of a truncated cylinder is really interesting. Let us suppose that the cylindrical body has a projection D in the plane (x, y) and is bounded by a plane from above. The cylindrical body is shown in Fig. 3.

Fig. 3: Truncated cylinder

Let us express the plane by the function f(x, y) = px + qy + r and calculate the volume of the body:

V = ∬_D f(x, y) dx dy = ∬_D (px + qy + r) dx dy = p ∬_D x dx dy + q ∬_D y dx dy + r ∬_D dx dy.

We adjust the given terms so that the formula for the centre of gravity of a shape appears in the last expression. The integrals ∬_D x dx dy and ∬_D y dx dy are the first moments of area with respect to the y and x axes, and P = ∬_D dx dy is the area of the region D, that is, the area of the base of the cylinder. After some further adjustments we get

V = P (p x_CG + q y_CG + r) = P z_CG,

where (x_CG, y_CG) is the centre of gravity of the base and z_CG is the height of the cutting plane above it.

Calculation of the solid's centre of gravity plays a very important part in the educational process, not only at elementary and secondary schools but predominantly at technical universities. In this paper we derived the formula for calculating the centre of gravity of a trapezoid when the sizes of its sides are given, and we derived the formula for calculating the volume of a truncated cylinder using the centre of gravity. It is important for teachers to have this kind of experience when explaining the centre of gravity to students in different subjects (mathematics, physics, technical mathematics, statics, etc.).

[1] Goldstein, H., Poole, Ch., Safko, J. (2001). Classical Mechanics. 3rd Edition. Addison Wesley.
[2] Miškin, A. (1975). Introductory Mathematics for Engineers. Oxford: Wiley.
[3] Wikipedia. (2014). Center of mass. [cit. 2014-10-14]. Retrieved from

Reviewed by
1. Doc. Jozef Rédl, PhD., Department of Machine Design, Faculty of Engineering, Slovak University of Agriculture in Nitra, Tr. A. Hlinku 2, 94976 Nitra
2. Doc. Pavol Findura, PhD., Department of Machines and Production Systems, Faculty of Engineering, Slovak University of Agriculture, Tr. A. Hlinku 2, 94976 Nitra
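Both closed-form results in the paper can be checked numerically. The sketch below is my own (the function names are invented, not from the paper): it integrates over horizontal slices of the trapezoid, and over a disk-shaped cylinder base in polar coordinates, then compares against the trapezoid-centroid formula and against V = P·z_CG.

```python
import math

def trapezoid_centroid_height(a, c, h, n=2000):
    """Centroid distance from the longer side a, by horizontal slices:
    the slice width varies linearly from a at y=0 to c at y=h."""
    area = moment = 0.0
    dy = h / n
    for i in range(n):
        y = (i + 0.5) * dy
        w = a + (c - a) * y / h       # width of the slice at height y
        area += w * dy
        moment += y * w * dy
    return moment / area

def truncated_cylinder_volume(R, x0, y0, p, q, r, nr=400, nt=360):
    """Volume under the plane z = p*x + q*y + r over the disk of radius R
    centred at (x0, y0), by midpoint integration in polar coordinates."""
    V = 0.0
    dr, dt = R / nr, 2 * math.pi / nt
    for i in range(nr):
        rad = (i + 0.5) * dr
        for j in range(nt):
            th = (j + 0.5) * dt
            x, y = x0 + rad * math.cos(th), y0 + rad * math.sin(th)
            V += (p * x + q * y + r) * rad * dr * dt
    return V

# Trapezoid: centroid distance from the longer side a is h(a + 2c) / (3(a + c)).
a, c, h = 6.0, 2.0, 3.0
assert abs(trapezoid_centroid_height(a, c, h) - h * (a + 2 * c) / (3 * (a + c))) < 1e-5

# Truncated cylinder: V = P * z_CG, the base area times the height of the
# cutting plane above the centre of gravity of the base.
R, x0, y0, p, q, r = 1.5, 0.7, -0.3, 0.2, 0.4, 5.0
P = math.pi * R * R
z_cg = p * x0 + q * y0 + r
assert abs(truncated_cylinder_volume(R, x0, y0, p, q, r) - P * z_cg) < 1e-5
```

Both checks agree with the closed forms to within the grid resolution; for the rectangle (c = a) and triangle (c = 0) limits, the centroid formula reduces to h/2 and h/3, as expected.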
The Stacks project

A representable morphism $X \to Y$ of algebraic spaces is a monomorphism according to Section 67.3 if for every scheme $Z$ and morphism $Z \to Y$ the morphism $Z \times_Y X \to Z$ is representable by a monomorphism of schemes. This means exactly that $Z \times_Y X \to Z$ is an injective map of sheaves on $(\mathit{Sch}/S)_{fppf}$. Since this is supposed to hold for all $Z$ and all maps $Z \to Y$, this is in turn equivalent to the map $X \to Y$ being an injective map of sheaves on $(\mathit{Sch}/S)_{fppf}$. Thus we may define when a (possibly nonrepresentable) morphism of algebraic spaces is a monomorphism as follows.

Definition 67.10.1. Let $S$ be a scheme. A morphism of algebraic spaces over $S$ is called a monomorphism if it is an injective map of sheaves, i.e., a monomorphism in the category of sheaves on $(\mathit{Sch}/S)_{fppf}$.

The following lemma shows that this also means that it is a monomorphism in the category of algebraic spaces over $S$.

Lemma 67.10.2. Let $S$ be a scheme. Let $j : X \to Y$ be a morphism of algebraic spaces over $S$. The following are equivalent:
(1) $j$ is a monomorphism (as in Definition 67.10.1),
(2) $j$ is a monomorphism in the category of algebraic spaces over $S$, and
(3) the diagonal morphism $\Delta_{X/Y} : X \to X \times_Y X$ is an isomorphism.

Proof. Note that $X \times_Y X$ is both the fibre product in the category of sheaves on $(\mathit{Sch}/S)_{fppf}$ and the fibre product in the category of algebraic spaces over $S$, see Spaces, Lemma 65.7.3. The equivalence of (1) and (3) is a general characterization of injective maps of sheaves on any site. The equivalence of (2) and (3) is a characterization of monomorphisms in any category with fibre products.
$\square$

Lemma 67.10.3. A monomorphism of algebraic spaces is separated.

Proof. This is true because an isomorphism is a closed immersion, and Lemma 67.10.2 above. $\square$

Lemma 67.10.4. A composition of monomorphisms is a monomorphism.

Proof. True because a composition of injective sheaf maps is injective. $\square$

Lemma 67.10.5. The base change of a monomorphism is a monomorphism.

Proof. This is a general fact about fibre products in a category of sheaves. $\square$

Lemma 67.10.6. Let $S$ be a scheme. Let $f : X \to Y$ be a morphism of algebraic spaces over $S$. The following are equivalent:
(1) $f$ is a monomorphism,
(2) for every scheme $Z$ and morphism $Z \to Y$ the base change $Z \times_Y X \to Z$ of $f$ is a monomorphism,
(3) for every affine scheme $Z$ and every morphism $Z \to Y$ the base change $Z \times_Y X \to Z$ of $f$ is a monomorphism,
(4) there exists a scheme $V$ and a surjective étale morphism $V \to Y$ such that the base change $V \times_Y X \to V$ is a monomorphism, and
(5) there exists a Zariski covering $Y = \bigcup Y_i$ such that each of the morphisms $f^{-1}(Y_i) \to Y_i$ is a monomorphism.

Proof. We will use without further mention that a base change of a monomorphism is a monomorphism, see Lemma 67.10.5. In particular it is clear that (1) $\Rightarrow$ (2) $\Rightarrow$ (3) $\Rightarrow$ (4) (by taking $V$ to be a disjoint union of affine schemes étale over $Y$, see Properties of Spaces, Lemma 66.6.1).
Let $V$ be a scheme, and let $V \to Y$ be a surjective étale morphism. If $V \times_Y X \to V$ is a monomorphism, then it follows that $X \to Y$ is a monomorphism. Namely, given any cartesian diagram of sheaves
\[ \vcenter{ \xymatrix{ \mathcal{F} \ar[r]_a \ar[d]_b & \mathcal{G} \ar[d]^c \\ \mathcal{H} \ar[r]^d & \mathcal{I} } } \quad \quad \mathcal{F} = \mathcal{H} \times_\mathcal{I} \mathcal{G} \]
if $c$ is a surjection of sheaves, and $a$ is injective, then also $d$ is injective. Thus (4) implies (1). Proof of the equivalence of (5) and (1) is omitted. $\square$

Lemma 67.10.7. An immersion of algebraic spaces is a monomorphism. In particular, any immersion is separated.

Proof. Let $f : X \to Y$ be an immersion of algebraic spaces. For any morphism $Z \to Y$ with $Z$ representable the base change $Z \times_Y X \to Z$ is an immersion of schemes, hence a monomorphism, see Schemes, Lemma 26.23.8. Hence $f$ is representable, and a monomorphism. $\square$

We will improve on the following lemma in Decent Spaces, Lemma 68.19.1.

Lemma 67.10.8. Let $S$ be a scheme. Let $k$ be a field and let $Z \to \mathop{\mathrm{Spec}}(k)$ be a monomorphism of algebraic spaces over $S$. Then either $Z = \emptyset$ or $Z = \mathop{\mathrm{Spec}}(k)$.

Proof. By Lemmas 67.10.3 and 67.4.9 we see that $Z$ is a separated algebraic space. Hence there exists an open dense subspace $Z' \subset Z$ which is a scheme, see Properties of Spaces, Proposition 66.13.3. By Schemes, Lemma 26.23.11 we see that either $Z' = \emptyset$ or $Z' \cong \mathop{\mathrm{Spec}}(k)$. In the first case we conclude that $Z = \emptyset$ and in the second case we conclude that $Z' = Z = \mathop{\mathrm{Spec}}(k)$ as $Z \to \mathop{\mathrm{Spec}}(k)$ is a monomorphism which is an isomorphism over $Z'$. $\square$

Lemma 67.10.9. Let $S$ be a scheme. If $X \to Y$ is a monomorphism of algebraic spaces over $S$, then $|X| \to |Y|$ is injective.
Afterglow constraints on the viewing angle of binary neutron star mergers and determination of the Hubble constant

Published in The Astrophysical Journal 909, 114 (Wednesday, March 10, 2021)

One of the key properties of any binary is its viewing angle (i.e., inclination), θobs. In binary neutron star (BNS) mergers it is of special importance due to the role that it plays in the measurement of the Hubble constant, H0. The opening angle of the jet that these mergers launch, θj, is also of special interest. Following the detection of the first BNS merger, GW170817, there were numerous attempts to estimate these angles using the afterglow light curve, finding a wide range of values. Here we provide a simple formula for the ratio θobs/θj based on the afterglow light curve and show that this is the only quantity that can be determined from the light curve alone. Namely, it is impossible to determine each of the angles separately without additional information. Our result explains the inconsistency of the values found by the various studies of GW170817, which were largely driven by the different priors taken in each study. Among the additional information that can be used to estimate θobs and θj, the most useful is a VLBI measurement of the afterglow image superluminal motion. An alternative is an identification of the afterglow transition to the sub-relativistic phase. These observations are possible only for mergers observed at small viewing angles, whose afterglow is significantly brighter than the detector's threshold. We discuss the implications of these results for measurements of H0 using GW observations. We show that while the viewing angle will be measured only in a small fraction of future BNS mergers, it can significantly reduce the uncertainty in H0 in each one of these events, possibly to a level of 4-5%.
In the future, the minority of mergers with such high-precision measurements may come to dominate the overall precision with which H0 is measured using this method.
Math 420 - Supplement on Gaussian integers

This is a brief supplemental note on the Gaussian integers, written for my Spring 2016 Elementary Number Theory class at Brown University. With respect to the book, the nearest material is the material in Chapters 35 and 36, but we take a very different approach. A pdf of this note can be found here. I'm sure there are typos, so feel free to ask me or correct me if you think something is amiss.

In this note, we cover the following topics.

1. What are the Gaussian integers?
2. Unique factorization within the Gaussian integers.
3. An application of the Gaussian integers to the Diophantine equation ${y^2 = x^3 - 1}$.
4. Other integer-like sets: general rings.
5. Specific examples within ${\mathbb{Z}[\sqrt{2}]}$ and ${\mathbb{Z}[\sqrt{-5}]}$.

1. What are the Gaussian Integers?

The Gaussian Integers are the set of numbers of the form ${a + bi}$, where ${a}$ and ${b}$ are ordinary integers and ${i}$ is a number satisfying ${i^2 = -1}$. As a collection, the Gaussian Integers are represented by the symbol ${\mathbb{Z}[i]}$, or sometimes ${\mathbb{Z}[\sqrt{-1}]}$. This might be pronounced either as "the Gaussian Integers" or as "Z adjoin i".

In many ways, the Gaussian integers behave very much like the regular integers. We've been studying the qualities of the integers, but we should ask: which properties are really properties of the integers, and which properties hold in greater generality? Is it the integers themselves that are special, or is there something bigger and deeper going on? These are the main questions that we ask and make some progress towards in these notes. But first, we need to describe some properties of Gaussian integers.

We will usually use the symbol ${z = a + bi}$ to represent our typical Gaussian integer. One adds and multiplies two Gaussian integers just as you would add and multiply two complex numbers.
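This arithmetic can be tried out directly with Python's built-in complex type (a small sketch of mine, not part of the note; `1j` stands for ${i}$):

```python
# Gaussian integers modelled as Python complex numbers with integer parts.
z, w = 2 + 5j, 3 - 1j

assert 1j * 1j == -1      # i^2 = -1
assert z + w == 5 + 4j    # add the real and imaginary parts separately
assert z * w == 11 + 13j  # (2+5i)(3-i) = 6 - 2i + 15i - 5i^2 = 11 + 13i
```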
Informally, you treat ${i}$ like a polynomial indeterminate ${X}$, except that it satisfies the relation ${X^2 = -1}$.

Definition 1 For each complex number ${z = a + bi}$, we define the conjugate of ${z}$, written as ${\overline{z}}$, by $$\overline{z} = a - bi.$$ We also define the norm of ${z}$, written as ${N(z)}$, by $$N(z) = a^2 + b^2.$$

You can check that ${N(z) = z \overline{z}}$ (and in fact this is one of your assigned problems). You can also check that ${N(zw) = N(z)N(w)}$, or rather that the norm is multiplicative (this is also one of your assigned problems).

Even from our notation, it's intuitive that ${z = a + bi}$ has two parts, the part corresponding to ${a}$ and the part corresponding to ${b}$. We call ${a}$ the real part of ${z}$, written as ${\Re z = a}$, and we call ${b}$ the imaginary part of ${z}$, written as ${\Im z = b}$. I should add that the name ''imaginary number'' is a poor name that reflects historical reluctance to view complex numbers as acceptable. For that matter, the name ''complex number'' is also a poor name.

As a brief example, consider the Gaussian integer ${z = 2 + 5i}$. Then ${N(z) = 4 + 25 = 29}$, ${\Re z = 2}$, ${\Im z = 5}$, and ${\overline{z} = 2 - 5i}$.

We can ask similar questions to those we asked about the regular integers. What does it mean for ${z \mid w}$ in the complex case?

Definition 2 We say that a Gaussian integer ${z}$ divides another Gaussian integer ${w}$ if there is some Gaussian integer ${k}$ so that ${zk = w}$. In this case, we write ${z \mid w}$, just as we write for regular integers.

For the integers, we immediately began to study the properties of the primes, which in many ways were the building blocks of the integers. Recall that for the regular integers, we said ${p}$ was a prime if its only divisors were ${\pm 1}$ and ${\pm p}$. In the Gaussian integers, the four numbers ${\pm 1, \pm i}$ play the same role as ${\pm 1}$ in the usual integers.
These four numbers are distinguished as being the only four Gaussian integers with norm equal to ${1}$. That is, the only solutions to ${N(z) = 1}$ where ${z}$ is a Gaussian integer are ${z = \pm 1, \pm i}$. We call these four numbers the Gaussian units.

With this in mind, we are ready to define the notion of a prime for the Gaussian integers.

Definition 3 We say that a Gaussian integer ${z}$ with ${N(z) > 1}$ is a Gaussian prime if the only divisors of ${z}$ are ${u}$ and ${uz}$, where ${u = \pm 1, \pm i}$ is a Gaussian unit.

Remark 1 When we look at other integer-like sets, we will actually use a different definition of a prime.

It's natural to ask whether the normal primes in ${\mathbb{Z}}$ are also primes in ${\mathbb{Z}[i]}$. And the answer is no. For instance, ${5}$ is a prime in ${\mathbb{Z}}$, but $$5 = (1 + 2i)(1 - 2i)$$ in the Gaussian integers. However, the two Gaussian integers ${1 + 2i}$ and ${1 - 2i}$ are themselves Gaussian primes. It also happens that ${3}$ is a Gaussian prime. We will continue to investigate which numbers are Gaussian primes over the next few lectures.

With a concept of a prime, it's also natural to ask whether or not the primes form the building blocks for the Gaussian integers like they form the building blocks for the regular integers. We take this up in our next topic.

2. Unique Factorization in the Gaussian Integers

Let us review the steps that we followed to prove unique factorization for ${\mathbb{Z}}$.

1. We proved that for ${a,b}$ in ${\mathbb{Z}}$ with ${b \neq 0}$, there exist unique ${q}$ and ${r}$ such that ${a = bq + r}$ with ${0 \leq r < b}$. This is called the Division Algorithm.
2. By repeatedly applying the Division Algorithm, we proved the Euclidean Algorithm. In particular, we showed that the last nonzero remainder was the GCD of our initial numbers.
3.
By performing reverse substitution on the steps of the Euclidean Algorithm, we showed that there are integer solutions in ${x,y}$ to the Diophantine equation ${ax + by = \gcd(a,b)}$. This is often called Bezout's Theorem or Bezout's Lemma, although we never called it by that name in class.
4. With Bezout's Theorem, we showed that if a prime ${p}$ divides ${ab}$, then ${p \mid a}$ or ${p \mid b}$. This is the crucial step towards proving Unique Factorization.
5. We then proved Unique Factorization.

Each step of this process can be repeated for the Gaussian integers, with a few notable differences. Remarkably, once we have the division algorithm, each proof is almost identical for ${\mathbb{Z}[i]}$ as it is for ${\mathbb{Z}}$. So we will prove the division algorithm, and then give sketches of the remaining ideas, highlighting the differences that come up along the way.

In the division algorithm, we require the remainder ${r}$ to ''be less than what we are dividing by.'' A big problem in translating this to the Gaussian integers is that the Gaussian integers are not ordered. That is, we don't have a concept of being greater than or less than for ${\mathbb{Z}[i]}$. When this sort of problem emerges, we will get around it by taking norms. Since the norm of a Gaussian integer is an ordinary integer, we will be able to use the ordering of the integers to order our Gaussian integers.

Theorem 4 For ${z,w}$ in ${\mathbb{Z}[i]}$ with ${w \neq 0}$, there exist ${q}$ and ${r}$ in ${\mathbb{Z}[i]}$ such that ${z = qw + r}$ with ${N(r) < N(w)}$.

Proof: Here, we will cheat a little bit and use properties about general complex numbers and the rationals to perform this proof. One can give an entirely intrinsic proof, but I like the approach I give as it also informs how to actually compute the ${q}$ and ${r}$. The entire proof boils down to the idea of writing ${z/w}$ as a fraction and approximating the real and imaginary parts by the nearest integers. Let us now transcribe that idea.
We will need to introduce some additional symbols. Let ${z = a_1 + b_1 i}$ and ${w = a_2 + b_2 i}$. Then
$$\frac{z}{w} = \frac{a_1 + b_1 i}{a_2 + b_2 i} = \frac{a_1 + b_1 i}{a_2 + b_2 i} \cdot \frac{a_2 - b_2 i}{a_2 - b_2 i} = \frac{a_1a_2 + b_1 b_2}{a_2^2 + b_2^2} + i \, \frac{b_1 a_2 - a_1 b_2}{a_2^2 + b_2^2} = u + iv.$$
By rationalizing the denominator, multiplying by ${\overline{w}/\overline{w}}$, we are able to separate out the real and imaginary parts. In this final expression, we have named ${u}$ to be the real part and ${v}$ to be the imaginary part. Notice that ${u}$ and ${v}$ are ordinary rational numbers.

We know that for any rational number ${u}$, there is an integer ${u'}$ such that ${\lvert u - u' \rvert \leq \frac{1}{2}}$. Let ${u'}$ and ${v'}$ be integers within ${1/2}$ of ${u}$ and ${v}$ above. Then we claim that we can choose ${q = u' + i v'}$ to be the ${q}$ in the theorem statement, and let ${r}$ be the resulting remainder, ${r = z - qw}$. We need to check that ${N(r) < N(w)}$. We will check that explicitly. We compute
$$N(r) = N(z - qw) = N\left(w \left(\frac{z}{w} - q\right)\right) = N(w) N\left(\frac{z}{w} - q\right).$$
Note that we have used that ${N(ab) = N(a)N(b)}$. In this final expression, we have already come across ${\frac{z}{w}}$ before: it's exactly what we called ${u + iv}$. And we called ${q = u' + i v'}$. So our final expression is the same as
$$N(r) = N(w) N(u + iv - u' - i v') = N(w) N\left( (u - u') + i (v - v')\right).$$
How large can the real and imaginary parts of ${(u-u') + i (v - v')}$ be? By our choice of ${u'}$ and ${v'}$, they can be at most ${1/2}$. So we have that
$$N(r) \leq N(w) \left( \left(\tfrac{1}{2}\right)^2 + \left(\tfrac{1}{2}\right)^2\right) = \frac{1}{2} N(w).$$
And so in particular, we have that ${N(r) < N(w)}$ as we needed. $\Box$

Note that in this proof, we did not actually show that ${q}$ or ${r}$ are unique. In fact, unlike the case of the regular integers, it is not true that ${q}$ and ${r}$ are unique.
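The construction in the proof (round the real and imaginary parts of ${z/w}$ to the nearest integers) translates directly into code. The sketch below is my own, with invented names; it also exhibits a second admissible choice of ${q}$, showing the nonuniqueness just mentioned.

```python
# Division algorithm in Z[i], modelled with Python complex numbers.

def norm(z: complex) -> float:
    """N(a + bi) = a^2 + b^2."""
    return z.real ** 2 + z.imag ** 2

def gauss_divmod(z: complex, w: complex):
    """Round z/w componentwise to get q, then take r = z - q*w."""
    t = z / w
    q = complex(round(t.real), round(t.imag))
    return q, z - q * w

z, w = 3 + 5j, 1 + 2j
q, r = gauss_divmod(z, w)
assert z == q * w + r and norm(r) < norm(w)

# q and r are not unique: q = 2 also leaves a small enough remainder.
q2 = 2
r2 = z - q2 * w
assert z == q2 * w + r2 and norm(r2) < norm(w)
```

Note that Python's `round` breaks ties to the even integer, which is harmless here: any integer within ${1/2}$ of each part satisfies the bound in the proof.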
Example 1 Consider ${3+5i}$ and ${1 + 2i}$. Then we compute $$\frac{3+5i}{1+2i} = \frac{3+5i}{1+2i}\cdot\frac{1-2i}{1-2i} = \frac{13}{5} + i \, \frac{-1}{5}.$$ The closest integer to ${13/5}$ is ${3}$, and the closest integer to ${-1/5}$ is ${0}$. So we take ${q = 3}$. Then ${r = (3+5i) - (1+2i)\cdot 3 = -i}$, and we see in total that $$3+5i = (1+2i)\cdot 3 - i.$$ Note that ${N(-i) = 1}$ and ${N(1 + 2i) = 5}$, so this choice of ${q}$ and ${r}$ works.

As ${13/5}$ is also somewhat close to ${2}$, what if we chose ${q = 2}$ instead? Then ${r = (3 + 5i) - (1 + 2i)\cdot 2 = 1 + i}$, leading to the overall expression $$3+5i = (1 + 2i)\cdot 2 + (1 + i).$$ Note that ${N(1+i) = 2 < N(1+2i) = 5}$, so this choice of ${q}$ and ${r}$ also works.

This is an example of how the choice of ${q}$ and ${r}$ is not well-defined for the Gaussian integers. In fact, even if one decides to choose ${q}$ so that ${N(r)}$ is minimal, the resulting choices are still not necessarily unique. This may come as a surprise. The letters ${q}$ and ${r}$ come from our tendency to call those numbers the quotient and remainder after division. We have shown that the quotient and remainder are not well-defined, so it does not make sense to talk about ''the remainder'' or ''the quotient.'' This is a bit strange! Are we able to prove unique factorization when the process of division itself seems to lead to ambiguities? Let us proceed forwards and try to see.

Our next goal is to prove the Euclidean Algorithm. By this, we mean that by repeatedly performing the division algorithm starting with two Gaussian integers ${z}$ and ${w}$, we hope to get a sequence of remainders with the last nonzero remainder giving a greatest common divisor of ${z}$ and ${w}$. Before we can do that, we need to ask a much more basic question. What do we mean by a greatest common divisor? In particular, the Gaussian integers are not ordered, so it does not make sense to say whether one Gaussian integer is bigger than another. For instance, is it true that ${i > 1}$?
If so, then certainly ${i}$ is positive. We know that multiplying both sides of an inequality by a positive number doesn't change that inequality. So multiplying ${i > 1}$ by ${i}$ leads to ${-1 > i}$, which is absurd if ${i}$ was supposed to be positive! To remedy this problem, we will choose a common divisor of ${z}$ and ${w}$ with the greatest norm (which makes sense, as the norm is a regular integer and thus is well-ordered). But the problem here, just as with the division algorithm, is that there may or may not be multiple such numbers. So we cannot talk about ''the greatest common divisor'' and instead talk about ''a greatest common divisor.'' To paraphrase Lewis Carroll's\footnote{Carroll was also a mathematician, and hid some nice mathematics inside some of his works.} Alice, things are getting curiouser and curiouser! Definition 5 For nonzero ${z,w}$ in ${\mathbb{Z}[i]}$, a greatest common divisor of ${z}$ and ${w}$, denoted by ${\gcd(z,w)}$, is a common divisor with largest norm. That is, if ${c}$ is another common divisor of ${z}$ and ${w}$, then ${N(c) \leq N(\gcd(z,w))}$. If ${N(\gcd(z,w)) = 1}$, then we say that ${z}$ and ${w}$ are relatively prime. Said differently, if ${1}$ is a greatest common divisor of ${z}$ and ${w}$, then we say that ${z}$ and ${w}$ are relatively prime. Remark 2 Note that ${\gcd(z,w)}$ as we're writing it is not actually well-defined, and may stand for any greatest common divisor of ${z}$ and ${w}$. With this definition in mind, the proof of the Euclidean Algorithm is almost identical to the proof of the Euclidean Algorithm for the regular integers. As with the regular integers, we need the following result, which we will use over and over again. Lemma 6 Suppose that ${z \mid w_1}$ and ${z \mid w_2}$. Then for any ${x,y}$ in ${\mathbb{Z}[i]}$, we have that ${z \mid (x w_1 + y w_2)}$. Proof: As ${z \mid w_1}$, there is some Gaussian integer ${k_1}$ such that ${z k_1 = w_1}$. 
Similarly, there is some Gaussian integer ${k_2}$ such that ${z k_2 = w_2}$. Then ${xw_1 + yw_2 = zxk_1 + zyk_2 = z(xk_1 + yk_2)}$, which is divisible by ${z}$ as this is the definition of divisibility. $\Box$ Notice that this proof is identical to the analogous statement in the integers, except with differently chosen symbols. That is how the proof of the Euclidean Algorithm goes as well. Theorem 7 Let ${z,w}$ be nonzero Gaussian integers. Recursively apply the division algorithm, starting with the pair ${z, w}$, and then using the divisor and remainder from one equation as the new pair for the next. The last nonzero remainder is divisible by all common divisors of ${z,w}$, is itself a common divisor, and so the last nonzero remainder is a greatest common divisor of ${z}$ and ${w}$. Symbolically, this looks like $$\begin{aligned} z &= q_1 w + r_1, \quad N(r_1) < N(w) \\ w &= q_2 r_1 + r_2, \quad N(r_2) < N(r_1) \\ r_1 &= q_3 r_2 + r_3, \quad N(r_3) < N(r_2) \\ &\;\;\vdots \\ r_k &= q_{k+2} r_{k+1} + r_{k+2}, \quad N(r_{k+2}) < N(r_{k+1}) \\ r_{k+1} &= q_{k+3} r_{k+2} + 0, \end{aligned}$$ where ${r_{k+2}}$ is the last nonzero remainder, which we claim is a greatest common divisor of ${z}$ and ${w}$. Proof: We are claiming several things. Firstly, we should prove our implicit claim that this algorithm terminates at all. Is it obvious that we should eventually reach a zero remainder? In order to see this, we look at the norms of the remainders. After each step in the algorithm, the norm of the remainder is smaller than at the previous step. As the norms are always nonnegative integers, and we know there does not exist an infinite list of decreasing positive integers, we see that the list of nonzero remainders is finite. So the algorithm terminates. We now want to prove that the last nonzero remainder is a common divisor and is in fact a greatest common divisor. The proof is actually identical to the proof in the integer case, merely with a different choice of symbols.
Here, we only sketch the argument. Then the rest of the argument can be found by comparing with the proof of the Euclidean Algorithm for ${\mathbb{Z}}$ as found in the course textbook. For ease of exposition, suppose that the algorithm terminated in exactly 3 steps, so that we have $$\begin{aligned} z &= q_1 w + r_1, \\ w &= q_2 r_1 + r_2, \\ r_1 &= q_3 r_2 + 0. \end{aligned}$$ On the one hand, suppose that ${d}$ is a common divisor of ${z}$ and ${w}$. Then by our previous lemma, ${d \mid z - q_1 w = r_1}$, so that we see that ${d}$ is a divisor of ${r_1}$ as well. Applying this to the next line, we have that ${d \mid w}$ and ${d \mid r_1}$, so that ${d \mid w - q_2 r_1 = r_2}$. So every common divisor of ${z}$ and ${w}$ is a divisor of the last nonzero remainder ${r_2}$. On the other hand, ${r_2 \mid r_1}$ by the last line of the algorithm. Then as ${r_2 \mid r_1}$ and ${r_2 \mid r_2}$, we know that ${r_2 \mid q_2 r_1 + r_2 = w}$. Applying this to the first line, as ${r_2 \mid r_1}$ and ${r_2 \mid w}$, we know that ${r_2 \mid q_1 w + r_1 = z}$. So ${r_2}$ is a common divisor. We have shown that ${r_2}$ is a common divisor of ${z}$ and ${w}$, and that every common divisor of ${z}$ and ${w}$ divides ${r_2}$. How do we show that ${r_2}$ is a greatest common divisor? Suppose that ${d}$ is a common divisor of ${z}$ and ${w}$, so that we know that ${d \mid r_2}$. In particular, this means that there is some nonzero ${k}$ so that ${dk = r_2}$. Taking norms, this means that ${N(dk) = N(d)N(k) = N(r_2)}$. As ${N(d)}$ and ${N(k)}$ are both at least ${1}$, this means that ${N(d) \leq N(r_2)}$. This is true for every common divisor ${d}$, and so ${N(r_2)}$ is at least as large as the norm of any common divisor of ${z}$ and ${w}$. Thus ${r_2}$ is a greatest common divisor. The argument carries on in the same way for when there are more steps in the algorithm. $\Box$ Theorem 8 The greatest common divisor of ${z}$ and ${w}$ is well-defined, up to multiplication by ${\pm 1, \pm i}$.
In other words, if ${\gcd(z,w)}$ is a greatest common divisor of ${z}$ and ${w}$, then all greatest common divisors of ${z}$ and ${w}$ are given by ${\pm \gcd(z,w), \pm i \gcd(z,w)}$. Proof: Suppose ${d}$ is a greatest common divisor, and let ${\gcd(z,w)}$ denote a greatest common divisor resulting from an application of the Euclidean Algorithm. Then we know that ${d \mid \gcd (z,w)}$, so that there is some ${k}$ so that ${dk = \gcd(z,w)}$. Taking norms, we see that ${N(d)N(k) = N(\gcd(z,w))}$. But as both ${d}$ and ${\gcd(z,w)}$ are greatest common divisors, we must have that ${N(d) = N(\gcd(z,w))}$. So ${N(k) = 1}$. The only Gaussian integers with norm one are ${\pm 1, \pm i}$, so we have that ${du = \gcd(z,w)}$ where ${u}$ is one of the four Gaussian units, ${\pm 1, \pm i}$. Conversely, it's clear that the four numbers ${\pm \gcd(z,w), \pm i \gcd(z,w)}$ are all greatest common divisors. $\Box$ Now that we have the Euclidean Algorithm, we can go towards unique factorization in ${\mathbb{Z}[i]}$. Let ${g}$ denote a greatest common divisor of ${z}$ and ${w}$. Reverse substitution in the Euclidean Algorithm shows that we can find Gaussian integer solutions ${x,y}$ to the (complex) linear Diophantine equation $$zx + wy = g.$$ Let's see an example. Example 2 Consider ${32 + 9i}$ and ${4 + 11i}$. The Euclidean Algorithm looks like $$\begin{aligned} 32 + 9i &= (4 + 11i)(2 - 2i) + (2 - 5i), \\ 4 + 11i &= (2 - 5i)(-2 + i) + (3 - i), \\ 2 - 5i &= (3-i)(1-i) - i, \\ 3 - i &= -i (1 + 3i) + 0. \end{aligned}$$ So we know that ${-i}$ is a greatest common divisor of ${32 + 9i}$ and ${4 + 11i}$, and so we know that ${32+9i}$ and ${4 + 11i}$ are relatively prime.
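The computation in Example 2 can be checked mechanically. A sketch in Python (again with built-in complex numbers standing in for ${\mathbb{Z}[i]}$; the helper names are my own, and since the quotient is not unique, the intermediate remainders may differ from the notes by units):

```python
# Euclidean Algorithm in Z[i]: repeated division, last nonzero remainder.
# Any valid quotient choice at each step still yields a greatest common divisor.

def norm(z):
    """N(a + bi) = a^2 + b^2, as an integer."""
    return round(z.real) ** 2 + round(z.imag) ** 2

def divide(z, w):
    """One valid (q, r) with z = q*w + r and N(r) < N(w)."""
    exact = z / w
    q = complex(round(exact.real), round(exact.imag))
    return q, z - q * w

def gaussian_gcd(z, w):
    """Last nonzero remainder of repeated division: a gcd of z and w."""
    while w != 0:
        _, r = divide(z, w)
        z, w = w, r
    return z

g = gaussian_gcd(32 + 9j, 4 + 11j)
print(g, norm(g))   # a unit (norm 1): 32+9i and 4+11i are relatively prime
```

Since greatest common divisors are only defined up to the units ${\pm 1, \pm i}$, the meaningful check is that the returned value has norm ${1}$.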
Let us try to find a solution to the Diophantine equation $$x(32 + 9i) + y(4 + 11i) = 1.$$ Performing reverse substitution, we see that $$\begin{aligned} -i &= (2 - 5i) - (3-i)(1-i) \\ &= (2 - 5i) - (4 + 11i - (2-5i)(-2 + i))(1-i) \\ &= (2 - 5i) - (4 + 11i)(1 - i) + (2 - 5i)(-2 + i)(1 - i) \\ &= (2 - 5i)(3i) - (4 + 11i)(1 - i) \\ &= (32 + 9i - (4 + 11i)(2 - 2i))(3i) - (4 + 11i)(1 - i) \\ &= (32 + 9i)(3i) - (4 + 11i)(2 - 2i)(3i) - (4 + 11i)(1-i) \\ &= (32 + 9i)(3i) - (4 + 11i)(7 + 5i). \end{aligned}$$ Multiplying this through by ${i}$, we have that $$1 = (32 + 9i)(-3) + (4 + 11i)(5 - 7i).$$ So one solution is ${(x,y) = (-3, 5 - 7i)}$. Although this looks more complicated, the process is the same as in the case over the regular integers. The apparent higher difficulty comes mostly from our lack of familiarity with basic arithmetic in ${\mathbb{Z}[i]}$. The rest of the argument is now exactly as in the integers. Theorem 9 Suppose that ${z, w}$ are relatively prime, and that ${z \mid wv}$. Then ${z \mid v}$. Proof: This is left as an exercise (and will appear on the next midterm in some form — cheers to you if you've read this far in these notes). But it's now almost the same as in the regular integers. $\Box$ Theorem 10 Let ${z}$ be a Gaussian integer with ${N(z) > 1}$. Then ${z}$ can be written uniquely as a product of Gaussian primes, up to multiplication by one of the Gaussian units ${\pm 1, \pm i}$. Proof: We only sketch part of the proof. There are multiple ways of doing this, but we present the one most similar to what we've done for the integers. If there are Gaussian integers without unique factorization, then there are some (maybe they tie) with minimal norm. So let ${z}$ be a Gaussian integer of minimal norm without unique factorization. Then we can write $$p_1 p_2 \cdots p_k = z = q_1 q_2 \cdots q_\ell,$$ where the ${p}$ and ${q}$ are all primes.
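The identity found by reverse substitution is easy to verify directly, for instance in Python:

```python
# Checking the solution of x(32+9i) + y(4+11i) = 1 found by reverse substitution.
x, y = -3, 5 - 7j
print(x * (32 + 9j) + y * (4 + 11j))   # (1+0j)
```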
As ${p_1 \mid z = q_1 q_2 \cdots q_\ell}$, we know that ${p_1}$ divides one of the ${q}$ (by Theorem~9), and so (up to units) we can say that ${p_1}$ is one of the ${q}$ primes. We can divide each side by ${p_1}$ and we get two supposedly different factorizations of a Gaussian integer of norm ${N(z)/N(p_1) < N(z)}$, which is less than the least norm of an integer without unique factorization (by what we supposed). This is a contradiction, and we can conclude that there are no Gaussian integers without unique factorization. $\Box$ If this seems unclear, I recommend reviewing this proof and the proof of unique factorization for the regular integers. I should also mention that one can modify the proof of unique factorization for ${\mathbb{Z}}$ as given in the course textbook as well (since it is a bit different than what we have done). Further, the course textbook gives a proof of unique factorization for ${\mathbb{Z}[i]}$ in Chapter 36, which is very similar to the proof sketched above (although the proof of Theorem~9 is very different). 3. An application to ${y^2 = x^3 - 1}$. We now consider the nonlinear Diophantine equation ${y^2 = x^3 - 1}$, where ${x,y}$ are in ${\mathbb{Z}}$. This is hard to solve over the integers, but by going up to ${\mathbb{Z}[i]}$, we can determine all solutions. In ${\mathbb{Z}[i]}$, we can rewrite $$ y^2 + 1 = (y + i)(y - i) = x^3. \tag{1}$$ We claim that ${y+i}$ and ${y-i}$ are relatively prime. To see this, suppose that ${d}$ is a common divisor of ${y+i}$ and ${y-i}$. Then ${d \mid (y + i) - (y - i) = 2i}$. It happens to be that ${2i = (1 + i)^2}$, and that ${(1 + i)}$ is prime. To see this, we show the following. Lemma 11 Suppose ${z}$ is a Gaussian integer, and ${N(z) = p}$ is a regular prime. Then ${z}$ is a Gaussian prime. Proof: Suppose that ${z}$ factors nontrivially as ${z = ab}$. Then taking norms, ${N(z) = N(a)N(b)}$, and so we get a nontrivial factorization of ${N(z)}$.
When ${N(z)}$ is a prime, then there are no nontrivial factorizations of ${N(z)}$, and so ${z}$ must have no nontrivial factorization. $\Box$ As ${N(1+i) = 2}$, which is a prime, we see that ${(1 + i)}$ is a Gaussian prime. So ${d \mid (1 + i)^2}$, which means that ${d}$ is either ${1, (1 + i)}$, or ${(1+i)^2}$ (up to multiplication by a Gaussian unit). Suppose we are in the case of the latter two, so that ${(1+i) \mid d}$. Then as ${d \mid (y + i)}$, we know that ${(1 + i) \mid x^3}$. Taking norms, we have that ${2 \mid x^6}$. By unique factorization in ${\mathbb{Z}}$, we know that ${2 \mid x}$. This means that ${4 \mid x^2}$, which allows us to conclude that ${x^3 \equiv 0 \pmod 4}$. Going back to the original equation ${y^2 + 1 = x^3}$, we see that ${y^2 + 1 \equiv 0 \pmod 4}$, which means that ${y^2 \equiv 3 \pmod 4}$. A quick check shows that ${y^2 \equiv 3 \pmod 4}$ has no solutions ${y}$ in ${\mathbb{Z}/4\mathbb{Z}}$. So we rule out the case when ${(1 + i) \mid d}$, and we are left with ${d}$ being a unit. This is exactly the case that ${y+i}$ and ${y-i}$ are relatively prime. Recall that ${(y+i)(y-i) = x^3}$. As ${y+i}$ and ${y-i}$ are relatively prime and their product is a cube, by unique factorization in ${\mathbb{Z}[i]}$ we know that ${y+i}$ and ${y-i}$ must each be a Gaussian cube. Then we can write ${y+i = (m + ni)^3}$ for some Gaussian integer ${m + ni}$. Expanding, we see that $$y+i = m^3 - 3mn^2 + i(3m^2n - n^3).$$ Equating real and imaginary parts, we have that $$\begin{aligned} y &= m(m^2 - 3n^2), \\ 1 &= n(3m^2 - n^2). \end{aligned}$$ This second line shows that ${n \mid 1}$. As ${n}$ is a regular integer, we see that ${n = 1}$ or ${-1}$. If ${n = 1}$, then that line becomes ${1 = (3m^2 - 1)}$, or after rearranging ${2 = 3m^2}$. This has no solutions. If ${n = -1}$, then that line becomes ${1 = -(3m^2 - 1)}$, or after rearranging ${0 = 3m^2}$. This has the solution ${m = 0}$, so that ${y+i = (-i)^3 = i}$, which means that ${y = 0}$. Then from ${y^2 + 1 = x^3}$, we see that ${x = 1}$.
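As a quick computational sanity check (a sketch only; the search bound is arbitrary, and it is the argument above that covers all integers):

```python
# Brute-force search for integer solutions of y^2 = x^3 - 1 in a small box.

def icbrt(n):
    """Integer cube root of n >= 0: floating-point guess, then correction."""
    c = round(n ** (1 / 3))
    while c ** 3 > n:
        c -= 1
    while (c + 1) ** 3 <= n:
        c += 1
    return c

solutions = []
for y in range(-1000, 1001):
    n = y * y + 1
    x = icbrt(n)
    if x ** 3 == n:
        solutions.append((x, y))
print(solutions)   # [(1, 0)]
```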
And so the only solution is ${(x,y) = (1,0)}$, and there are no other solutions. 4. Other Rings The Gaussian integers have many of the same properties as the regular integers, even though there are some differences. We could go further. For example, we might consider the following integer-like sets, $$\mathbb{Z}[\sqrt{d}] = \{ a + b \sqrt{d} : a,b \in \mathbb{Z} \}.$$ One can add, subtract, and multiply these together in similar ways to how we can add, subtract, and multiply together integers, or Gaussian integers. We might ask what properties these other integer-like sets have. For instance, do they have unique factorization? More generally, there is a better name than ''integer-like set'' for this sort of construction. Suppose ${R}$ is a collection of elements, and it makes sense to add, subtract, and multiply these elements together. Further, we want addition and multiplication to behave similarly to how they behave for the regular integers. In particular, if ${r}$ and ${s}$ are elements in ${R}$, then we want ${r + s = s + r}$ to be in ${R}$; we want something that behaves like ${0}$ in the sense that ${r + 0 = r}$; for each ${r}$, we want another element ${-r}$ so that ${r + (-r) = 0}$; we want ${r \cdot s = s \cdot r}$; we want something that behaves like ${1}$ in the sense that ${r \cdot 1 = r}$ for all ${r}$; and we want ${r(s_1 + s_2) = r s_1 + r s_2}$. Such a collection is called a ring. (More completely, this is called a commutative unital ring, but that's not important.) It is not important that you explicitly remember exactly what the definition of a ring is. The idea is that there is a name for things that are ''integer-like'' and that we might wonder which properties we have been thinking of as properties of the integers are actually properties of rings. As a total aside: there are very many more rings too, things that look much more different than the integers.
This is one of the fundamental questions that leads to the area of mathematics called Abstract Algebra. With an understanding of abstract algebra, one could then focus on these general number theoretic problems in an area of math called Algebraic Number Theory. 5. The rings ${\mathbb{Z}[\sqrt{d}]}$ We can describe some of the specific properties of ${\mathbb{Z}[\sqrt{d}]}$, and suggest how some of the ideas we've been considering do (or don't) generalize. For a general element ${n = a + b \sqrt{d}}$, we can define the conjugate ${\overline{n} = a - b\sqrt{d}}$ and the norm ${N(n) = n \cdot \overline{n} = a^2 - d b^2}$. We call those elements ${u}$ with ${N(u) = 1}$ the units in ${\mathbb{Z}[\sqrt{d}]}$. Some of the definitions we've been using turn out to not generalize so easily, or in quite the ways we expect. If ${n}$ doesn't have a nontrivial factorization (meaning that we cannot write ${n = ab}$ with ${N(a), N(b) \neq 1}$), then we call ${n}$ an irreducible. In the cases of ${\mathbb{Z}}$ and ${\mathbb{Z}[i]}$, we would have called these elements prime. In general, we call a number ${p}$ in ${\mathbb{Z}[\sqrt{d}]}$ a prime if ${p}$ has the property that ${p \mid ab}$ means that ${p \mid a}$ or ${p \mid b}$. Of course, in the cases of ${\mathbb{Z}}$ and ${\mathbb{Z}[i]}$, we showed that irreducibles are primes. But it turns out that this is not usually the case. Let us look at ${\mathbb{Z}[\sqrt{-5}]}$ for a moment. In particular, we can write ${6}$ in two ways as $$6 = 2 \cdot 3 = (1 + \sqrt{-5})(1 - \sqrt{-5}).$$ Although it's a bit challenging to show, these are the only two fundamentally different factorizations of ${6}$ in ${\mathbb{Z}[\sqrt{-5}]}$. One can show (it's not very hard, but it's not particularly illuminating to do here) that neither ${2}$ nor ${3}$ divides ${(1 + \sqrt{-5})}$ or ${(1 - \sqrt{-5})}$ (and vice versa), which means that none of these four numbers are primes in our more general definition. One can also show that all four numbers are irreducible.
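The two factorizations of ${6}$, and the norm computations that drive the irreducibility claims, can be sketched as follows (pairs ${(a,b)}$ model ${a + b\sqrt{-5}}$; the helper names are my own):

```python
# Arithmetic in Z[sqrt(-5)]: pairs (a, b) stand for a + b*sqrt(-5).

def mul(p, q):
    # (a + b*s)(c + d*s) with s^2 = -5 gives (ac - 5bd) + (ad + bc)*s
    (a, b), (c, d) = p, q
    return (a * c - 5 * b * d, a * d + b * c)

def norm(p):
    # N(a + b*sqrt(-5)) = a^2 + 5*b^2
    a, b = p
    return a * a + 5 * b * b

# Both factorizations really do give 6:
print(mul((2, 0), (3, 0)))    # (6, 0)
print(mul((1, 1), (1, -1)))   # (6, 0)

# Norms 4, 9, 6, 6: since a^2 + 5b^2 = 2 or 3 has no integer solutions,
# no element has norm 2 or 3, so each of the four factors is irreducible.
print(norm((2, 0)), norm((3, 0)), norm((1, 1)), norm((1, -1)))
```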
What does this mean? This means that ${6}$ can be factored into irreducibles in fundamentally different ways, and that ${\mathbb{Z}[\sqrt{-5}]}$ does not have unique factorization. It's a good thought exercise to think about what is really different between ${\mathbb{Z}[\sqrt{-5}]}$ and ${\mathbb{Z}}$. At the beginning of this course, it seemed extremely obvious that ${\mathbb{Z}}$ had unique factorization. But in hindsight, is it really so obvious? Understanding when there is and is not unique factorization in ${\mathbb{Z}[\sqrt{d}]}$ is something that people are still trying to understand today. The fact is that we don't know! In particular, we really don't know very much when ${d}$ is positive. One reason why can be seen in ${\mathbb{Z}[\sqrt{2}]}$. If ${n = a + b \sqrt{2}}$, then ${N(n) = a^2 - 2 b^2}$. A very basic question that we can ask is: what are the units? That is, which ${n}$ have ${N(n) = 1}$? Here, that means trying to solve the equation $$ a^2 - 2 b^2 = 1. \tag{2}$$ We have seen this equation a few times before. On the second homework assignment, I asked you to show that there were infinitely many solutions to this equation by finding lines and intersecting them with hyperbolas. We began to investigate this Diophantine equation because each solution leads to another square-triangular number. So there are infinitely many units in ${\mathbb{Z}[\sqrt{2}]}$. This is strange! For instance, ${3 + 2 \sqrt{2}}$ is a unit, which means that it behaves just like ${\pm 1}$ in ${\mathbb{Z}}$, or like ${\pm 1, \pm i}$ in ${\mathbb{Z}[i]}$. Very often, the statements we've been looking at and proving are true ''up to multiplication by units.'' Since there are infinitely many units in ${\mathbb{Z}[\sqrt{2}]}$, it can be annoying even to determine whether two numbers are actually the same up to multiplication by units. As you look further, there are many more strange and interesting behaviours.
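For instance, powers of ${3 + 2\sqrt{2}}$ give infinitely many solutions of equation (2). A quick sketch, with pairs ${(a,b)}$ modeling ${a + b\sqrt{2}}$:

```python
# Units in Z[sqrt(2)]: powers of 3 + 2*sqrt(2) all satisfy a^2 - 2b^2 = 1.
a, b = 1, 0
for _ in range(4):
    # (a + b*sqrt(2)) * (3 + 2*sqrt(2)) = (3a + 4b) + (2a + 3b)*sqrt(2)
    a, b = 3 * a + 4 * b, 2 * a + 3 * b
    print((a, b), a * a - 2 * b * b)   # the norm stays 1 at every step
```

This produces ${(3,2), (17,12), (99,70), (577,408), \ldots}$, each a unit of ${\mathbb{Z}[\sqrt{2}]}$.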
It is really interesting to see what properties are very general, and what properties vary a lot. It is also interesting to see the different ways in which properties we're used to, like unique factorization, can fail. For instance, we have seen that ${\mathbb{Z}[\sqrt{-5}]}$ does not have unique factorization. We showed this by seeing that ${6}$ factors in two fundamentally different ways. In fact, some numbers in ${\mathbb{Z}[\sqrt{-5}]}$ do factor uniquely, and others do not. But if one does not, then it factors in at most two fundamentally different ways. In other rings, you can have numbers which factor in more fundamentally different ways. The actual behaviour here is also really poorly understood, and there are mathematicians who are actively pursuing these topics. It's a very large playground out there. Comments (1) 1. 2016-04-20 davidlowryduda [Thank you to Deepak for pointing out a few typos.]
Deviation Scaled VWAP with Fractal Energy for ThinkorSwim - useThinkScript Community

For VWAP and Fractal Energy fans. A deviation scaled MA of the VWAP. You can adjust the length to fit your trading. The fractal energy is added as a coloring of the plot line. The FE does not give an indication of direction. It shows the energy of the trend. Magenta is an exhaustion in the trend (below .382). Cyan is a compression or squeeze (above .618) and may hint that a breakout one way or the other may occur.

# Deviation Scaled VWAP with Fractal Energy Coloring.
# Adapted from ToS DSMA
# Mobius FE added with two choices of coloring in the code,
# < > .5
# and > .618 between and below .382
# Horserider 12/1/2019

input length = 55;
def zeros = vwap - vwap[2];
def filter = reference EhlersSuperSmootherFilter(price = zeros, "cutoff length" = 0.5 * length);
def rms = Sqrt(Average(Sqr(filter), length));
def scaledFilter = filter / rms;
def alpha = 5 * AbsValue(scaledFilter) / length;
def deviationScaledVWAP = CompoundValue(1, alpha * vwap + (1 - alpha) * deviationScaledVWAP[1], vwap);

input nFE = 8; #hint nFE: length for Fractal Energy calculation.
input AlertOn = no;
input Glength = 13;
input betaDev = 8;
input data = close;

def w = (2 * Double.Pi / Glength);
def beta = (1 - Cos(w)) / (Power(1.414, 2.0 / betaDev) - 1);
def alphafe = (-beta + Sqrt(beta * beta + 2 * beta));
def Go = Power(alphafe, 4) * open + 4 * (1 - alphafe) * Go[1] - 6 * Power(1 - alphafe, 2) * Go[2] + 4 * Power(1 - alphafe, 3) * Go[3] - Power(1 - alphafe, 4) * Go[4];
def Gh = Power(alphafe, 4) * high + 4 * (1 - alphafe) * Gh[1] - 6 * Power(1 - alphafe, 2) * Gh[2] + 4 * Power(1 - alphafe, 3) * Gh[3] - Power(1 - alphafe, 4) * Gh[4];
def Gl = Power(alphafe, 4) * low + 4 * (1 - alphafe) * Gl[1] - 6 * Power(1 - alphafe, 2) * Gl[2] + 4 * Power(1 - alphafe, 3) * Gl[3] - Power(1 - alphafe, 4) * Gl[4];
def Gc = Power(alphafe, 4) * data + 4 * (1 - alphafe) * Gc[1] - 6 * Power(1 - alphafe, 2) * Gc[2] + 4 * Power(1 - alphafe, 3) * Gc[3] - Power(1 - alphafe, 4) * Gc[4];

# Variables:
def o;
def h;
def l;
def c;
def CU1;
def CU2;
def CU;
def CD1;
def CD2;
def CD;
def L0;
def L1;
def L2;
def L3;

# Calculations
o = (Go + Gc[1]) / 2;
h = Max(Gh, Gc[1]);
l = Min(Gl, Gc[1]);
c = (o + h + l + Gc) / 4;
def gamma = Log(Sum((Max(Gh, Gc[1]) - Min(Gl, Gc[1])), nFE) / (Highest(Gh, nFE) - Lowest(Gl, nFE))) / Log(nFE);
L0 = (1 - gamma) * Gc + gamma * L0[1];
L1 = -gamma * L0 + L0[1] + gamma * L1[1];
L2 = -gamma * L1 + L1[1] + gamma * L2[1];
L3 = -gamma * L2 + L2[1] + gamma * L3[1];
if L0 >= L1 then {
    CU1 = L0 - L1;
    CD1 = 0;
} else {
    CD1 = L1 - L0;
    CU1 = 0;
}
if L1 >= L2 then {
    CU2 = CU1 + L1 - L2;
    CD2 = CD1;
} else {
    CD2 = CD1 + L2 - L1;
    CU2 = CU1;
}
if L2 >= L3 then {
    CU = CU2 + L2 - L3;
    CD = CD2;
} else {
    CU = CU2;
    CD = CD2 + L3 - L2;
}

plot DSVWAP = deviationScaledVWAP;
#DSVWAP.DefineColor("Up", GetColor(1));
#DSVWAP.DefineColor("Down", GetColor(0));
#DSVWAP.AssignValueColor(if gamma < .5 then DSVWAP.Color("Down") else DSVWAP.Color("Up"));
DSVWAP.AssignValueColor(if gamma > .618 then Color.CYAN else if gamma < .382 then Color.MAGENTA else
Color.WHITE);

Very inventive way of presenting VWAP. Thank you. Curious, why did you use 55 length & EhlersSS? Always learning... Thanks again, Markos

I agree. When I first looked at this I set it aside. Looking at it today and liking the possibilities.

The length was based on what best fit my likes. Looking for a smoother curve catching trends that skipped some of the small retraces. As noted in the header, adapted from DSMA, which uses the Ehlers filter. Hope it works for you. Any tweaks, let me know please. I was a bit surprised more people did not have an interest. Seems to me to be a simple, straightforward indicator. Maybe it is too simple and not full of lines and arrows.

Has anyone written a scan for this yet? Tried, no success.

Added a better description of the concept of the indicator as some have requested such a description be done.

I'm looking for a study that allows you to program standard deviation bands with VWAP as the center. Screenshot attached from Sierra Charts on how this configuration works. I've learned how to calculate expected gamma bands, which needs a "programmable" offset for the deviation bands each day. I would prefer to keep this in TOS and not have a second account in Sierra Charts. Has anyone seen something like this for TOS? Thanks in advance for your help-

I wonder if you can use the ToS standard out-of-the-box channel indicator, StandardDevChannel, and change the price parameter to VWAP instead of close price, then add as many channels based on the deviation you want to show.

The following study might help - VWAP Standard Deviation Bands, best to be run on an intraday aggregation, e.g. 30 mins. Deviation bands are pretty sharp and I like how they adjust with VWAP.

For now I'm calculating the gamma offsets and marking them on my chart. Working well so far but nothing guaranteed. Appreciate your help.

Hello, is that cloud moving average (purple and blue) available in here? If so, could you point out where? Thanks. By chance do you have a setting for 5 or 15 min that you would recommend for this?

As requested - Length depends on your style. It is a Deviation Scaled Moving Average. So try your favorite MA lengths.

Is there a way to get an alert when the arrows show up?

Hi Horserider; I observe that there is a very big gap on the "Deviation Scaled VWAP with Fractal Energy Coloring" study versus the regular standard VWAP, especially at lower aggregations like 1 min; I am not sure how to interpret this as I am a newbie - any help to explain? Thanks.

Yes, there will be a difference. It is a moving average of the VWAP scaled to a standard deviation to the mean. Sorry, cannot explain the math better. So it will be more adaptive to rapid price movements. Just think of it as another type of moving average.

Not sure.
What are you alerting?
Amount vs. Number: What's the Difference? "Amount" refers to a quantity of something uncountable; "number" refers to a count of individual items.

Key Differences

"Amount" and "number" are two terms used in the English language to refer to the quantity of things. "Amount" typically addresses the total of uncountable nouns, while "number" pertains to the count of individual, countable items. When discussing substances or things that can't be counted individually, like water or sand, we use the term "amount". On the other hand, "number" is suitable for things that are countable, such as apples or cars.

Consider the sentence, "The amount of water in the jug is insufficient." Here, "amount" refers to the volume or quantity of water. Compare this to, "The number of apples in the basket is ten," where "number" gives a specific count of apples.

Using "amount" or "number" incorrectly can lead to grammatical errors. Saying, "The number of sugar" is incorrect because sugar, being uncountable, requires "amount". Similarly, "The amount of students" is wrong as students are countable and should be referred to by "number".

To sum up, "amount" deals with uncountable quantities and "number" is used for countable items. Both terms emphasize quantity, but their usage depends on the type of noun they're associated with.

Comparison Chart

Example Noun: "amount" takes sugar, water, information; "number" takes apples, cars, students.
Grammar Usage: amount of [uncountable noun]; number of [countable noun].
Meaning: "amount" refers to volume, magnitude, or degree; "number" refers to a specific count.
Sentence Structure Example: "The amount of rain was excessive." / "The number of rainy days was high."

Amount and Number Definitions

Amount: The degree or extent of something. ("The amount of effort he put into the project was commendable.")
Number: A distinct issue or edition. ("The latest number of the magazine is out.")
Amount: The total of two or more quantities. ("The amount of money raised was astonishing.")
Number: A count of individual items. ("The number of candies in the jar is 100.")
Amount: A quantity of uncountable items. ("The amount of salt in the dish was perfect.")
Number: A mathematical symbol representing a quantity. ("The number 7 is considered lucky in many cultures.")
Amount: A principal plus its interest. ("The total amount due on the loan is $5,000.")
Number: A song, dance, or other performance. ("She performed a number from a popular musical.")
Amount: The sum total of effects. ("His contributions amount to very little in the grand scheme.")
Number: Something that arouses strong emotions. ("That speech was quite a number on everyone.")
Amount: The total of two or more quantities; the aggregate.
Number: A member of the set of positive integers; one of a series of symbols of unique meaning in a fixed order that can be derived by counting.
Number: A member of any of the following sets of mathematical objects: integers, rational numbers, real numbers, and complex numbers. These sets can be derived from the positive integers through various algebraic and analytic constructions.

When should I use "amount"?
Use "amount" when referring to uncountable nouns, like water or sugar.

Can "amount" refer to money?
Yes, you can use "amount" to refer to a sum of money, e.g., "The amount owed is $50."

When should I use "number"?
Use "number" when referring to countable nouns, such as apples or students.

Is it correct to say "number of information"?
No, "information" is uncountable, so you should use "amount of information."

Can "number" refer to a position in a sequence?
Yes, e.g., "She's number one in the competition."

Can "amount" indicate degree or extent?
Yes, e.g., "The amount of his knowledge on the subject is vast."

Is "number" used for editions or issues?
Yes, e.g., "The latest number of the journal was enlightening."

Can "number" be singular and plural?
Yes, e.g., "The number of apples is ten" vs. "There are a number of reasons."

How is "amount" used in finance?
"Amount" can refer to a principal plus its interest, e.g., "The amount due is significant."

Can "amount" refer to volume?
Yes, e.g., "The amount of water in the tank is 50 liters." Does "number" always indicate a specific count? Usually, but it can be used more generally, e.g., "A number of people agree." Is "amount" used in measurements? Yes, it often refers to volume, magnitude, or degree. How can "number" be used in entertainment? "Number" can refer to a song, dance, or performance, e.g., "She danced a lively number." Is it correct to say "amount of people"? No, since people are countable, you should use "number of people." Can "number" be used in mathematics? Yes, "number" refers to a mathematical symbol representing quantity. Can "amount" relate to total effects? Yes, e.g., "His deeds amount to nothing." Is "amount" singular or plural? "Amount" is singular but refers to a collective quantity. Can "number" evoke emotions? In a figurative sense, yes. E.g., "That story was a real number on my heart." Is "number" always about counting? Mostly, but not always. It can also refer to a musical performance or a distinct issue of a publication. Can "amount" and "number" be used interchangeably? No, their usage depends on whether the noun is countable or uncountable. About Author Written by Harlon Moss Harlon is a seasoned quality moderator and accomplished content writer for Difference Wiki. An alumnus of the prestigious University of California, he earned his degree in Computer Science. Leveraging his academic background, Harlon brings a meticulous and informed perspective to his work, ensuring content accuracy and excellence. Edited by Aimie Carlson Aimie Carlson, holding a master's degree in English literature, is a fervent English language enthusiast. She lends her writing talents to Difference Wiki, a prominent website that specializes in comparisons, offering readers insightful analyses that both captivate and inform.
Mastering Boolean Testing: Different Data Types and Operators in Python - Adventures in Machine Learning

Python Booleans: Understanding Boolean Type, Values, and Operations

Have you ever heard of Booleans in Python programming? Booleans are one of the fundamental data types in Python, representing binary values that are either true or false. In this article, we'll explore Python Booleans in detail, including their values, use as keywords and variables, Boolean operations, comparison operators, and more. Let's get started!

Python Boolean Type and Values

In Python, the Boolean type is a subtype of the integer type, with two possible values: True and False. The values are case-sensitive, so writing true or false raises a NameError rather than producing a Boolean. To create a Boolean value, you can use the bool() function:

>>> x = bool(1)
>>> y = bool(0)
>>> print(x, y)
True False

As you can see, the bool() function returns True or False depending on the input value. Note that every value in Python has an associated Boolean value, which is determined by its truthiness. For instance, 0, None, and empty sequences (such as '' or []) are considered False, while non-zero integers, non-empty sequences, and non-empty containers are considered True.

Booleans as Keywords and Variables

In Python, True and False are also reserved keywords, which means they cannot be used as variable names. You can, however, assign them to variables:

>>> x = True
>>> y = False
>>> print(type(x), type(y))
<class 'bool'> <class 'bool'>

In addition to the keywords, Python has a built-in Boolean class that you can use to create Boolean objects.
The Boolean class has two constant properties, True and False, which are the only instances of the class:

>>> x = bool(1)
>>> y = bool(0)
>>> print(isinstance(x, bool), isinstance(y, bool))
True True

Booleans as Numbers

Because bool is a subtype of int, you can use Booleans in arithmetic operations, where True is equivalent to 1 and False is equivalent to 0:

>>> x = True
>>> y = False
>>> print(x + y)
1
>>> print(x * 3)
3
>>> print(y * 4)
0

You can also use comparison operators to compare Boolean values, which return either True or False:

>>> x = True
>>> y = False
>>> print(x == y)
False
>>> print(x != y)
True
>>> print(x > y)
True

Boolean Operators

Boolean operators are used to evaluate Boolean expressions, which produce a Boolean result. Python has three Boolean operators: not, and, and or. Let's look at each of them in detail.

The not Operator

The not operator is a unary operator that reverses the truth value of its operand. If the operand is True, the not operator returns False, and if the operand is False, the not operator returns True. Here's an example:

>>> x = True
>>> print(not x)
False

The and Operator and Short-Circuit Evaluation

The and operator is a binary operator that returns True if both of its operands are True, and False otherwise. Python's and operator uses short-circuit evaluation, which means that if the left operand is False, the right operand is not evaluated, because the outcome of the expression is already known to be False. Here's an example:

>>> x = False
>>> y = True
>>> print(x and y)
False
>>> print(y and x)
False

Note that short-circuit evaluation can be advantageous in cases where evaluating the second operand is expensive or has side effects, such as reading from a file or calling a function, which may be wasteful when the result is already determined.

The or Operator and Short-Circuit Evaluation

The or operator is a binary operator that returns True if either of its operands is True, and False otherwise.
Like the and operator, Python's or operator also uses short-circuit evaluation, which means that if the left operand is True, the right operand is not evaluated, because the outcome of the expression is already known to be True. Here's an example:

>>> x = False
>>> y = True
>>> print(x or y)
True
>>> print(y or x)
True

Other Boolean Operators

Boolean logic also defines two-input operators such as xor (exclusive or), nand (not and), nor (not or), and implication. Python has no dedicated keywords for these, but you can express them with the operators it does provide: for Boolean operands, x != y (or x ^ y) behaves as xor, not (x and y) as nand, and not (x or y) as nor.

Comparison Operators

In addition to Boolean operators, Python also has comparison operators, which are used to compare two values and return a Boolean result. The most common comparison operators are == (equality), != (inequality), and the order comparisons <, >, <=, and >=. Here's an example:

>>> x = 5
>>> y = 3
>>> print(x == y)
False
>>> print(x != y)
True
>>> print(x > y)
True

So far, we've explored the basics of Python Booleans, including their type and values, use as keywords and variables, arithmetic operations, comparison operators, Boolean operators, and short-circuit evaluation. By understanding these concepts, you'll have a solid foundation to build upon when programming in Python. Happy coding!

Boolean Testing: Understanding Boolean Values for Different Types

In Python, Boolean values are essential for programming. You can use them to test conditions, iterate over items, and create loops. But did you know that various data types can also be tested as Boolean values? In this article, we'll explore some other data types in Python that can be evaluated as Boolean values, including None, numbers, sequences, and other types, including NumPy arrays. We'll also discuss operators and functions that can be used in conjunction with Boolean testing in Python.
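Before moving on to specific types, the short-circuit rules described above have a very common practical use: guarding an expression that could otherwise raise an exception. A minimal sketch (the function name `safe_ratio` is just for illustration):

```python
def safe_ratio(numerator, denominator):
    # The left operand guards the right one: when denominator == 0,
    # "denominator != 0" is False and the division is never evaluated,
    # so no ZeroDivisionError can occur.
    return denominator != 0 and numerator / denominator > 1

print(safe_ratio(10, 4))  # True: 10/4 is 2.5, which is > 1
print(safe_ratio(3, 4))   # False: 0.75 > 1 is False
print(safe_ratio(5, 0))   # False: short-circuits before dividing
```

The same idiom works with `or` to supply defaults, e.g. `name = user_input or "anonymous"`.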
None as a Boolean Value

None is a built-in constant in Python and represents the null object. It is commonly used for default function arguments or variables that have not been initialized yet. When tested as a Boolean value, None is considered False.

# Example of None as a Boolean value
x = None
if x:
    print("x is True.")
else:
    print("x is False.")

In this case, the output will be "x is False." since None is False in Boolean testing.

Numbers as Boolean Values

In Python, any non-zero number evaluates as True in Boolean testing. Hence, variables with non-zero values convert to True, while variables with zero values convert to False. Let's look at an example.

# Example of numbers as Boolean values
x = 5
if x:
    print("x is True.")
else:
    print("x is False.")

In this example, the output will be "x is True." since x is non-zero.

Sequences as Boolean Values

Python has many types of sequences, such as lists, tuples, and strings. In Boolean testing, empty sequences are considered False, while non-empty sequences are considered True.

# Example of sequences as Boolean values
my_list = []
if my_list:
    print("The list is not empty.")
else:
    print("The list is empty.")

In this example, the output will be "The list is empty." since the list is empty.

Other Types as Boolean Values

Besides None, numbers, and sequences, many other types can be tested as Boolean values in Python. Specifically, classes can define a method called __bool__ (named __nonzero__ in Python 2) that returns a Boolean value, indicating whether an instance of that class should be evaluated as True or False in a Boolean test. If a class defines __len__ instead, an instance with a non-zero length is considered True. Here's an example:

# Example of other types as Boolean values
class MyClass:
    def __bool__(self):  # __nonzero__ in Python 2
        return False

my_obj = MyClass()
if my_obj:
    print("my_obj is True.")
else:
    print("my_obj is False.")

In this example, the output will be "my_obj is False." since the Boolean test for my_obj returns False.
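To make the two hooks concrete, here is a small sketch with hypothetical classes: one defines truthiness explicitly with __bool__ (Python 3's name for Python 2's __nonzero__), the other gets it implicitly through __len__, which Python consults when __bool__ is absent.

```python
class Switch:
    """Truthiness defined explicitly via __bool__."""
    def __init__(self, on):
        self.on = on

    def __bool__(self):
        return self.on


class Basket:
    """No __bool__: Python falls back to __len__, so an
    empty basket is False and a non-empty one is True."""
    def __init__(self, items=()):
        self.items = list(items)

    def __len__(self):
        return len(self.items)


print(bool(Switch(True)))        # True
print(bool(Switch(False)))       # False
print(bool(Basket()))            # False: len() == 0
print(bool(Basket(["apple"])))   # True: len() == 1
```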
Example: NumPy Arrays

NumPy is a powerful numerical computing library in Python. NumPy arrays behave differently from other sequences in Boolean testing: an array with exactly one element takes that element's truth value, but testing an array with more than one element directly in an if statement raises a ValueError ("The truth value of an array with more than one element is ambiguous"). To test for emptiness, check the array's size (or use .any()/.all() for element-wise questions):

# Example of NumPy arrays in Boolean testing
import numpy as np

array = np.array([1, 2, 3])
if array.size:
    print("The array is not empty.")
else:
    print("The array is empty.")

In this example, the output will be "The array is not empty." since the array has three elements.

Operators and Functions

In Python, Boolean testing works well with operators and functions. Here are some commonly used ones:

• and: Returns True if both operands are True.

x = 5
y = 10
if x < y and y > 0:
    print("Both conditions are True.")

In this example, the output will be "Both conditions are True."

• or: Returns True if either operand is True.

x = 5
y = 10
if x > y or y > 0:
    print("At least one condition is True.")

In this example, the output will be "At least one condition is True."

• not: Reverses the Boolean value of an operand.

x = True
if not x:
    print("x is False.")
else:
    print("x is True.")

In this example, the output will be "x is True." since not x evaluates to False.

• all: Returns True if all elements in an iterable are True.

numbers = [1, 2, 3]
if all(numbers):
    print("All elements are True.")
else:
    print("At least one element is False.")

In this example, the output will be "All elements are True," since all of the elements in the numbers list are non-zero.

• any: Returns True if any element in an iterable is True.

numbers = [0, 1, 2]
if any(numbers):
    print("At least one element is True.")
else:
    print("All elements are False.")

In this example, the output will be "At least one element is True," since the second element is non-zero.

These operators and functions can be used in conjunction with Boolean testing in Python to create powerful and efficient code.
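One design point worth noting: when all() or any() is given a generator expression instead of a list, it stops consuming items as soon as the answer is known, mirroring the short-circuit behavior of and/or. A small sketch with made-up data:

```python
temperatures = [18.5, 19.2, 21.0, 35.4, 20.1]

# any() stops at the first element that is True, so once 35.4 is
# seen, the remaining readings are never examined.
has_spike = any(t > 30 for t in temperatures)

# all() stops at the first element that is False.
all_above_freezing = all(t > 0 for t in temperatures)

print(has_spike)           # True: 35.4 > 30
print(all_above_freezing)  # True: every reading is positive
```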
In this article, we've explored various data types in Python that can be tested as Boolean values, including None, numbers, sequences, other types, and NumPy arrays. We've also covered the commonly used operators and functions - and, or, not, all, and any - that work in conjunction with Boolean testing. By understanding these concepts, programmers can write code that is more efficient, readable, and maintainable.

Related Video Course

For those interested in learning more about Python Booleans and writing efficient code, the course "Python Data Structures and Algorithms" on Udemy is a great resource. It covers topics such as Boolean logic, control flow structures, loops, and more.
Fractions Of Whole Numbers Worksheet Pdf

Fractions Of Whole Numbers Worksheet Pdf act as foundational tools in the world of mathematics, giving a structured yet versatile system for students to explore and master numerical ideas. These worksheets provide a structured approach to understanding numbers, nurturing a strong foundation on which mathematical proficiency flourishes. From the simplest counting exercises to the intricacies of advanced calculations, Fractions Of Whole Numbers Worksheet Pdf cater to learners of diverse ages and skill levels.

Introducing the Essence of Fractions Of Whole Numbers Worksheet Pdf

This page includes fractions worksheets for understanding fractions, including modeling, comparing, ordering, simplifying, and converting fractions, and operations with fractions. We start you off with the obvious: modeling fractions. Fraction worksheets for grades 1-6 start with the introduction of the concepts of equal parts, parts of a whole, and fractions of a group or set, and proceed to reading and writing fractions, then adding, subtracting, multiplying, and dividing proper and improper fractions and mixed numbers.

At their core, Fractions Of Whole Numbers Worksheet Pdf are vehicles for conceptual understanding. They encapsulate a myriad of mathematical principles, guiding learners through the labyrinth of numbers with a series of engaging and deliberate exercises.
These worksheets transcend the boundaries of conventional rote learning, encouraging active engagement and promoting an intuitive grasp of numerical relationships.

Supporting Number Sense and Reasoning

Fractions Worksheets Printable: Fractions Worksheets For Teachers

This free set of practice worksheets is meant to challenge kids to apply their knowledge of fractions to a new type of problem: finding a fraction of a whole number. To begin, a visual array is given; to solve, students can divide the whole into parts. Below are six versions of our grade 5 math worksheet where students are asked to find the product of whole numbers and proper fractions. These worksheets are pdf files: Worksheet 1, Worksheet 2, Worksheet 3, Worksheet 4, Worksheet 5, Worksheet 6, 5 More. Similar: Multiply fractions denominator 2 12, Multiply fractions denominators 2.

The heart of Fractions Of Whole Numbers Worksheet Pdf lies in growing number sense - a deep comprehension of numbers' meanings and relationships. They motivate exploration, inviting students to investigate math operations, understand patterns, and unlock the mysteries of sequences. Through thought-provoking challenges and practical puzzles, these worksheets become gateways to refining reasoning skills, nurturing the analytical minds of budding mathematicians.
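The "fraction of a whole number" exercise described above boils down to a single multiplication. A quick sketch using Python's standard-library fractions module (the numbers are arbitrary examples):

```python
from fractions import Fraction

# "Three quarters of 20": multiply the whole number by the fraction.
print(Fraction(3, 4) * 20)   # 15

# A unit fraction of a whole number: one fifth of 35.
print(Fraction(1, 5) * 35)   # 7

# When the result is not a whole number, Fraction keeps it exact.
print(Fraction(2, 3) * 10)   # 20/3
```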
From Theory to Real-World Application

Fractions Of A Whole Worksheet

Finding fractions, fraction drills, multiplying fractions by whole numbers. Common Core Standards: Grade 4, Number & Operations - Fractions, CCSS Math Content 4.NF.B.4. Download Fractions of a Whole Worksheet PDFs: these math worksheets should be practiced regularly and are free to download in PDF format (Fractions of a Whole Worksheet 1, Worksheet 2, Worksheet 3).

Fractions Of Whole Numbers Worksheet Pdf serve as conduits bridging theoretical abstractions with the palpable realities of day-to-day life. By weaving practical scenarios into mathematical exercises, students witness the relevance of numbers in their surroundings. From budgeting and measurement conversions to understanding statistical information, these worksheets empower students to wield their mathematical prowess beyond the confines of the classroom.

Diverse Tools and Techniques

Flexibility is inherent in Fractions Of Whole Numbers Worksheet Pdf, offering a toolbox of instructional tools to cater to diverse learning styles. Visual aids such as number lines, manipulatives, and digital resources act as companions in visualizing abstract concepts. This varied approach ensures inclusivity, accommodating students with different preferences, strengths, and cognitive profiles.

Inclusivity and Cultural Relevance

In an increasingly diverse world, Fractions Of Whole Numbers Worksheet Pdf embrace inclusivity. They transcend cultural borders, integrating examples and problems that resonate with students from varied backgrounds. By including culturally relevant contexts, these worksheets foster an atmosphere where every learner feels represented and valued, enriching their connection with mathematical ideas.

Crafting a Path to Mathematical Mastery

Fractions Of Whole Numbers Worksheet Pdf chart a course towards mathematical fluency.
They instill perseverance, critical reasoning, and analytical skills - vital qualities not just in mathematics but in many aspects of life. These worksheets encourage students to navigate the intricate terrain of numbers, nurturing a profound appreciation for the elegance and logic inherent in mathematics.

Embracing the Future of Education

In an age marked by technological innovation, Fractions Of Whole Numbers Worksheet Pdf seamlessly adapt to digital platforms. Interactive interfaces and digital resources enhance traditional learning, offering immersive experiences that go beyond spatial and temporal limits. This blend of traditional techniques with technological developments heralds a promising era in education, fostering a more vibrant and engaging learning environment.

Final Thought: Embracing the Magic of Numbers

Fractions Of Whole Numbers Worksheet Pdf represent the magic inherent in mathematics - an enchanting journey of exploration, discovery, and mastery. They go beyond conventional pedagogy, functioning as catalysts for sparking the flames of curiosity and inquiry. With Fractions Of Whole Numbers Worksheet Pdf, students begin an odyssey, unlocking the enigmatic world of numbers - one problem, one solution, at a time.
Multiplying Fractions With Whole Numbers 4th Grade Math Worksheets | Whole Numbers Worksheets For Grade 6 PDF

Check more of Fractions Of Whole Numbers Worksheet Pdf below:

Dividing Fractions By Whole Numbers Worksheet Template Tips And Reviews
Free Multiplying Fractions With Whole Numbers Worksheets
Fractions Of Whole Numbers Worksheet For 4th 5th Grade Lesson Planet
Fraction Sheets For Grade 4 Education For Kids
Whole Number Fractions Worksheet
Multiplying Fractions By Whole Numbers Worksheets Pdf Kidsworksheetfun
Fractions Worksheets For Grades 1 6 K5 Learning
Fraction Worksheets Pdf Downloads MATH ZONE FOR KIDS
Fraction Of A Whole Number
Fraction Times Whole Number Worksheets

Fraction worksheets for grades 1-6 start with the introduction of the concepts of equal parts, parts of a whole, and fractions of a group or set, and proceed to reading and writing fractions, then adding, subtracting, multiplying, and dividing proper and improper fractions and mixed numbers.

Fractions Of Whole Numbers Worksheet, Randomly Generated: here is our random worksheet generator for fractions-of-whole-number worksheets. Using this generator will let you create your own worksheets to find unit fractions of a range of whole numbers, find proper fractions of a range of whole numbers, and choose exactly which fractions you want to use.
The Best Bolt Grain for Crossbows

This site contains affiliate links for which I may be compensated.

The best grain weight for crossbow bolts is 400-450 grains (26-29 grams). Heavier bolts drop significantly more, while lighter ones have less penetrating power. You can go as low as 375 grains (24.3 grams), but your accuracy will suffer. You also want to use bolts 18 inches (45.7 cm) or longer to avoid wobbling.

This article explores the relationship between arrow weight, kinetic energy, and crossbow performance. Think of it as a "choosing the right tool for the job" guide, but for crossbow hunters.

How Important Is Arrow Weight?

Arrow weight is extremely important. It's one of the three main factors that determine your crossbow's performance, alongside draw weight and arrow length. But of the three, arrow weight has the most potential to improve your hunting, because it directly influences kinetic energy and accuracy.

Arrow weight affects kinetic energy in a direct relationship: at a given speed, the heavier the arrow, the more kinetic energy it carries, which makes it an effective tool for hunting. Heavier arrows also tend to be more accurate than lighter arrows, since they are less affected by wind and other external factors that can push an arrow off course.

If you are looking to maximize the performance of your crossbow, arrow weight is an excellent place to start.

How Much Kinetic Energy Do You Need?

For smaller animals, you need as little as 30 ft·lbs (40.7 J) of kinetic energy. For larger animals like deer, you will need at least 50 ft·lbs (67.8 J). If you are planning on hunting big game, like elk or bear, you will need at least 75 ft·lbs (102 J).

An arrow's kinetic energy is directly related to its effectiveness on game animals: 50 ft·lbs (67.8 J) means a more powerful impact than 30 ft·lbs (40.7 J), which leads to better penetration and a quicker kill.

Related: 3 Most Accurate Fixed Blade Broadheads.
How To Calculate Kinetic Energy

An arrow's kinetic energy is determined by its weight and speed. You can calculate it using the following formula:

KE (ft·lbs) = (m × v²) / 450,240

• m = mass of the arrow (in grains)
• v = velocity of the arrow (in feet per second)

The constant 450,240 folds together the grains-to-pounds conversion (7,000 grains per pound) and twice the gravitational acceleration.

For example, let's say you are shooting an arrow that weighs 400 grains (26 g) at a velocity of 300 feet per second (91.4 m/s). Using the formula above, we can calculate that this arrow carries roughly 80 ft·lbs (108 J) of kinetic energy.

How Arrow Weight Affects Velocity

Now that you know how vital arrow weight is, you might be tempted to choose the heaviest arrows possible. But it's not that simple. The weight of your arrows also has a direct relationship with another important factor: velocity.

Arrow weight affects velocity inversely: the heavier the arrow, the slower it leaves the bow, because the string transfers roughly the same energy to a heavier projectile. For example, increase the 400-grain (26 g) arrow above to 500 grains (32.4 g), and the same crossbow will launch it noticeably slower; conversely, to deliver the same 80 ft·lbs of kinetic energy, the 500-grain arrow only needs about 268 feet per second (81.7 m/s).

As you can see, increasing the weight of your arrows will decrease their velocity. And a slower, more arched trajectory is less forgiving of range-estimation errors, which hurts practical accuracy. So, if you are looking for the most accurate arrows, you will need to find a balance between weight and velocity.
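The conventional archery formula behind these numbers is KE (ft·lbs) = m × v² / 450,240, with mass in grains and velocity in feet per second. A quick sketch in Python (the function name is just for illustration):

```python
def kinetic_energy_ftlbs(mass_grains, velocity_fps):
    # Conventional archery constant: 7000 gr/lb * 2 * ~32.16 ft/s^2.
    return mass_grains * velocity_fps ** 2 / 450240

# A 400-grain bolt at 300 fps:
print(round(kinetic_energy_ftlbs(400, 300), 1))   # ~80.0 ft·lbs

# Roughly the same energy from a 500-grain bolt at about 268 fps:
print(round(kinetic_energy_ftlbs(500, 268), 1))   # ~79.8 ft·lbs
```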
For example, if you are shooting a crossbow with a velocity of 400 feet per second (121.9 m/s), you can get away with shooting arrows that weigh as much as 560 grains (36.3 g). To find the perfect arrow weight for your crossbow, you will need to experiment with different weights and see what works best. Start with arrows on the heavy side and work your way down until you find the weight that gives you the best accuracy. Parting Shot Arrow weight is an important factor in crossbow hunting. It affects the arrow’s kinetic energy, determining its effectiveness at killing game animals. You’ll need to find the right balance between arrow weight and velocity to ensure accuracy. The idea is to experiment with different weights until you find the one that gives you the best results. For more, check out How Far Can a Bow Shoot? | Ranges by Draw Weight (With Chart). Jim James is a published author and expert on the outdoors and survivalism. Through avid research and hands-on experience, he has gained expertise on a wide variety of topics. His time spent at college taught him to become really good at figuring out answers to common problems. Often through extensive trial and error, Jim has continued to learn and increase his knowledge of a vast array of topics related to firearms, hunting, fishing, medical topics, cooking, games/gaming, and other subjects too numerous to name. Jim has been teaching people a wide variety of survivalism topics for over five years and has a lifetime of experience fishing, camping, general survivalism, and anything in nature. In fact, while growing up, he often spent more time on the water than on land! He has degrees in History, Anthropology, and Music from the University of Southern Mississippi. He extensively studied Southern History, nutrition, geopolitics, the Cold War, and nuclear policy strategies and safety as well as numerous other topics related to the content on survivalfreedom.com.
Using a Median To Predict - Do My GRE Exam

In probability and statistics, the median is the value that divides a data collection - such as a sample or a probability distribution - into an upper half and a lower half. For a given data set, it can be described as the center point of that data: for a probability distribution, half of the probability lies on either side of the median.

For a symmetric distribution, the mean (the average value of a set of data points) is equal to the median. The most familiar example is the normal distribution.

A normal distribution is symmetric about its mean, so the data does not lean to one side of the center. With a sample of size n drawn from a normal (bell-shaped) curve, the values cluster evenly around the mean, with no systematic deviation of one side from the other. When the sample size is large enough that the data approximate a normal distribution, the median is the value in the middle of that distribution. For example, if the mean of a normal distribution is 100, the median is also 100: the median is the 50th percentile, not the value 50.

When the sample size is small, the observed data are often unevenly distributed. Such a skewed distribution is not symmetric: particular values occur more often on one side of the mean than the other, and the mean and the median no longer coincide. In a right-skewed sample the mean is pulled above the median, and in a left-skewed sample it falls below.

When the sample size is large enough to produce an approximately normal distribution, the mean and the median are likely to lie close to each other, with roughly half of the values just below the center and half just above it.
The number of values that lie between the median and the mean, or between the middle of the sample size and the mean of the sample size, will be less than 1.5%. If the sample size is large enough to produce a normal distribution, then the mean is likely to be greater than or equal to the median, and the median lies between the mean. The number of values that lie within the range of the mean and median will be greater than the sample size. These values will fall outside the range of the mean and the range of the median, but are outside the range of the mean of the data. For example, if the sample size is five and the mean is 50, then the mean is likely to be between two and three standard deviations away from the mean. However, when the sample size is five, and the mean is five, then the mean is likely to lie between three and four standard deviations from the mean. With a larger sample size, the distribution of the data will lie closer to the mean of the sample size. This is called a normal distribution with the mean being smaller than the median. With a smaller sample size, the range of the data will lie closer to the median than to the mean. This is called a normal distribution with the mean being larger than the median. Because of these differences in distributions, the median and the mean do not always coincide, even for a large sample size. A relatively small sample can still lie close to one of the mean values, while the mean is farther from the mean. For example, the sample size can be large enough so that the mean is equal to the median or even slightly higher, while still remaining too small to show any sign of a mean line on the curve. A person who uses a median to predict the value of something is using a normal distribution, where there is a normal distribution of the mean and the range. Using a median to predict something else is more difficult, where there is an irregular distribution of the mean and the range.
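The contrast between mean and median is easiest to see numerically. A short sketch using Python's standard statistics module, with an arbitrary right-skewed sample:

```python
from statistics import mean, median

# One large outlier (300) drags the mean upward, while the median
# stays near the bulk of the data.
incomes = [30, 32, 35, 38, 40, 41, 45, 300]

print(mean(incomes))    # 70.125: pulled up by the outlier
print(median(incomes))  # 39.0: midpoint of the two middle values, 38 and 40
```

This is why the median is often the better "typical value" to report for skewed data.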
Multiplying Mixed Numbers Worksheets Multiplying Mixed Numbers Worksheets serve as fundamental tools in the world of mathematics, offering a structured yet flexible platform for students to explore and grasp numerical principles. These worksheets provide a structured approach to understanding numbers, nurturing a strong foundation upon which mathematical proficiency grows. From the simplest counting exercises to the intricacies of sophisticated calculations, Multiplying Mixed Numbers Worksheets cater to learners of varied ages and skill levels. Unveiling the Essence of Multiplying Mixed Numbers Worksheets Multiplying Mixed Numbers Worksheets - Multiplying Mixed Numbers: Turn the two mixed numbers into fractions and proceed to find the product as you normally would. Some mixed numbers have common factors, so don't forget to simplify your product. Grab the Worksheet: Multiplying Mixed Numbers With Word Problems. Fractions worksheets: Multiplying mixed numbers by mixed numbers. Below are six versions of our grade 5 math worksheet on multiplying mixed numbers together. These worksheets are pdf files: Worksheet 1, Worksheet 2, Worksheet 3, Worksheet 4, Worksheet 5, Worksheet 6, 5 More. At their core, Multiplying Mixed Numbers Worksheets are vehicles for conceptual understanding. They encapsulate a myriad of mathematical concepts, leading learners through the maze of numbers with a series of engaging and deliberate exercises.
These worksheets transcend the limits of typical rote learning, motivating active interaction and fostering an intuitive understanding of numerical relationships. Supporting Number Sense and Reasoning Multiplying Mixed Numbers J Worksheet For 4th 6th Grade Lesson Planet Multiplying Mixed Numbers with Whole Numbers Worksheets: Recalibrate kids' practice with our free multiplying mixed numbers with whole numbers worksheets. While multiplying a mixed number by a whole number can trip up many children, it's not as big of a deal as they think it is. These multiplying mixed numbers by mixed numbers worksheets will help to visualize and understand place value and number systems. 5th and 6th grade students will learn basic multiplication methods with mixed numbers and can improve their basic math skills with our free printable worksheets. The heart of Multiplying Mixed Numbers Worksheets lies in cultivating number sense, a deep comprehension of numbers' meanings and interconnections. They encourage exploration, inviting learners to dissect math operations, figure out patterns, and unlock the mysteries of sequences. Through thought-provoking problems and logical challenges, these worksheets become gateways to developing reasoning skills, nurturing the analytical minds of budding mathematicians.
From Theory to Real-World Application Multiply Fractions With Mixed Numbers Worksheet Kidsworksheetfun These multiply mixed numbers worksheets engage children's cognitive processes and help enhance their creativity and memory retention skills. Get started now to make learning interesting for your child. Worked example: Convert the mixed numbers to improper fractions: 1 1/2 = 3/2 and 2 1/5 = 11/5. Multiply the fractions (multiply the top numbers, multiply the bottom numbers): 3/2 × 11/5 = 33/10. Convert back to a mixed number: 33/10 = 3 3/10. If you are clever you can do it all in one line like this: 1 1/2 × 2 1/5 = 3/2 × 11/5 = 33/10 = 3 3/10. One More Multiplying Mixed Numbers Worksheets act as channels bridging academic abstractions with the tangible realities of day-to-day life. By infusing practical scenarios into mathematical exercises, students witness the relevance of numbers in their surroundings. From budgeting and measurement conversions to interpreting statistical data, these worksheets empower students to apply their mathematical knowledge beyond the confines of the classroom. Diverse Tools and Techniques Versatility is inherent in Multiplying Mixed Numbers Worksheets, which employ a collection of instructional tools to accommodate diverse learning styles. Visual aids such as number lines, manipulatives, and digital resources act as companions in visualizing abstract principles. This diverse strategy ensures inclusivity, accommodating learners with different preferences, strengths, and cognitive styles. Inclusivity and Cultural Relevance In an increasingly diverse world, Multiplying Mixed Numbers Worksheets embrace inclusivity. They go beyond cultural boundaries, incorporating examples and problems that resonate with students from varied backgrounds. By incorporating culturally relevant contexts, these worksheets cultivate an environment where every student feels represented and valued, enriching their connection with mathematical ideas.
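The worksheet's worked example, 1 1/2 × 2 1/5 = 3 3/10, can be verified with Python's fractions module. The helper to_improper is a small function written just for this sketch:

```python
from fractions import Fraction

def to_improper(whole, numerator, denominator):
    """Convert a mixed number to an improper fraction."""
    return Fraction(whole * denominator + numerator, denominator)

# 1 1/2 x 2 1/5  ->  3/2 x 11/5  =  33/10
product = to_improper(1, 1, 2) * to_improper(2, 1, 5)
assert product == Fraction(33, 10)

# Convert back to a mixed number: 33/10 = 3 3/10
whole, remainder = divmod(product.numerator, product.denominator)
assert whole == 3 and Fraction(remainder, product.denominator) == Fraction(3, 10)
```

Fraction reduces products automatically, so the "simplify your product" step the worksheet mentions happens for free.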
Crafting a Path to Mathematical Mastery Multiplying Mixed Numbers Worksheets chart a course toward mathematical fluency. They instill perseverance, critical thinking, and problem-solving abilities, essential attributes not just in mathematics but in many facets of life. These worksheets encourage students to navigate the intricate terrain of numbers, nurturing a profound appreciation for the beauty and logic inherent in them. Embracing the Future of Education In an era marked by technological innovation, Multiplying Mixed Numbers Worksheets adapt seamlessly to digital platforms. Interactive interfaces and digital resources augment traditional learning, offering immersive experiences that transcend spatial and temporal limits. This blend of established techniques with technological developments promises an encouraging era in education, fostering a more vibrant and engaging learning environment. Conclusion: Embracing the Magic of Numbers Multiplying Mixed Numbers Worksheets embody the magic inherent in mathematics: an enchanting journey of exploration, discovery, and proficiency. They transcend standard pedagogy, functioning as catalysts for sparking the flames of curiosity and inquiry. Through Multiplying Mixed Numbers Worksheets, learners embark on an odyssey, unlocking the enigmatic world of numbers, one problem, one solution, at a time.
Multiply Mixed Numbers Worksheets Check more Multiplying Mixed Numbers Worksheets below: Fractions Worksheets Printable Fractions Worksheets For Teachers; 16 Multiplying Mixed Numbers Worksheet Reginalddiepenhorst; Writing Fractions As Mixed Numbers; Grade 5 Fractions Worksheets Multiplying Mixed Numbers K5 Learning; Multiplying And Dividing; Multiplying Mixed Numbers By Fractions 5th Grade Maths Worksheets; Multiplying Mixed Numbers Worksheets Teaching Resources; Multiplying Mixed Numbers Worksheet Pdf Juvxxi; Worksheets For Fraction Multiplication. Fractions worksheets: Multiplying mixed numbers by mixed numbers. Below are six versions of our grade 5 math worksheet on multiplying mixed numbers together. These worksheets are pdf files: Worksheet 1, Worksheet 2, Worksheet 3, Worksheet 4, Worksheet 5, Worksheet 6, 5 More. Multiplying mixed numbers, Grade 5 Fractions Worksheet: find the product of each pair of mixed numbers (the printable lists nine exercises; the individual problems did not survive extraction).
Goat Farm Profit Calculator [Updated 2024] - BizarreMoney Welcome to our free goat farm profit calculator! This comprehensive tool is designed to help goat farmers estimate their annual profits by inputting various parameters related to milk production, meat production, kid production, and costs. Whether you’re a seasoned goat farmer or just starting out, this calculator will provide valuable insights into the profitability of your farm. How to Use the Goat Farm Profit Calculator? Using our goat farm profit calculator is simple and straightforward. Follow these steps to get an accurate estimate of your annual profit: 1. Enter the Number of Goats: Input the total number of goats on your farm. 2. Enter the Number of Bucks: Specify the number of bucks (male goats) on your farm. 3. Milk Production per Goat: Enter the average daily milk production per goat in liters. On average, goats produce between 1 and 3 liters per day. 4. Milk Price per Liter: Enter the price at which you sell your milk per liter. 5. Feed Cost per Goat per Day: Enter the daily feed cost per goat. 6. Labor Cost per Month: Specify your monthly labor costs. 7. Veterinary Cost per Year: Enter your annual veterinary costs. 8. Other Costs per Month: Include any other monthly costs associated with your farm. 9. Weight Gain per Day (kg): Enter the average daily weight gain of your goats in kilograms. 10. Meat Price per kg: Specify the price at which you sell goat meat per kilogram. 11. Kidding Rate (kids per doe per year): Enter the average number of kids per doe per year. Typically, this ranges from 1.5 to 2 kids. 12. Kid Price (per kid): Enter the price at which you sell each kid. 13. Kid Survival Rate (%): Input the survival rate of kids in percentage. 14. Other Revenue Sources (annual): Include any other annual revenue sources. 15.
Initial Investment Cost: Enter the total initial investment cost for setting up your goat farm. 16. Depreciation Period (years): Specify the period over which you depreciate your initial investment. Average Values for Fields To help you get started, here are some average values you can use for each field: • Number of Goats: 50 • Number of Bucks: 2 • Milk Production per Goat (liters/day): 2 • Milk Price per Liter: $1.50 • Feed Cost per Goat per Day: $0.50 • Labor Cost per Month: $500 • Veterinary Cost per Year: $150 • Other Costs per Month: $100 • Weight Gain per Day (kg): 0.2 • Meat Price per kg: $10 • Kidding Rate (kids per doe per year): 1.8 • Kid Price (per kid): $100 • Kid Survival Rate (%): 90 • Other Revenue Sources (annual): $500 • Initial Investment Cost: $10,000 • Depreciation Period (years): 10 These values are based on typical averages and can be adjusted according to your specific farm conditions. Frequently Asked Questions (FAQs) What is the lactation period for goats? The lactation period for goats typically ranges from 180 to 300 days. For the purpose of this calculator, we use an average value of 240 days. How is the annual milk revenue calculated? The annual milk revenue is calculated by multiplying the total milk production by the milk price per liter. Total milk production is determined by the number of goats, average milk production per goat per day, and the lactation period. How do you calculate the annual meat revenue? The annual meat revenue is calculated by multiplying the total weight gain by the meat price per kilogram. Total weight gain is determined by the number of goats, average weight gain per day, and the number of days in a year. What factors are considered in the annual kid revenue? The annual kid revenue is calculated by multiplying the total number of kids by the kid price. The total number of kids is determined by the number of does, kidding rate, and kid survival rate. How are the total annual costs calculated? 
The total annual costs are the sum of feed costs, veterinary and medicine costs, labor costs, and other miscellaneous costs. These are calculated based on the number of goats and the respective costs per goat or per month. What is the depreciation of initial investment? The annual depreciation is calculated by dividing the initial investment cost by the depreciation period in years. This helps in spreading out the cost of the initial investment over its useful life. How is the annual profit determined? The annual profit is determined by subtracting the total costs (including annual depreciation) from the total annual revenue. By following these guidelines and using our free goat farm profit calculator, you can get a clear understanding of your farm’s profitability and make informed decisions to optimize your operations. Leave a Comment
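As a sketch of how the FAQ formulas above combine, here is a minimal Python version using the average values listed earlier. It assumes a 240-day lactation period and a 365-day year for feed and weight gain, as the FAQ describes, and it additionally assumes that every goat other than a buck is a doe; annual_profit and its parameter names are invented for this illustration:

```python
# Hypothetical helper mirroring the calculator's FAQ formulas; the
# parameter names follow the input fields listed above.
def annual_profit(goats, bucks, milk_per_day, milk_price, feed_per_day,
                  labor_per_month, vet_per_year, other_per_month,
                  gain_per_day, meat_price, kidding_rate, kid_price,
                  survival_pct, other_revenue, investment,
                  depreciation_years, lactation_days=240):
    does = goats - bucks  # assumption: every non-buck is a doe
    milk_revenue = goats * milk_per_day * lactation_days * milk_price
    meat_revenue = goats * gain_per_day * 365 * meat_price
    kid_revenue = does * kidding_rate * (survival_pct / 100) * kid_price
    total_revenue = milk_revenue + meat_revenue + kid_revenue + other_revenue
    total_costs = (goats * feed_per_day * 365   # feed
                   + labor_per_month * 12       # labor
                   + vet_per_year               # veterinary
                   + other_per_month * 12)      # miscellaneous
    depreciation = investment / depreciation_years
    return total_revenue - total_costs - depreciation

# With the average values listed above, the estimate is about $63,301.
profit = annual_profit(50, 2, 2, 1.50, 0.50, 500, 150, 100,
                       0.2, 10, 1.8, 100, 90, 500, 10000, 10)
assert abs(profit - 63301) < 0.01
```

Adjusting any single input, say the milk price, immediately shows its effect on the bottom line, which is the main use of a calculator like this.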
Definitions of Probability Probability is a measure of uncertainty. There are three different approaches to define the probability. Mathematical Probability (Classical / a priori Approach) If the sample space S of an experiment is finite with all its elements being equally likely, then the probability for the occurrence of any event A of the experiment is defined as P(A) = n(A)/n(S). The above definition of probability was used until the introduction of the axiomatic method. Hence, it is also known as the classical definition of probability. Since this definition enables the probability to be calculated even without conducting the experiment, using only prior knowledge about the experiment, it is also called a priori probability. Example 8.5 What is the chance of getting a king in a draw from a pack of 52 cards? In a pack there are 52 cards [n(S) = 52], which is shown in fig. 8.2. Let A be the event of choosing a card which is a king, in which the number of king cards is n(A) = 4. Therefore the probability of drawing a king is P(A) = n(A)/n(S) = 4/52 = 1/13. Example 8.6 A bag contains 7 red, 12 blue and 4 green balls. What is the probability that 3 balls drawn are all blue? From the fig. 8.3 we find that: Total number of balls = 7+12+4 = 23 balls. Out of 23 balls, 3 balls can be selected in n(S) = 23C[3] ways. Let A be the event of choosing 3 balls which are all blue. The number of possible ways of drawing 3 out of 12 blue balls is n(A) = 12C[3]. Therefore P(A) = 12C[3]/23C[3] = 220/1771 = 20/161. Example 8.7 A class has 12 boys and 4 girls. Suppose 3 students are selected at random from the class. Find the probability that all are boys.
From the fig 8.4, we find that: Total number of students = 12+4 = 16. Three students can be selected out of 16 students in n(S) = 16C[3] ways, and 3 boys can be selected out of the 12 boys in n(A) = 12C[3] ways. Therefore P(all boys) = 12C[3]/16C[3] = 220/560 = 11/28. Statistical Probability (Relative Frequency / a posteriori Approach) If the random experiment is repeated n times under identical conditions and the event A occurs n(A) times, then the probability for the occurrence of the event A can be defined (von Mises) as P(A) = lim (n→∞) n(A)/n. Since computation of probability under this approach is based on the empirical evidence for the occurrence of the event, it is also known as relative frequency or a posteriori probability.
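The classical formula P(A) = n(A)/n(S) used in the examples above can be checked with Python's math.comb, which computes the nCr counts directly:

```python
from fractions import Fraction
from math import comb

# Example 8.5: P(king) = 4/52 = 1/13
assert Fraction(4, 52) == Fraction(1, 13)

# Example 8.6: P(3 blue) = 12C3 / 23C3 = 220/1771 = 20/161
assert Fraction(comb(12, 3), comb(23, 3)) == Fraction(20, 161)

# Example 8.7: P(all 3 are boys) = 12C3 / 16C3 = 220/560 = 11/28
assert Fraction(comb(12, 3), comb(16, 3)) == Fraction(11, 28)
```

Using Fraction keeps the probabilities exact, matching the hand-reduced answers rather than decimal approximations.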
Excel Formula for Finding Value Based on Age and Score In this tutorial, we will learn how to write an Excel formula that finds the value in a scoring array based on age and score. This formula is useful when you have a table of scores associated with different ages and you want to retrieve the score for a specific age. We will use the VLOOKUP function, which is commonly used in Excel to perform lookups. The formula we will use is =VLOOKUP(A2, scoring_array, 2, TRUE). This formula takes the age as the input and searches for a matching age in the first column of the scoring array. Once a match is found, it returns the corresponding value from the second column of the scoring array. To use this formula, you need to have a scoring array that contains two columns: the first column should contain the ages, and the second column should contain the corresponding scores. It is important to note that the scoring array should be sorted in ascending order by age for the VLOOKUP function to work correctly. If an exact match for the age is not found in the scoring array, the VLOOKUP function will return the value for the closest age that is less than the specified age. This is known as an approximate match. This behavior is controlled by the last argument of the VLOOKUP function, which is set to TRUE. Let's consider an example to understand how the formula works. Suppose we have a scoring array with the following data:
| A | B |
| 20 | 5 |
| 30 | 8 |
| 40 | 9 |
| 50 | 7 |
| 60 | 6 |
If we have the age 35 in cell A2, the formula =VLOOKUP(A2, scoring_array, 2, TRUE) would return the value 8, which is the score corresponding to the closest age that is less than 35 (30). Similarly, if we have the age 55 in cell A2, the formula would return the value 7, which is the score corresponding to the closest age that is less than 55 (50). In conclusion, the VLOOKUP function in Excel can be used to find the value in a scoring array based on age and score.
By understanding how to use this formula, you can efficiently retrieve scores for specific ages in your data analysis tasks. An Excel formula: =VLOOKUP(A2, scoring_array, 2, TRUE) Formula Explanation This formula uses the VLOOKUP function to find the value in a scoring array based on age and score. Step-by-step explanation 1. The VLOOKUP function searches for a value in the first column of a table (scoring_array) and returns a value in the same row from a specified column (2 in this case). 2. The value to search for is specified as A2, which represents the age. 3. The scoring_array is the range of cells that contains the scoring data. It should have two columns: the first column should contain the ages, and the second column should contain the corresponding scores. 4. The last argument, TRUE, indicates that the VLOOKUP function should find an approximate match for the age if an exact match is not found. This means that if an exact match is not found, it will return the value for the closest age that is less than the specified age. For example, let's say we have the following scoring array:
| A | B |
| 20 | 5 |
| 30 | 8 |
| 40 | 9 |
| 50 | 7 |
| 60 | 6 |
If we have the age 35 in cell A2, the formula =VLOOKUP(A2, scoring_array, 2, TRUE) would return the value 8, which is the score corresponding to the closest age that is less than 35 (30). Similarly, if we have the age 55 in cell A2, the formula would return the value 7, which is the score corresponding to the closest age that is less than 55 (50). Note: The scoring_array should be sorted in ascending order by age for the VLOOKUP function to work correctly.
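For readers working outside Excel, the approximate-match behaviour of VLOOKUP can be imitated in Python with the standard bisect module. vlookup_approx is a name invented for this sketch; note that Excel's approximate match actually returns the row for the largest key less than or equal to the lookup value, so exact matches are found too:

```python
from bisect import bisect_right

# The scoring array from the article: first column ages (sorted
# ascending), second column scores.
scoring_array = [(20, 5), (30, 8), (40, 9), (50, 7), (60, 6)]
ages = [row[0] for row in scoring_array]

def vlookup_approx(age, table, keys):
    """Mimic =VLOOKUP(age, table, 2, TRUE): return the score for the
    largest key less than or equal to the lookup value."""
    i = bisect_right(keys, age) - 1
    if i < 0:
        raise ValueError("#N/A: lookup value below the first key")
    return table[i][1]

assert vlookup_approx(35, scoring_array, ages) == 8  # closest age <= 35 is 30
assert vlookup_approx(55, scoring_array, ages) == 7  # closest age <= 55 is 50
assert vlookup_approx(40, scoring_array, ages) == 9  # exact match
```

Like VLOOKUP with TRUE, this only works when the keys are sorted in ascending order, which is why the article stresses that requirement.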
Course after 12th standard for specific fields Confused about choosing a specific field after 12th standard? Check this ISC page to get a better idea of what to do from our experts. I would like to know the bachelor's course (or dual bachelor's + master's) to undertake after twelfth standard (field: Physics, Chemistry, and Maths) for the following fields - 1. Cosmology 2. Nanotechnology 3. Quantum physics/mechanics 4. Robotics If I were to go for research in the fields mentioned above, how should I proceed after the bachelor's course (or bachelor's + master's)? It would be very kind of you if you also mention a few of the top colleges/universities I could attend for a successful career. Almost all the branches you have mentioned are specialisations in Physics. So you should have Physics in your graduation. In addition to Physics, you have to have Mathematics also as a subject. You can take Chemistry or any other subject available along with the above two subjects in your graduation. After completing your graduation you have to select M.Sc Physics or Applied Physics, and in these courses you can look for the specialisation you prefer. You can select one subject from the list you have mentioned above and get a master's degree in that subject. You should have a good hold on Physics, and once you finish your PG, you can go for your doctorate degree in the subject of specialisation you choose. Please keep in mind that for all the above subjects Physics is a basic subject and you should have a sufficient hold on it. If you choose cosmology, you will be a cosmologist. For that, you can have a master's degree in Applied Physics with specialisation in cosmology. The following institutes are good for cosmology. 1. IISc Bangalore, 2. Andhra University, Visakhapatnam, AP 3. Osmania University, Hyderabad. 4. Punjab University, Patiala. 5. Indian Institute of Space Science and Technology, Trivandrum 6.
Institute of Astrophysics, Bangalore The above institutes are good for almost all the specialisations you have mentioned. always confident As you studied PCM in 12th grade and are not sure about the fourth main subject, whether it is Biology or Computer Science: for the fields you mentioned, Physics knowledge is very important. 1. Cosmology is the study of the universe. To get into Cosmology, you need to study Physics and Maths in your UG. 2. Nanotechnology involves natural science, and again Physics, Chemistry and Maths are to be in your UG. 3. Robotics is a good option as you have various degree options in it like Unified Robotics, Robotics Foundation, Robotic Engineering, etc. You can complete your UG with Physics and Maths as main subjects and then pursue your PG with any one specialization. When you are in UG, you will get an idea based on the current technology and you can choose your field accordingly. Jagannathan S ISC Member Since you are interested in having advanced knowledge in any one of the domains Robotics, Quantum Mechanics, Nanotechnology and Cosmology at a later stage, I would advise you to show your inner passion for both Physics and Mathematics. To understand the complexities of Physics, you should have conceptual clarity in Mathematics, especially in the areas of Calculus, Matrices, Linear Algebra, Complex Numbers, Vector Algebra, Infinite Series, Trigonometry etc., and you would be required to apply these while dealing with the advanced studies of Physics. The streams discussed above are extensive studies of Physics in different domains. In order to clarify these domains, you should be familiar with their definitions. Let us discuss the terms one by one - Cosmology is the detailed study of the branch of Astronomy involving the origin and evolution of the universe. It can be better understood with the application of higher Mathematics.
Nanotechnology relates to products using tiny materials, as used in electronic devices, catalysis, sensors, etc. Quantum Physics deals with extensive studies of Physics in the fields of Black Body Radiation, the Bohr Model for explaining spectral lines, the Stern-Gerlach Experiment, etc., with the application of the different branches of Mathematics discussed above. However, Robotics is an entirely different branch in the sense that it employs the basics of Computer Engineering, Mechanical and Electronic Engineering. If you are interested in going for a specialised course relating to Cosmology or Quantum Mechanics, take up the course of B.Sc with the combination of Physics, Chemistry and Mathematics, or still better, take up the course of Physics (Honours) from a reputed university, and for post graduation you can seek admission in the following institutions- 1) Tata Institute of Fundamental Research, Mumbai 2) Raman Research Institute, Bengaluru. 3) Indian Institute of Astrophysics, Bengaluru. 4) Physical Research Laboratory, Ahmedabad. 5) IIT Mumbai 6) IIT Chennai 7) IIT Kanpur etc. IITs provide a five year integrated course for the pursuit of both undergraduate and postgraduate levels. You will have to qualify in the entrance test of IIT to avail of this integrated course. During your post graduation stage, you need to take up the specialisation course either in cosmology or in quantum mechanics. Other institutes too have their own entrance tests to screen the deserving aspirants. You have mentioned some of the niche areas of science. Cosmology and Quantum Physics/Mechanics are relatively older ones, though a lot of research options are there in them as the field of their investigations is quite big. Nanotechnology and Robotics are modern streams where there is immense scope for research as well as career making in the near future.
It is good that you are having interest and inclination in these advanced areas of present day scientific pursuits, and for a science student it makes sense to try in these lines for robust scientific growth leading to a successful profession. There are some apex institutes in our country like TIFR (Tata Institute of Fundamental Research), IITs (Indian Institutes of Technology), IISc (Indian Institute of Science), PRL (Physical Research Laboratory), RRI (Raman Research Institute), IIA (Indian Institute of Astrophysics), IIST (Indian Institute of Space Science and Technology) etc. where one can try to get admission in the relevant courses, either graduation or combined graduation and post graduation. These institutions have foreign collaborations also and have research association programs in many advanced countries. Some of these institutes have B.Sc. & M.Sc. integrated programs. Most of them take students through JEE Advanced, but some also have their own tests. For example, TIFR takes its own entrance test. Incidentally, TIFR has two other institutes under its fold, ICTS and IISER, which are engaged in specific scientific research. One thing which I want to share with you is that after completing your dream education you might be working in a particular scientific line, but research and higher education in these advanced fields are very much interrelated and many things are commonly applied everywhere. So, at that stage you cannot differentiate a particular topic from another, as certain basic understandings required would be the same in all streams. The essence of this understanding is that we should not ignore or neglect any scientific observation or theory, as it could be of immense use at a later stage. When you actively engage in scientific research you might need inputs and crucial elements from everywhere, whether it is Mathematics or Physics or Chemistry. A true scientist has to respect all sorts of scientific data from different disciplines.
Only then he would be successful in his research projects. For cosmology and quantum physics most of the above institutes are ideal. For nanotechnology Amity Institute of Nanotechnology, Amrita Centre for Nanosciences, Jawaharlal Nehru Technological University etc are well known. For robotics one can try in Manipal Institute of Technology, Amity University, Osmania University, Guru Gobind Singh Indraprastha University, National Institute of Engineering, Mysore etc. Knowledge is power.
{"url":"https://www.indiastudychannel.com/experts/47271-course-after-12th-standard-for-specific-fields.aspx","timestamp":"2024-11-03T18:30:38Z","content_type":"text/html","content_length":"45551","record_id":"<urn:uuid:f831053a-a28f-4495-b1a7-380f32d88d91>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00451.warc.gz"}
8 square meter to square feet Understanding the Conversion between Square Meters and Square Feet Square meters and square feet are two commonly used units of area measurement. Understanding the conversion between these two units is crucial for various purposes, such as international real estate transactions, construction projects, and interior design. To convert square meters to square feet, you need to know the conversion ratio. One square meter is equal to approximately 10.764 square feet. This means that if you have the area measurement in square meters, you can multiply it by 10.764 to obtain the equivalent measurement in square feet. Conversely, to convert square feet to square meters, you divide the area measurement in square feet by 10.764. The Importance of Knowing the Conversion Ratio One of the key reasons why it is vital to understand the conversion ratio between square meters and square feet is for accurate measurement and comparison. In today’s globalized world, where information is readily available at our fingertips, it is crucial to be well-informed and knowledgeable about various units of measurements. This is particularly true for professionals in fields such as architecture, construction, interior design, and real estate, where precise measurements are essential. Knowing the conversion ratio between square meters and square feet allows professionals to accurately understand and interpret measurements from different sources. It enables them to seamlessly communicate and collaborate with colleagues or clients who may use different measurement systems. Without this understanding, misinterpretations, errors, and confusion can arise, leading to potentially costly mistakes in design, construction, or property transactions. Given the high stakes involved in these industries, it becomes imperative to grasp the conversion ratio and its significance in ensuring accurate measurement and calculation. 
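A quick sketch of the two conversions in Python, using the article's rounded ratio of 10.764 square feet per square meter (the exact factor is 10.7639 to four decimal places; the rounded value is kept here to match the text):

```python
SQFT_PER_SQM = 10.764  # conversion ratio used in the article

def sqm_to_sqft(sqm):
    """Convert square meters to square feet."""
    return sqm * SQFT_PER_SQM

def sqft_to_sqm(sqft):
    """Convert square feet to square meters."""
    return sqft / SQFT_PER_SQM

# The conversion named in the page title: 8 square meters
assert round(sqm_to_sqft(8), 3) == 86.112
# Converting back recovers the original area
assert abs(sqft_to_sqm(sqm_to_sqft(8)) - 8) < 1e-9
```

So 8 square meters is about 86.1 square feet, roughly the floor area of a small bedroom.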
Exploring the Concept of Square Meters Square meters, also known as the metric unit of area, is a common measurement used internationally. It is a versatile unit that is prominently used in many fields such as architecture, construction, and interior design. The concept of square meters refers to the area occupied by a two-dimensional figure with sides measuring one meter in length. This unit provides a precise and standardized way of calculating the size of a space or an object, making it an essential tool for accurate measurements in a variety of industries. When exploring the concept of square meters, it is crucial to understand that it is a metric unit derived from the International System of Units (SI). Unlike square feet, which is commonly used in the United States and other countries that follow the imperial system, square meters provide a more rational approach to measuring area. By utilizing a base unit of measurement, namely the meter, square meters eliminate the complexities and irregularities associated with the imperial system, making calculations more straightforward and consistent. This standardization is particularly valuable when dealing with scientific or technical measurements, where accuracy and precision are paramount. Understanding the Definition of a Square Foot The conversion between square meters and square feet is a crucial skill to possess, especially for those in the field of construction and real estate. To fully comprehend this conversion, it is essential to have a clear understanding of what a square foot represents. Defined as the area of a square with sides measuring one foot in length, a square foot is a unit of measurement primarily used in countries like the United States and Canada, where the imperial system is prevalent. In simpler terms, it symbolizes the area that can be covered by a square-shaped object with each side measuring one foot. 
When delving into the details of square feet, it is important to note that this unit is not restricted to squares alone. Rectangular areas can also be measured in square feet: multiply the length of one side in feet by the length of the adjacent side in feet to get the total area in square feet. Understanding this is fundamental, as it forms the basis for conversion between square meters and square feet.

Comparing the Size Difference between Square Meters and Square Feet

When measuring the size of a space, whether a room, a house, or a piece of land, it is essential to be clear about the unit of measurement being used. Two commonly used units of area are square meters and square feet; while both indicate the size of an area, there are significant differences between them.

The square meter, often abbreviated m², is the metric unit for measuring area. It is used in most countries around the world, with the United States a notable exception. Because it is based on the metric system, it provides a uniform and consistent measurement system, making accurate comparisons between different areas easier; it is also standard in scientific and engineering fields.

The square foot, abbreviated sq ft, is the unit commonly used in the United States and a few other countries. It belongs to the imperial system of measurement, alongside units such as inches, feet, and yards. This system can be more complex and less standardized, particularly for international audiences, but it remains the preference of many Americans through familiarity and historical usage.
Common Scenarios where the Conversion is Required

One common scenario where the conversion between square meters and square feet is required is the real estate industry. When potential buyers are looking for a new home, they need to understand the size of the property they are interested in. In some countries square meters are the preferred unit for property size, while in others, such as the United States, square feet are used. Being able to convert between the two units is therefore crucial for accurately assessing the size of a property.

The conversion is also commonly needed in construction and remodeling projects. Architects, contractors, and homeowners often rely on blueprints and floor plans measured in one unit but need to communicate those measurements in the other, whether determining the square footage of a room for flooring installation or calculating the square meters of a property for landscaping. Knowing how to convert between square meters and square feet is indispensable in these scenarios.
How Long Is 1 Million Seconds (And Why)?

Exact Answer: 11 Days 13 Hours 46 Minutes And 40 Seconds

Time is represented in many units: centuries, decades, years, months, weeks, days, hours, minutes, seconds, milliseconds, and microseconds, down to units as small as the zeptosecond. Though these units hold different values and serve different purposes, they all measure the same quantity, time, and so any one of them can be converted into another.

How Long Is 1 Million Seconds?

Humans form a mental picture of everything we encounter. When we think of a word, an image related to it pops into our minds: think of an apple, and an apple appears. The same goes for numbers. We perceive zero as a middle point on the number line and 1 as the smallest positive whole number. With numbers like a million, a billion, and a trillion, however, although these differ enormously from one another, our minds perceive them all simply as very large. Yet 1 million seconds is not what that perception suggests: the number seems big, but when this amount of time is converted into other units, it turns out to be quite small.
Here is a quick overview of how long 1 million seconds is when converted into different units of time using the appropriate conversion constants:

• Minutes: 1,000,000 ÷ 60 ≈ 16,666.7 minutes
• Hours: 1,000,000 ÷ 3,600 ≈ 277.8 hours
• Days: 1,000,000 ÷ 86,400 ≈ 11.57 days
• Weeks: 1,000,000 ÷ 604,800 ≈ 1.65 weeks

Why Is 1 Million Seconds That Long?

As discussed earlier, time has several units, and one unit can be converted to another by a simple method: the quantity being converted is either divided or multiplied by the conversion constant for that pair of units. To convert a smaller unit into a bigger one, divide by the conversion constant; to convert a bigger unit into a smaller one, multiply by it. These constants vary from unit to unit and follow from the calendar and the clock: one year is 365 days (366 in a leap year) or 12 months, one month is 28 to 31 days, one week is 7 days, one day is 24 hours, one hour is 60 minutes, and one minute is 60 seconds. Based on these conversion rates, although one million is a huge number, 1 million seconds works out to just 11 days 13 hours 46 minutes and 40 seconds.
The reason is that this is how the clock and the calendar read time. Timekeeping is periodic and is based upon periodic events in the universe: a year has about 365.25 days because that is how long Earth takes to complete one revolution around the Sun, and a day has 24 hours because that is roughly how long Earth takes to complete one rotation. Time is calculated on the basis of such cycles.
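The conversion chain described above can be sketched as a short Python function (the function name is illustrative). It peels off whole days, then hours, then minutes, using the constants 86,400 s/day, 3,600 s/hour, and 60 s/minute:

```python
def breakdown(total_seconds):
    """Split a number of seconds into (days, hours, minutes, seconds)."""
    days, rem = divmod(total_seconds, 86_400)   # 86,400 seconds per day
    hours, rem = divmod(rem, 3_600)             # 3,600 seconds per hour
    minutes, seconds = divmod(rem, 60)          # 60 seconds per minute
    return days, hours, minutes, seconds

# 1 million seconds -> 11 days 13 hours 46 minutes 40 seconds
print(breakdown(1_000_000))
```

Each `divmod` step is exactly the "divide by the conversion constant" rule from the text, with the remainder carried down to the next smaller unit.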
How to Calculate the Circumference

You will need the following values:
• R – the radius of the circle;
• D – the diameter of the circle;
• π – a constant (π ≈ 3.14).

Method 1. Given a circle of radius R, its circumference L is calculated as L = 2πR.
Example: the radius of the circle is R = 5 cm. Then the length is L = 2 × 3.14 × 5 = 31.4 cm.

Method 2. Given a circle with diameter D, then L = πD.
Example: the diameter of the circle is D = 10 cm. Then L = 3.14 × 10 = 31.4 cm.

The answers from the first and second methods are equal, because the radius is half the diameter of the circle.
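Both methods can be sketched in Python (the function names are illustrative). Note that `math.pi` is used instead of the rounded constant 3.14, so the results are slightly more precise than the article's 31.4 cm:

```python
import math

def circumference_from_radius(r):
    """Method 1: L = 2 * pi * R."""
    return 2 * math.pi * r

def circumference_from_diameter(d):
    """Method 2: L = pi * D."""
    return math.pi * d

# Both give the same answer because D = 2R.
print(round(circumference_from_radius(5), 2))     # radius 5 cm
print(round(circumference_from_diameter(10), 2))  # diameter 10 cm
```

With the full precision of `math.pi`, both calls return approximately 31.42 rather than the article's rounded 31.4.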
MATHEMATICS (MATH 27.01) - Catalog 2022-2023

MATH 003. Developmental Mathematics II. 3-6-0. Prerequisite: C or better in MATH 002 or satisfactory score on placement test. The real numbers and their properties, linear equations and inequalities, systems of equations, polynomials, fractional expressions and equations, exponents and radicals, quadratic equations, and functions and their graphs. (Credit earned in this course cannot be applied toward a degree.) (32.0104) MATH 100. College Algebra. 3-3-0. Co-registration is required in MATH 100L. Prerequisites: C or better in MATH 003 or C or better in MATH 115, or MATH ACT subscore of 19 or better or satisfactory score on placement test. Degree credit will be granted in only one of the following courses: Math 100, Math 101. Linear equations and inequalities, linear applications, systems of linear equations, quadratic equations and inequalities, absolute value equations and inequalities, radical equations, functions and graphs, polynomial and exponential and logarithmic functions. Credit in MATH 100 is equivalent to MATH 101. [LCCN: CMAT 1213] (27.0101) MATH 100L. College Algebra Lab. 2-0-3. Co-registration is required in MATH 100. A supplementary instruction/laboratory course to accompany MATH 100. S or U assigned upon completion of course. (Credit earned in this course cannot be applied toward a degree.) (27.0101) MATH 101. College Algebra. 3-3-0. Prerequisite: Grade of C or better in MATH 003, C or better in MATH 115, or Grade of D in MATH 100, or Math ACT subscore of 21 or better, or satisfactory score on placement test. Linear equations and inequalities, linear applications, systems of linear equations, quadratic equations and inequalities, absolute-value equations and inequalities, radical equations, functions and graphs, polynomial and exponential and logarithmic functions. For MATH 101 WWW (web), priority is given to students enrolling in MATH 101 for the first time. Credit in MATH 100 is equivalent to MATH 101.
[LCCN: CMAT 1213] (27.0101) MATH 102. Trigonometry. 3-3-0. Prerequisite: C or better in MATH 100 or MATH 101. Trigonometric ratios, circular functions and graphs, solutions of triangles, logarithmic and exponential functions, inverse functions, identities and equations, complex numbers, introduction to analytical geometry. [LCCN: CMAT 1223] (27.0101) MATH 106. Calculus with Business and Economic Applications. 3-3-0. Prerequisite: C or better in MATH 100 or MATH 101. Algebraic, exponential, and logarithmic functions, intuitive limits, derivatives, and applications of the derivative. (27.0101) MATH 108. Pre-Calculus. 4-4-0. Prerequisite: C or better in MATH 102 or minimum Math ACT score of 23. Inequalities, functions, theory of equations, exponential and logarithmic functions; trigonometric functions, analytic geometry. [LCCN: CMAT 1234] (27.0101) MATH 110. Mathematics for Elementary Teachers I. 3-3-0. Restricted to College of Education majors only. Prerequisite: C or better in MATH 100 or MATH 101. Logic and deductive reasoning; patterns, sequences, functions, and problem solving; introductory number theory; the real number system, informal and formal solutions of equations and inequalities. (27.0101) MATH 113. Honors Pre-Calculus. 4-4-0. Prerequisite: ACT math sub-score of 24 or higher. Permission of Honors Director and department head. (27.0101) MATH 114. Honors Trigonometry. 3-3-0. Prerequisite: 24 MATH ACT. Honors based investigation of trigonometric ratios, circular functions and graphs, solutions of triangles, inverse functions, identities and equations. (27.0101) MATH 115. Essentials of College Mathematics. 3-3-0. Co-registration is required in MATH 115L. Prerequisite: ACT mathematics sub-score of 18 or grade of C or better in MATH 002.
Essential elements of algebra and statistics, including exponents, radicals, algebraic expressions, ratio, proportion, linear equations, quadratic equations, descriptive statistics, combinations, permutations, and linear regression. (27.0101) MATH 115L. Essentials of College Mathematics Lab. 2-0-3. Co-registration is required in MATH 115. A supplemental instruction/laboratory course to accompany MATH 115. S or U assigned upon completion of course. (Credit earned in this course cannot be applied toward a degree.) (27.0101) MATH 116. Contemporary Mathematics and Quantitative Analysis I. 3-3-0. Coregistration is required in MATH 116L. Prerequisite: MATH ACT subscore of 18 or grade of C or better in MATH 002. Degree credit will be granted in only ONE of the following courses: MATH 116, MATH 117. This course applies basic college-level mathematics to real-life problems and is appropriate for students whose majors do not require college algebra. This course covers selected topics in reasoning, data analysis, financial mathematics, measurement, and applications of mathematics to everyday problem-solving. Credit in MATH 116 is equivalent to MATH 117. (27.0101) MATH 116L. Contemporary Mathematics and Quantitative Analysis I Lab. 2-0-3. Coregistration is required in MATH 116. A supplemental instruction/laboratory course to accompany MATH 116. S or U assigned upon completion of course. (Credit earned in this course cannot be applied toward a degree.) (27.0101) MATH 117. Contemporary Mathematics and Quantitative Analysis I. 3-3-0. Prerequisite: Grade of C or better in MATH 003, or 115, grade of D or better in MATH 100 or 101, or MATH ACT subscore of 19 or better, or satisfactory score on placement test. Degree credit will be granted in only one of the following courses: MATH 116, MATH 117. This course applies basic college-level mathematics to real-life problems and is appropriate for students whose majors do not require college algebra. 
This course covers selected topics in reasoning, data analysis, financial mathematics, measurement, and applications of mathematics to everyday problem-solving. Credit in MATH 116 is equivalent to MATH 117. [LCCN: CMAT 1103] (27.0101) MATH 118. Contemporary Mathematics and Quantitative Analysis II. 3-3-0. Prerequisite: Grade of D or better in MATH 116 or MATH 117. Continuation, extension, and applications of topics from MATH 117, including ratio, proportion, percent and percentages, modeling with algebraic functions, consumer mathematics, elementary graph theory, and probability/statistics. (27.0101) MATH 165. Calculus I. 5-6-0. Prerequisite: C or better in MATH 102 or MATH 108. Limits, derivatives and integrals of algebraic functions, applications of derivatives and integrals. [LCCN: CMAT 2115] MATH 166. Calculus II. 4-5-0. Prerequisite: C or better in MATH 165. Transcendental functions, derivatives, integrals, analytical geometry, infinite series, polar coordinates and vectors in the plane. [LCCN: CMAT 2124] (27.0101) MATH 210. Mathematics for Elementary Teachers II. 3-3-0. Prerequisite: C or better in MATH 110. Introductory probability, introductory statistics, plane figures, measurement, geometric constructions, area, perimeter, tessellations, similarity, congruence, coordinate geometry, mappings and transformations, space figures, volume, surface area, right triangle trigonometry. (27.0101) MATH 214. Introductory Statistics. 3-3-0. Prerequisites: C or better in MATH 100, 101, 115, 116, or 117. Organizing data, averages and variations, stem-and-leaf and box plots and other graphical presentations of data, conducting experiments, elementary probability theory, distributions, estimations, hypothesis testing, regression and correlation. [LCCN: CMAT 1303] (27.0101) MATH 261. Discrete Mathematics. 3-3-0. Prerequisite: MATH 106, 165, or permission of department head. 
Introduction to logic, set theory, number theory, graph theory, mathematical induction and recursion, groups and semi-groups, and Boolean algebra. MATH 261 cannot be used in place of MATH 358 or for satisfying prerequisite requirements for other mathematics courses. (27.0101) MATH 265. Calculus III. 4-4-0. Prerequisite: C or better in MATH 166. Vectors and parametric equations, partial derivatives, multiple integrals, derivatives and integrals of vector functions, introduction to linear algebra. (27.0101) MATH 301. Elementary Statistical Methods I. 3-3-0. Prerequisite: ENGL 102 and eligibility for MATH 165; or ENGL 102, and C or better in MATH 101, and C or better in at least one of MATH 102 or 106 or 108 or 214. Descriptive statistics, graphical presentation of data, trend and relationship, some probability distributions, central limit theorem, estimation, confidence interval, hypothesis testing, regression and correlation analyses, and non parametric tests. Emphasis on applications and statistical computer packages. (27.0501) MATH 313. Topics in Mathematics for the Humanities. 3-3-0. Prerequisite: Six hours of non-developmental MATH with C or better in each course. Selected mathematical excursions and topics in elementary number theory, algebra, geometry, and probability, with emphasis on liberal arts applications, appreciation, inductive thinking and discovery, mathematical modeling, pattern recognition, current technology, and the history of mathematics. Class discussion and exercises. Especially for the non-mathematics major. (27.0101) MATH 320. Mathematics for Middle School Teachers. 3-3-0. Prerequisites: C or better in MATH 101, 110, and 210. Number systems, number sense, operations, quantitative literacy, measurement; representation of functions and other algebraic structures; geometric modeling; elementary game theory; inductive, deductive, and inferential methods of problem-solving; elementary analysis. School site visits required. (27.0101) MATH 321.
Mathematics for Middle School Teachers Laboratory. 1-0-2. Co-requisite: MATH 320. Reinforces and applies concepts learned in MATH 320; emphasis on technology, communication, and the use of mathematics in diverse contexts. School site visits required. (27.0101) MATH 355. Differential Equations. 3-3-0. Prerequisite: C or better in MATH 166. Theory and application of ordinary differential equations. (27.0101) MATH 358. Foundations of Mathematics. 3-3-0. Prerequisite: C or better in MATH 166. Logic, sets, methods of mathematical proofs, relations, functions, mappings, ordered fields and their properties, axiomatization of number systems. (27.0101) MATH 360. Linear Algebra. 3-3-0. Prerequisites: C or better in both MATH 265 and MATH 358. The real number system, vectors, matrices, and linear equations, determinants, polynomials and complex numbers, vector spaces and linear transformations. (27.0101) *MATH 401. Theory of Probability. 3-3-0. Prerequisite: C or better in MATH 166. Elementary probability theory, random variables, discrete and continuous probability distributions, moments and moment generating functions, functions of random variables, sampling distributions, and the central limit theorem. Fa only. (27.0501) *MATH 402. Mathematical Statistics. 3-3-0. Prerequisites: C or better in Math 265, and C or better in MATH 401. Bivariate probability distributions, marginal and conditional distributions, conditional expectations, estimation, point estimators and methods of estimation, confidence interval, hypothesis testing, likelihood ratio tests, comparison of two means and two variances, linear models and estimation by method of least squares, non parametric tests. Sp only. (27.0501) *MATH 405. Numerical Analysis I. 3-3-0. Prerequisites: C or better in all of the following: MATH 265, 355, and 360. Numerical solution of equations and systems, convergence theorems, eigenvalue and eigenvector methods, interpolation and extrapolation. 
Attention to theory with emphasis on methods applicable to high-speed computation. Fa only. (27.0101) MATH 407. Mathematical Probability and Statistics. 3-4-0. Prerequisites: C or better in MATH 265. Course in the theory of statistics and probability based on set theory and calculus. Includes data analysis, discrete and continuous probability distributions, random sampling, and sampling distributions, regression analysis, parameter estimation, hypothesis testing, and analysis of variance. *MATH 423. Geometry. 3-3-0. Prerequisite: C or better in MATH 265 and 358. A development of traditional Euclidean and non Euclidean geometries. Fa only. (27.0101) *MATH 461. Optimization. 3-3-0. Prerequisite: C or better in MATH 360. Classical and modern techniques in constrained and unconstrained optimization of functions of several variables. Mathematical programming methods and an introduction to calculus of variations. (27.0301) *MATH 465. Modern Algebra I. 3-3-0. Prerequisites: C or better in MATH 358 and C or better in Math 360. Introductory concepts, axiomatic approach to the number system, general algebraic systems, groups. Sp only. (27.0101) *MATH 471. Elementary Topology. 3-3-0. Prerequisite: C or better in MATH 360. An informal and introductory study of topological spaces. (27.0101) MATH 481. Principles of Mathematical Analysis I. 3-3-0. Prerequisites: C or better in MATH 265 and 360. Number systems; completeness axiom; sequences and series of real numbers; functions of a single real variable; continuity and uniform continuity; differentiation; a systematic development of the Riemann integral. (27.0101) *MATH 482. Principles of Mathematical Analysis II. 3-3-0. Prerequisite: C or better in MATH 481. Three dimensional theory and applications; infinite series; conformal mappings; partial differential equations. (27.0101) MATH 485. Complex Analysis. 3-3-0. Prerequisites: C or better in all of the following: MATH 265, 355 and 358.
Complex numbers, analytic functions, elementary functions, mapping by elementary functions, integrals, power series. (27.0101) MATH 488. Topics in Mathematics. 3-3-0. Prerequisite: Permission of department head. Selected current topics in mathematics, especially relevant to educators. May be repeated for credit if content differs. (27.0101) MATH 491. Mathematical Models. 3-3-0. Prerequisites: C or better in all of the following: MATH 265, 355 and CMPS 135. The study of various types of mathematical models which arise in biology, management, economics, and physical and social sciences. (27.0301) MATH 495. Topics in Advanced Mathematics. 3-3-0. Prerequisite: Permission of department head. Selected current topics in mathematics. May be repeated for credit if content differs. (27.0101) MATH 499. Undergraduate Major Examination. 0-0-1. Must be scheduled during the final year. S is assigned upon taking the examination; otherwise the student receives a grade of U. (27.999) MATH 500. Preparation for Teaching Developmental Mathematics. 1-1-0. Prerequisite: Graduate assistant in the Department of Mathematics or permission of department head. This seminar course is designed to prepare graduate students, especially those with little formal training as educators, to assume instructional roles as teaching assistants and/or tutors in selected university mathematics courses. Areas of emphasis include facilitation of student learning, effective small-group teaching, instructional etiquette and management, and teaching portfolio development. S or U will be earned upon completion. (Credit earned in this course cannot be applied toward a degree.) (27.0501) MATH 507. Biostatistics. 3-3-0. Prerequisite: MATH 301, 402, or 407. The application of statistical methods and techniques to the study of living organisms and biological systems. Includes experimental design and data analysis, projection methods, descriptive and inferential statistics, and specific computer applications.
(26.1102) MATH 509. Logic and Foundations of Mathematics. 3-3-0. Prerequisites: MATH 265 and 358. Cornerstone course normally taken in first semester of graduate study. Developing and evaluating arguments and proofs, the use of various types of reasoning, methods of proof, making and investigating conjectures. (27.0101) MATH 510. Number-Theoretic and Discrete Structures. 3-3-0. Prerequisite or co-requisite: MATH 509. Primes, congruences, algebraic number theory, diophantine equations, and theory of algebraic equations. Applications of the theory of number systems to problem solving. Representation of phenomena via finite graphs, recursive relations, and combinatorial structures. (27.0101) MATH 511. Calculus and Analytic Structures. 3-3-0. Prerequisite or co-requisite: MATH 509. Formal exploration of continuity, limits, derivatives, integrals, sequences, series, basic differential equations, and introductory numerical analysis. Applications of concepts. (27.0101) MATH 512. Probability and Statistics. 3-3-0. Prerequisite: MATH 360, and either MATH 402 or MATH 407. Discrete and continuous probability distributions, measures of variability, estimation, hypothesis testing, prediction, introduction to stochastic modeling and operations research, simple and multiple linear regressions, measures of association and correlation, analysis of variance and its relationship to regression analysis. (27.0501) MATH 523. Geometric and Algebraic Structures. 3-3-0. Prerequisite or co-requisite: MATH 509. Examination of the complementary relationships between geometry and algebra, and among the structures in each discipline. Focuses on the interdependence among geometric and algebraic properties of objects. Spatial reasoning, non-Euclidean representations of curves and space, fractal geometry, calculus of higher dimensions. Representation of geometric structures and other phenomena via semigroups, groups, rings, and other algebraic constructs. (27.0101) MATH 530.
Introduction to Decision Theory. 3-3-0. Prerequisite: MATH 401 or MATH 407. Topics in decision theory with applications to real world problems. (27.0301) MATH 540. Applied Matrix Analysis. 3-3-0. Prerequisite: MATH 360. Vector spaces and transformations, eigensystems, quadratic forms. (27.0301) MATH 557. Applied Analysis I. 3-3-0. Prerequisite: MATH 358. Vectors; matrices; differential and integral calculus of functions of several variables; differential and integral vector calculus. MATH 558. Applied Analysis II. 3-3-0. Prerequisite: MATH 557. Functions of a complex variable; derivatives; integrals; analytic functions; Cauchy Riemann equations; Cauchy’s integral theorem and formula; power series. (27.0301) MATH 570. Mathematical Modeling and Problem Solving. 3-3-0. Prerequisite: MATH 355, and either MATH 402 or MATH 407. Use of previous course work to construct models for various problems in the sciences, managerial sciences, or other related areas. (27.0301) MATH 573. Topics in the History of Mathematics. 3-3-0. Prerequisite or co-requisite: MATH 509. Selected topics in the history of mathematics. A general survey of mathematics normally includes developments in geometry, algebra, number theory, and calculus as well as biographies of significant mathematicians and their contributions to mathematics and society. May be repeated for credit if content differs. No more than six hours may be counted towards a degree. (27.0101) MATH 577. Topics in Mathematics. 3-3-3. Prerequisite: Permission of department head. Selected current topics in mathematics especially relevant to professional development. May be repeated for credit if content differs. No more than a total of six hours from MATH 577 and/or MATH 588 may be counted as graduate semester content hours in the teaching discipline. (27.0101) Math 578. Research in Mathematics Education. 3-3-0. (Not for credit as mathematics content course). Prerequisite or corequisite: MATH 509. 
Study of basic methods in mathematics education research. Includes experience in research designs, data gathering, analysis, and interpretation. Addresses elements affecting curricular and research agendas in the teaching of mathematics. (27.0199) MATH 580. Topics in the School Mathematics Curriculum. 3-3-0. (Not for credit as mathematics content course). Prerequisite or corequisite: MATH 509. Practices, activities, and delivery methods related to curriculum development, problem solving, and critical thinking. The four focus areas are algebra, geometry, precalculus, and calculus. Standards and guidelines from professional mathematical and educational organizations are examined as rubrics for curriculum development. (27.0101) MATH 584. Technology and Communication in Mathematics. 3-3-0. Prerequisite: MATH 509. Capstone course normally taken in final semester of graduate study. Application of a variety of strategies and use of multiple sources of information and technology to solve problems. Students draw on previous course work as they conduct investigations and present mathematical ideas orally, in writing, and by demonstration. Includes formal and informal presentations in groups or individually. Presentations may occur at off-campus sites. (27.0101) MATH 588. Topics in Mathematics. 6-6-0. Prerequisite: Permission of department head. Selected current topics in mathematics. May be repeated for credit if content differs. No more than a total of six hours from MATH 577 and/or MATH 588 may be counted as graduate semester content hours in the teaching discipline. (27.0101) MATH 589. Topics in Graduate Mathematics. 3-3-0. Prerequisite: Permission of department head. Selected current topics in mathematics. May be repeated for credit if content differs. (27.0101) MATH 590. Topics in Graduate Mathematics. 3-3-0. Prerequisite: Permission of department head. Selected current topics in mathematics. May be repeated for credit if content differs. 
No student may apply more than six hours toward graduation. (27.0101) MATH 595. Master’s Comprehensive Examination. 0-0-4. Must be scheduled during final semester or session. S or U assigned upon completion of examination. (27.9999)
American Mathematical Society

Well-posedness for the Kadomtsev-Petviashvili II equation and generalisations

by Martin Hadac

Trans. Amer. Math. Soc. 360 (2008), 6555-6572

We show the local in time well-posedness of the Cauchy problem for the Kadomtsev-Petviashvili II equation for initial data in the non-isotropic Sobolev space $H^{s_1,s_2}(\mathbb {R}^2)$ with $s_1>-\frac 12$ and $s_2\geq 0$. On the $H^{s_1,0}(\mathbb {R}^2)$ scale this result includes the full subcritical range without any additional low frequency assumption on the initial data. More generally, we prove the local in time well-posedness of the Cauchy problem for the following generalisation of the KP II equation: \[ (u_t - |D_x|^\alpha u_x + (u^2)_x)_x + u_{yy} = 0, \quad u(0) = u_0, \] for $\frac 43<\alpha \leq 6$, $s_1>\max (1-\frac 34 \alpha ,\frac 14-\frac 38 \alpha )$, $s_2\geq 0$ and $u_0\in H^{s_1,s_2}(\mathbb {R}^2)$. We deduce global well-posedness for $s_1\geq 0$, $s_2=0$ and real valued initial data.
and Wagner Vieira Leite Nunes, On equations of KP-type, Proc. Roy. Soc. Edinburgh Sect. A 128 (1998), no. 4, 725–743. MR 1635416, DOI 10.1017/S0308210500021740
• Pedro Isaza and Jorge Mejía, Local and global Cauchy problems for the Kadomtsev-Petviashvili (KP-II) equation in Sobolev spaces of negative indices, Comm. Partial Differential Equations 26 (2001), no. 5-6, 1027–1054. MR 1843294, DOI 10.1081/PDE-100002387
• Pedro Isaza, Juan López, and Jorge Mejía, Cauchy problem for the fifth order Kadomtsev-Petviashvili (KPII) equation, Commun. Pure Appl. Anal. 5 (2006), no. 4, 887–905. MR 2246014, DOI 10.3934/
• B. B. Kadomtsev and V. I. Petviashvili, On the stability of solitary waves in weakly dispersing media, Sov. Phys., Dokl. 15 (1970), 539–541 (English. Russian original).
• Carlos E. Kenig, Gustavo Ponce, and Luis Vega, Oscillatory integrals and regularity of dispersive equations, Indiana Univ. Math. J. 40 (1991), no. 1, 33–69. MR 1101221, DOI 10.1512/
• C. E. Kenig and S. N. Ziesler, Local well posedness for modified Kadomstev-Petviashvili equations, Differential Integral Equations 18 (2005), no. 10, 1111–1146. MR 2162626
• L. Molinet, J. C. Saut, and N. Tzvetkov, Remarks on the mass constraint for KP type equations, preprint, arXiv:math.AP/0603303, 2006.
• Jean-Claude Saut, Remarks on the generalized Kadomtsev-Petviashvili equations, Indiana Univ. Math. J. 42 (1993), no. 3, 1011–1026. MR 1254130, DOI 10.1512/iumj.1993.42.42047
• J. C. Saut and N. Tzvetkov, The Cauchy problem for higher-order KP equations, J. Differential Equations 153 (1999), no. 1, 196–222. MR 1682263, DOI 10.1006/jdeq.1998.3534
• J. C. Saut and N. Tzvetkov, The Cauchy problem for the fifth order KP equations, J. Math. Pures Appl. (9) 79 (2000), no. 4, 307–338 (English, with English and French summaries). MR 1753060, DOI
• Hideo Takaoka, Global well-posedness for the Kadomtsev-Petviashvili II equation, Discrete Contin. Dynam. Systems 6 (2000), no. 2, 483–499.
MR 1739371, DOI 10.3934/dcds.2000.6.483
• Hideo Takaoka, Well-posedness for the Kadomtsev-Petviashvili II equation, Adv. Differential Equations 5 (2000), no. 10-12, 1421–1443. MR 1785680
• H. Takaoka and N. Tzvetkov, On the local regularity of the Kadomtsev-Petviashvili-II equation, Internat. Math. Res. Notices 2 (2001), 77–114. MR 1810481, DOI 10.1155/S1073792801000058
• Nickolay Tzvetkov, On the Cauchy problem for Kadomtsev-Petviashvili equation, Comm. Partial Differential Equations 24 (1999), no. 7-8, 1367–1397. MR 1697491, DOI 10.1080/03605309908821468
• N. Tzvetkov, Global low-regularity solutions for Kadomtsev-Petviashvili equation, Differential Integral Equations 13 (2000), no. 10-12, 1289–1320. MR 1787069

Additional Information
• Martin Hadac
• Affiliation: Mathematical Institute of the University of Bonn, Beringstraße 1, D-53115 Bonn, Germany
• Email: hadac@math.uni-bonn.de
• Received by editor(s): January 22, 2007
• Published electronically: July 22, 2008
• Additional Notes: The research for this work was mainly carried out while the author was employed at the Department of Mathematics of the University of Dortmund.
• © Copyright 2008 American Mathematical Society. The copyright for this article reverts to the public domain 28 years after publication.
• Journal: Trans. Amer. Math. Soc. 360 (2008), 6555-6572
• MSC (2000): Primary 35Q53; Secondary 35B30
• DOI: https://doi.org/10.1090/S0002-9947-08-04515-7
• MathSciNet review: 2434299
How to Run a Block of Code in Python - Cheer Learn

In this example we will show how to run a block of code once when the condition of the while loop is not true in Python.

Source Code

a = 3
while a < 20:
    print("'a' is less than 20, it is: ", a)
    a += 4
print("'a' is no longer less than 20")

Output

'a' is less than 20, it is: 3
'a' is less than 20, it is: 7
'a' is less than 20, it is: 11
'a' is less than 20, it is: 15
'a' is less than 20, it is: 19
'a' is no longer less than 20
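Python's while/else clause expresses this "run a block once when the condition is no longer true" pattern directly; a minimal variant of the snippet above, producing the same output:

```python
a = 3
while a < 20:
    print("'a' is less than 20, it is: ", a)
    a += 4
else:
    # Runs exactly once, when a < 20 stops being true.
    # If the loop had ended via `break`, this block would be skipped.
    print("'a' is no longer less than 20")
```

The else body is the natural place for "loop finished normally" logic, e.g. reporting that a search loop ran out of items without finding a match.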
Question Video: Finding the Distance Covered by a Car Moving with Uniform Acceleration
Mathematics • Second Year of Secondary School

A body was moving in a straight line accelerating uniformly. If it covered 55 meters in the first 4 seconds and 57 meters in the next 4 seconds, determine the total distance covered in the first 10 seconds of its motion.

Video Transcript

A body was moving in a straight line accelerating uniformly. If it covered 55 meters in the first four seconds and 57 meters in the next four seconds, determine the total distance covered in the first 10 seconds of its motion.

In order to solve this problem, we will use some of the equations of motion or SUVAT equations. We will use 𝑠 equals 𝑢𝑡 plus half 𝑎𝑡 squared, 𝑠 equals 𝑣𝑡 minus half 𝑎𝑡 squared, and 𝑣 equals 𝑢 plus 𝑎𝑡, where 𝑠 is the displacement, 𝑢 is the initial velocity, 𝑣 is the final velocity, 𝑎 is the acceleration, and 𝑡 is the time. We can see from the diagram that in the first four seconds the body covered 55 meters. In the next four seconds, it covered 57 meters. We need to calculate the total distance covered in the first 10 seconds.

If we let the velocity of the body at 𝑡 equals four be 𝑣 meters per second, then for the first part of the journey we have a displacement of 55, a final velocity of 𝑣, an acceleration of 𝑎, and a time of four seconds. We can substitute these values into the equation 𝑠 equals 𝑣𝑡 minus a half 𝑎𝑡 squared. This gives us 55 is equal to 𝑣 times four minus a half multiplied by 𝑎 multiplied by four squared. Simplifying this equation gives us 55 is equal to four 𝑣 minus eight 𝑎. This means that during the time period 𝑡 equals zero to 𝑡 equals four, the motion of the body satisfies the equation four 𝑣 minus eight 𝑎 equals 55. If we now consider the time period between 𝑡 equals four and 𝑡 equals eight, we have a displacement of 57 meters.
The initial velocity is equal to 𝑣 as this is the velocity at time equals four. The acceleration is still equal to 𝑎. And our time is four seconds. We can substitute these values into the equation 𝑠 equals 𝑢𝑡 plus a half 𝑎𝑡 squared. This gives us 57 is equal to 𝑣 multiplied by four plus a half multiplied by 𝑎 multiplied by four squared. Simplifying this gives us 57 is equal to four 𝑣 plus eight 𝑎. We now have two simultaneous equations: one for the time period 𝑡 equals zero to 𝑡 equals four and one for the time period between 𝑡 equals four and 𝑡 equals eight. We can use these equations to calculate the acceleration and the velocity at 𝑡 equals four. Adding equation one and equation two gives us eight 𝑣 is equal to 112. Dividing both sides of this equation by eight gives us a value of 𝑣 of 14 meters per second. The velocity of the body at 𝑡 equals four is 14 meters per second. Subtracting equation one from equation two gives us 16𝑎 is equal to two. Dividing both sides of this equation by 16 gives us a value of 𝑎 of one-eighth meter per second squared. The uniform acceleration of the body is one-eighth meter per second squared. Our next step is to calculate the initial velocity of the body. In order to do this, we’ll consider the time between 𝑡 equals zero and 𝑡 equals four. Our displacement is 55 meters. Our initial velocity is 𝑢. Our final velocity is 14 meters per second. Our acceleration is one-eighth meter per second squared. And our time is four seconds. Using the equation 𝑣 equals 𝑢 plus 𝑎𝑡 gives us 14 is equal to 𝑢 plus an eighth multiplied by four. One-eighth multiplied by four is 0.5. Therefore, 14 equals 𝑢 plus 0.5. Subtracting 0.5 from both sides of this equation gives us an initial velocity 𝑢 of 13.5 meters per second. We now have enough information to consider the first 10 seconds of the motion. The initial velocity 𝑢 is 13.5. Our acceleration is one-eighth. 𝑡 equals 10 and 𝑠 — the total displacement — is the unknown. 
Using the equation 𝑠 equals 𝑢𝑡 plus a half 𝑎𝑡 squared gives us 𝑠 is equal to 13.5 multiplied by 10 plus a half multiplied by an eighth multiplied by 10 squared. This gives us a value of 𝑠 of 141.25. Therefore, the total distance covered by the body in the first 10 seconds of its motion is 141.25 meters.
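The arithmetic in the transcript can be checked in a few lines; this is just a verification sketch of the working above, not part of the original lesson:

```python
# Two equations in v (velocity at t = 4 s) and a (uniform acceleration),
# from s = vt - 0.5*a*t^2 over the first 4 s and s = vt + 0.5*a*t^2 over
# the next 4 s (t = 4 in both):
#   4v - 8a = 55
#   4v + 8a = 57
v = (55 + 57) / 8        # adding the equations:    8v  = 112
a = (57 - 55) / 16       # subtracting the first:   16a = 2
u = v - a * 4            # from v = u + at with t = 4
s = u * 10 + 0.5 * a * 10 ** 2   # s = ut + 0.5*a*t^2 over the first 10 s

print(v, a, u, s)        # 14.0 0.125 13.5 141.25
```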
Near-Wall Modeling

For internal wall bounded flows, proper mesh resolution is required in order to calculate the steep gradients of the velocity components, turbulent kinetic energy, dissipation, and the other turbulence quantities. Furthermore, the first grid locations from the wall and the stretching ratio between subsequent points are also determinant factors that affect solution accuracy. Since boundary layer thickness is reduced as the Reynolds number increases, the computational cost increases for higher Reynolds number flows due to the dense grid requirements near the walls. Additionally, it is hard to determine the first grid locations for complex three dimensional industrial problems without adequate testing and simulation. For high Reynolds number flows, it is numerically efficient if flows within boundary layers are modeled rather than resolved down to the wall. This means that a coarse mesh is used, at the expense of numerical accuracy when compared to fully wall resolved approaches.

Wall Function

Figure 1 shows velocity profiles over a flat plate under a zero pressure gradient. The velocity profile and wall distances are scaled by the friction velocity ${u}_{\tau }=\sqrt{\frac{{\tau }_{w}}{\rho }}$, where ${\tau }_{w}$ is the wall shear stress and $\rho$ is the fluid density. This scaled profile is referred to as the log-law velocity profile. As shown in Figure 1, the velocity profiles are divided into three distinguished regions: the viscous sublayer, the buffer layer and the logarithmic layer.

Figure 1. Log-law Velocity Profile

The wall function utilizes the universality of the log-law velocity profile to obtain the wall shear stress. This wall function approach allows the reduction of the mesh requirement near the wall, as the first mesh location is placed outside the viscous sublayer. Overall, the log-law based wall model is economical and reasonably accurate in most flow conditions, especially for high Reynolds number flows.
However, it tends to display poor performance in situations with low Reynolds number flows, strong body forces (rotational effect, buoyancy effect), and massively separated flows with adverse pressure gradients.

Velocity Profile in the Viscous Sublayer (y+ < 5)

In the viscous sublayer, the normalized velocity profile (${U}^{+}$) has a linear relationship with the normalized wall distance: ${U}^{+}={y}^{+}$, where ${U}^{+}=\frac{\overline{u}}{{u}_{\tau }}$ is the velocity ($\overline{u}$) parallel to the wall, normalized by the friction velocity. The friction velocity is defined as ${u}_{\tau }=\sqrt{\frac{{\tau }_{w}}{\rho }}$, where ${\tau }_{w}$ is the wall shear stress and $\rho$ is the fluid density. ${y}^{+}=\frac{\rho {u}_{\tau }y}{\mu }$ is the normalized wall distance (or wall unit), where $y$ is the distance from the wall and $\mu$ is the fluid dynamic viscosity.

Velocity Profile in the Logarithmic Layer (30 < y+ < 500)

In the logarithmic layer the velocity profile is given by a logarithmic function: ${U}^{+}=\frac{1}{\kappa }\mathrm{log}\left({y}^{+}\right)+B$, where $\kappa$ = 0.4 is the Von Kármán constant and B = 5.5 is a constant. The wall shear stress can be estimated from this equation via an iterative solution procedure.

Velocity Profile in the Buffer Layer (5 < y+ < 30)

Since the two equations above are not valid in the buffer layer, a special blending function is needed to bridge the viscous sublayer and the logarithmic layer. Details are not covered here.

Kinematic Eddy Viscosity

The kinematic eddy viscosity can be obtained as ${\nu }_{t}=\kappa y{u}_{\tau }$.

Two Layer Wall Model

The two equation turbulence models based on eddy frequency (ω) and the Spalart-Allmaras (SA) model do not require special wall treatments to solve the boundary layer, as these models are valid through the viscous sublayer. However, the two equation turbulence models based on turbulence dissipation rate (ε) need additional functions to simulate the near wall effects.
The two layer wall model is one of them. In the two layer model, turbulent kinetic energy is determined from the turbulent kinetic energy transport equation, while the dissipation rate is resolved with a one equation turbulence model (Wolfstein, 1969) in the near wall, viscous-affected regions where $R{e}_{y}=\frac{\rho y\sqrt{k}}{\mu }<200$:

$\epsilon =\frac{{k}^{3/2}}{{l}_{\epsilon }}$

where
• ${l}_{\epsilon }={C}_{l}y\left(1-\mathrm{exp}\left[-\frac{R{e}_{y}}{{A}_{\epsilon }}\right]\right)$,
• ${C}_{l}=\kappa {C}_{\mu }{}^{-3/4}$,
• $\kappa$ is the Von Kármán constant,
• ${A}_{\epsilon }$ = 5.08 (Chen and Patel, 1988).

The eddy viscosity can be written as ${\nu }_{t}={C}_{\mu }{l}_{\mu }\sqrt{k}$, where
• ${l}_{\mu }={C}_{l}y\left(1-\mathrm{exp}\left[-\frac{R{e}_{y}}{{A}_{\mu }}\right]\right)$,
• ${A}_{\mu }$ = 70.
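The iterative estimation of the wall shear stress from the log law, mentioned above, can be sketched as a simple fixed-point loop. The flow values below (air-like viscosity, a 10 m/s velocity reading 1 cm from the wall, an assumed density of 1.2 kg/m³) are hypothetical and only for illustration:

```python
import math

KAPPA, B = 0.4, 5.5   # log-law constants quoted above

def friction_velocity(U, y, nu, u_tau=0.1, tol=1e-10, max_iter=200):
    """Fixed-point iteration for u_tau from U/u_tau = (1/kappa)*ln(y+) + B,
    with y+ = u_tau * y / nu. Only meaningful if the point sits in the
    logarithmic layer (roughly 30 < y+ < 500)."""
    for _ in range(max_iter):
        y_plus = u_tau * y / nu
        u_new = U / (math.log(y_plus) / KAPPA + B)
        if abs(u_new - u_tau) < tol:
            break
        u_tau = u_new
    return u_tau

# Hypothetical example: U = 10 m/s measured at y = 1 cm in an air-like fluid.
u_tau = friction_velocity(U=10.0, y=0.01, nu=1.5e-5)   # roughly 0.5 m/s here
tau_w = 1.2 * u_tau ** 2    # wall shear stress, assuming rho = 1.2 kg/m^3
```

The iteration converges quickly because the log-law right-hand side varies only weakly with u_tau; in a CFD solver the same loop runs per wall face, using the velocity at the first off-wall cell.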
The term strict refers to the requirement of an irreflexive relation ( !comp(x, x) for all x ), and the term weak to requirements that are not as strong as those for a total ordering, but stronger than those for a partial ordering. If we define equiv(a, b) as !comp(a, b) && !comp(b, a) , then the requirements are that both comp and equiv be transitive relations:
• comp(a, b) && comp(b, c) implies comp(a, c)
• equiv(a, b) && equiv(b, c) implies equiv(a, c)
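These requirements can be exercised concretely; a small sketch (in Python rather than C++, for brevity) with a comparison function that orders pairs by their first component only, so distinct pairs can be equivalent without being equal:

```python
# comp imposes a strict weak ordering on pairs: compare first components only.
def comp(a, b):
    return a[0] < b[0]

def equiv(a, b):
    # Induced equivalence: neither argument compares less than the other.
    return not comp(a, b) and not comp(b, a)

x, y, z = (1, 10), (1, 20), (2, 30)

assert not comp(x, x)             # irreflexive
assert equiv(x, y) and x != y     # equivalent without being equal
assert comp(x, z) and comp(y, z)  # x and y both order before z
```

This is exactly why a strict weak ordering is weaker than a total ordering: equivalence classes (here, "same first component") may contain more than one distinct value.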
Dynamics and rheology of concentrated, finite-Reynolds-number suspensions in a homogeneous shear flow
Physics of Fluids

We present the lubrication-corrected force-coupling method for the simulation of concentrated suspensions under finite inertia. Suspension dynamics are investigated as a function of the particle-scale Reynolds number Re_γ and the bulk volume fraction φ in a homogeneous linear shear flow, in which Re_γ is defined from the density ρ_f and dynamic viscosity μ of the fluid, particle radius a, and the shear rate γ as Re_γ = ρ_f γ a²/μ. It is shown that the velocity fluctuations in the velocity-gradient and vorticity directions decrease at larger Re_γ. However, the particle self-diffusivity is found to be an increasing function of Re_γ, as the motion of the suspended particles develops a longer auto-correlation under finite fluid inertia. It is shown that finite-inertia suspension flows are shear-thickening and the particle stresses become highly intermittent as Re_γ increases. To study the detailed changes in the suspension microstructure and rheology, we introduce a particle-stress-weighted pair-distribution function. The stress-weighted pair-distribution function clearly shows that the increase of the effective viscosity at high Re_γ is mostly related to the strong normal lubrication interaction in the compressive principal axis of the shear flow. © 2013 AIP Publishing LLC.
Energy-efficient Deployment of Mobile Sensor Networks by PSO

Wu Xiaoling, Shu Lei, Wang Jin, Jinsung Cho1, and Sungyoung Lee
Department of Computer Engineering, Kyung Hee University, Korea
{xiaoling, sl8132, wangjin, sylee}@oslab.khu.ac.kr
[email protected]

Abstract. Sensor deployment is an important issue in designing sensor networks. In this paper, a particle swarm optimization (PSO) approach is applied to maximize coverage, based on a probabilistic sensor model, in mobile sensor networks, and to reduce cost by finding the optimal positions for the cluster-head nodes based on a well-known energy model. During the coverage optimization process, sensors move to form a uniformly distributed topology according to the execution of the algorithm at the base station. The simulation results show that the PSO algorithm has a faster convergence rate than a genetic algorithm based method while achieving the goal of energy efficient sensor deployment.

1 Introduction

Mobile sensor networks consist of sensor nodes that are deployed in a large area, collecting important information from the sensor field. Communication between the nodes is wireless. Since the nodes have very limited energy resources, energy consuming operations such as data collection, transmission and reception must be kept to a minimum. In most cases, a large number of wireless sensor devices can be deployed in hostile areas without humans involved, e.g. by air-dropping from an aircraft for remote monitoring and surveillance purposes. Once the sensors are deployed on the ground, their data are transmitted back to the base station to provide the necessary situational information. The deployment of mobile sensor nodes in the region of interest (ROI), where interesting events might happen and a corresponding detection mechanism is required, is one of the key issues in this area.
Before a sensor can provide useful data to the system, it must be deployed in a location that is contextually appropriate. Optimum placement of sensors results in the maximum possible utilization of the available sensors. The proper choice for sensor locations based on application requirements is difficult. The deployment of a static network is often either human monitored or random. Though many scenarios adopt random deployment for practical reasons such as 1 Prof. Jinsung Cho is the corresponding author. deployment cost and time, random deployment may not provide a uniform sensor distribution over the ROI, which is considered to be a desirable distribution in mobile sensor networks. Uneven node topology may lead to a short system lifetime. The limited energy storage and memory of the deployed sensors prevent them from relaying data directly to the base station. It is therefore necessary to form a cluster based topology, and the cluster heads (CHs) provide the transmission relay to base station such as a satellite. And the aircraft carrying the sensors has a limited payload, so it is impossible to randomly drop thousands of sensors over the ROI, hoping the communication connectivity would arise by chance; thus, the mission must be performed with a fixed maximum number of sensors. In addition, the airdrop deployment may introduce uncertainty in the final sensor positions. These limitations motivate the establishment of a planning system that optimizes the sensor reorganization process after initial random airdrop deployment, which results in the maximum possible utilization of the available sensors. There are lots of research work [1], [2], [3], [4], [12] related to the sensor nodes placement in network topology design. Most of them focused on optimizing the location of the sensors in order to maximize their collective coverage. 
However, only a single objective was considered in most of these papers; other considerations, such as minimizing energy consumption, are also of vital practical importance in the choice of the network deployment. Self-deployment methods using mobile nodes [4], [9] have been proposed to enhance network coverage and to extend the system lifetime via configuration of uniformly distributed node topologies from random node distributions. In [4], the authors present the virtual force algorithm (VFA) as a new approach for sensor deployment to improve the sensor field coverage after an initial random placement of sensor nodes. The cluster head executes the VFA algorithm to find new locations for sensors to enhance the overall coverage. They also considered the unavoidable uncertainty existing in the precomputed sensor node locations. This uncertainty-aware deployment algorithm provides high coverage with a minimum number of sensor nodes. However, they assumed that global information regarding the other nodes is available. In [1], the authors examined the optimization of wireless sensor network layouts using a multi-objective genetic algorithm (GA) in which two competing objectives are considered: total sensor coverage and the lifetime of the network. However, this method is computationally expensive. In this paper, we attempt to solve the coverage problem while considering energy efficiency using a particle swarm optimization (PSO) algorithm, which leads to faster convergence than the genetic algorithm used to solve the deployment optimization problem in [1]. During the coverage optimization process, sensor nodes move to form a uniformly distributed topology according to the execution of the algorithm at the base station. To the best of our knowledge, this is the first paper to solve the deployment optimization problem by a PSO algorithm. In the next section, the PSO algorithm is introduced and compared with GA.
Modeling of the sensor network and the deployment algorithm is presented in section 3, followed by simulation results in section 4. Some concluding remarks and future work are provided in section 5.

2 Particle Swarm Optimization

PSO, originally proposed by Eberhart and Kennedy [5] in 1995 and inspired by the social behavior of bird flocking, has come to be widely used as a problem solving method in engineering and computer science. The individuals, called particles, are flown through the multidimensional search space, with each particle representing a possible solution to the multidimensional problem. All particles have fitness values, which are evaluated by the fitness function to be optimized, and velocities, which direct the flying of the particles. PSO is initialized with a group of random solutions and then searches for optima by updating generations. In every iteration, each particle is updated by following two "best" factors. The first one, called pbest, is the best fitness it has achieved so far, and it is also stored in memory. Another "best" value, the best obtained so far by any particle in the population, is the global best, called gbest. When a particle takes part of the population as its topological neighbors, the best value is a local best, called lbest. After each iteration, pbest and gbest (or lbest) are updated if a more dominating solution is found by the particle and the population, respectively. The PSO formulae define each particle in the D-dimensional space as Xi = (xi1, xi2, xi3, ..., xiD), where i is the particle index and d is the dimension index. The memory of the previous best position is represented as Pi = (pi1, pi2, pi3, ..., piD), and the velocity along each dimension as Vi = (vi1, vi2, vi3, ..., viD). The updating equations [6] are:

vid = ϖ × vid + c1 × rand() × (pid − xid) + c2 × rand() × (pgd − xid)
xid = xid + vid

where ϖ is the inertia weight, and c1 and c2 are acceleration coefficients.
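The update rule above can be sketched in a few lines of Python. This is a generic minimization of a toy function, not the sensor-deployment fitness itself; the parameter values (c1 = c2 = 2, inertia decreasing from 0.95 to 0.4) follow those quoted later in the paper:

```python
import random

def pso(fitness, dim, n_particles=20, iters=200, lo=-5.0, hi=5.0):
    """Minimal global-best PSO with a linearly decreasing inertia weight."""
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in x]                      # best position per particle
    pbest_f = [fitness(p) for p in x]
    g = min(range(n_particles), key=pbest_f.__getitem__)
    gbest, gbest_f = pbest[g][:], pbest_f[g]       # best position of the swarm
    for t in range(iters):
        w = 0.95 - (0.95 - 0.4) * t / (iters - 1)  # inertia: 0.95 down to 0.4
        for i in range(n_particles):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + 2.0 * random.random() * (pbest[i][d] - x[i][d])
                           + 2.0 * random.random() * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            f = fitness(x[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = x[i][:], f
    return gbest, gbest_f

# Toy check: minimize the sphere function, whose optimum is at the origin.
random.seed(1)
best, best_f = pso(lambda p: sum(c * c for c in p), dim=2)
```

In the deployment setting, a particle would encode the candidate node coordinates and `fitness` would be the coverage or energy objective defined in section 3.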
The role of the inertia weight ϖ is considered to be crucial for the PSO's convergence. The inertia weight is employed to control the impact of the previous history of velocities on the current velocity of each particle. Thus, the parameter ϖ regulates the trade-off between the global and local exploration ability of the swarm. A large inertia weight facilitates global exploration, while a small one tends to facilitate local exploration, i.e. fine-tuning the current search area. A suitable value for the inertia weight ϖ balances the global and local exploration ability and, consequently, reduces the number of iterations required to locate the optimum solution. Generally, it is better to initially set the inertia to a large value, in order to make better global exploration of the search space, and gradually decrease it to get more refined solutions. Thus, a time-decreasing inertia weight value is used [7].

PSO shares many similarities with GA. Both algorithms start with a randomly generated population, have fitness values to evaluate the population, update the population, and search for the optimum with random techniques. However, PSO does not have genetic operators like crossover and mutation. Particles update themselves with the internal velocity. They also have memory, which is important to the algorithm [8]. Compared with GA, PSO is easy to implement, has few parameters to adjust, and requires only primitive mathematical operators; it is computationally inexpensive in terms of both memory requirements and speed, while remaining comprehensible. It usually results in faster convergence rates than GA. This feature suggests that PSO is a potential algorithm to optimize deployment in a sensor network.

3 The Proposed Algorithm

First of all, we present the model of the mobile sensor network. We assume that each node knows its position in the problem space, all sensor members in a cluster are homogeneous, and cluster heads are more powerful than sensor members.
The communication coverage of each node is assumed to have a circular shape without any irregularity. The design variables are the 2D coordinates of the sensor nodes, {(x1, y1), (x2, y2), ...}. Sensor nodes are assumed to have certain mobility. Many research efforts into the sensor deployment problem in mobile sensor networks [4, 9] make this sensor mobility assumption reasonable.

3.1 Optimization of Coverage

We consider coverage as the first optimization objective. It is one of the measurement criteria for the QoS of a sensor network.

Fig. 1. Sensor coverage models: (a) binary sensor and (b) probabilistic sensor models

The coverage of each sensor can be defined either by a binary sensor model or a probabilistic sensor model, as shown in Fig. 1. In the binary sensor model, the detection probability of the event of interest is 1 within the sensing range; otherwise, the probability is 0. Although the binary sensor model is simpler, it is not realistic, as it assumes that sensor readings have no associated uncertainty. In reality, sensor detections are imprecise, hence the coverage needs to be expressed in probabilistic terms. In many cases, cheap sensors such as omnidirectional acoustic sensors or ultrasonic sensors are used. Some practical examples [4] include AWAIRS at UCLA/RSC, Smart Dust at UC Berkeley, the USC-ISI network, the DARPA SensIT systems/networks, the ARL Advanced Sensor Program systems/networks, and the DARPA Emergent Surveillance Plexus (ESP). For omnidirectional acoustic sensors or ultrasonic sensors, a longer distance between the sensor and the target generally implies a greater loss in signal strength, i.e. a lower signal-to-noise ratio. This suggests that we can build an abstract sensor model to express the uncertainty in sensor responses. In other words, a sensor node that is closer to a target is expected to have a higher detection probability about the target's existence than a sensor node that is further away from the target.
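A distance-based detection model of this kind can be sketched as follows. The piecewise form and the parameter values (r = 5, re = 3, λ = 0.5, β = 0.5) follow the model the paper adopts, while the choice a = dist − (r − re) inside the exponential is our assumption, since the extracted equation does not define a:

```python
import math

# Parameter values matching the simulations described below.
R, RE, LAM, BETA = 5.0, 3.0, 0.5, 0.5

def coverage(dist, r=R, re=RE, lam=LAM, beta=BETA):
    """Detection probability of a grid point at distance `dist` from a sensor:
    certain inside r - re, impossible beyond r + re, and an exponential
    falloff in between. a = dist - (r - re) is assumed here, as the
    extracted equation does not define a."""
    if dist >= r + re:
        return 0.0
    if dist <= r - re:
        return 1.0
    a = dist - (r - re)
    return math.exp(-lam * a ** beta)

# The probability decreases with distance through the uncertainty band.
probs = [coverage(d) for d in (1.0, 3.0, 4.0, 6.0, 9.0)]
```

Summing such probabilities over the m × n grid points (and thresholding at cth) gives the coverage fitness that the PSO maximizes.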
In this paper, the probabilistic sensor model given in Eq. (3) is used, which is motivated in part by [11]:

cij(x, y) = 0,           if r + re ≤ dij(x, y);
cij(x, y) = e^(−λ a^β),  if r − re < dij(x, y) < r + re;
cij(x, y) = 1,           if r − re ≥ dij(x, y).    (3)

The sensor field is represented by an m × n grid. An individual sensor node s on the sensor field is located at grid point (x, y). Each sensor node has a detection range of r. For any grid point P at (i, j), we denote the Euclidean distance between s at (x, y) and P at (i, j) as dij(x, y), i.e., dij(x, y) = sqrt((x − i)² + (y − j)²). Eq. (3) expresses the coverage cij(x, y) of a grid point at (i, j) by sensor s at (x, y), in which re (re < r) is a measure of the uncertainty in sensor detection.

3.2 Optimization of Energy Consumption

For a sensor member to transmit an l-bit message a distance dis to its cluster head, the radio expends

ETS(l, dis) = l·Eelec + l·εfs·dis²

For a cluster head, however, to transmit an l-bit message a distance Dis to the base station, the radio expends

ETH(l, Dis) = l·Eelec + l·εmp·Dis⁴

In both cases, to receive the message, the radio expends

ER(l) = l·Eelec

The electronics energy, Eelec, depends on factors such as the digital coding, modulation, filtering, and spreading of the signal; here we set Eelec = 50 nJ/bit, whereas the amplifier constants are taken as εfs = 10 pJ/bit/m² and εmp = 0.0013 pJ/bit/m⁴. So the energy loss of a sensor member in a cluster is

Es(l, dis) = l(100 + 0.01·dis²)

The energy loss of a CH is

ECH(l, Dis) = l(100 + 1.3×10⁻⁶·Dis⁴)

Since the energy consumption for computation is much less than that for communication, we neglect computation energy here. Assume m clusters with nj sensor members in the jth cluster Cj. The total energy loss Etotal is the summation of the energy used by all sensor members and all m cluster heads:

Etotal = l Σ_{j=1..m} Σ_{i=1..nj} ( 100 + 0.01·dis²_ij + (100 + 1.3×10⁻⁶·Dis⁴_j)/nj )

Because only two terms are related to distance, we can set the fitness function as

f = Σ_{j=1..m} Σ_{i=1..nj} ( 0.01·dis²_ij + 1.3×10⁻⁶·Dis⁴_j/nj )    (10)

4 Performance Evaluation

The PSO starts with a "swarm" of sensors randomly generated. Fig. 3 shows a randomly deployed sensor network with coverage value 0.31, calculated using the approximate method mentioned in section 3.1. A linearly decreasing inertia weight value from 0.95 to 0.4 is used, decided according to [6]. Acceleration coefficients c1 and c2 are both set to 2, as proposed in [6]. For optimizing coverage, we have used 20 particles, which are denoted by all sensor node coordinates, for our experiment in a 50×50 square sensor network, and the maximum number of generations is 500. The maximum velocity of a particle is set to 50. The other parameters of the sensor model are set to r = 5, re = 3, λ = 0.5, β = 0.5, cth = 0.7. The coverage is calculated as a fitness value in each generation. After optimizing the coverage, all sensors move to their final locations in the setup phase. Now the coordinates of potential cluster heads are set as particles in the sensor network. The communication range of each sensor node is 15 units, with a fixed remote base station at (25, 80). We start with the minimum number of clusters acceptable in the problem space, namely 4. The node which will become a cluster head will not have any restriction on its transmission range. The nodes are organized into clusters by the base station. Each particle has a fitness value, evaluated by the fitness function (10) in each generation. Our purpose is to find the optimal locations of cluster heads. Once the position of a cluster head is identified, if there is no node in that position then the potential cluster head nearest to that location becomes the cluster head. We also optimized the placement of cluster heads in the 2-D space using GA. We used a simple GA algorithm with single-point crossover and selection based on a roulette-wheel process. The coordinates of the cluster heads are the chromosomes in the population. For our experiment we are using 10 chromosomes in the population. The maximum number of generations allowed is 500.
In each evolution we update the number of nodes included in the clusters. The criterion to find the best solution is that the total fitness value should be minimal.

Fig. 3. Randomly deployed sensor network with r=5 (coverage value = 0.31)

Fig. 4. Optimal coverage achieved using the PSO algorithm (probabilistic sensor detection model); plot of sensor field coverage against the number of iterations of the PSO algorithm

Fig. 5. Comparison of convergence rate between PSO and GA based on Eq. (10); plot of fitness value against number of iterations

Fig. 4 shows the improvement of coverage during the execution of the PSO algorithm. Note that the upper bound for the coverage for the probabilistic sensor detection model (roughly 0.38) is lower than the upper bound for the case of the binary sensor detection model (roughly 0.628). This is due to the fact that the coverage for the binary sensor detection model is the fraction of the sensor field covered by the circles. For the probabilistic sensor detection model, even though there are a large number of grid points that are covered, the overall number of grid points with coverage probability greater than the required level is fewer. Fig. 5 shows the convergence rate of PSO and GA. We ran the algorithm for both approaches several times, and in every run PSO converges faster than GA, which was used in [1] for coverage and lifetime optimization. The main reason for the fast convergence of PSO is the velocity factor of the particle. Fig. 6 shows the final cluster topology in the sensor network space after coverage and energy consumption optimization when the number of clusters in the sensor space is 4. We can see from the figure that nodes are uniformly distributed among the clusters compared with the random deployment shown in Fig. 3. The four red stars denote cluster heads, the blue diamonds are sensor members, and the dashed circles are the communication ranges of sensor nodes.
The energy saved is the difference between the initial fitness value and the final minimized fitness value. In this experiment, it is approximately 16.

Fig. 6. Energy-efficient cluster formation using PSO

Conclusions and Future Work

The application of the PSO algorithm to optimize the coverage in mobile sensor network deployment and the energy consumption in a cluster-based topology has been discussed. We have used coverage as the first optimization objective to place the sensors uniformly based on a realistic probabilistic sensor model, and energy consumption as the second objective to find the optimal cluster head positions. The simulation results show that the PSO algorithm has a faster convergence rate than the GA-based layout optimization method while demonstrating good performance. In future work, we will take sensor movement energy consumption into account. Moreover, other objectives, such as the time and distance for sensor movement, will be studied further.

Acknowledgement

This research was supported by the Kyung Hee University Research Fund in 2005 (KHU-20050370).

References

1. Damien B. Jourdan, Olivier L. de Weck: Layout optimization for a wireless sensor network using a multi-objective genetic algorithm. IEEE 59th Vehicular Technology Conference (VTC 2004-Spring), Vol. 5 (2004) 2466-2470
2. K. Chakrabarty, S. S. Iyengar, H. Qi and E. Cho: Grid coverage for surveillance and target location in distributed sensor networks. IEEE Transactions on Computers, Vol. 51 (2002) 1448-1453
3. A. Howard, M. J. Mataric and G. S. Sukhatme: Mobile sensor network deployment using potential fields: a distributed, scalable solution to the area coverage problem. Proc. Int. Conf. on Distributed Autonomous Robotic Systems (2002) 299-308
4. Y. Zou and K. Chakrabarty: Sensor deployment and target localization based on virtual forces. Proc. IEEE Infocom Conference, Vol. 2 (2003) 1293-1303
5. J. Kennedy and R. C. Eberhart: Particle Swarm Optimization.
Proceedings of IEEE International Conference on Neural Networks, Perth, Australia (1995) 1942-1948
6. Yuhui Shi, Russell C. Eberhart: Empirical study of Particle Swarm Optimization. Proceedings of the 1999 Congress on Evolutionary Computation, Vol. 3 (1999) 1948-1950
7. K. E. Parsopoulos, M. N. Vrahatis: Particle Swarm Optimization Method in Multiobjective Problems. Proceedings of the 2002 ACM Symposium on Applied Computing, Madrid, Spain (2002) 603-607
8. http://www.swarmintelligence.org/tutorials.php
9. Nojeong Heo and Pramod K. Varshney: Energy-Efficient Deployment of Intelligent Mobile Sensor Networks. IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans, Vol. 35, No. 1 (2005) 78-92
10. Wendi B. Heinzelman, Anantha P. Chandrakasan, and Hari Balakrishnan: An Application-Specific Protocol Architecture for Wireless Microsensor Networks. IEEE Transactions on Wireless Communications, Vol. 1, No. 4 (2002) 660-670
11. A. Elfes: Sonar-based real-world mapping and navigation. IEEE Journal of Robotics and Automation, Vol. RA-3, No. 3 (1987) 249-265
12. Archana Sekhar, B. S. Manoj and C. Siva Ram Murthy: Dynamic Coverage Maintenance Algorithms for Sensor Networks with Limited Mobility. Proc. PerCom (2005) 51-60
De Broglie Equation – Hypothesis, Equation, Significance and Important FAQs

The wave nature of light was the only aspect that was considered until Niels Bohr's model. Later, however, Max Planck, in his explanation of quantum theory, hypothesized that light is made of very minute pockets of energy, which are in turn made of photons or quanta. It was then considered that light has a particle nature, and every packet of light always carries a certain fixed amount of energy. By this, the energy of photons can be expressed as:

E = hf = hc/λ

Here, h is Planck's constant, f refers to the frequency of the waves, and λ is the wavelength of the pockets. Therefore, this implies that light has the properties of both a particle and a wave. Louis de Broglie was a student of Bohr, who then formulated his own hypothesis of wave-particle duality, drawn from this understanding of light. Later on, when this hypothesis was proven true, it became a very important concept in particle physics.

What is the De Broglie Equation?

Quantum mechanics assumes matter to be both like a wave as well as a particle at the sub-atomic level. The De Broglie equation states that every particle that moves can sometimes act as a wave, and sometimes as a particle. The wave associated with a moving particle is known as the matter wave, and also as the De Broglie wave.
The wavelength is known as the de Broglie wavelength. For an electron, the de Broglie wavelength equation is:

λ = h/(mv)

Here, λ is the wavelength of the electron in question, m is the mass of the electron, v is the velocity of the electron, and mv is the resulting momentum. It was found that this equation works and applies to every form of matter in the universe, i.e., everything in this universe, from living beings to inanimate objects, has wave-particle duality.

Significance of De Broglie Equation

De Broglie says that all objects that are in motion have a particle nature. However, if we look at a moving ball or a moving car, they don't seem to have a wave nature. To make this clear, De Broglie derived the wavelengths of an electron and of a cricket ball. Now, let's understand how he did this.

De Broglie Wavelength

1. De Broglie Wavelength for a Cricket Ball

Let's say, mass of the ball = 150 g (150 × 10⁻³ kg), velocity = 35 m/s, and h = 6.626 × 10⁻³⁴ Js. Now, putting these values in the equation λ = h/(mv):

λ = (6.626 × 10⁻³⁴)/(150 × 10⁻³ × 35)

This yields λ(ball) = 1.2621 × 10⁻³⁴ m, which is 1.2621 × 10⁻²⁴ Å. We know that Å is a very small unit, and here the value is of the order of 10⁻²⁴ Å, which is smaller still. From here, we see that the moving cricket ball behaves as a particle. Now, the question arises whether this ball has a wave nature or not. Your answer will be a big no, because the value of λ(ball) is immeasurable. This shows that de Broglie's wave-particle duality is observable only for moving objects up to (not equal to) roughly the size of electrons.

2. De Broglie Wavelength for an Electron

We know that mₑ = 9.1 × 10⁻³¹ kg and vₑ = 2.18 × 10⁶ m/s. Now, putting these values in the equation λ = h/(mv) yields λ ≈ 3.3 Å. This value is measurable. Therefore, we can say that electrons have wave-particle duality.
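The two worked examples above can be cross-checked numerically (a Python sketch of ours; the electron speed used is the standard Bohr-orbit value v ≈ 2.18 × 10⁶ m/s):

```python
H = 6.626e-34  # Planck's constant, J*s

def de_broglie_wavelength(mass_kg, velocity_ms):
    """lambda = h / (m v), the de Broglie relation."""
    return H / (mass_kg * velocity_ms)

ball = de_broglie_wavelength(150e-3, 35.0)         # the cricket ball above
electron = de_broglie_wavelength(9.1e-31, 2.18e6)  # electron at Bohr-orbit speed

print(f"ball:     {ball:.4e} m")     # ~1.2621e-34 m, immeasurably small
print(f"electron: {electron:.2e} m") # ~3.3e-10 m, i.e. about 3.3 angstroms
```

The twenty-four orders of magnitude between the two wavelengths is exactly why the ball shows no observable wave behavior while the electron does.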
Thus, all big objects effectively have a particle nature, while microscopic objects like electrons have wave-particle duality, with E = hν = hc/λ.

The Conclusion of De Broglie Hypothesis

From the de Broglie equation for a material particle, i.e., λ = h/p or λ = h/(mv), we conclude the following:

i. If v = 0, then λ = ∞, and if v = ∞, then λ = 0.

It means that waves are associated with moving material particles only. This also implies that these waves are independent of the particle's charge.

FAQs on De Broglie Equation

1. The De Broglie hypothesis was confirmed through which means?

De Broglie had not proved the validity of his hypothesis on his own; it was merely a hypothetical assumption before it was tested, and consequently it was found that all substances in the universe have wave-particle duality. A number of experiments were conducted with Fresnel diffraction as well as specular reflection of neutral atoms. These experiments proved the validity of De Broglie's statements. Some of these experiments were conducted by his students.

2. What exactly does the De Broglie equation apply to?

In very broad terms, this applies to pretty much everything in the tangible universe. This means that people, non-living things, trees and animals all come under the purview of the hypothesis. Any particle of any substance that has matter and linear momentum is also a wave. The wavelength is inversely related to the magnitude of the linear momentum of the particle. Therefore, everything in the universe that has matter fits under the De Broglie equation.

3. Is it possible that a single photon also has a wavelength?

When De Broglie proposed his hypothesis, he drew from the work of Planck, who showed that light is made up of small pockets that have a certain energy, known as photons. For his own hypothesis, he said that all things in the universe that have matter show wave-particle duality, and therefore have a wavelength.
This extends to light as well, since light is made up of photons. Hence, it is true that even a single photon has a wavelength.

4. Are there any practical applications of the De Broglie equation?

It would be wrong to say that people use this equation in their everyday lives, because they do not, not in the literal sense at least. However, practical applications do not only refer to whether they can tangibly be used by everyone. The truth of the De Broglie equation lies in the fact that we, as human beings, are also made of matter and thus we also have wave-particle duality. All the things we work with have wave-particle duality.

5. Does the De Broglie equation apply to an electron?

Yes, this equation is applicable to every moving body in the universe, down to the smallest subatomic levels. Just as light particles like photons have their own wavelengths, the same is true for an electron. The equation treats electrons as both waves and particles; only then do they have wave-particle duality. This holds for every electron of every atom of every element, and using the equation above, the wavelength of an electron can be calculated.

6. Derive the relation between De Broglie wavelength and temperature.

We know that the average KE of a particle is K = (3/2)·k_B·T, where k_B is Boltzmann's constant and T is the temperature in Kelvin. The kinetic energy of a particle is (1/2)mv², so the momentum of a particle is p = mv = √(2mK) = √(2m·(3/2)·k_B·T) = √(3m·k_B·T). Hence the de Broglie wavelength is λ = h/p = h/√(3m·k_B·T).

7. If an electron behaves like a wave, what should determine its wavelength and frequency?

Momentum and energy determine the wavelength and frequency of an electron.

8. Find λ associated with an H₂ molecule of mass 3 a.m.u. moving with a velocity of 4 km/s.

Here, v = 4 × 10³ m/s and mass = 3 a.m.u. = 3 × 1.67 × 10⁻²⁷ kg ≈ 5 × 10⁻²⁷ kg. Putting these values in the equation λ = h/(mv), we get λ = (6.626 × 10⁻³⁴)/(4 × 10³ × 5 × 10⁻²⁷) ≈ 3 × 10⁻¹¹ m.

9.
If the KE of an electron increases by 21%, find the percentage change in its De Broglie wavelength.

We know that λ = h/√(2mK). So, taking the initial KE as 100 units and the final KE as 121 units, λᵢ = h/√(2m × 100) and λf = h/√(2m × 121).

% change in λ = (change in wavelength / original wavelength) × 100 = ((λᵢ − λf)/λᵢ) × 100 = ((1/10 − 1/11)/(1/10)) × 100

On solving, we get a decrease in λ of about 9.09%.
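This can be verified directly from the proportionality λ ∝ 1/√K alone (a quick Python sketch of ours, not part of the original worked solution):

```python
def wavelength_change_pct(ke_increase_pct):
    """lambda is proportional to 1/sqrt(K), so the fractional decrease in
    wavelength is 1 - 1/sqrt(1 + dK/K)."""
    ratio = 1.0 + ke_increase_pct / 100.0
    return (1.0 - 1.0 / ratio ** 0.5) * 100.0

print(round(wavelength_change_pct(21.0), 2))  # 9.09, i.e. (1 - 10/11) * 100
```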
Next: HELP FACILITIES Up: A Short Maple Primer Previous: STARTING MAPLE

The prompter is >. Each command is terminated by a semicolon (;) or a colon (:). Maple is case sensitive. Comments start with # and run to the end of the line. k! denotes k factorial. Exponentiation is written ^ or ** . The logarithm function is called ln. The name for ∞ is infinity. For numerical evaluation apply evalf. The repetition operator, $, is very useful. The dot operator (.) denotes concatenation. The composition operator is written @. ``The latest expression'' is called by ". The latest expression but one is "" etc. For a history encompassing more than the three latest expressions returned, use the command history. Here are some examples:

> ln(3^2);
> evalf(");
> [0$4];
               [0, 0, 0, 0]
> {$1..5};
               {1, 2, 3, 4, 5}
> diff(x(t),t$6):   #Sixth derivative of x(t) w.r.t. t
> x.(1..4);
               x1, x2, x3, x4
> (sin@arcsin)(x);

The function convert(expr,type) converts between different data types. If you run Maple under UNIX, clear the screen by typing

> !clear;

(Thus ! means ``escape to host''.) For translation of an expression into LaTeX, use the command latex. The second argument, which is optional, is a file name. There are similar functions for C and FORTRAN.
Interpolation of arbitrary quantities

Once a solution has been computed, it is quite easy to extract any quantity of interest on it with the interpolation functions, for instance for post-treatment.

Basic interpolation

The file getfem/getfem_interpolation.h defines the function getfem::interpolation(...) to interpolate a solution from a given mesh/finite element method on another mesh and/or another Lagrange finite element method:

getfem::interpolation(mf1, mf2, U, V, extrapolation = 0);

where mf1 is a variable of type getfem::mesh_fem and describes the finite element method on which the source field U is defined, and mf2 is the finite element method on which U will be interpolated. extrapolation is an optional parameter. The values are 0 to disallow extrapolation, 1 for an extrapolation of the exterior points near the boundary, and 2 for the extrapolation of all exterior points (which could be expensive). The dimension of U should be a multiple of mf1.nb_dof(), and the interpolated data V should be correctly sized (a multiple of mf2.nb_dof()).

Important: mf2 should be of Lagrange type for the interpolation to make sense, but the meshes linked to mf1 and mf2 may be different (and this is the interest of this function). There is no restriction on the dimension of the domain (you can interpolate a 2D mesh on a line, etc.).

If you need to perform more than one interpolation between the same finite element methods, it might be more efficient to use the function:

getfem::interpolation(mf1, mf2, M, extrapolation = 0);

where M is a row matrix which will be filled with the linear map representing the interpolation (i.e. such that V = MU). The matrix should have the correct dimensions (i.e. mf2.nb_dof() x mf1.nb_dof()). Once this matrix is built, the interpolation is done with a simple matrix multiplication (V = MU).
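To illustrate the idea of the interpolation matrix V = MU outside of GetFEM, here is a toy 1D piecewise-linear sketch in Python (our own illustration; `interpolation_matrix` is not part of the GetFEM API, and a real implementation would use a sparse matrix):

```python
def interpolation_matrix(src_nodes, dst_nodes):
    """Dense matrix M with V = M U for piecewise-linear 1D Lagrange elements:
    each row holds the two hat-function weights of the enclosing interval."""
    M = [[0.0] * len(src_nodes) for _ in dst_nodes]
    for r, x in enumerate(dst_nodes):
        # index k of the source interval [x_k, x_{k+1}] containing x
        k = max(0, min(len(src_nodes) - 2,
                       sum(1 for s in src_nodes if s <= x) - 1))
        x0, x1 = src_nodes[k], src_nodes[k + 1]
        t = (x - x0) / (x1 - x0)
        M[r][k], M[r][k + 1] = 1.0 - t, t
    return M

src = [0.0, 1.0, 2.0]
dst = [0.5, 1.5]
M = interpolation_matrix(src, dst)
U = [0.0, 2.0, 4.0]   # nodal values of the linear field u(x) = 2x
V = [sum(m * u for m, u in zip(row, U)) for row in M]
print(V)  # exact for a linear field: [1.0, 3.0]
```

Building M once and reusing it is exactly the optimization the matrix variant of getfem::interpolation offers when the same pair of finite element methods is used repeatedly.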
Once this matrix is built, the interpolation is done with a simple matrix multiplication: Interpolation based on the generic weak form language (GWFL)¶ It is possible to extract some arbitrary expressions on possibly several fields thanks to GWFL and the interpolation functions. This is specially dedicated to the model object (but it can also be used with a ga_workspace object). For instance if md is a valid object containing some defined variables u (vectorial) and p (scalar), one can interpolate on a Lagrange finite element method an expression such as p*Trace(Grad_u). The resulting expression can be scalar, vectorial or tensorial. The size of the resulting vector is automatically adapted. The high-level generic interpolation functions are defined in the file getfem/getfem_generic_assembly.h. There is different interpolation functions corresponding to the interpolation on a Lagrange fem on the same mesh, the interpolation on a cloud on points or on a getfem::im_data object. Interpolation on a Lagrange fem: void getfem::ga_interpolation_Lagrange_fem(workspace, mf, result); where workspace is a getfem::ga_workspace object which aims to store the different variables and data (see Compute arbitrary terms - high-level generic assembly procedures - Generic Weak-Form Language (GWFL)), mf is the getfem::mesh_fem object reresenting the Lagrange fem on which the interpolation is to be done and result is a beot::base_vector which store the interpolatin. Note that the workspace should contain the epression to be interpolated. void getfem::ga_interpolation_Lagrange_fem(md, expr, mf, result, rg=mesh_region::all_convexes()); where md is a getfem::model object (containing the variables and data), expr (std::string object) is the expression to be interpolated, mf is the getfem::mesh_fem object reresenting the Lagrange fem on which the interpolation is to be done, result is the vector in which the interpolation is stored and rg is the optional mesh region. 
Interpolation on a cloud of points:

void getfem::ga_interpolation_mti(md, expr, mti, result, extrapolation = 0, rg=mesh_region::all_convexes(), nbpoints = size_type(-1));

where md is a getfem::model object (containing the variables and data), expr (a std::string object) is the expression to be interpolated, mti is a getfem::mesh_trans_inv object which stores the cloud of points (see getfem/getfem_interpolation.h), result is the vector in which the interpolation is stored, extrapolation is an option for extrapolating the field outside the mesh for outside points, rg is the optional mesh region and nbpoints is the optional maximal number of points.

Interpolation on an im_data object (on the Gauss points of an integration method):

void getfem::ga_interpolation_im_data(md, expr, im_data &imd, base_vector &result, const mesh_region &rg=mesh_region::all_convexes());

where md is a getfem::model object (containing the variables and data), expr (a std::string object) is the expression to be interpolated, imd is a getfem::im_data object which refers to an integration method (see getfem/getfem_im_data.h), result is the vector in which the interpolation is stored and rg is the optional mesh region.
Composing springs dynamical systems

Physical systems are often composed of many interacting subsystems. In this post, we take a peek at the math and the software implementation for composing systems of springs using decorated cospans.

1 Physics is hard

In my 16+ years of formal education, the lowest grade I've ever received was in General Physics I, which covered "vectors, kinematics, Newton's laws and dynamics, conservation laws, work and energy, oscillatory motion,…" There's a textbook diagram that stands out in my memory as an example of why this course was so challenging for me. The diagram contains a bunch of pulleys (where "a bunch" means like "3") all connected to each other. One pulley made sense. Two pulleys, okay. But once there were many pulleys, many masses, and hence many forces to track, I would invariably lose my way. I understood each of the components, but in my physics education there was not a practice of explicitly representing their composition. Instead the composition pattern was implicit to the solving process, and so computing the composition of many pulleys was largely heuristic rather than a formal and legible process. Thankfully applied category theory is here to help me out.

Consider for example the problem of determining the oscillation of a spring on Earth. Here's how undergraduate-me would have solved that problem: the fact that this system has two components (the spring and the effects of gravity) with a specified interaction (the mass in the two systems is the same) is hidden in the sum F_\text{total} = F_\text{spring} + F_\text{gravity}.

Roughly, we can formalize the interaction using an undirected wiring diagram (UWD) like this:

where m represents the mass and x represents the position of the mass. That the ports labeled x in each component system are wired together means that they are identified in the composite system. We can formalize the component systems as differential equations using the operad algebra of resource sharers. To solve this physics problem, I provide the components and interactions on the left-hand side, and the resource sharing machinery defines the composite system on the right. Then I can solve \ddot x = \dot v_1 + \dot v_2 = -\frac{kx}{m} - 9.8. Using the same strategy of explicitly representing the components and their interactions, we can also understand two springs in parallel and in series:
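The composite equation ẍ = −kx/m − 9.8 can be sanity-checked numerically; here is a small Python sketch of ours (the post itself uses Julia) that integrates the composed vector field and confirms the shifted equilibrium x* = −gm/k:

```python
def composed_field(state, k=10.0, m=1.0, g=9.8):
    """Spring-on-Earth vector field: resource sharing adds the spring's and
    gravity's contributions on the shared position/velocity variables."""
    x, v = state
    return [v, -k * x / m - g]

def rk4(f, state, dt, steps):
    """Classic 4th-order Runge-Kutta integrator for a list-valued state."""
    for _ in range(steps):
        k1 = f(state)
        k2 = f([s + dt / 2 * a for s, a in zip(state, k1)])
        k3 = f([s + dt / 2 * a for s, a in zip(state, k2)])
        k4 = f([s + dt * a for s, a in zip(state, k3)])
        state = [s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4)]
    return state

# Started at the shifted equilibrium x* = -g*m/k = -0.98, the mass stays put.
final = rk4(composed_field, [-0.98, 0.0], 0.01, 1000)
print(final)  # stays near [-0.98, 0.0]
```

Starting anywhere else yields an oscillation around this lower equilibrium, at the same frequency √(k/m) as the isolated spring — the behavior the animations below display.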
We can formalize the component systems as differential equations using the operad algebra of resource sharers: To solve this physics problem, I provide the components and interactions on the left-hand side, and the resource sharing machinery defines the composite system on the right. Then I can solve, \ddot x = \dot v_1 + \dot v_2 = \frac{kx}{m} - 9.8. Using the same strategy of explicitly representing the components and their interactions, we can also understand two springs in parallel and in series: 2 Implementation in AlgebraicDynamics In AlgebraicDynamics we can translate our whiteboard diagrams into compilable code in order to compute and then simulate the composite systems. Let’s do this for the spring on Earth. First we implement the anchored spring as a ContinuousResourceSharer which specifies the interface and a particular choice of system with that interface. function anchored_spring_dynamics(k) (u,p,t) -> begin mass, pos, vel = u [0., vel, -k*pos/mass] anchored_spring(k) = ContinuousResourceSharer{Float64}( [:mass, :pos], # interface 3, # number of state variables anchored_spring_dynamics(k), # dynamics [1,2] # port map Let’s break down this specification: • [:mass, :pos] indicates that the interface of this system has two exposed ports labeled :mass and :pos. • 3 indicates that there are three state variables. • anchored_spring_dynamics(k) returns a Julia function that defines the vector field for an anchored spring with spring constant k. • [1,2] defines the port map, which indicates that the first port exposes the first state variable and the second port exposes the second state variable. Given an initial mass, position, and velocity for the spring, we can use the DifferentialEquations Julia package to solve the system and plot the trajectories of the exposed state variables over As expected the mass remains constant throughout the run while the position oscillates. While this plot is indicative, it is more fun to use Javis to animate our spring. 
Now we're ready for composition. While ContinuousResourceSharers represent single systems, UndirectedWiringDiagrams represent an interaction pattern. Once the primitive systems and the interaction patterns are defined, we can compose using the oapply method.

# Define the gravitational system
gravity_model = ContinuousResourceSharer{Float64}([:pos], 2, (u,p,t) -> [u[2], -9.8], 1:1)

# Define the interaction pattern as a UWD
interaction = @relation (mass, pos) begin
    spring(mass, pos)
    gravity(pos)
end

# Compose
spring_on_earth = oapply(interaction, Dict(:spring => spring, :gravity => gravity_model))

Catlab automatically generates a visual of the UWD interaction. The animation of a solution for the composite system spring_on_earth is exactly what we expect: the spring on Earth oscillates with the same frequency as the original spring but with a lower equilibrium point.

2.1 Springs in parallel

As shown in the introduction, we can construct springs in parallel as the composition of two anchored springs whose positions and masses are identified. This interaction pattern is encoded by the following UWD.

Now the implementation of springs in parallel is just a few lines of code.

k1 = 10.0; k2 = 20.0
spring1 = anchored_spring(k1)
spring2 = anchored_spring(k2)

interaction = @relation (mass, pos) begin
    spring1(mass, pos)
    spring2(mass, pos)
end

parallel_springs = oapply(interaction, Dict(:spring1 => spring1, :spring2 => spring2))

A nice test of this example is to compare our springs in parallel to a single spring with spring constant k_1 + k_2. According to physics, these have the same behavior. And indeed that's what we see.

2.2 Springs in series

For springs in series, we have an anchored spring connected to a free spring^1 in which one anchor point of the free spring is the mass of the anchored spring. This interaction pattern is encoded by the following UWD.
Again, the implementation of springs in series is now just a few lines of code:

k1 = 10.0; k2 = 20.0
spring1 = anchored_spring(k1)
spring2 = free_spring(k2)

interaction = @relation (m1, m2, p1, p2) begin
    anchored_spring(m1, p1)
    free_spring(m1, p1, m2, p2)
end

series_springs = oapply(interaction, Dict(:anchored_spring => spring1, :free_spring => spring2))

When both masses are 5 grams, we can simulate the springs in series and animate the results. If the mediating mass is negligible, then our springs in series have the same behavior as a single spring with spring constant \frac{k_1 k_2}{k_1 + k_2}.

(Left) Two springs in series with spring constants k_1 and k_2 and where the intermediate point mass is trivial. (Right) An anchored spring with spring constant \frac{k_1 k_2}{k_1 + k_2}.

Pretty cool!

Baez, John C., and Brendan Fong. 2018. "A Compositional Framework for Passive Linear Networks." arXiv:1504.05625, November.
Baez, John C., and Blake S. Pollard. 2017. "A Compositional Framework for Reaction Networks." Reviews in Mathematical Physics 29 (09): 1750028.
Baez, John C., David Weisbart, and Adam M. Yassine. 2021. "Open Systems in Classical Mechanics." Journal of Mathematical Physics 62 (4): 042902.
Fong, Brendan, and David I. Spivak. 2019. "Hypergraph Categories." Journal of Pure and Applied Algebra 223 (11): 4746–77.
Libkind, Sophie, Andrew Baas, Evan Patterson, and James Fairbanks. 2021. "Operadic Modeling of Dynamical Systems: Mathematics and Computation." arXiv:2105.12282, May.
Willems, Jan. 2007. "The Behavioral Approach to Open and Interconnected Systems." IEEE Control Systems 27 (6): 46–99.

1. The free spring has a mass, position, and velocity for both ends of the spring, so it has 6 state variables and 4 exposed ports.↩︎
2. Every decorated cospan category is a hypergraph category and every hypergraph category is 1-equivalent to a cospan algebra (Fong and Spivak 2019).↩︎
3.
You can find more details about the other algebras implemented in AlgebraicDynamics.jl in (Libkind et al. 2021).↩︎
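As a closing numeric note on the parallel and series comparisons above, the equivalent spring constants can be stated as a tiny sketch (Python here rather than the post's Julia; function names are ours):

```python
def parallel_k(k1, k2):
    """Springs in parallel add stiffness."""
    return k1 + k2

def series_k(k1, k2):
    """Springs in series combine like parallel resistors: k1*k2/(k1+k2)."""
    return k1 * k2 / (k1 + k2)

k1, k2 = 10.0, 20.0
print(parallel_k(k1, k2))  # 30.0
print(series_k(k1, k2))    # ~6.667, softer than either spring alone
```

These are the constants against which the composite systems parallel_springs and series_springs are compared in the post.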
Design Of Electrical Analogy Apparatus For Drawing Flownet And Studying Uplift Pressure

Draw their V-I characteristics and explain them? In this instance the load current flows through the series field winding, so that the load current and series field current are one and the same. In the long shunt connection the voltage across the shunt field is the same as the terminal voltage of the generator, and the current in the armature will be the current in the series field. The armature current equals the shunt field current plus the load current. A shunt generator has the field circuit connected directly across the armature. As more devices are connected in parallel, the load on the generator increases, so the generator current increases, which results in a decrease in the terminal voltage of the generator. The disturbing force at any point is proportional to the gradient of the pressure of water at that point (i.e. dp/dl).

8 Properties of Flow Net

• The flow lines and equipotential lines meet at right angles to each other.
• This method requires a lot of erasing to get the proper shape of a flow net and also consumes a lot of time.
• It is also commonly referred to as the hydraulic conductivity of a soil.
• Flow nets are drawn based on the boundary conditions only.
• Computer groundwater models are based on the geologic and hydrologic field data collected during drilling, geotechnical sample analysis and aquifer testing.
• Calculate the transmissivity, storativity and hydraulic conductivity using methods and references listed in Subsection 6.7, Aquifer Data Collection Methods.

Anisotropy, which is the opposite of "isotropy," is a term used to denote preferential flow direction in soils and other geologic materials.
If soil consisted of perfectly spherical grains, flow rates would be isotropic – the same in all directions, other factors being equal. V. The equipotential lines must start and end at right angles to the first and last flow lines respectively. • A grid obtained by drawing a series of streamlines ψ and equipotential lines Φ is known as a flow net. Sketching a plausible family of equipotential lines before starting to sketch flow lines. It is curvilinear in nature and formed by the combination of the flow lines and the equipotential lines. Why A Dc Generator Fails To Build Up Voltage? In hydrology, seepage flow refers to the flow of a fluid in permeable soil layers such as sand. The fluid fills the pores in the unsaturated bottom layer and moves into the deeper layers as a result of the effect of gravity. The soil has to be permeable so that the seepage water is not stored. Contours of total head and flow vectors can be plotted. An option is also available for computing flow potential values at the nodes. Together with the equipotential lines , the flow lines can be used to plot a flow net. Each square obtained by intersection of flow lines and equipotential lines is called a field. Time-dependent flow net equations are limited in engineering applications to simple boundary conditions. The geometry of transient flow nets does not change with time, as only the numerical values assigned to equipotential lines and flow lines change with time. Construction of a flownet is often used for solving groundwater flow problems where the geometry makes analytical solutions impractical. The method is often used in civil engineering, hydrogeology or soil mechanics as the first check for problems of flow under hydraulic structures like dams or sheet pile walls. Flow nets are drawn based on the boundary conditions only. They are independent of the permeability of soil and the head causing flow. 
The space formed between two flow lines and two equipotential lines is called a flow field. This line separates a saturated soil mass from an unsaturated soil mass. It is not an equipotential line, but a flow line. For an earthen dam, the phreatic line approximately assumes the shape of a parabola. These types of points often make other types of solutions to these problems difficult, while the simple graphical technique handles them nicely. The second flow net pictured here (modified from Ferris, et al., 1962) shows a flow net being used to analyze map-view flow, rather than a cross-section. Note that this problem has symmetry, and only the left or right portion of it needed to be done. To create a flow net to a point sink, there must be a recharge boundary nearby to provide water and allow a steady-state flow field to develop. Too many flow channels distract attention from the essential features. Normally, three to five flow channels are sufficient. The accuracy of the computation of hydraulic quantities, such as discharge and pore water pressure, does not depend much on the exactness of the flow net. To construct a flow net for a site, measure the hydraulic head in wells across the site following the groundwater gauging procedures detailed previously in this section. Interpolate the hydraulic head between wells assuming that the change in head is linear between neighboring wells. Connect points of equal hydraulic head to depict the equipotential lines. Choose equipotential line intervals such that the drop in head between adjacent lines is constant. The equipotential lines represent the height of the water table or potentiometric surface above mean sea level or another datum plane. The line must be at right angles to the upstream and downstream beds. Also, let Δq represent the discharge passing through the flow channel, per unit length of structure. SEEP2D is a two-dimensional finite element groundwater model developed by Fred Tracy of the U.S.
SEEP2D is designed to be used on profile models such as cross-sections of earth dams or levees. There is also a pressure in the downward direction due to the submerged weight of the soil.

What Is The Recommended Formula For Top Width Of A Very Low Dam?

The total head is lost by the time the water reaches the downstream end. Naturally, the downstream ground surface represents an equipotential line with zero head. Damnasht is a small-sized hydraulics application whose main purpose is to aid individuals in drawing flow nets for sheet piles, as the name hints. SEEP2D can be used for either confined or unconfined steady-state flow models. For unconfined models, both saturated and unsaturated flow are simulated. The phreatic surface can be displayed by plotting the contour line where the pressure head equals zero. Equipotential lines are like contour lines on a map, which trace lines of equal altitude. In this case the "altitude" is electric potential or voltage. Equipotential lines are always perpendicular to the electric field. The flow net is not applied to sharply diverging flow, as the actual flow pattern is not represented by the flow net. Internal erosion is the formation of voids within a soil caused by the removal of material by seepage. Piping is induced by regressive erosion of particles from downstream and along the upstream line towards an outside environment until a continuous pipe is formed. The coefficient of permeability of a soil describes how easily a liquid will move through the soil. It is also commonly referred to as the hydraulic conductivity of a soil. This factor can be affected by the viscosity, or thickness, of a liquid and by its density. V. In case one more point is to be located, say P, from the vertical line QP at any distance x from

F) Name a field instrument that you could use to monitor the pore pressure at any point. First identify the hydraulic boundary conditions.
In Fig. 8.3, the upstream bed level GDA represents the 100% potential line and the downstream bed level CFJ the 0% potential line. Portions of this flow net have been subdivided one or more times to give greater detail. Apparent directions of needed corrections of this flow net are indicated by the arrows. Directions of these intersecting lines are shown at several random points in Figure 4.2a. Unconfined flow in single-permeability sections. Confined flow in single-permeability sections. B) Label the recharge and discharge areas of the flow net. It is only applied to problems with simple and ideal boundary conditions. Streamlines can be traced by injecting a dye in a seepage model or Hele-Shaw apparatus. Gravel and sand are both porous and permeable, making them good aquifer materials. The flow net can be understood as the graphical representation of the flow of water through a mass of soil. The uplift pressure at any point within the soil mass can be found using the undermentioned formula. These points are mathematical artifacts of the equation used to solve the real-world problem, and do not actually mean that there is infinite or no flux at points in the subsurface. The equipotential lines are further extended downward, and one more flow line GHJ is drawn, representing the step. Starting from the upstream end, divide the first flow channel into approximate squares. The size of the squares should change gradually. Some of the squares may, however, be quite irregular. The flow lines and equipotential lines should be orthogonal and form approximate squares. The horizontal and vertical components of the hydraulic gradient are, respectively. Flow nets must satisfy the boundary conditions of the flow field. The quantity of water flowing through each flow channel is the same. The potential drop between any two consecutive equipotential lines is the same/constant.
Flow lines and equipotential lines are smooth curves.

A) It Is Only Applied To Problems With Simple And Ideal Boundary Conditions

From the drawn flow net, Nf and Nd can be easily counted, and hence the seepage discharge can be easily computed using the equation. Eqn. 3 can be solved if the boundary conditions at the inlet and exit are known. It is the pore water pressure, and it acts vertically upward due to the residual pressure head. Make a second trial adjustment of the constructed flow net. It is parallel to impermeable boundaries or to constant-head boundaries. In a matter of moments, the chart is going to be displayed in the main window along with the value of the shape factor and the critical point. If required, more trials may be taken to finalize the flow net.

• The streamlines in a flow net show the direction of flow, and the equipotential lines join the points of equal velocity potential Φ.
• The streamlines ψ and equipotential lines Φ are mutually perpendicular to each other.

The flow net in Figure 4.4c with 1.2 flow channels is an example.

Various Methods Of Drawing Flow Nets

The long flow line is indicated by the impervious stratum NP. The groundwater flow direction is along the flow lines. Depict flow lines as arrows pointing in the direction of groundwater flow, i.e., in the direction of declining hydraulic head. A flow net consists of two sets of lines, flow lines and equipotential lines. Flow lines or streamlines are the loci of the paths of flow of individual water particles. Equipotential lines pass through points of equal hydraulic head. All intersections between the streamlines and equipotential lines are at right angles. The flow lines and equipotential lines intersect at 90 degrees to each other. A) What is the vertical permeability of the dam, if the horizontal permeability has been determined to be 3.6 × 10⁻⁷ m/s? B) Using the incomplete sketch given to you as a separate sheet, construct a flow net to enable the seepage losses to be determined.
C) Hence determine the seepage losses (i.e. flow rate) in m³/day through the dam if the elevation of water behind the dam is 16.8 m above the toe drain. D) Find the pore pressure at an elevation of 5 m from the base of the dam along its centreline. E) Name a field instrument that would allow you to measure the horizontal permeability of the soil.

What Is The Use Of A Flow Net?

Weirs designed and constructed on the basis of Bligh's theory also failed due to undermining of the subsoil. As a result, it was thought essential to study the problem of weirs on permeable foundations more elaborately. The theory of flow nets provides a remarkable solution to the problem. Equipotentials are established by the boundary conditions before the flow net is started. Either flow lines or equipotential lines are smoothly drawn curves. They indicate the path followed by the seepage water. The other set of curves is called equipotential lines. As an example, suppose that it is necessary to draw the flow net for the conditions shown in Fig. The boundary conditions for this problem are shown in Fig. 2.9b, and the sketching procedure for the flow net is illustrated in Figs c, d, e and f of Fig. If the flow net is correct the following conditions will apply.
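The seepage discharge computed from the counted flow channels Nf and equipotential drops Nd can be sketched in a few lines. The equation number cited in the text is not reproduced there, so this uses the standard textbook flow-net formula q = k·H·(Nf/Nd); the numerical values below are illustrative assumptions, not data from the text:

```python
def flow_net_seepage(k, H, Nf, Nd):
    """Seepage per unit length of structure, q = k * H * (Nf / Nd).

    k  -- coefficient of permeability (m/s)
    H  -- total head loss across the structure (m)
    Nf -- number of flow channels counted on the flow net
    Nd -- number of equipotential drops counted on the flow net
    """
    return k * H * Nf / Nd

# Illustrative numbers only: k = 1e-6 m/s, H = 16.8 m, Nf = 4, Nd = 12.
q = flow_net_seepage(1e-6, 16.8, 4, 12)   # m^3/s per metre of structure
q_per_day = q * 86400                     # converted to m^3/day per metre
```

Note that q depends only on the ratio Nf/Nd, which is why, as the text says, the accuracy of the discharge computation does not depend much on the exactness of the sketched flow net.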
The Möbius Strip

Any strip of paper joined at the ends to form a continuous round band has two edges and, as one would expect, two surfaces: an exterior surface and an interior surface. However, giving the strip of paper a half-twist before joining the ends produces a band with a single surface and a single edge, known as a Möbius strip. This strange phenomenon was first described by the nineteenth-century German mathematician August Ferdinand Möbius, for whom the strip has been named. You may want to experiment with other possibilities. What happens if you cut the Möbius strip 1/3 of the way from the edge, instead of in the middle? What if you create strips with two half-twists, or three half-twists? The Möbius strip is an interesting amusement, but it also has practical uses. For example, a belt connecting pulleys might be made to be a Möbius strip to ensure that it will wear evenly. Sources used (see bibliography page for titles corresponding to numbers): 64.
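The one-sidedness described above also shows up in the standard parametrization of the Möbius strip. The sketch below is my own illustration, not from the article: travelling once around the band (u increased by 2π) carries a point at offset +v across to offset −v, so the "two" edges are really one:

```python
import math

def mobius_point(u, v, R=1.0):
    """Point on a Möbius strip: the centreline is a circle of radius R,
    and the cross-section rotates by a half-twist as u runs 0..2*pi."""
    x = (R + v * math.cos(u / 2)) * math.cos(u)
    y = (R + v * math.cos(u / 2)) * math.sin(u)
    z = v * math.sin(u / 2)
    return (x, y, z)

# One full trip around the band sends the point at offset +0.3 to the
# same place as the point that started at offset -0.3.
p_start = mobius_point(0.0, -0.3)
p_loop = mobius_point(2 * math.pi, +0.3)
# p_start and p_loop agree up to floating-point rounding.
```

Sampling this function over a grid of (u, v) values is also an easy way to plot the strip and see the half-twist directly.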
The Adventures of Topology Man (2005), Alex Kasman
Parody is easy....topology is hard! In this short story, I made use of (and made fun of) the classic superhero comic book genre to illustrate some ideas from topology. So, we end up seeing a battle...

Another New Math (2005), Alex Kasman
A mathematician and his young daughter try to convince a school board to consider teaching advanced mathematics to elementary school children in this short story that appeared in the collection Reality...

The Center of the Universe (2005), Alex Kasman
This short story was intended to serve two different purposes. On the one hand it is a glimpse into the lives and interactions of mathematics graduate students. And, on the other, it addresses the philosophical...

The Exception (2005), Alex Kasman
Written in the form of a dialogue between a man in a nursing home and his grandchild, this short story describes an undergraduate research project that produces a surprising answer to one of the most famous...

Eye of the Beholder (2005), Alex Kasman
Shortly after a stunning success in her research, personal tragedy forces a math professor to change careers and begin work at the NSA, where her work on cryptography involves some difficult ethical decisions...

The Legend of Howard Thrush (2005), Alex Kasman
I always have enjoyed the American folk tale, a medium in which one pretends to be speaking earnestly and in all sincerity about a history so ridiculous that it simply cannot be taken seriously. There...

The Math Code (2005), Alex Kasman
A friend of mine once told me that he believes that mathematicians invented intentionally confusing notations to keep others from understanding what they were saying. I'm sure this is not true. We mathematicians...

Maxwell's Equations (2005), Alex Kasman
James Clerk Maxwell was the 19th century theoretician who discovered electro-magnetic waves. He is often described as a "physicist", but I would argue that he was a mathematician. Certainly some of his...

Monster (2005), Alex Kasman
A story about group theory, plagiarism, the untapped potential of a collaboration between mathematics and marketing, the bleak financial future of academia, and the Monster. This story talks about...

Murder, She Conjectured (2005), Alex Kasman
A police psychologist attending a conference in Cambridge, England is pulled into an unsolved murder mystery by her mathematician boyfriend. An important theme of the story is the oppressive sexism that...

The Object (2005), Alex Kasman
This is a mathematical horror story, written by someone who doesn't like horror stories. Since I'm the author, I can honestly (and humbly) admit that the result is kind of weird. The plot concerns...

On the Quantum Theoretic Implications of Newton's Alchemy (2007), Alex Kasman
A postdoc at the mysterious "Institute for Mathematical Analysis and Quantum Chemistry" is surprised to learn that his work on Riemann-Hilbert Problems is being used as part of his employer's crazy alchemy...

Pop Quiz (2005), Alex Kasman
An algebraic geometer is called in when messages from an alien spacecraft appear to be asking questions about projective varieties. Though it may at first appear to be another "mathematics as a common...

Progress (2005), Alex Kasman
The mathematics of ancient Egypt can look very strange to us today. For example, although they did not have many fractions, they did know about the number 2/3. Strangely, however, it took a page of computation...

Reality Conditions (2005), Alex Kasman
The title story in the collection of the same name, this short story follows a mathematics grad student to a workshop at the Mathematical Sciences Research Institute. Although the story contains no supernatural...

Reality Conditions: short mathematical fiction (2005), Alex Kasman
The stories in this collection of 16 original short works of mathematical fiction are different from each other in many ways: some are serious and some funny, some are realistic and some fantastical,...

Unreasonable Effectiveness (2003), Alex Kasman
"Unreasonable Effectiveness" reminds me of a classic Arthur C. Clarke style short story. It has exactly enough mathematics done correctly and a twist that boggles the mind at the end. To be fair...
Topological Dynamics and Applications

eBook ISBN: 978-0-8218-7807-1
Product Code: CONM/215.E
List Price: $125.00
MAA Member Price: $112.50
AMS Member Price: $100.00

Contemporary Mathematics, Volume 215; 1998; 334 pp
MSC: Primary 54; 28; 34

This book is a very readable exposition of the modern theory of topological dynamics and presents diverse applications to such areas as ergodic theory, combinatorial number theory and differential equations. There are three parts: 1) The abstract theory of topological dynamics is discussed, including a comprehensive survey by Furstenberg and Glasner on the work and influence of R. Ellis. Presented in book form for the first time are new topics in the theory of dynamical systems, such as weak almost-periodicity, hidden eigenvalues, a natural family of factors and topological analogues of ergodic decomposition. 2) The power of abstract techniques is demonstrated by giving a very wide range of applications to areas of ergodic theory, combinatorial number theory, random walks on groups and others. 3) Applications to non-autonomous linear differential equations are shown. Exposition on recent results about Floquet theory, bifurcation theory and Lyapunov exponents is given.

Readership: Graduate students and research mathematicians working in ergodic theory, topological dynamics, differential equations and dynamical systems.
ACTA SCIENTIARUM MATHEMATICARUM (Szeged) Rota--Baxter operators on involutive associative algebras Apurba Das Abstract. In this paper, we consider Rota--Baxter operators on involutive associative algebras. We define cohomology for Rota--Baxter operators on involutive algebras that governs the formal deformation of the operator. This cohomology can be seen as the Hochschild cohomology of a certain involutive associative algebra with coefficients in a suitable involutive bimodule. We also relate this cohomology with the cohomology of involutive dendriform algebras. Finally, we show that the standard Fard--Guo construction of the functor from the category of dendriform algebras to the category of Rota--Baxter algebras restricts to the involutive case. DOI: 10.14232/actasm-020-616-0 AMS Subject Classification (1991): 16E40, 16S80, 16W99 Keyword(s): involutive algebras, Hochschild cohomology, Rota--Baxter operators, deformations, dendriform algebras received 16.6.2020, accepted 21.5.2021. (Registered under 616/2020.) On lattice isomorphisms of orthodox semigroups Simon M. Goberstein Abstract. Two semigroups are lattice isomorphic if the lattices of their subsemigroups are isomorphic, and a class of semigroups is lattice closed if it contains every semigroup which is lattice isomorphic to some semigroup from that class. An orthodox semigroup is a regular semigroup whose idempotents form a subsemigroup. We prove that the class of all orthodox semigroups in which every nonidempotent element has infinite order is lattice closed. DOI: 10.14232/actasm-020-558-7 AMS Subject Classification (1991): 20M15, 20M18, 20M19; 08A30 Keyword(s): torsion-free semigroups, orthodox semigroups, monogenic orthodox semigroups, inverse semigroups, monogenic inverse semigroups, lattice isomorphisms of semigroups, lattice determined semigroups, lattice closed classes of semigroups received 27.10.2020, revised 4.8.2021, accepted 13.8.2021. (Registered under 58/2020.) 
Lamps in slim rectangular planar semimodular lattices Gábor Czédli Abstract. A planar (upper) semimodular lattice $L$ is \emph {slim} if the five-element nondistributive modular lattice $M_3$ does not occur among its sublattices. (Planar lattices are finite by definition.) \emph {Slim rectangular lattices} as particular slim planar semimodular lattices were defined by G. Grätzer and E. Knapp in 2007. In 2009, they also proved that the congruence lattices of slim planar semimodular lattices with at least three elements are the same as those of slim rectangular lattices. In order to provide an effective tool for studying these congruence lattices, we introduce the concept of \emph {lamps} of slim rectangular lattices and prove several of their properties. Lamps and several tools based on them allow us to prove in a new and easy way that the congruence lattices of slim planar semimodular lattices satisfy the two previously known properties. Also, we use lamps to prove that these congruence lattices satisfy four new properties including the \emph {Two-pendant Four-crown Property} and the \emph {Forbidden Marriage Property}. DOI: 10.14232/actasm-021-865-y AMS Subject Classification (1991): 06C10 Keyword(s): rectangular lattice, slim semimodular lattice, multifork extension, lattice diagram, edge of normal slope, precipitous edge, lattice congruence, two-pendant four-crown property, lamp, congruence lattice, forbidden marriage property received 15.1.2021, revised 5.3.2021, accepted 11.3.2021. (Registered under 115/2021.) [Open Access VIEW] (1+1+2)-generated lattices of quasiorders Delbrin Ahmed, Gábor Czédli Abstract. A lattice is $(1+1+2)$-generated if it has a four-element generating set such that exactly two of the four generators are comparable. 
We prove that the lattice $\Quo n$ of all quasiorders (also known as preorders) of an $n$-element set is $(1+1+2)$-generated for $n=3$ (trivially), $n=6$ (when $\Quo 6$ consists of $209\,527$ elements), $n=11$, and for every natural number $n\geq 13$. In 2017, the second author and J. Kulin proved that $\Quo n$ is $(1+1+2)$-generated if either $n$ is odd and at least $13$ or $n$ is even and at least $56$. Compared to the 2017 result, this paper presents twenty-four new numbers $n$ such that $\Quo n$ is $(1+1+2)$-generated. Except for $\Quo 6$, an extension of Zádori's method is used. DOI: 10.14232/actasm-021-303-1 AMS Subject Classification (1991): 06B99 Keyword(s): quasiorder lattice, lattice of preorders, minimum-sized generating set, four-generated lattice, $(1+1+2)$-generated lattice, Zádori's method received 3.5.2021, revised 19.5.2021, accepted 19.5.2021. (Registered under 53/2021.) [Open Access VIEW] Commuting row contractions with polynomial characteristic functions Monojit Bhattacharjee, Kalpesh J. Haria, Jaydeb Sarkar Abstract. A characteristic function is a special operator-valued analytic function defined on the open unit ball of $\mathbb {C}^n$ associated with an $n$-tuple of commuting row contraction on some Hilbert space. In this paper, we continue our study of the representations of $n$-tuples of commuting row contractions on Hilbert spaces, which have polynomial characteristic functions. Gleason's problem plays an important role in the representations of row contractions. We further complement the representations of our row contractions by proving theorems concerning factorizations of characteristic functions. 
We also emphasize the importance and the role of noncommutative operator theory and noncommutative varieties to the classification problem of polynomial characteristic functions. DOI: 10.14232/actasm-020-303-x AMS Subject Classification (1991): 47A45, 47A20, 47A48, 47A56 Keyword(s): characteristic functions, analytic model, nilpotent operators, operator-valued polynomials, Gleason's problem, factorizations received 22.10.2020, revised 28.6.2021, accepted 3.7.2021. (Registered under 53/2020.) Wold-type decomposition for bi-regular operators H. Ezzahraoui, M. Mbekhta, E. H. Zerouali Abstract. We show in this paper that a Wold-type decomposition holds for the class of regular operators with regular Moore--Penrose inverse. We also give several examples and investigate various properties of such a class of operators. DOI: 10.14232/actasm-020-399-2 AMS Subject Classification (1991): 47A15; 47B37 Keyword(s): Wold-type decomposition, regular and bi-regular operators, Moore--Penrose inverse, Cauchy dual received 18.11.2020, revised 10.7.2021, accepted 30.7.2021. (Registered under 149/2020.) Pusz--Woronowicz's functional calculus revisited Kanae Hatano, Yoshimichi Ueda Abstract. This note is a complement to Pusz--Woronowicz's works on functional calculus for two positive forms from the viewpoint of operator theory. Based on an elementary, self-contained and purely Hilbert space operator explanation of their functional calculus, we show that any operator connection type operations (including any operator perspectives) are captured by their functional calculus. DOI: 10.14232/actasm-021-263-6 AMS Subject Classification (1991): 47A60; 47A64 Keyword(s): functional calculus, operator connection, operator perspective, convexity received 3.1.2021, revised 28.8.2021, accepted 31.8.2021. (Registered under 13/2021.) Lebesgue points and Cesàro summability of higher dimensional Fourier series over a cone Ferenc Weisz Abstract.
We introduce a new concept of Lebesgue points, the so-called $\omega $-Lebesgue points, where $\omega >0$. As a generalization of the classical Lebesgue's theorem, we prove that the Cesàro means $\sigma _n^{a}f$ of the Fourier series of a multi-dimensional function $f\in L_1(\T ^d)$ converge to $f$ at each $\omega $-Lebesgue point $(0<\omega <\alpha )$ as $n\to \infty $. DOI: 10.14232/actasm-021-614-3 AMS Subject Classification (1991): 42B08, 42A38, 42A24, 42B25 Keyword(s): Cesàro summability, Hardy--Littlewood maximal function, Lebesgue points received 14.1.2021, revised 29.8.2021, accepted 31.8.2021. (Registered under 114/2021.) Characterization of Schauder basis property of Gabor systems in local fields Biswaranjan Behera, Md. Nurul Molla Abstract. Let $K$ be a totally disconnected, locally compact and nondiscrete field of positive characteristic and $\D $ be its ring of integers. We characterize the Schauder basis property of the Gabor systems in $K$ in terms of $A_2$ weights on $\D \times \D $ and the Zak transform $Zg$ of the window function $g$ that generates the Gabor system. We show that the Gabor system generated by $g$ is a Schauder basis for $L^2(K)$ if and only if $|Zg|^2$ is an $A_2$ weight on $\D \times \D $. Some examples are given to illustrate this result. Moreover, we construct a Gabor system which is complete and minimal, but fails to be a Schauder basis for $L^2(K)$. DOI: 10.14232/actasm-021-120-8 AMS Subject Classification (1991): 43A70; 42B25, 43A25 Keyword(s): local field, Gabor system, Zak transform, $A_p$-weight, Schauder basis received 20.1.2021, accepted 20.3.2021. (Registered under 120/2021.) Decay of the elements of the inverses of some triangular Toeplitz matrices Roksana Krystyna Słowik Abstract. Banded lower triangular $\mathbb N\times \mathbb N$ Toeplitz matrices $A$ are considered. A sufficient condition for the elements of $A^{-1}$ to decay to $0$ fast is given. Moreover, some bounds of the norms of these inverses are also found. 
DOI: 10.14232/actasm-021-028-7 AMS Subject Classification (1991): 15B05, 15A99 Keyword(s): triangular Toeplitz matrices, matrix inverse, decay of elements, norm of a Toeplitz matrix received 8.2.2021, revised 31.5.2021, accepted 13.6.2021. (Registered under 28/2021.) A generalization of spin factors Anil Kumar Karn Abstract. Using the technique of adjoining an order unit to a normed linear space, we have characterized strictly convex spaces among normed linear spaces and Hilbert spaces among strictly convex Banach spaces, respectively. This leads to a generalization of spin factors and provides a new class of absolute order unit spaces. DOI: 10.14232/actasm-021-785-5 AMS Subject Classification (1991): 46B40; 46B20 Keyword(s): adjoining an order unit, strictly convex space, absolutely ordered space, absolute order unit space, $JB$-algebra, spin factor received 5.3.2021, revised 19.4.2021, accepted 30.7.2021. (Registered under 35/2021.) Positive Desch--Schappacher perturbations of bi-continuous semigroups on $\mathrm {AM}$-spaces Christian Budde Abstract. In this paper, we consider positive Desch--Schappacher perturbations of bi-continuous semigroups on $\mathrm {AM}$-spaces with an additional property concerning the additional locally convex topology. As an example, we discuss perturbations of the left-translation semigroup on the space of bounded continuous functions on the real line and on the space of bounded linear operators. DOI: 10.14232/actasm-021-914-5 AMS Subject Classification (1991): 47D03, 47A55, 34G10, 46A70, 46A40 Keyword(s): bi-continuous semigroups, positivity, Desch--Schappacher perturbation, Gamma function received 14.4.2021, revised 8.7.2021, accepted 10.7.2021. (Registered under 414/2021.) [Open Access VIEW] Orthonormal polynomial basis in local Dirichlet spaces Emmanuel Fricain, Javad Mashreghi Abstract. We provide an orthogonal basis of polynomials for the local Dirichlet space $\mathcal {D}_\zeta $.
These polynomials have numerous interesting features and a very unique algebraic pattern. We obtain the recurrence relation, the generating function, a simple formula for their norm, and explicit formulae for the distance and the orthogonal projection onto the subspace of polynomials of degree at most $n$. The latter implies a new polynomial approximation scheme in local Dirichlet spaces. Orthogonal polynomials in a harmonically weighted Dirichlet space, created by a finitely supported singular measure, are also studied. DOI: 10.14232/actasm-021-465-4 AMS Subject Classification (1991): 30H05, 33C45, 33C47, 42B35 Keyword(s): harmonically weighted Dirichlet spaces, orthogonal polynomials, polynomial approximation received 15.7.2021, accepted 12.9.2021. (Registered under 715/2021.) Maximum parametric soft density of lattice configurations of balls Sami Mezal Almohammad Abstract. In 2018, Edelsbrunner and Iglesias-Ham defined a notion of density, called first soft density, for lattice packings of congruent balls in Euclidean $3$-space, which penalizes gaps and multiple overlaps. In their paper, they showed that this density is maximal in a $1$-parameter family of lattices, called diagonal family, for a configuration of congruent balls whose centers are the points of a face-centered cubic lattice. In this note we extend their notion of density, which we call first soft density of weight $t$, and show that it is maximal in the diagonal family for some family of congruent balls centered at the points of a face-centered cubic lattice, for every $t \geq 1$, and at the points of a body-centered cubic lattice for $t=0.5$. DOI: 10.14232/actasm-020-483-y AMS Subject Classification (1991): 52C17, 52A38, 52A15 Keyword(s): packing and covering, soft density of weight $t$, lattice configurations, Voronoi domains, Brillouin zones received 2.12.2020, revised 23.4.2021, accepted 17.8.2021. (Registered under 233/2020.) 
Regression estimators for the tail index Amenah AL-Najafi, László L. Stachó, László Viharos Abstract. We propose a class of weighted least squares estimators for the tail index of a distribution function with a regularly varying tail. Our approach is based on the method developed by Holan and McElroy (2010) for the Parzen tail index. We prove asymptotic normality and consistency for the estimators under suitable assumptions. These and earlier estimators are compared in various models through a simulation study using the mean squared error as criterion. The results show that the weighted least squares estimator has good performance. DOI: 10.14232/actasm-020-361-6 AMS Subject Classification (1991): 60F05, 62G32 Keyword(s): tail index, Pareto model, weighted least squares estimators, quantile process received 11.6.2020, revised 31.8.2021, accepted 2.9.2021. (Registered under 611/2020.) [Open Access VIEW] On the strong $(C,\alpha )$ laws of large numbers Takeshi Yoshimoto Abstract. We give a necessary and sufficient condition for the strong $(C,\alpha )$ law of large numbers with real order $\alpha >0$ for weighted sums of independent random variables satisfying the property $\alpha $-WH analogous to, though weaker than, the Hartman's type property. In particular, if a sequence of random variables is two-sided, then the strong $(C,\alpha )$ law of large numbers for the sequence can also be characterized by the ergodic Hilbert transform. DOI: 10.14232/actasm-021-271-y AMS Subject Classification (1991): 60F15; 47A35 Keyword(s): strong law of large numbers, weak homogeneity, Bourgain's return time theorem, sampling scheme, Doob scheme, ergodic Hilbert transform, universal sequence of weights received 1.2.2021, revised 15.6.2021, accepted 30.6.2021. (Registered under 21/2021.)
A toy is in the shape of a right circular cylinder with a hemisphere on one end and a cone on the other - Turito A toy is in the shape of a right circular cylinder with a hemisphere on one end and a cone on the other end. The height and radius of the cylindrical part are 13 cm and 5 cm respectively. The radii of the hemispherical and conical parts are the same as that of the cylindrical part. Calculate the surface area of the toy if the height of the conical part is 12 cm. The correct answer is: 770 cm² • We are given a toy in the shape of a right circular cylinder with a hemisphere on one end and a cone on the other end. The height and radius of the cylindrical part are 13 cm and 5 cm respectively. The radii of the hemispherical and conical parts are the same as that of the cylindrical part. • We have to find the surface area of the toy if the height of the conical part is 12 cm. Step 1 of 1: We have Radius of base of the cylinder = 5 cm Radius of base of the cone = 5 cm Height of hemisphere = its radius = 5 cm Slant height of the cone = √(12² + 5²) = 13 cm Total surface area of the toy = CSA of cone + CSA of cylinder + CSA of hemisphere = πrl + 2πrh + 2πr² = π(5)(13) + 2π(5)(13) + 2π(5)² = 245π = 245 × (22/7) = 770 cm²
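The total-area formula above can be checked with a short script (a sketch; the variable names are mine, not from the original solution):

```python
import math

# Dimensions from the problem (all in cm)
r = 5        # common radius of cylinder, cone, and hemisphere
h_cyl = 13   # height of the cylindrical part
h_cone = 12  # height of the conical part

# Slant height of the cone: l = sqrt(r^2 + h^2) = sqrt(25 + 144) = 13
l = math.sqrt(r**2 + h_cone**2)

csa_cone = math.pi * r * l               # curved surface area of the cone
csa_cylinder = 2 * math.pi * r * h_cyl   # curved surface area of the cylinder
csa_hemisphere = 2 * math.pi * r**2      # curved surface area of the hemisphere

total = csa_cone + csa_cylinder + csa_hemisphere  # 245 * pi
print(round(total))  # 770
```

With π taken as 22/7, the total is exactly 245 × 22/7 = 770 cm².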
Furnace Cost Calculator - Calculator Wow Furnace Cost Calculator When considering the installation or replacement of a furnace, it’s essential to evaluate the total cost involved. This includes not only the purchase price of the furnace itself but also various associated expenses such as installation, labor, and ongoing maintenance. To streamline this process, a Furnace Cost Calculator can prove invaluable. Formula: The total cost of a furnace (FC) can be calculated using the following formula: FC = Purchase Price + Installation Cost + Labor Cost + Maintenance and Repair Costs How to Use: 1. Enter the purchase price of the furnace in dollars. 2. Input the installation cost required for setting up the furnace. 3. Specify the cost of labor associated with the installation process. 4. Enter the anticipated expenses for maintenance and repairs. 5. Click the “Calculate” button to obtain the total furnace cost. Example: Let’s consider a scenario where: • Purchase Price = $2000 • Installation Cost = $500 • Labor Cost = $300 • Maintenance and Repairs Cost = $150 Using the Furnace Cost Calculator: 1. Enter the respective values. 2. Click “Calculate.” 3. The total furnace cost would be displayed, which is $2950. 1. What factors should be considered while estimating furnace costs? Factors to consider include purchase price, installation charges, labor costs, and ongoing maintenance expenses. 2. Are there any additional costs not accounted for in the calculator? Depending on the situation, there might be additional costs such as permits, ductwork modifications, or disposal fees. 3. Is the calculated cost accurate for all types of furnaces? While the calculator provides a good estimate, costs may vary based on the type and efficiency of the furnace, as well as regional differences in labor and materials. 4. Can I use this calculator for commercial furnace installations? Yes, the calculator can be used for both residential and commercial furnace cost estimations. 5. 
Does the calculator include taxes in the total cost? No, the calculator only sums up the specified expenses and does not include taxes. 6. How often should maintenance costs be accounted for? Maintenance costs should be estimated based on the expected lifespan of the furnace and the manufacturer’s recommendations for servicing. 7. Can I calculate the cost in currencies other than dollars? Currently, the calculator operates in dollars; however, you can manually convert the values if needed. 8. Is the calculator suitable for estimating furnace replacement costs? Yes, the calculator can be used to estimate both installation costs for new furnaces and replacement costs for existing ones. 9. Does the calculator consider energy efficiency ratings? No, the calculator focuses solely on the monetary aspects of furnace installation and maintenance. 10. Are there any hidden costs associated with furnace installation? It’s essential to account for any potential hidden costs such as unforeseen repairs or modifications required during installation. Conclusion: A Furnace Cost Calculator simplifies the process of estimating total expenses involved in acquiring and installing a furnace. By considering all relevant costs upfront, homeowners and businesses can make informed decisions regarding their heating systems, ensuring both comfort and financial prudence.
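The formula FC = Purchase Price + Installation Cost + Labor Cost + Maintenance and Repair Costs is a straight sum, so the calculator's logic can be sketched in a few lines (the function name is mine, not part of the calculator):

```python
def furnace_cost(purchase_price, installation, labor, maintenance_repairs):
    """FC = purchase price + installation + labor + maintenance/repairs."""
    return purchase_price + installation + labor + maintenance_repairs

# The example from the text: $2000 + $500 + $300 + $150
print(furnace_cost(2000, 500, 300, 150))  # 2950
```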
The string edit distance matching problem with moves The edit distance between two strings S and R is defined to be the minimum number of character inserts, deletes, and changes needed to convert R to S. Given a text string t of length n, and a pattern string p of length m, informally, the string edit distance matching problem is to compute the smallest edit distance between p and substrings of t. We relax the problem so that: (a) we allow an additional operation, namely, substring moves; and (b) we allow approximation of this string edit distance. Our result is a near-linear time deterministic algorithm to produce a factor of O(log n log* n) approximation to the string edit distance with moves. This is the first known significantly subquadratic algorithm for a string edit distance problem in which the distance involves nontrivial alignments. Our results are obtained by embedding strings into the $L_1$ vector space using a simplified parsing technique, which we call edit-sensitive parsing (ESP). • Approximate pattern matching • Data streams • Edit distance • Embedding • Similarity search • String matching ASJC Scopus subject areas • Mathematics (miscellaneous)
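For reference, the plain three-operation edit distance defined at the start of the abstract is classically computed with a quadratic dynamic program; the sketch below illustrates that baseline only, not the paper's near-linear ESP-based algorithm, and it does not handle the substring-move operation:

```python
def edit_distance(r, s):
    """Classical O(len(r) * len(s)) dynamic program for the minimum number
    of character inserts, deletes, and changes converting r to s."""
    m, n = len(r), len(s)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                       # delete all of r[:i]
    for j in range(n + 1):
        d[0][j] = j                       # insert all of s[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if r[i - 1] == s[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # delete
                          d[i][j - 1] + 1,         # insert
                          d[i - 1][j - 1] + cost)  # change / match
    return d[m][n]

print(edit_distance("kitten", "sitting"))  # 3
```

The quadratic cost of this table is exactly what makes the paper's subquadratic approximation result notable.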
A Brief Introduction to Biostatistics (PDF)
CS4220: Knowledge Discovery Methods for Bioinformatics
Unit 1: Essence of Biostatistics
Wong Limsoon
Outline • Basics of biostatistics • Statistical estimation • Hypothesis testing – Measurement data: z-test, t-test – Categorical data: χ²-test, Fisher’s exact test – Non-parametric methods • Ranking and rating • Summary
Copyright 2013 © Limsoon Wong
Why need biostatistics? Intrinsic & extrinsic noise; measurement errors. Nat Rev Genet, 9:583-593, 2008; J Comput Biol, 8(6):557-569, 2001
Why need to learn biostatistics? • Essential for the scientific method of investigation – Formulate hypothesis – Design study to objectively test hypothesis – Collect reliable and unbiased data – Process and evaluate data rigorously – Interpret and draw appropriate conclusions • Essential for understanding, appraisal and critique of scientific literature
Types of statistical variables • Descriptive (categorical) variables – Nominal variables (no order between values): gender, eye color, race group, … – Ordinal variables (inherent order among values): response to treatment: none, slow, moderate, fast • Measurement variables – Continuous measurement variables: height, weight, blood pressure, … – Discrete measurement variables (values are integers): number of siblings, the number of times a person has been admitted to a hospital, …
It is important to be able to distinguish the different types of statistical variables and the data they generate, as the kind of statistical indices and charts and the type of statistical tests used depend on these basics.
Types of statistical methods • Descriptive statistical methods – Provide summary indices for given data, e.g. arithmetic mean, median, standard deviation, coefficient of variation, etc. • Inductive (inferential) statistical methods – Produce statistical inferences about a population based on information from a sample derived from the population; need to take variation into account (figure: estimating population values from sample values)
Summarizing data • Statistics is “making sense of data” • Raw data have to be processed and summarized before one can make sense of the data • A summary can take the form of – A summary index: using a single value to summarize data from a study variable – Tables – Diagrams
Summarizing categorical data • A proportion is a type of fraction in which the numerator is a subset of the denominator – proportion dead = 35/86 = 0.41 • Odds are fractions where the numerator is not part of the denominator – odds in favor of death = 35/51 = 0.69 • A ratio is a comparison of two numbers – ratio of dead : alive = 35 : 51 • Odds ratio: commonly used in case-control studies – odds in favor of death for females = 12/25 = 0.48 – odds in favor of death for males = 23/26 = 0.88 – odds ratio = 0.88/0.48 = 1.84
Summarizing measurement data • Distribution patterns – Symmetrical (bell-shaped) distribution, e.g. normal distribution – Skewed distribution – Bimodal and multimodal distribution • Indices of central tendency: mean, median • Indices of dispersion: variance, standard deviation, coefficient of variation
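The categorical summaries in the slides (proportion, odds, odds ratio) are simple ratios; a short sketch reproducing the slides' numbers:

```python
dead, alive = 35, 51
total = dead + alive                  # 86 subjects

proportion_dead = dead / total        # numerator is a subset of the denominator
odds_death = dead / alive             # numerator is NOT part of the denominator

# Odds ratio for the case-control example in the slides
odds_female = 12 / 25                 # odds of death for females
odds_male = 23 / 26                   # odds of death for males
odds_ratio = odds_male / odds_female

print(round(proportion_dead, 2), round(odds_death, 2), round(odds_ratio, 2))
# 0.41 0.69 1.84
```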
Does the [ search_type ] difference in MATCH affect performance? I am trying to consider why I would use 1 instead of 0, and I can only think that it must be a performance issue. This assumes that search_type 1 uses divide and conquer, whereas 0 searches linearly. I have had situations where using 1 gets a bad result if the lookup range gets sorted, so I make sure I use 0 to mitigate that. And I want to make sure that I am not overlooking some other downside to using 0. • 1 and -1 give an approximate match whereas 0 gives an exact match. Smartsheet calculates the relative position of a search value by counting cells from left to right (across columns), then top to bottom (across rows). In a lookup table consisting of two columns, the cell in the top row of the leftmost column is the first position, 1. For the optional [search_type] argument: □ 1 (the default value) finds the largest value less than or equal to search_value (requires that the range be sorted in ascending order). □ 0 finds the first exact match (the range may be unordered). □ -1 finds the smallest value greater than or equal to search_value (requires that the range be sorted in descending order).
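The three documented behaviors can be modeled in a few lines; this is an illustrative sketch of the semantics (not Smartsheet's actual implementation), and it shows why type 1 can in principle use binary search while type 0 must scan linearly:

```python
import bisect

def match(search_value, values, search_type=1):
    """Model of MATCH's search_type semantics (1-based positions,
    None if nothing qualifies). Illustrative only."""
    if search_type == 0:
        # Exact match: linear scan, range may be unordered.
        for i, v in enumerate(values, 1):
            if v == search_value:
                return i
        return None
    if search_type == 1:
        # Largest value <= search_value; range must be sorted ascending,
        # so a binary search (divide and conquer) applies.
        i = bisect.bisect_right(values, search_value)
        return i if i > 0 else None
    if search_type == -1:
        # Smallest value >= search_value; range must be sorted descending.
        best = None
        for i, v in enumerate(values, 1):
            if v >= search_value:
                best = i
        return best

print(match(7, [1, 3, 5, 8, 9], 1))  # 3 -- 5 is the largest value <= 7
print(match(8, [1, 3, 5, 8, 9], 0))  # 4 -- first exact match
```

As the question notes, type 1 silently returns a wrong-looking position when its sorted-order assumption is violated, which is why type 0 is the safe default on unordered data.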
Traders can add just one moving average or have many different time frames on one chart. For example, a day moving average of CL WTI futures would be the. $0 online equity trade commissions + Satisfaction Guarantee. See our To create a simple moving average chart, start by choosing a time frame. A. Use the Exponential Moving Average Chart Maker to display a chart of exponential moving averages values for any stock, exchange-traded fund (ETF) and mutual. Learn how to add a trendline in Excel, PowerPoint, and Outlook to display visual data trends. Format a trend or moving average line to a chart. Chart Range. 1D 5D 10D 1M 3M 6M YTD 1Y 2Y 3Y 5Y 10Y All. Pre-Market After Compare. Restore Defaults Store Settings. US:SPX. Simple Moving Average Edit. Learn how to use moving averages at Quality America! Get information on when to use a moving average sigma chart, along with other SPC practices. A moving average (MA) is a technical analysis indicator that helps level price action by filtering out the noise from random price fluctuations. The “Moving Average” indicator is calculated by adding all closing prices over a certain period of days and dividing them by the durations on the drop down list. You have this question tagged with stocks. There are several websites that will plot stock charts for you with some basic technical. Trading with the SMA shows the average price of a security over a certain length of time and is plotted as a single line on a candlestick chart. Because it is. What do the charts say might be next for US stocks? One of the more popular chart indicators—moving averages—suggests there's not much in their way. What are. Exponential Moving Average is a variation on Simple Moving Average. · Calculation First determine the weighting multiplier or percentage as 2 / (Period + 1). Use the Two Simple Moving Average Chart Makers to display a chart of two simple moving averages for any stock, exchange-traded fund (ETF) and mutual fund. 
Simple Moving Average is just the average of the Close Price over the specified Period. This helps to smooth out the effect of any price spikes. The Moving Average Indicator smooths price data to create a powerful measure of trend direction. Includes popular MA indicator types and trading signals. A Moving Average Sigma Chart from SPC IV Excel software. Moving Average online SPC Concepts short course (only $39), or his online SPC. SMA is calculated by, adding the closing price of time period and then divide it by number of time period. Use the Two Simple Moving Average Chart Makers to display a chart of two simple moving averages for any stock, exchange-traded fund (ETF) and mutual fund. The MA chart use a moving average, where the previous (N-1) sample values of the process variable are averaged together along with the current process value to. I know IBD leaderboards chart has this option to show the moving averages but I am looking for a free alternative since I am starting to invest. A simple moving average (MA) is the unweighted mean of the previous n data points. For example, a day moving average of closing price is the mean of the. The Golden and Death Cross are signals that occur when the and period moving average cross and they are mainly used on the daily charts. In the chart. 1. Calculate 3 year Simple Moving Average calculator 2. Calculate 5 year Simple Moving Average calculator 3. Calculate 4 year Simple Moving Average calculator. Use this straightforward simple moving average (SMA) calculator to calculate the moving average of a data set. Best Online Brokers · Best Savings Rates · Best CD Rates · Best Life Insurance A moving average helps cut down the amount of noise on a price chart. Look. In statistics, a moving average is a calculation to analyze data points by creating a series of averages of different selections of the full data set. 
Technical Performance Moving Averages, Fundamental Mini-Chart View, Pre-Post Market Custom screen flipcharts, BETA flipcharts download. The average is called "moving" because it is plotted on the chart bar by bar, forming a line that moves along the chart as the average value changes. day Moving Average (MA) is a popular near term technical indicator. Graphically, you find it as a trend line on the price chart that represents the averages. They are especially well-suited for price charts and other indicators. Best Moving Average Trading Strategy (MUST KNOW). To add a moving average to your chart, simply click on 'indicators'. The indicator marks the frequent patterns on the chart, which provide traders with potential trade opportunities. Median Price; Momentum; Moving Average; Moving Average Cross; Moving Average Deviation; Moving Average Envelope; On Balance Volume; Price Volume Trend; Relative. Use Moving Average Charts to evaluate process shifts using a simple Moving Average. Use an EWMA chart for Exponentially Moving Averages. Moving averages provide an objective measure of trend direction by smoothing price data. Normally calculated using closing prices. Always be on the lookout for price respecting a moving average, divergences between price action and momentum oscillators, and failures. To trade this strategy, traders typically look for a moving average of a specific length, such as a day or day moving average, and plot it on a chart. Essentially, Moving Averages smooth out the "noise" when trying to interpret charts. Noise is made up of fluctuations of both price and volume. This is the reason that exponential moving averages follow the price closely as compared to simple moving averages as seen in the Bank Nifty chart above.
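The two definitions quoted earlier (the SMA as the unweighted mean of the previous n closes, and the EMA with weighting multiplier 2 / (Period + 1)) can be sketched directly; seeding the EMA with the first close is one common convention, not the only one:

```python
def simple_moving_average(closes, period):
    """Unweighted mean of each window of `period` closing prices."""
    return [sum(closes[i - period + 1:i + 1]) / period
            for i in range(period - 1, len(closes))]

def exponential_moving_average(closes, period):
    """EMA using the weighting multiplier k = 2 / (period + 1)."""
    k = 2 / (period + 1)
    ema = [closes[0]]                       # seed with the first close
    for price in closes[1:]:
        ema.append(price * k + ema[-1] * (1 - k))
    return ema

closes = [10, 11, 12, 13, 14, 15]
print(simple_moving_average(closes, 3))    # [11.0, 12.0, 13.0, 14.0]
```

Because recent prices carry more weight in the EMA, it hugs the latest closes more tightly than the SMA of the same period, which is the behavior the text describes.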
How data scientists test hypotheses and probability Why hypotheses are important in statistical analysis Hypothesis testing allows researchers and statisticians to develop hypotheses which are then assessed to determine the probability or the likelihood of those findings. This statistics tutorial has been taken from Basic Statistics and Data Mining for Data Science. Whenever you wish to make an inference about a population from a sample, you must test a specific hypothesis. It’s common practice to state 2 different hypotheses: • Null hypothesis which states that there is no effect • Alternative/research hypothesis which states that there is an effect So, the null hypothesis is one which says that there is no difference. For example, you might be looking at the mean income between males and females, but the null hypothesis you are testing is that there is no difference between the 2 groups. The alternative hypothesis, meanwhile, is generally, although not exclusively, the one that researchers are really interested in. Why probability is important in statistical analysis In statistics, nothing is ever certain because we are always dealing with samples rather than populations. This is why we always have to work in probabilities. The way hypotheses are assessed is by calculating the probability or the likelihood of finding our result. A probability value, which can range from zero to one, corresponding to 0% and 100% in percentages, is essentially a way of measuring the likelihood of a particular event occurring. You can use these values to assess whether any of the differences that you have found are the result of random chance. How do hypotheses and probability interact? It starts getting really interesting once we begin looking at how hypotheses and probability interact.
Here’s an example. Suppose you want to know who is going to win the Super Bowl. I ask a fellow statistician, and he tells me that he’s built a predictive model and that he knows which team is going to win. Fine - my next question is how confident he is in that prediction. He says he’s 50% confident - are you going to trust his prediction? Of course you’re not - there are only 2 possible outcomes and 50% is ultimately just random chance. So, say I ask another statistician. He also tells me that he has a prediction and that he has built a predictive model, and he’s 75% confident in the prediction he has made. You’re more likely to trust this prediction - you have a 75% chance of being right and a 25% chance of being wrong. But let’s say you’re feeling cautious - a 25% chance of being wrong is too high. So, I ask another statistician for her prediction. She tells me that she’s also built a predictive model which she has 90% confidence is correct. So, having formally stated our hypotheses, we then have to select a criterion for acceptance or rejection of the null hypothesis. With probability tests like the chi-squared test, the t-test, or regression or correlation, you’re testing the likelihood that a statistic of the magnitude that you obtained or greater would have occurred by chance, assuming that the null hypothesis is true. It’s important to remember that you always assess probabilities assuming the null hypothesis is true. You only reject the null hypothesis if you can say that the results would have been extremely unlikely under the conditions set by the null hypothesis. In this case, if you can reject the null hypothesis, you have found support for the alternative/research hypothesis. This doesn’t prove the alternative hypothesis, but it does tell you that the null hypothesis is unlikely to be true.
The criterion we typically use is whether the significance level sits above or below 0.05 (5%), indicating that a statistic of the size we obtained would be likely to occur on only 5% of occasions. By choosing a 5% criterion you are accepting that you will wrongly reject a true null hypothesis 1 in 20 times. Replication and data mining If in traditional statistics we work with hypotheses and probabilities to deal with the fact that we’re always working with a sample rather than a population, in data mining we can work in a slightly different way - we can use something called replication instead. In a data mining project we might have 2 data sets - a training data set and a testing data set. We build our model on a training set and once we’ve done that, we take the results of that model and then apply it to a testing data set to see if we find similar results.
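The 5% criterion can be made concrete with a small simulation (a coin-flip example of my own, not from the tutorial): estimate the probability of a result at least as extreme as the one observed, assuming the null hypothesis is true, and reject the null if that probability falls below 0.05.

```python
import random

random.seed(42)

# Null hypothesis: the coin is fair (p = 0.5).
# Observed: 16 heads in 20 flips. How unlikely is a result at least
# this extreme if the null hypothesis is true?
observed_heads, n_flips = 16, 20
n_sim = 100_000

extreme = 0
for _ in range(n_sim):
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    # two-sided: at least as far from the expected 10 as the observed 16
    if abs(heads - 10) >= abs(observed_heads - 10):
        extreme += 1

p_value = extreme / n_sim   # roughly 0.012 for this example
print(p_value < 0.05)       # True: reject the null at the 5% criterion
```

The exact two-sided probability here is about 0.012, well under the 0.05 cutoff, so under the 5% criterion we would reject the fair-coin hypothesis.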
Round off 0.86 to the nearest hundredth Answer: 0.86 rounded to the nearest hundredth is 0.86. Calculation: Rounding 0.86 to the nearest hundredth involves considering the second decimal place after the decimal point. Let's see how to do it step by step: 1. Identify the hundredths place: this is the second decimal place after the decimal point. The hundredths digit of 0.86 is 6. 2. Look at the digit immediately to the right of the hundredths place. If it's 5 or more, round up; if it's 4 or less, round down. Here 0.86 has no digits beyond the hundredths place, so nothing is rounded away. 3. Adjust the hundredths place accordingly; in this case it is unchanged. So the answer is 0.86. Decimal Round Off Calculator Types Of Round Off: Decimal Round Off Calculator A decimal round-off calculator simplifies complex numerical values to a specified degree of precision, whether rounding to tenths, hundredths, or even thousandths. This precision is not merely a mathematical abstraction but a practical necessity in fields where nuanced measurements or financial accuracy are paramount. Fraction Round Off Calculator A fraction round-off calculator serves as a gateway to rendering fractions more accessible and understandable. In mathematical studies and scientific research, precision in fractions holds immense significance. Whether simplifying fractions to common denominators or converting them into decimal equivalents, these calculators ensure clarity without compromising accuracy. Percentage Round Off Calculator Percentage round-off calculators serve as pivotal instruments in rendering percentages more comprehensible and practical. In financial analyses or statistical data interpretation, precision in percentages holds immense value. These calculators streamline complex percentage values, presenting them in simplified forms that retain accuracy without overwhelming complexity.
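In code, the 5-or-more-rounds-up rule for the hundredths place can be sketched with Python's Decimal type (a sketch of my own; binary floats make the built-in round() unreliable on exact halves, so Decimal is used for exactness):

```python
from decimal import Decimal, ROUND_HALF_UP

def round_to_hundredth(x):
    """Round a decimal string to the hundredths place; 5 or more rounds up."""
    return Decimal(x).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

print(round_to_hundredth("0.86"))   # 0.86 -- no digit beyond the hundredths
print(round_to_hundredth("0.864"))  # 0.86 -- the 4 rounds down
print(round_to_hundredth("0.865"))  # 0.87 -- the 5 rounds up
```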
Oliver Dimon Kellogg (10 July 1878 – 27 August 1932) was an American mathematician.^[1] His father, Day Otis Kellogg, was a professor of literature at the University of Kansas and editor of the American edition of the Encyclopædia Britannica. In 1895 Oliver Kellogg began his undergraduate study at Princeton University, where he earned his master's degree in 1900. With a John S. Kennedy stipend he first studied at the Humboldt University of Berlin and then in 1901/1902 at Georg-August-Universität Göttingen. At Göttingen in 1902 he earned his PhD with a thesis Zur Theorie der Integralgleichungen und des Dirichlet'schen Prinzips under the direction of David Hilbert. After completing his thesis, Kellogg became an instructor at Princeton and from 1905 at the University of Missouri, where he became a professor in 1910. In World War I he was a scientific advisor at the Coast Guard Academy in New London, Connecticut, where he worked on submarine detection. Kellogg became a lecturer at Harvard University in 1919, an associate professor in 1920, and a professor in 1927. He died of a heart attack while climbing Doubletop Mountain near Greenville, Maine.^[2]^[3] Kellogg was married and had a daughter. Kellogg is known for his work on potential theory, which was the subject of his dissertation and also of his famous 1929 textbook Foundations of Potential Theory.^[4] In 1922, with George David Birkhoff, he generalized the Brouwer fixed point theorem to what is now known as the Birkhoff–Kellogg theorem. Among his doctoral students was Arthur Copeland. • with Earle Raymond Hedrick, Applications of the calculus to mechanics (Boston: Ginn, 1909) • Foundations of Potential Theory. Grundlehren der Mathematischen Wissenschaften, Springer-Verlag 1967. References 1. ^ Birkhoff, G. D. (1933). "The mathematical work of Oliver Dimon Kellogg". Bull. Amer. Math. Soc. 39 (3): 171–177. doi:10.1090/s0002-9904-1933-05560-x. MR 1562574. 2. ^ "PROF.
KELLOGG DIES CLIMBING MOUNTAIN; Overexertion by Head of Harvard Department of Mathematics Causes Heart Attack". The New York Times. August 28, 1932.
3. ^ "FIND BODY OF HARVARD PROFESSOR IN MONSON". Lewiston Sun Journal. August 29, 1932. (After visiting with friends, Prof. Kellogg went alone on a hike to the summit of Doubletop Mountain on Friday, Aug. 26, 1932. When he failed to return, his friends notified mountain guides who, on Aug. 27, found his body on a mountainside trail.)
4. ^ G. C. Evans (1931). "Kellogg on Potential". Bull. Amer. Math. Soc. 37 (3): 141–144. doi:10.1090/s0002-9904-1931-05098-9.
Answers related to Statistics social sciences - Us | SolutionInn Questions and Answers of Statistics Social Sciences • Conduct a one-sample t-test for a dataset where μ = 74.2, X = 75.1, sx = 10.2, and n = 81.a. What are the groups for this one-sample t-test?b. What is the null hypothesis for this one-sample • Using the unstandardized regression equation of ŷi = bxi + a:a. Find the values for b and a if r = –0.45, X̅ = 6.9, sx = 0.8, Y = 10.6, and sy = 1.4.b. Find the predicted dependent variable • Using the unstandardized regression equation of ŷi = bxi + a:a. Find the values for b and a if r = +0.35, X̅ = 4.2, sx = 1.5, Y = 3.0, and sy = 2.1.b. Find the predicted dependent variable scores • Draw a scatterplot for the following correlations:a. r = –0.82b. r = –0.61c. r = –0.22d. r = 0.00e. r = +0.45f. r = +0.66g. r = +0.79h. r = +1.00 • Which correlation value is the strongest and which is the weakest: (a) +0.42, (b) –0.87, (c) +0.59, (d) –0.34. • Conduct an unpaired two-sample t-test for the following data: X̅1 = 14.8, s1 = 1.99, n1 = 52, X̅2 = 13.2, s2 = 2.89, n2 = 233. For this problem, you should use an α value of .01.a. Explain why an • Steven is an anthropology student who collected data on the behavior of children of immigrants and children of non-immigrants in a playground setting. He found that children of immigrants spent • Jessi conducts a study to determine whether early voting in an election affects how informed the person is when they vote. She theorizes that people who vote early are more motivated to vote and • Gagné and Gagnier (2004) studied the classroom behavior of children who were a year younger than their classmates and those who were a typical age for their grade. The children’s teachers rated • Elias conducts a 3 x 3 ANOVA. His independent variables are socioeconomic status and education level, and his dependent variable is the willingness of a subject to donate to charity.
He finds • Emily is a sociology student who uses a Wilcoxon rank-sum test to compare the levels of interest in political activity among college students and non-college students. She finds that college • Aaron has two interval-level independent variables and an interval-level dependent variable. His R2 value is 0.110.a. Aaron adds another independent variable, and this increases the R2 to 0.155. • Ahmed has four variables in his data. Three are independent variables: a nominal-level variable and two ratio-level variables. He also has an interval-level dependent variable. What is the • A nominal variable records the major of students in a sample of (1) sociology students, (2) social work students, and (3) anthropology students.a. How many new variables would have to be created in a • Pablo’s data consist of a nominal independent variable with four groups and a dependent interval-level variable. What is the appropriate statistical procedure for Pablo’s data? • Christine has a single, independent, nominal-level variable with two categories and three dependent variables.a. What is the appropriate multivariate statistical procedure for this situation?b. • When there are two independent variables and a single dependent variable, why is it better to conduct a 2 x 2 ANOVA instead of two separate one-way ANOVAs? • The following questions are about multicollinearity.a. What is multicollinearity?b. Why is multicollinearity a problem in multiple regression?c. What can a data analyst do to reduce the chance of • Kyla and Joan are classmates analyzing the same set of data. There are two nominal independent variables and one interval-level dependent variable. Kyla chooses to conduct a two-way ANOVA, while • A university collected data on the number of scholarship students that donated to the same scholarship fund when they were alumni. 
There were three groups of alumni: those receiving athletic • Customers of four companies were surveyed and placed into three categories: repeat customers, non-repeat/satisfied customers, and non-repeat/dissatisfied customers.a. What is the null hypothesis • In a university political science department, there are 90 students enrolled, 36 males and 54 females. Of the 36 males, 28 eventually graduated, while 8 did not. Of the 54 females, 42 graduated, • Logan’s favorite sports team has an 80% chance of winning their next game. Convert this value into an odds ratio where his team is the non-baseline group and the outcome of interest is a victory. • In a study of second language learning, Researcher A found that individuals whose parents were bilingual were 2.5 times more likely to be fluent in a second language by age 21 than children whose • Barnsley, Thompson, and Barnsley (1985, p. 24) examined the birth months of National Hockey League players in the United States and Canada. They hypothesized that traditional rules stating that a • In a study of the effectiveness of a new vaccine, a researcher found that the relative risk of developing a disease is 0.31 for individuals who have had the vaccine. In the same study, it was • Samantha found that the correlation between income and self-reported happiness is r = +0.39. She concludes that if more people were wealthy they would be happier. What is the problem with her • Answer the following questions about a scenario where the p-value is small (e.g., < .05 or < .01).a. Why would it be incorrect to say that these results are important?b. Why would it also be • Jim found that the p-value in his study was less than .01. He says, “This means that my null hypothesis is false. My findings are strong enough to replicate.”a.
Why are both of Jim’s statements • Maddie is a sociology student who found that individuals with high levels of social support were more likely to graduate from college than individuals with low levels, as indicated by the odds • Kathryn’s sample consists of patients at a psychiatric clinic. In the clinic population, 40% of patients have depression, 28% have anxiety disorders, 15% have eating disorders, 10% have • Dante gave a survey to the anthropology students in his department. He had 42 female respondents and 31 male respondents. In his department the entire population of students is 59% female and 41% • Which odds ratio indicates that the baseline group and the non-baseline group are equally likely to experience the outcome of interest? (a) 0.45, (b) 1.00, (c) 1.33, or (d) 2.05. • For the unstandardized equation, a is the y-intercept of the regression line. Given the equation for a (which is a = Y̅ - bX̅), explain why the y-intercept of the regression line is always 0 when • For the unstandardized equation, b is the slope of the regression line. Given the equation for b (which is b = r sy/sx), explain why r is the slope of the regression line when the independent and • Camilla collected data on the quality of day care (x) and the child’s grades in kindergarten (y). Her data are below:a. Find the correlation between these two variables.b. Write the standardized • Jane is interested in health psychology, so for a project she collects data on the number of hours that college students watch television per day (x) and the number of hours that they exercise • Using the standardized regression equation of ẑy = rzx, find the predicted dependent variable z-scores if r = +0.10:a. zx = –3.5b. zx = –2.8c. zx = –2.5d. zx = –0.8e. zx = 0.0f. zx = +0.4g. • Using the standardized regression equation of ẑy = rzx, find the predicted dependent variable z-scores for individuals with the following zx values if r = –0.72:a. zx = –4.0b. zx = –2.9c.
zx • Many professions (e.g., medicine, law, architecture) require job applicants to pass a licensing test of basic skills and knowledge in order to work in that field. Examinees who do not pass the • Dallin is a sports fan. As he tracks the performance of different teams, he notices that teams that were poor performers 10 years ago tend to be better teams now, whereas teams that were excellent 10 • Every time she makes a regression equation, Jocelyn converts all her data to z-scores and then uses the standardized regression equation to create the line of best fit. She notices that her • Peggy collects data for the restaurant where she works. In a sample of regular customers, she collected data on the number of restaurant visits per week (x) and customer satisfaction (y). Both • Eliza, a family science student, asks married couples to count the number of arguments they have in a month (x) and rate their children’s happiness on a scale from 1 to 7 (y), higher numbers • Angelica collected data on people’s anxiety levels (x) and life satisfaction (y). The scores are below:a. Calculate the correlation for Angelica’s data.b. What is the null hypothesis for these • Leigh is an anthropology student who has measured the diameter of clay pots found at an archeological site (x) and the volume of those pots (y). Below are the scores for both variables for the 12 • Tomás is a sociology student who collected data on two variables: perceived level of racism in society (x) and willingness to sign a political petition (y). Below are the scores from the first 8 • Explain whether the following relationships are positive correlations or negative correlations.a. People with larger families tend to have better support systems in a crisis.b. Individuals who • Mason is a sociology student who is interested in how city size influences how people interact with their neighbors. 
He has three groups in his dataset: (1) inner city dwellers, (2) suburban • Zachary is a psychology student who is interested in the degree of control that people feel they have in their lives. He has five groups in his dataset: (a) police officers, (b) victims of • Davita is an anthropology student who measures the level of trust in her subjects. In her data higher numbers indicate greater trust of strangers and lower numbers indicate less trust. She • Oi-mon is a sociology student interested in how groups of people choose to travel long distances. He asked four groups of travelers how much they enjoyed long journeys. The four groups were plane • Nicole is a psychology student interested in the social dynamics of solving problems. She divided her sample into three groups: people who solved problems alone (Group A), people forced to solve • Liliana collected data about the happiness level of teenagers, young adults, and elderly individuals, with a score of 1 being “very unhappy” and a score of 5 being “very happy.”a. Why is an • Dave’s university has students from six different Canadian provinces. He surveys a representative sample of students to learn how many hours per week they study. Jim wants to know if there are any • Dana wants to compare the grade-point averages (GPAs) for freshman, sophomore, junior, and senior sociology students. If she chooses not to use an ANOVA,a. how many unpaired two-sample t-tests • In an ANOVA we can use the information table to make predictions. How do we measure: a. individual prediction accuracy? b. overall prediction accuracy? • Explain why labeling effect sizes as “small,” “medium,” or “large” without any further interpretation is problematic. • In an ANOVA the null hypothesis is that all of the group means are equal to one another. If the null hypothesis is rejected, how do we determine which group mean(s) differ from other mean(s)? • Conduct a one-sample t-test for a dataset where μ = 14.1, X = 13.7, sx = 0.8, n = 20.a.
What are the groups for this one-sample t-test?b. What is the null hypothesis for this one-sample t-test?c. • Grover, Biswas, and Avasthi (2007) studied people who had long-term delusions, and the authors were interested in whether individuals with another mental disorder had an earlier age of onset than • Calculate the degrees of freedom for the following sample sizes:a. Group 1: 10; Group 2: 15b. Group 1: 10; Group 2: 1,500c. experimental group: 35; control group: 35d. males: 18; females: 12e. • What are the guidelines for ensuring that the data in an unpaired two-sample t-test meet the assumptions of homogeneity of variance and similar sample sizes?
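Several of the listed exercises reuse the same textbook formulas: the one-sample t statistic t = (X̄ − μ)/(sx/√n), the unstandardized regression coefficients b = r·sy/sx and a = Ȳ − b·X̄, and the conversion of a probability into odds, p/(1 − p). A small Python sketch using the numbers from the questions above (illustrative only, not SolutionInn solution material):

```python
import math

def one_sample_t(mu, xbar, sx, n):
    """t statistic for a one-sample t-test."""
    return (xbar - mu) / (sx / math.sqrt(n))

def regression_coefficients(r, xbar, sx, ybar, sy):
    """Unstandardized slope b = r*sy/sx and intercept a = ybar - b*xbar."""
    b = r * sy / sx
    return b, ybar - b * xbar

def probability_to_odds(p):
    """Convert a probability into the odds of the outcome of interest."""
    return p / (1 - p)

# mu = 74.2, X-bar = 75.1, sx = 10.2, n = 81 (first exercise above)
print(round(one_sample_t(74.2, 75.1, 10.2, 81), 3))          # 0.794
# r = -0.45, X-bar = 6.9, sx = 0.8, Y-bar = 10.6, sy = 1.4
print(regression_coefficients(-0.45, 6.9, 0.8, 10.6, 1.4))
# An 80% chance of winning corresponds to odds of 4 to 1
print(round(probability_to_odds(0.8), 6))                    # 4.0
```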
Eight Times Table (8x) - Times Tables Maths Games for Year 3 (age 7-8) by URBrainy.com
Eight Times Table (8x)
Practice for the eight times table with the 8 as the first number.
© Copyright 2011 - 2024 Route One Network Ltd. - URBrainy.com
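As a quick illustration unrelated to the URBrainy activity itself, the table being practised can be generated with a few lines of Python:

```python
# Build the eight times table with 8 as the first number, as practised above.
table = [(n, 8 * n) for n in range(1, 13)]
for n, product in table:
    print(f"8 x {n} = {product}")
```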
Several Ways to Add Hours in Google Sheets - with Examples

Google Sheets is emerging as a good option for spreadsheet work. Spreadsheets simplify our jobs by letting us build automated reports with functions, formats and many other features. Some of these features, however, can confuse new users, and the handling of time in Google Sheets is one of them. On the one hand it makes handling time very easy; on the other, once something goes wrong it becomes almost impossible to fix unless you have a clear grasp of how Google Sheets stores time. In this article we focus on handling hours in various ways.

Time and date go side by side. A date is stored as a serial number counted from Dec 31, 1899, and the time of day is the decimal part of that number. One second is 1/86400 of a day, since a day has 86,400 seconds. Multiplying 1/86400 by 3600 (the number of seconds in an hour) gives 0.04166666667, the value of one hour. So if we type 0.04166666667 into a cell and change the format to TIME, it is displayed as 1 AM.

Adding hours creates different scenarios, simple and complex:
1. Adding hours to a time
2. Adding hours to hours
3. Adding hours with the result in hours only

We'll discuss these three scenarios, which come up frequently when handling time in Google Sheets. Before that, we should know how to extract hours from a given time, which is needed in some cases.

HOW TO EXTRACT THE HOURS FROM A GIVEN TIME IN GOOGLE SHEETS?
The time format follows the standard HOUR:MINUTES:SECONDS form, and there are several display formats to choose from.
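The serial-number arithmetic described above is easy to verify outside Sheets. A minimal Python sketch (the constants mirror the article; this is not a Sheets API):

```python
SECONDS_PER_DAY = 86400  # a whole day corresponds to a serial value of 1.0

def hours_to_serial(hours):
    """Convert a number of hours into the spreadsheet serial fraction of a day."""
    return hours * 3600 / SECONDS_PER_DAY

# One hour is 1/24 of a day, the 0.04166666667 quoted in the text.
print(hours_to_serial(1))  # ~0.0416666...
# Three hours gives 0.125, the value used later for three-hour slots.
print(hours_to_serial(3))  # 0.125
```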
There is a dedicated function, HOUR, to help us extract the hours from a time in any format.

ALWAYS REMEMBER! FORMAT IS JUST A PRESENTATION. IN THE BACK END, THE VALUES ARE THE SAME IN DIFFERENT FORMATS, SO APPLYING ANY FUNCTION TO DIFFERENT FORMATS WILL GIVE THE SAME RESULT.

• Double click the cell where you want the hours extracted from the given time.
• Enter the formula as =HOUR(cell containing the time), or pass the time directly inside double quotes.
• Press ENTER.
• The result will appear as the hours of the time.

HOW TO ADD HOURS TO A GIVEN TIME
In this case we add hours to a value in TIME format, for example adding 3 hrs to a given time to create slots or schedules. To learn this, let us create a template with three-hour slots: we change the base time and the slots update automatically. The situation is shown in the picture below (ADDING HOURS: WAY 1).

We can add hours to a given time in two different ways:
1. Adding the hours directly [as shown in the picture above].
2. Putting the hours in a separate cell (or cells) and then adding them [as shown in the picture below (ADDING HOURS: WAY 2)].

We'll discuss both ways one by one.

WAY 1: ADDING HOURS TO THE GIVEN TIME
In the sample picture above the hours were added directly to the time in one cell, and the result is correct. But look at the formula used: CELL CONTAINING THE TIME + 0.125.

WE CAN'T SIMPLY ADD HOURS TO THE CELL CONTAINING THE TIME, E.G. =15:00+3:00 WILL CAUSE AN ERROR AND IS AN INVALID PROCEDURE.

For the correct procedure, we need the numerical value of 3 hours and then add it to the cell. If the result looks absurd, change the cell format to time and the correct result appears in the correct format. As discussed above, 1 hr = 0.0416666667; multiply this value by 3.
The result is 0.125, which is the value we added, and the displayed result is simply the time three hours later.

• The first step is to find the numerical value of the hours you want to add to the given time [simply multiply the number of hours by 0.04166667].
• Double click the cell where you want the result.
• Enter the formula as =CELL CONTAINING THE TIME + NUMERICAL VALUE CALCULATED FOR THE HOURS.
• For our example the formula becomes =E5+0.125.
• Press ENTER.
• The result will appear as the time 3 hours after the starting time.
• Drag down the formula and it'll add 3 hours to the time on the left, creating the slots.

WAY 2: ADDING THE HOURS SEPARATELY IN CELLS
This is an easier way of adding hours to a given time: we simply put the hours in a separate cell and add it to the time in another cell. No issues arise in this style, except that we need an extra column or cell. The result again appears as the time three hours later. The following picture shows this way of adding hours to a given time.

• Create an extra column to enter the hours to be added.
• Double click the cell where you want the result to appear.
• Enter the formula as =cell containing the time + cell containing the time to be added.
• For our example, the formula will be
• Press ENTER.
• The result will appear as the time after adding the hours.

Another case arises in Google Sheets when we want to add HOURS to HOURS. This is also one of the most frequent cases and needs to be handled carefully. Although the solution is pretty simple, we need to be careful about a few things. The data needs to be in different columns; the picture below shows the data under the headings TIME 1 and TIME 2. We'll find the sum of the hours in both columns.

• Double click the cell where you want the result to appear.
• Enter the formula as =cell containing the first time + cell containing the second time.
• For our example, the formula will be =D6+E6 for the first line in ROW 6.
• Press ENTER.
• The result will appear as the TOTAL HOURS in COLUMN F.
• Drag down the formula to get the results for the rest of the cases.

CAUTION: This method is very simple, but it only shows totals up to 24 hours; beyond that the hours roll over into the next day. Have a look at the picture above and ROW NO. 12.

This is the third case, where we add hours to hours but want the result shown as total hours only, with no change of date. For this we use the second way, but change the format of the result cell so that it shows the total hours without any date change.

• Add the times in the standard way as discussed here.
• Select the RESULT CELL [the cell whose format we want to change to show the net total hours].
• Go to FORMAT CHANGE > MORE FORMATS > CUSTOM NUMBER FORMAT. The location is shown in the picture below.
• When we choose CUSTOM NUMBER FORMAT, a window will open.
• In the custom format field type [H]:MM or [h]:mm. This format tells GOOGLE SHEETS to show the total hours and minutes only.
• After entering the format, click Apply.
• The result will change to the total hours.

ONCE YOU HAVE SET THE FORMAT FOR ONE CELL, THERE IS NO NEED TO REPEAT THIS PROCEDURE FOR EVERY CELL; COPY THE FORMAT INSTEAD.

There is a dedicated PAINT FORMAT feature in Google Sheets for copying formats. It copies not only colors, fonts, etc., but formatting rules too.

• Select the cell whose format needs to be copied.
• Select the cell where you want to paste the format.
• The formatting of the destination cell will be changed to match the original cell. The complete process is shown in the picture below.

After making all the changes, the results are corrected; the following picture shows all the results.
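The effect of the [H]:MM custom format can be mimicked outside Sheets. A small Python sketch (an analogy, not Sheets code) that keeps the total hours running past 24 instead of rolling over into the next day:

```python
def format_elapsed(serial):
    """Render a serial day-fraction as total hours and minutes, like [H]:MM."""
    total_minutes = round(serial * 24 * 60)
    hours, minutes = divmod(total_minutes, 60)
    return f"{hours}:{minutes:02d}"

# 14:00 + 13:00 is 27 hours in total; a plain time format would instead
# show 3:00 on the next day.
print(format_elapsed(14 / 24 + 13 / 24))  # 27:00
print(format_elapsed(0.125))              # 3:00
```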
Confidence maps: statistical inference of cryo-EM maps

^aStructural and Computational Biology Unit, European Molecular Biology Laboratory (EMBL), Meyerhofstrasse 1, 69117 Heidelberg, Germany, ^bScientific Computing Department, Science and Technology Facilities Council, Research Complex at Harwell, Didcot OX11 0FA, United Kingdom, ^cErnst-Ruska Centre for Microscopy and Spectroscopy with Electrons 3/Structural Biology, Forschungszentrum Jülich, 52425 Jülich, Germany, and ^dJuStruct: Jülich Center for Structural Biology, Forschungszentrum Jülich, 52425 Jülich, Germany

^*Correspondence e-mail: maximilian.beckers@embl.de, c.sachse@fz-juelich.de

(Received 3 December 2019; accepted 3 March 2020; online 25 March 2020)

Confidence maps provide complementary information for interpreting cryo-EM densities as they indicate statistical significance with respect to background noise. They can be thresholded by specifying the expected false-discovery rate (FDR), and the displayed volume shows the parts of the map that have the corresponding level of significance. Here, the basic statistical concepts of confidence maps are reviewed and practical guidance is provided for their interpretation and usage inside the CCP-EM suite. Limitations of the approach are discussed and extensions towards other error criteria such as the family-wise error rate are presented. The observed map features can be rendered at a common isosurface threshold, which is particularly beneficial for the interpretation of weak and noisy densities. In the current article, a practical guide is provided to the recommended usage of confidence maps.

1. Introduction

The 3D structure obtained from an electron cryo-microscopy (cryo-EM) experiment corresponds to the Coulomb potential of the macromolecule of interest in three dimensions (Frank, 2006).
Electron micrographs of ice-embedded macromolecules are generated by the interaction of elastically scattered and unscattered electrons with the biological specimens in the cryo-microscope (Glaeser, 2016). At the core of the structure-determination process is the 3D image-reconstruction procedure, which requires the computational determination of the orientations of thousands of individual particle images with respect to a 3D model. Owing to noise from solvent scattering, optical aberrations, imperfect detectors and other sources, inaccuracies arise in the alignment process and the reconstructed maps contain errors in addition to the electrostatic potential. Moreover, inherent molecular flexibility and heterogeneity such as the incomplete stoichiometry of protein complexes contribute to incoherent averages and systematic variation in map values. Compensation for the resulting decay of amplitudes at high resolution is therefore required. For the interpretation of high-resolution map features, a B-factor sharpening approach is applied and combined with a signal-to-noise-based weighting of the amplitudes (Rosenthal & Henderson, 2003). While this approach enhances the relevant map signal, it also bears the danger of enhancing noise by oversharpening. Local map variation exacerbates this problem as the optimal sharpening B factor varies across the map. Therefore, recent sharpening approaches take into account local amplitude information from a refined atomic model (Jakobi et al., 2017). The interpretation of cryo-EM maps is most challenging initially when atomic reference structures are missing. Regardless of the applied sharpening or filtering routine, the precise map values have to be treated with caution in order to avoid the interpretation of noise artefacts as true density variation. Commonly, maps are visualized by thresholding to create 3D isosurface renderings.
These are overlaid with atomic models or used to build polypeptide chains with a series of interactive map tools (Goddard et al., 2018; Emsley & Cowtan, 2004). Ideally, the threshold is chosen such that signal is displayed and noise is removed; it is generally expressed as a multiple of the standard deviation σ above the map noise. In contrast to crystallographic maps, however, this σ value varies strongly between cryo-EM maps owing to its dependence on the ratio of particle to volume size, which prevents σ values from being used universally. Moreover, thresholding the cryo-EM map reduces the information content to a binary detection of each voxel and discards more accurate representations of the electrostatic potential of the macromolecule. Once the atomic model has been initially built, atomic coordinate refinement requires the complete dynamic range of the determined cryo-EM densities in order to accurately model different atomic masses and positions.

To develop a more robust framework for associating the values of a cryo-EM map with significance, we have recently presented a statistical framework based on multiple hypothesis testing and false-discovery rate (FDR) control, which transforms the cryo-EM map into a new volume that we term a confidence map (Beckers et al., 2019). Similar approaches are routinely used in other imaging domains, for example fMRI imaging (Genovese et al., 2002; Lohmann et al., 2018). Confidence maps contain detection errors with respect to background noise and can be thresholded by controlling the FDR in the detected signal. They provide complementary cryo-EM map information that is particularly helpful for the interpretation of weak and ambiguous signal close to background-noise levels. In this CCP-EM Spring Symposium article, we review the basic principles of confidence maps and focus our presentation on practical aspects and extensions of the procedure as it is now integrated in the CCP-EM software suite.
2. Testing for significant signal with respect to background noise

We refer to noise in cryo-EM maps as any incorrect modification of the signal from the true electrostatic potential of the structure of interest. Multiple sources of noise that accumulate in a cryo-EM experiment have been discussed previously (Penczek et al., 2006). For a concise treatment of cryo-EM noise in the context of confidence maps, we refer readers to Beckers et al. (2019). Although noise levels in the solvent region can be assumed to be higher than the noise in the particle region owing to solvent displacement, we show that we can use solvent map values outside the particle to estimate the noise for the following statistical analysis (Beckers et al., 2019). For each voxel in the 3D map, we conduct a statistical hypothesis test for positive deviations from background noise. Given an estimate of the cumulative distribution function of the background distribution, which is obtained by assuming a Gaussian distribution or using a nonparametric procedure, p-values are calculated for each voxel as the probability, under the null hypothesis of pure background, of observing an intensity at the respective voxel at least as great as the measured value.

3. False-discovery rate control of cryo-EM maps

Conducting a statistical test for each voxel within the map results in a multiple testing problem. A consequence of this multiplicity is that many tests can give rise to false-positive detections, and the problem is more severe for large numbers of tested hypotheses. A widely used framework for dealing with the large-scale multiple testing problem is false-discovery rate (FDR) control (Benjamini & Hochberg, 1995). The approach adjusts the statistical significance level to set an upper bound on the expected proportion of false discoveries. Mathematically, the FDR is defined as

FDR = E[V / (V + R)],

taken as 0 when no voxels are detected, where V is the number of false positives and R is the number of true positives.
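The per-voxel test described in Section 2 can be sketched in a few lines of Python, assuming a Gaussian background whose mean and standard deviation have been estimated from solvent voxels (an illustration of the idea, not the actual confidence-map code):

```python
import math

def voxel_p_value(value, noise_mean, noise_sd):
    """One-sided p-value: the probability, under the Gaussian null, of a
    voxel intensity at least as great as the observed value."""
    z = (value - noise_mean) / noise_sd
    # Standard-normal survival function via the complementary error function.
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# A voxel 4 sigma above the background is highly significant ...
print(voxel_p_value(4.0, 0.0, 1.0))  # ~3.2e-05
# ... while a voxel at the noise mean is not.
print(voxel_p_value(0.0, 0.0, 1.0))  # 0.5
```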
In the statistical literature it is usually stated that the FDR is 'controlled at level α' if the true FDR is smaller than α. Typical FDR-controlling approaches take the p-values of the individual tests and transform them to satisfy the FDR criterion. These FDR-adjusted p-values can then be thresholded at a specified FDR level. For confidence maps, we further invert the adjusted p-values for visualization purposes, i.e. thresholding a confidence map at 0.99 means a maximum FDR of 1%. As the FDR-controlling procedure, we use by default the method of Benjamini and Yekutieli, which has the advantage of controlling the FDR under arbitrary dependencies between the p-values (Benjamini & Yekutieli, 2001). Cryo-EM maps possess artificial correlations as a result of the 3D reconstruction and post-processing, which statistically can be considered arbitrary dependencies.

4. Generation of confidence maps

The only required input for computing a confidence map is an unmasked and globally sharpened cryo-EM map. The approach is applied to sharpened EM maps, as unsharpened maps lack high-resolution features as well as noise; using unsharpened maps will therefore lead to an underestimate of the background noise. Typically, the required input is generated by common post-processing procedures in the respective image-processing programs (Rosenthal & Henderson, 2003; Scheres, 2012; Punjani et al., 2017; Desfosses et al., 2014). Within this map, the default background estimation uses a total of four map cubes from the solvent area outside the particle (Fig. 1). In principle, the size of the map cubes should be maximized to increase the sample size for reliable background noise estimation. At the same time, one should avoid including particle density in the cubes, as this will bias the noise estimation.
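The Benjamini-Yekutieli step-down adjustment can be written in a few lines of pure Python (a sketch of the procedure with placeholder p-values, not the CCP-EM implementation):

```python
def benjamini_yekutieli(p_values):
    """Benjamini-Yekutieli FDR-adjusted p-values, valid under arbitrary
    dependence between tests (the default choice for confidence maps)."""
    m = len(p_values)
    c_m = sum(1.0 / i for i in range(1, m + 1))  # harmonic-number penalty
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # Step down from the largest p-value, enforcing monotonicity and the cap at 1.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, p_values[i] * m * c_m / rank)
        adjusted[i] = running_min
    return adjusted

p = [0.001, 0.009, 0.04, 0.20, 0.70]
adj = benjamini_yekutieli(p)
# A confidence map stores 1 - adjusted p; thresholding it at 0.99 keeps
# only voxels whose FDR-adjusted p-value is below 0.01.
confidence = [1.0 - a for a in adj]
```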
Identifying regions outside the structure is straightforward for single-particle maps, whereas cryo-EM densities obtained by subtomogram averaging often do not have a clearly isolated solvent region. Therefore, particular care needs to be taken to specify the location of the cubes for noise estimation in such nonstandard cases. We implemented the option to specify a manual cube location as well as the volume size (Fig. 1). The CCP-EM GUI allows interactive adjustment of the noise cubes: clicking the `Check noise box' button opens an image with three slices through the map with the noise areas labelled in white. In this way, the cube parameters can be adjusted to select suitable background regions. Moreover, the slice views can be used to identify cases in which noise levels are not uniform over the map and confidence-map generation should therefore be avoided.

5. Local resolution measurements can be included for the generation of confidence maps

The statistical power of the FDR thresholding approach can be increased by incorporating local resolution information, which is available as an extended option in the CCP-EM GUI. The procedure for specifying the noise-cube location and the size of the cubes is identical to that described above. In addition to the cryo-EM map, the user provides a map containing the local resolution values at the respective voxel positions, which is the standard output of a series of programs for local resolution estimation (Heymann & Belnap, 2007; Scheres, 2012; Hohn et al., 2007; Kucukelbir et al., 2014). Using local resolution estimates, the cryo-EM map can be locally low-pass filtered, which improves the appearance of the map features as dominant noise in local regions is removed (Cardone et al., 2013). Consequently, the signal-to-noise ratios are also increased locally. Tracking the positional background-noise levels after local filtering enables the generation of confidence maps using local resolution information.
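The idea of local low-pass filtering with a local resolution map can be sketched crudely as follows. This is a simplified, binned stand-in under invented names: the whole map is low-pass filtered at a handful of resolution cutoffs and each voxel keeps the version matching its local resolution. Dedicated programs implement this far more carefully (e.g. with smoothly varying filters).

```python
import numpy as np

def local_lowpass(map3d, locres, voxel_size, n_bins=8):
    """Crude locally filtered map: low-pass at a few resolution cutoffs
    and, per voxel, keep the version matching its local resolution."""
    axes_freq2 = [np.fft.fftfreq(n, d=voxel_size) ** 2 for n in map3d.shape]
    freq = np.sqrt(sum(np.meshgrid(*axes_freq2, indexing='ij')))
    ft = np.fft.fftn(map3d)
    edges = np.linspace(locres.min(), locres.max(), n_bins + 1)
    out = np.zeros_like(map3d)
    for lo, hi in zip(edges[:-1], edges[1:]):
        # keep spatial frequencies up to 1/hi, i.e. low-pass at resolution hi
        filtered = np.fft.ifftn(ft * (freq <= 1.0 / hi)).real
        sel = (locres >= lo) & (locres <= hi)
        out[sel] = filtered[sel]
    return out

rng = np.random.default_rng(1)
vol = rng.normal(0.0, 1.0, size=(16, 16, 16))     # toy noise map
locres = np.full(vol.shape, 10.0)                 # uniform 10 Å resolution
smooth = local_lowpass(vol, locres, voxel_size=1.0)
```

Because high-frequency components carry much of the noise power, the filtered map has lower variance than the input, which is the local signal-to-noise gain exploited by the confidence-map procedure.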
Very low-resolution artefacts can arise, leading to smeared densities in the confidence maps extending over the whole box. To avoid this, it can be beneficial to restrict the resolution range of the local resolution map to reasonable values, for example from 2 to 20 Å for a 3 Å map. The incorporation of local resolution information is particularly useful in the presence of substantial local resolution variations, as global sharpening and filtering usually lead to the undersharpening of lower resolution features and the oversharpening of higher resolution features. The resulting confidence maps capture such areas side by side, including the highest resolved parts of the structure, and guide the user through the density-interpretation steps within a single confidence map. Owing to the increase in statistical power, we find that more stringent error levels can be applied when thresholding confidence maps generated using local resolution information.

6. Case studies: Tobacco mosaic virus, a bacterial ATP synthase and a eukaryotic ribosome

In a recent article (Weis et al., 2019), confidence maps were used to assign the structural details of the disassembly switch of Tobacco mosaic virus (TMV). Several decades earlier, Don Caspar had proposed a switch mechanism of conformational changes driven by carboxylate interactions (Caspar, 1964), but the precise residue location was still missing owing to the flexibility of the respective residues and the absence of two comparative structures in the ON/OFF switch states. Structure determination from two data sets acquired under conditions mimicking the extracellular and intracellular environments resulted in two maps at 2.0 and 1.9 Å resolution in water and at high Ca^2+/acidic pH, respectively. The confidence maps allowed the assignment of significant cryo-EM density for the respective residues and showed that multiple conformations of the involved residues are supported by the map recorded in the water condition.
Further analysis and validation by means of an additional Ca^2+/acidic pH cryo-EM map revealed that the switch exists in two distinct structural states. Moreover, using the confidence maps the authors were able to place 71 and 91 water molecules per monomer. In these cases, we placed the water molecules based on the detected confidence-map peaks, the expected molecular size and the proximity to the protein structure. Even in high-resolution regular cryo-EM maps this is still a daunting task, as noise peaks can easily be mistaken for waters without further validation (Fig. 2a, top). The confidence maps, however, enabled placement by means of statistical significance (Fig. 2a, bottom). To further illustrate the utility of confidence maps including local resolution information, we generated two maps for examples from the EMDB with local resolution variation from near-atomic to nanometre scale. Comparison of a locally filtered map of a bacterial ATP synthase (EMD-9333; Guo et al., 2019) with the corresponding confidence map shows improved overall interpretability using the statistical thresholding approach (Fig. 2b). In the locally filtered map, low-resolution parts such as the stalk domain remain missing at low σ thresholds, while at the same threshold high-resolution parts have already become noisy. The confidence map enables the interpretation of the complete complex at a low FDR of 0.01%, showing the appearance of significant low-resolution density corresponding to a 10×His tag (Fig. 2b, bottom right). In another example, given by a eukaryotic ribosome (EMD-0194; Juszkiewicz et al., 2018), the expansion segments and the ribosomal stalks display lower resolution, as is typical for eukaryotic ribosome structures (Fig. 2c). In the deposited map, these parts are oversharpened and appear discontinuous owing to noise, which is a result of the global sharpening and filtering.
Generating the confidence map by including local resolution information shows the respective domains clearly, with both high- and low-resolution features visible at a single threshold of 0.01% FDR.

7. Visualization of confidence maps

Confidence maps can be displayed at a given FDR threshold in common visualization programs that use an isosurface rendering approach. A typical property of such a map is that when density can be clearly distinguished from background noise, voxels assume values close to 1, which results in a close-to-binary distribution of signal versus background. Consequently, when visualizing confidence maps, for example in UCSF Chimera (Pettersen et al., 2004), they appear different from common cryo-EM maps. Confidence maps will display very sharp voxel features, with almost all values close to the extremes of 1 and 0. In order to make them appear like typical cryo-EM maps, the displayed surface can be oversampled and smoothed (Fig. 3). Alternatively, a −log10 transformation of the FDR values leads to shallower gradients and allows detailed analysis at very small FDRs (Fig. 3, bottom).

8. Assessing additional error criteria for confidence maps

Multiple testing is a major field of research in statistical inference (Wilson, 2019; Zhang et al., 2019; Ignatiadis et al., 2016), and several additional error rates and error-controlling procedures beyond Benjamini–Yekutieli FDR control have been proposed. The family-wise error rate (FWER) specifies the probability of having any false positives at all (Lehmann & Romano, 2005). We have implemented FWER control for confidence-map generation as an additional option within the CCP-EM suite. In contrast to the FDR, the FWER is considered to be the strictest criterion to rule out false-positive detection. Procedurally, background-noise estimation and p-value calculation remain identical; only the FDR correction is replaced by a FWER-controlling routine.
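A FWER-controlling routine such as the Holm step-down procedure (Holm, 1979) can be sketched as follows; this is an illustrative implementation, not the CCP-EM code.

```python
import numpy as np

def holm_adjust(pvals):
    """Holm step-down adjusted p-values; controls the FWER under
    arbitrary dependence between the tests."""
    p = np.asarray(pvals, float).ravel()
    m = p.size
    order = np.argsort(p)
    # multiply the i-th smallest p-value by (m - i) and enforce
    # monotonicity with a running maximum (the step-down part)
    stepped = np.maximum.accumulate(p[order] * (m - np.arange(m)))
    out = np.empty(m)
    out[order] = np.clip(stepped, 0.0, 1.0)
    return out

p = np.array([0.0001, 0.001, 0.02, 0.4])
adj = holm_adjust(p)
# 1% FWER: at most a 1% chance of declaring ANY false positive
reject = adj < 0.01
print(reject)   # [ True  True False False]
```

Compared with an FDR adjustment of the same p-values, fewer hypotheses are typically rejected, reflecting the stricter error criterion.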
With respect to FWER-based confidence maps, thresholding at a value of 0.99 then means a probability of 1% of having any false positives at all. Mathematically, the FWER is defined as

    FWER = P(V ≥ 1),

where V denotes the number of false-positive hypotheses. Although controlling the FDR in confidence maps already facilitates the interpretation of voxels in terms of significance, we still expect false positives, up to the proportion specified by the FDR threshold: in the case of 100000 significant voxels, 1% FDR corresponds to an expected maximum of 1000 false-positive voxels. Controlling the FWER instead of the FDR may be desirable in cases where no false positives can be tolerated in the interpretation. Generic methods for the control of the FWER and the FDR are given by the Bonferroni–Holm (Holm, 1979) and the Benjamini–Yekutieli (Benjamini & Yekutieli, 2001) approaches, respectively. It can be shown that both procedures control the respective error rates under arbitrary dependencies between the tested p-values (Benjamini & Yekutieli, 2001; Holm, 1979). In order to investigate the different error criteria from the multiple testing approaches in the context of cryo-EM maps, we compared the FDR and FWER using the 3.4 Å resolution cryo-EM map (EMD-3061) of γ-secretase (Bai et al., 2015). As expected, FWER control at 1% is more stringent, as less density is declared significant compared with 1% FDR (Figs. 4a and 4b). For example, the presumably false-positive density below the lipid declared by the 1% FDR thresholding is not present after 1% FWER thresholding. In addition, signal is assigned to the head of the embedded lipid (top arrow) using the FDR criterion, whereas this density appears smaller at 1% FWER. In order to assess the performance of the different error-rate thresholding criteria more quantitatively, we applied them to a simulated map of 4194 water molecules (taken from PDB entry 6cvm; Bartesaghi et al., 2018).
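The qualitative trade-off between the two error criteria can be reproduced on synthetic data. The following is a toy simulation, not the water-molecule benchmark from the text: Bonferroni and Benjamini–Hochberg serve as simple stand-ins for the Holm and Benjamini–Yekutieli procedures, and the counts will differ from those in Table 1.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
m_noise, m_signal = 50_000, 500
x = np.concatenate([rng.normal(0.0, 1.0, m_noise),    # background voxels
                    rng.normal(5.0, 1.0, m_signal)])  # signal peaks
truth = np.arange(x.size) >= m_noise
p = norm.sf(x)                 # right-tailed p-values under the N(0,1) null
m = x.size

fwer_hits = p < 0.01 / m       # Bonferroni at 1% FWER

# Benjamini-Hochberg step-up at 1% FDR
order = np.argsort(p)
below = np.nonzero(p[order] <= 0.01 * np.arange(1, m + 1) / m)[0]
cutoff = p[order][below[-1]] if below.size else -1.0
fdr_hits = p <= cutoff

print("FWER: %d false positives, %d missed signals"
      % ((fwer_hits & ~truth).sum(), (~fwer_hits & truth).sum()))
print("FDR : %d false positives, %d missed signals"
      % ((fdr_hits & ~truth).sum(), (~fdr_hits & truth).sum()))
```

FWER control admits (essentially) no false positives but misses more weak signal, whereas FDR control recovers more signal at the price of a small, bounded proportion of false positives — mirroring the behaviour reported for the simulated water map.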
The simulated map was generated using UCSF Chimera, and we added Gaussian white noise with a standard deviation of 0.5. The resulting signal-to-noise ratio of 1.75 for the density peaks corresponds to common noise levels in cryo-EM maps in the high-resolution shells. We compared the detected false-positive map peaks that do not originate from water molecules, as well as the number of missed water molecules, for the different procedures at 1% FDR or 1% FWER, respectively. At 1% FWER we did not detect any false positives, whereas at 1% FDR some false-positive peaks were identified, which could be mistakenly interpreted as water (Table 1). However, the decreased number of false positives in the case of FWER control comes at the price of missing a small number of water molecules (i.e. there are some false negatives). With respect to the analysis of cryo-EM maps, FWER provides the error criterion that is most useful for the interpretation of weak and isolated signal, as occurs for water molecules and bound ligands in high-resolution structures (Table 1). Comparing FDR-based and FWER-based confidence maps, FDR is usually more useful for initial interpretation and FWER for the in-depth analysis of more ambiguous parts. Both error criteria provide complementary information in addition to cryo-EM maps that can be used during the model-building process.

Table 1. The numbers of false-positive voxels (with the corresponding true FDR) and false-negative H2O molecules.

    Controlling procedure          False-positive voxels (true FDR)   False-negative H2O molecules
    Holm FWER 1%                   0 (0.00%)                          18 of 4194
    Benjamini–Yekutieli FDR 1%     7 (0.05%)                          0 of 4194

9. Discussion

Confidence maps are based on the statistical framework of multiple hypothesis testing. Applying global error rates in a multiple testing setting such as cryo-EM maps is a complex task, as dependencies between the individual voxels have to be considered (for the treatment of dependencies, see Beckers et al., 2019).
Thus, error control can only be carried out rather conservatively, which means that a confidence map at an FDR of 1% will have a true FDR of below 1%. When applying a threshold to confidence maps, the FDR provides an error criterion that is theoretically interpretable: for example, 1% FDR corresponds to at most 1% of the thresholded density being background noise. Using a series of test cases of deposited cryo-EM maps, we demonstrated that the 1% FDR threshold was sufficient to visualize the relevant molecular details (Beckers et al., 2019). When incorporating local resolution information, lower FDRs of down to 0.01% can be used successfully (Fig. 2). The stringency of the FDR criterion is, in principle, also related to the types of features to be interpreted. For clear continuous density stretches such as polypeptide conformations, the occurrence of single false-positive voxels rarely complicates the interpretation. However, when interpreting point densities such as water and ions, the cost of false-positive hits can be substantially higher. In these cases, more restrictive thresholds should usually be chosen. This will be particularly relevant as more true atomic-resolution cryo-EM maps become available. An additional possibility to aid the interpretation of atomic-resolution densities may be the FWER criterion introduced here, which further minimizes the expected rate of false positives. Taken together, we conclude that lower FWER or more stringent FDR thresholds should be applied when more confident statements about the observed densities are to be made. Although the threshold of a confidence map can be adjusted by the user, it is much less sensitive to the inclusion of noise than threshold adjustments of cryo-EM maps. For common cryo-EM maps, the applied threshold is difficult to interpret in terms of significance.
Specific σ thresholds remain highly subjective owing to the multitude of errors in the cryo-EM experiment and the subsequent 3D image-reconstruction procedure, as described in Beckers et al. (2019). Confidence maps, however, suppress noise, and the associated FDR threshold provides a quantitative error criterion with respect to background noise and gives feedback on the validity of the detected density features. Confidence maps provide complementary information that should be used together with the original density map. As confidence maps contain detection probabilities, information about scattering strength and occupancies is no longer present. Therefore, confidence maps must not be used for atomic coordinate refinement. However, for manual building and initial assignment of the atomic model, confidence maps directly provide information on which density parts can be faithfully analyzed, and incorporation of this additional information into automated model-building protocols may prove to be beneficial in the future. Confidence maps aim to separate signal from background noise. If significant signal is detected in a voxel, it means that, up to the specified confidence level, it is neither background from the amorphous ice nor shot noise. As such, signal corresponds to every feature that contains contributions beyond background noise. However, limitations need to be considered in cases where confidence-map generation, including background estimation, was procedurally successful but incorrect features were claimed as signal. When a large fraction of the particles are misaligned and included in the reconstruction, they will inevitably give rise to the statistical assignment of signal with no structural meaning in the generated confidence map. Similar effects will occur in cases of highly preferred orientations and overfitting of noise.
As confidence maps are able to detect very weak signal, in particular when local resolutions are provided, this type of incorrect signal will be more prominent in confidence maps than in regularly sharpened cryo-EM maps. Although this property of the signal-detection approach bears the danger of overinterpretation, it also sensitively reveals more general problems of the computed cryo-EM structure. Owing to the significant recent progress in image quality and image-processing routines in cryo-EM, these cases are becoming less prominent, but internal image-processing validation criteria still need to be applied to ensure the determination of reliable cryo-EM structures. Background-noise determination is critical and is still a step that requires user intervention when confidence maps are generated. Estimation of noise levels in the solvent area can only provide accurate measurements if the background noise is homogeneous over the whole map. This remains an approximation owing to various factors discussed in our previous work (Beckers et al., 2019), but the procedure still controls the significance level even when the background-noise level is overestimated. Moreover, the manual selection of noise areas may affect this process and adds a level of subjectivity, especially in the case of subtomogram averages. Although guidelines can be given to the user on how to carefully apply the parameters (see above), in the future we envision the full automation of these parts of confidence-map generation. Once automated routines are available, the process of confidence-map generation can be seamlessly integrated into the initial building of atomic models. The first step towards this goal is the implementation of confidence-map generation in the CCP-EM suite (available now in version 1.4). This way, the complementary map information from confidence maps is available at little computational cost and is integrated in common map-interpretation and atomic-modelling tools.
‡Candidate for joint PhD degree from EMBL and Faculty of Biosciences, Heidelberg University, Germany.

Open access funding enabled and organized by Projekt DEAL.

References

Bai, X.-C., Yan, C., Yang, G., Lu, P., Ma, D., Sun, L., Zhou, R., Scheres, S. H. W. & Shi, Y. (2015). Nature, 525, 212–217.
Bartesaghi, A., Aguerrebere, C., Falconieri, V., Banerjee, S., Earl, L. A., Zhu, X., Grigorieff, N., Milne, J. L. S., Sapiro, G., Wu, X. & Subramaniam, S. (2018). Structure, 26, 848–856.
Beckers, M., Jakobi, A. J. & Sachse, C. (2019). IUCrJ, 6, 18–33.
Benjamini, Y. & Hochberg, Y. (1995). J. R. Stat. Soc. B, 57, 289–300.
Benjamini, Y. & Yekutieli, D. (2001). Ann. Stat. 29, 1165–1188.
Cardone, G., Heymann, J. B. & Steven, A. C. (2013). J. Struct. Biol. 184, 226–236.
Caspar, D. L. D. (1964). Adv. Protein Chem. 18, 37–121.
Desfosses, A., Ciuffa, R., Gutsche, I. & Sachse, C. (2014). J. Struct. Biol. 185, 15–26.
Emsley, P. & Cowtan, K. (2004). Acta Cryst. D60, 2126–2132.
Frank, J. (2006). Three-Dimensional Electron Microscopy of Macromolecular Assemblies: Visualization of Biological Molecules in Their Native State. Oxford University Press.
Genovese, C. R., Lazar, N. A. & Nichols, T. (2002). Neuroimage, 15, 870–878.
Glaeser, R. M. (2016). Methods Enzymol. 579, 19–50.
Goddard, T. D., Huang, C. C., Meng, E. C., Pettersen, E. F., Couch, G. S., Morris, J. H. & Ferrin, T. E. (2018). Protein Sci. 27, 14–25.
Guo, H., Suzuki, T. & Rubinstein, J. L. (2019). eLife, 8, e43128.
Heymann, J. B. & Belnap, D. M. (2007). J. Struct. Biol. 157, 3–18.
Hohn, M., Tang, G., Goodyear, G., Baldwin, P. R., Huang, Z., Penczek, P. A., Yang, C., Glaeser, R. M., Adams, P. D. & Ludtke, S. J. (2007). J. Struct. Biol. 157, 47–55.
Holm, S. (1979). Scand. J. Stat. 6, 65–70.
Ignatiadis, N., Klaus, B., Zaugg, J. B. & Huber, W. (2016). Nat. Methods, 13, 577–580.
Jakobi, A. J., Wilmanns, M. & Sachse, C. (2017). eLife, 6, e27131.
Juszkiewicz, S., Chandrasekaran, V., Lin, Z., Kraatz, S., Ramakrishnan, V. & Hegde, R. S. (2018). Mol. Cell, 72, 469–481.
Kucukelbir, A., Sigworth, F. J. & Tagare, H. D. (2014). Nat. Methods, 11, 63–65.
Lehmann, E. & Romano, J. (2005). Testing Statistical Hypotheses. New York: Springer.
Lohmann, G., Stelzer, J., Lacosse, E., Kumar, V. J., Mueller, K., Kuehn, E., Grodd, W. & Scheffler, K. (2018). Nat. Commun. 9, 4014.
Penczek, P. A., Yang, C., Frank, J. & Spahn, C. M. T. (2006). J. Struct. Biol. 154, 168–183.
Pettersen, E. F., Goddard, T. D., Huang, C. C., Couch, G. S., Greenblatt, D. M., Meng, E. C. & Ferrin, T. E. (2004). J. Comput. Chem. 25, 1605–1612.
Punjani, A., Rubinstein, J. L., Fleet, D. J. & Brubaker, M. A. (2017). Nat. Methods, 14, 290–296.
Rosenthal, P. B. & Henderson, R. (2003). J. Mol. Biol. 333, 721–745.
Scheres, S. H. W. (2012). J. Struct. Biol. 180, 519–530.
Weis, F., Beckers, M., Hocht, I. & Sachse, C. (2019). EMBO Rep. 20, e48451.
Wilson, D. J. (2019). Proc. Natl Acad. Sci. USA, 116, 1195–1200.
Zhang, M. J., Xia, F. & Zou, J. (2019). Nat. Commun. 10, 3433.

This is an open-access article distributed under the terms of the Creative Commons Attribution (CC-BY) Licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original authors and source are cited.
Organizers and tutors M. C. Payne, D. Cole and S. Dubois (University of Cambridge) C.-K. Skylaris, J. Dziedzic and K. Wilkinson (University of Southampton) P. D. Haynes, A. A. Mostofi and N. D. M. Hine (Imperial College London) Tuesday, August 28 12:30 : Welcome & lunch (TCM Group, Cavendish Laboratory) 14:00 : Introduce tutors and participants 14:15 : Introduction to linear-scaling DFT (Peter Haynes) PDF 15:00 : Introduction to the ONETEP code (Chris-Kriton Skylaris) PDF 15:45 : Coffee Break 16:00 : Presentations by participants Danny Cole/Alex Fokas: Optical absorption of the FMO protein. Adam Makarucha: Interactions of nanomaterials with amyloid peptides. Chris Knight: Derivation of Interaction Potentials for Reactive Models. Andrew Scott: Accurate Ab-initio determination of RNA molecular shape. Peter Cherry: Towards large-scale DFT simulations of chemical reactions on metal nanoparticles. Keith Refson: Solvation of Cu-bis ethylenediamine and phthalocyanine. 19:00 : Dinner at Trinity Hall Wednesday, August 29 09:00 : Continuation of presentations by participants Misbah Sarwa: Computational Chemistry at Johnson Matthey Matthias Kahk: Metal oxide photoanodes for solar water-splitting. Gilberto Teobaldi: Molecular dynamics / linear-scaling DFT study of solvated aluminosilicate nanotubes. Joshua Elliot: Inorganic nanotubes. Francis Russell: Domain specific languages for quantum chemistry. 10:25 : Work on projects 12:15 : Lunch 13:30 : Work on projects 15:30 : Projector Augmented Wave (PAW) implementation and core electrons in ONETEP (Nick Hine) PDF 16:30 : Work on projects 19:00 : Dinner at Trinity Hall Thursday, August 30 09:15: Discussions with tutors – review of overnight calculations 10:30: ONETEP solvation model (Jacek Dziedzic) 11:15: Work on projects 12:30: Lunch 13:30: Work on projects 14:00: Optimisation of conduction band NGWFs in ONETEP (Peter Haynes).
PDF 14:30 : Work on projects 19:00 : Dinner at Trinity Hall Friday, August 31 09:15: Discussions with tutors – review of overnight calculations, work on projects 11:30: Participants’ 5-10 minute summaries of outcomes and future plans 12:30: Lunch 13:30: End of Masterclass Lectures and practical sessions took place in the Theory of Condensed Matter (TCM) group’s seminar room in the Cavendish Laboratory. Accommodation in Cambridge for the ONETEP spring school participants was arranged with Churchill College. Organizing Committee C.-K. Skylaris (University of Southampton) M. C. Payne (University of Cambridge) N. D. M. Hine (Imperial College London) P. D. Haynes (Imperial College London) A. A. Mostofi (Imperial College London) Monday, July 4 11:00 : Welcome & coffee (TCM Group, Cavendish Laboratory) 11:15 : Introduce tutors and participants 11:30 : Introduction to linear-scaling DFT (Peter Haynes) 12:15 : Introduction to the ONETEP code (Chris-Kriton Skylaris) 13:00 : Lunch 14:00 : Discussions with tutors; run calculations 17:00 : M. Probert (York), Ab initio simulations of DNA 17:15 : J. Ireta (U.A. Metropolitana-Iztapala, Mexico), Electronic charge density redistribution across the interface of two interacting proteins 18:30 : Dinner (at Sidney Sussex College) 19:30 : N. Todorova and A. Makarucha (RMIT, Melbourne), Investigating nanomaterial-protein interactions: How can ONETEP help? 19:45 : C. Eames (Bath), Lithium transport in next generation battery materials 20:00 : G. Jones (UCL, Johnson-Matthey), First-principles catalysis: understanding the influence of support on metal particle activity 20:15 : S. Pyrlin (Minho, Portugal), Modelling the effect of the chemical matrix-CNT interaction on the composite electrical properties 20:30 : To the pub! Tuesday, July 5 09:15 : Discussions with tutors 10:30 : J. Dziedzic (Southampton), Implicit solvent in ONETEP 11:15 : N. English (Dublin), Molecular modelling of Rubisco 11:30 : M.
Sacchi (Cambridge), Controlling the band gap opening of graphene on boron nitride 11:45 : T. Thonhauser (Wake Forest), ”Ab initio anti-cancer drug development: A study of DNA intercalation” 12:00 : G. Teobaldi (Liverpool), In-silico development of the potential of inorganic nanotubes as novel (photo-)catalysts 12:15 : Lunch 13:30 : Work on projects 15:30 : Punting trip (weather permitting) 18:30 : Dinner (at Sidney Sussex College) Wednesday, July 6 09:15 : Discussions with tutors 10:30 : Ionic Forces in ONETEP (Nicholas Hine) 11:00 : Molecular Dynamics in ONETEP (Simon Dubois) 12:00 : Work on projects 13:00 : Lunch 14:00 : Work on projects 15:30 : Walk to Coton Orchard? (weather permitting) 18:30 : Dinner (at Sidney Sussex College) Thursday, July 7 09:15 : Discussions with tutors 10:30 : ODG talk(s) based on participant feedback 11:15 : Work on projects 12:30 : Lunch 13:30 : Work on projects 19:30 : Workshop Dinner (at Sidney Sussex College) Friday, July 8 09:15 : Discussions with tutors 10:30 : ODG talk(s) based on participant feedback 11:15 : Work on projects 12:30 : Lunch 13:00 : Participants’ 5-10 minute summaries of outcomes and future plans 15:00 : Finish Lectures and practical sessions will take place in the Theory of Condensed Matter (TCM) group’s seminar room in the Cavendish Laboratory. Extensive instructions on travel arrangements to the Cavendish are available on the TCM group website but do not hesitate to contact the organizers if you have further questions. Accommodation in Cambridge for the ONETEP spring school participants has been arranged with Sidney Sussex College, and with Emmanuel College in specific cases – both are in the centre of Cambridge. Organized by Chris-Kriton Skylaris (University of Southampton) Mike Payne (University of Cambridge) Nicholas Hine (University of Cambridge) Peter Haynes (Imperial College London) Arash A. 
Mostofi (Imperial College London) The list of participants is available on the CECAM website Schedule and Talk Slides Tuesday, April 13 0900 : Welcome and introductions 0930 : Lecture 1 : Overview of first principles calculations (M. C. Payne) PDF Δ 1030 : Lecture 2 : Overview of linear-scaling methods (P. D. Haynes) PDF 1130 : Coffee break 1200 : Lecture 3 : Introduction to ONETEP (C.-K. Skylaris) PDF 1300 : Lunch 1400 : Practical session 1 : Setting up Simple ONETEP Calculations PDF 1800 : Close Wednesday, April 14 0900 : Lecture 4 : Density matrices (P. D. Haynes) JPG 1000 : Short talks : Applications to biological systems 1000 : Daniel Cole : Protein-protein interactions from linear-scaling DFT calculations PDF 1020 : Stephen Fox : Protein-ligand interactions PDF 1040 : Jacek Dziedzic : Implicit Solvation Models in ONETEP PDF 1100 : Coffee break 1130 : Lecture 5 : Basis states: psincs and the FFT box (A. A. Mostofi) PDF 1230 : Participants’ talks 1230 : Oliviero Andreussi : Computational Design and Evaluation of Room Temperature Ionic Liquids for Rechargeable Lithium Batteries Applications PDF Δ 1300 : Lunch 1400 : Practical session 2 : Geometry optimisation PDF 1800 : Close Thursday, April 15 0900 : Lecture 6 : Electronic energy minimisation (C.-K. Skylaris) PDF 1000 : Short talks : Applications to nanostructures 1000 : Fabiano Corsetti : Phonon calculations in ONETEP with the finite displacement method PDF 1020 : Phil Avraam : Charge distribution in GaAs nanorods 1040 : Nicholas Zonias : Large-scale DFT calculations on H-passivated Si nanorods using the ONETEP code PDF 1100 : Coffee break 1130 : Lecture 7 : Parallel implementation (N. D. M. Hine) PDF 1230 : Group discussion : The ONETEP Wiki 1300 : Lunch 1400 : Practical session 3 : Analysis and visualisation PDF 1800 : Close Friday, April 16 0900 : Lecture 8 : Beyond DFT with ONETEP (N. D. M. 
Hine) PDF 0945: Short talks : Future developments 0945 : David O’Regan : Linear-scaling and projector self-consistent DFT+U for electronic correlations in large systems 1005 : Alvaro Ruiz Serrano : Pulay Forces and Multiple Accuracy Approach in ONETEP PDF 1025 : Laura Ratcliff : Towards the calculation of experimental spectra using linear-scaling density-functional theory PDF 1045 : Jacek Dziedzic : Hartree-Fock Exchange and Hybrid Exchange-Correlation Functionals PDF 1100 : Coffee break 1130 : Simon Dubois : Quantum transport in graphene based nanostructures PDF 1200 : Lecture 9 : Multiscale modelling with ONETEP (A. A. Mostofi) 1300 : Lunch Lectures and practical sessions will take place in the Theory of Condensed Matter (TCM) group’s seminar room in the Cavendish Laboratory. Extensive instructions on travel arrangements to the Cavendish are available on the TCM group website but do not hesitate to contact the organizers if you have further questions. Accommodation in Cambridge for the ONETEP spring school participants has been arranged with Corpus Christi College in the centre of Cambridge. Financial Support Funding from the Psi-K Training programme for this ONETEP Spring School is gratefully acknowledged. Eligible applicants received funding for full-board accommodation at Corpus Christi, and up to 200 euro towards travel expenses by rail or air. First-principles simulations based on density-functional theory (DFT), in particular the plane-wave pseudopotential (PWP) method, have become established as a powerful tool for gaining insight into complex atomistic processes and predicting the properties of new materials. Methods for performing such calculations are being developed and applied by a growing number of scientists including not just physicists, chemists and materials scientists but also biochemists and geologists. 
However, the system sizes accessible to first-principles simulations are limited by the computational scaling of traditional implementations, which grows with the cube of the number of atoms and restricts them to the study of several hundreds of atoms even with modern supercomputers. There has therefore been much interest in the development of so-called linear-scaling methods for insulators, which promise to revolutionise the scope and scale of simulations based upon DFT and facilitate calculations involving thousands of atoms. These new methods all abandon the conventional description of the fictitious Kohn-Sham system in terms of extended Bloch states in order to exploit the localisation of the density-matrix and/or Wannier functions. This also means that linear-scaling calculations are more amenable to embedding within other calculations and hence incorporation within multiscale simulations. This is reflected by their incorporation within Working Group 2 (Multiscale Methods) of the Psi-k Network. However, only a few general-purpose linear-scaling codes have emerged over the last decade. The ONETEP code has been applied to systems consisting of up to thirty thousand atoms and ranging from proteins to nanostructures. In ONETEP, local orbitals associated with each atom are described in terms of a systematic basis set equivalent to a set of plane-waves and individually optimised in situ to obtain high accuracy and transferability. While ONETEP inherits a number of desirable features from its relationship with the PWP method, it is nonetheless based on a reformulation of DFT in terms of the density-matrix, whose truncation requires a considerably more complex (and sometimes conflicting) convergence procedure. Hence this tutorial is required to introduce the new principles and practices associated with ONETEP to experienced practitioners and novices alike.
Although ONETEP is marketed commercially by Accelrys, it is available to academic users worldwide direct from the University of Cambridge via an inexpensive license to cover administrative costs. These users are encouraged to participate in the self-supporting ONETEP user community through the Wiki: www.onetep.org.

The first ONETEP summer school was held in Cambridge in July 2008 and was intended mainly for prospective developers. The attendees were almost exclusively from the UK. The aim of this tutorial is rather different: to provide training for new users from across Europe and beyond and to help them to exploit the new opportunities that ONETEP provides for their research. Participants will be expected to be familiar with electronic structure calculations within density-functional theory, but no knowledge of ONETEP or linear-scaling methods in general is required.

Scientific Objectives
The tutorial will comprise lectures, practical sessions and short talks. Lectures by members of the ONETEP Developers’ Group will cover the theory underlying the method, its implementation in a general purpose computational scheme and future development work in progress and beyond. The practical sessions will provide a comprehensive overview of compiling and running the code and analysing the results obtained. Short talks from current ONETEP users will highlight the range of applications and development work currently under way, and some participants will also be invited to speak about their plans for using ONETEP in their research. There will also be a group discussion about how to develop the ONETEP Wiki to promote effective communication across the ONETEP user community.
The objectives are to provide both new and experienced first-principles simulators with:
• a basic grasp of the relevant theory underlying ONETEP
• a clear understanding of the parameters that must be converged to obtain reliable results
• the practical know-how to set up and run calculations that use the whole range of functionality currently in ONETEP
• experience in trouble-shooting common problems that arise
• tools for analysing the results of ONETEP simulations
• an invitation to participate in the ONETEP user community and Wiki
• enthusiasm to employ ONETEP in their future research

Review of linear-scaling methods
Linear scaling electronic structure methods, Stefan Goedecker, Rev. Mod. Phys. 71, 1085-1123 (1999)

Principal ONETEP reference
Introducing ONETEP: Linear-scaling density functional simulations on parallel computers, Chris-Kriton Skylaris, Peter D. Haynes, Arash A. Mostofi and Mike C. Payne, J. Chem. Phys. 122, 084119 (2005)

General overview
ONETEP: linear-scaling density-functional theory with local orbitals and plane waves, Peter D. Haynes, Chris-Kriton Skylaris, Arash A. Mostofi and Mike C. Payne, phys. stat. sol. (b) 243, 2489-2499

Nonorthogonal generalized Wannier function pseudopotential plane-wave method, Chris-Kriton Skylaris, Arash A. Mostofi, Peter D. Haynes, Oswaldo Diéguez and Mike C. Payne, Phys. Rev. B 66, 035119

Total-energy calculations on a real space grid with localized functions and a plane-wave basis, A. A. Mostofi, C.-K. Skylaris, P. D. Haynes and M. C. Payne, Comput. Phys. Commun. 147, 788-802 (2002)

Preconditioned iterative minimisation for linear-scaling electronic structure calculations, Arash A. Mostofi, Peter D. Haynes, Chris-Kriton Skylaris and Mike C. Payne, J. Chem. Phys. 119, 8842-8848

Implementation of linear-scaling plane wave density functional theory on parallel computers, Chris-Kriton Skylaris, Peter D. Haynes, Arash A. Mostofi and Mike C. Payne, phys. stat. sol. (b) 243, 973-988 (2006)

Density kernel optimisation in the ONETEP code, P. D. Haynes, C.-K. Skylaris, A. A. Mostofi and M. C. Payne, J. Phys.: Condens. Matter 20, 294207 (2008)

Linear-scaling density-functional theory with tens of thousands of atoms: Expanding the scope and scale of calculations with ONETEP, N. D. M. Hine, P. D. Haynes, A. A. Mostofi, C.-K. Skylaris and M. C. Payne, Comput. Phys. Commun. 180, 1041-1053 (2009)

Using ONETEP for accurate and efficient O(N) density functional calculations, Chris-Kriton Skylaris, Peter D. Haynes, Arash A. Mostofi and Mike C. Payne, J. Phys.: Condens. Matter 17, 5757-5769

Novel structural features of CDK inhibition revealed by an ab initio computational method combined with dynamic simulations, L. Heady, M. Fernandez-Serra, R. L. Mancera, S. Joyce, A. R. Venkitaraman, E. Artacho, C.-K. Skylaris, L. Colombi Ciacchi and M. C. Payne, J. Med. Chem. 49, 5141-5153 (2006)

Achieving plane wave accuracy in linear-scaling density functional theory applied to periodic systems: A case study on crystalline silicon, Chris-Kriton Skylaris and Peter D. Haynes, J. Chem. Phys. 127, 164712 (2007)

Linear-scaling first-principles study of a quasicrystalline molecular material, M. Robinson and P. D. Haynes, Chem. Phys. Lett. 476, 73-77 (2009)

The inaugural ONETEP Summer School was held in Cambridge.

Tuesday 8th July
13.30 Lecture 1: Overview of first principles calculations (Mike Payne)
14.30 Lecture 2: Introduction to ONETEP (Peter Haynes) PDF
15.30 Coffee
16.00 Practical session 1 PDF

Wednesday 9th July
09.00 Lecture 3: Density matrices (Peter Haynes)
10.00 Lecture 4: Psinc functions and FFT boxes (Arash Mostofi) PDF
11.00 Coffee
11.30 Lecture 5: Electronic energy minimisation (Chris Skylaris) PDF
12.30 Lunch
13.30 Practical session 2 PDF
15.00 Practical session 3 PDF
16.00 Coffee
16.30 Lecture 6: ONETEP parallelisation (Nick Hine) PDF
19.00 Dinner

Thursday 10th July
09.00 Lecture 7: Forces and geometry optimisation (Arash Mostofi)
10.00 Lecture 8: Overview of ONETEP applications (Chris Skylaris, David O’Regan, Mark Robinson, Quintin Hill and Phil Avraam)
11.00 Coffee
11.30 General discussion of future plans for ONETEP and creating a user community (Chris Skylaris)
12.30 Close
Tips and Tricks – The Spatial Database Advisor

This article describes how to compute the cumulative length of a single linestring, or the total tonnage that crosses a set of road segments.

Introduction: The ordinates stored in geometry objects, across all spatial data types, do not have any rounding applied to their values. This is one aspect of data management that is seldom considered by most practitioners. Other articles on this website deal with how to round their values. This article is about a related topic… Read More

Sometimes it is a data quality requirement for linear data (roads, pipelines, transmission lines) that curves within the lines must have a radius greater than a particular amount. Recently a customer asked me to write some TSQL functions to help them run data quality checks over linear data loaded into a SQL Server 2008 geometry… Read More

This article demonstrates the power of database-based spatial processing. The business requirement is to determine, dynamically, the side of a land parcel that faces the street (could be a single 2 point straight line or something more complex), and then determine the clockface direction from either end of the frontage, or the middle, to an object in the roads reserve (eg telecommunications…

This article shows how to create a GeoJSON Document from a Selection of SDO_Geometry objects.

I love working with the PostGIS API. It is a deep and wide river to swim in. But sometimes, having read, researched, experimented, and not found the right solution, one can be forced to roll one’s own so to speak to get the job done. For the problem below, if someone can point out a… Read More

This article describes the elements of a telecommunications pit and shows how they can be spatially represented. In particular the concept of a butterfly as a method for displaying the walls of a pit is introduced.
A great day has arrived for PostgreSQL developers (not just Spatial geeks) in that finally an integrated spatial viewer has been added to PgAdmin 4.3, as announced by Regina Obe. You can see the announcement and some examples here.

While there is always a need for software like Safe Software’s Awesome FME to automate import and export tasks from your database, the ability to do so just using Oracle can also be enormously useful. I have implemented the approach described below many times over the years, in different customer sites. At one customer site… Read More

The Java Topology Suite (JTS) has a linestring densifier function that is available in my Spatial Companion for Oracle (SC4O) solution. See here for documentation. For those who aren’t able to install SC4O (some DBAs don’t like it), there was a PL/SQL function in the old GEOM Package implementation on this website that allows… Read More
Supplementary Material | Q is for Quantum

Quantum mechanics for those who only know basic arithmetic

The Measurement problem
This note contains a simplified version of von Neumann's exposition of the so-called "measurement problem" using the misty-state formalism.

Recovering the standard quantum mathematical formalism
The formalism introduced in Q is for Quantum involves doing (a large subset of) quantum mechanics using elementary string-rewriting rules instead of the regular formalism (which is based on linear algebra). A very brief summary of how the two are connected can be found in this note.

Python and Matlab code
Many thanks to Ilia Kurgansky, Head of Computer Science at Rugby School in the UK, who is making available this python code capable of doing all the misty state calculations! Some extremely poor matlab code is available from Terry on request.

Other resources
Ed Barnes, Sophia Economou and Terry Rudolph discuss a program for teaching quantum theory to high school students in this paper. The paper includes a new game “Money or Tiger” that is an even simpler introduction to a quantum algorithm than the “bank robbery” game at the end of Part I of Q is for Quantum.

Nathan Schor, the creator of Quantum Curious, has set up this page where he summarizes some of the rules for calculating with the mists, as well as providing his own tips, tricks & insights into learning with the misty-state approach.

Although there is an unbelievable amount of Q-Rubbish in the pop-sci literature, not everything is! In this note I have listed a few favorite pop-sci (or slightly beyond) books. These are all written by excellent physicists who are at the cutting edge of modern research in the areas they discuss.

An introductory online course which begins using some of the formalism in Q is for Quantum can be found at this link. This course was designed by Prof. Diana Franklin, Dr.
Kaitlin Smith, and her CANON Lab team and is intended for students with only an algebra background.
Python package for geometric Neighbour Embeddings

This is work in progress; the description below will likely be subject to substantial changes.

This project introduces and explores geometric Neighbour Embeddings (gNE – /ˈdʒ:ɪni/, like Genie from Aladdin), a method for dimensionality reduction that aims to preserve geometric structures more faithfully than standard methods. Similar to traditional Neighbour Embeddings like t-SNE and UMAP, gNE seeks to maintain the neighbour relationships present in high-dimensional data. However, gNE modifies these approaches in two ways:

• It extends beyond pairwise affinities and instead aims to preserve geometric properties like distances, areas, and volumes of higher-order neighbour relations.
• The resulting embeddings are intended to be more geometrically interpretable, such that they can facilitate downstream tasks on the low-dimensional representation, like dynamical inference.

An early implementation of these ideas is available as the python package gNE on GitHub.

Let \(X\) be a finite subset of a Riemannian manifold \((M,g_M)\) and let \((N,g_N)\) be a chosen target Riemannian manifold. We then want to find an embedding

\[f: X \hookrightarrow (N,g_N)\]

that tries to preserve the geometric properties of (\(k\)-nearest) neighbourhoods in \(X\). The higher-order neighbour relations in \(X\) and \(Y = f(X)\) are modeled by weighted simplicial complexes \((K_X, w_X)\) and \((K_Y, w_Y)\). The weight functions \(w_X\) and \(w_Y\) associate geometric quantities, as determined by \(g_M\) and \(g_N\), to each simplex: for example, distances for 1-simplices (edges), areas for 2-simplices (triangles), and volumes for 3-simplices (tetrahedra). In complete analogy to standard Neighbour Embeddings, gNE then factors through a comparison of \((K_X, w_X)\) and \((K_Y, \tilde{w}_Y)\).
The general structure of the loss function is given by a simplex-wise comparison of weights:

\[\mathcal{L}(K_X, K_Y) = \lambda_0 \sum_{i} \mathcal{L}_0 (w_i ,\tilde w_i) + \lambda_1 \sum_{i,j} \mathcal{L}_1 (w_{ij}, \tilde{w}_{ij}) + \lambda_2 \sum_{i,j,k} \mathcal{L}_2 (w_{ijk}, \tilde{w}_{ijk}) + \ldots\]

Here \(\lambda_i \in \mathbb{R}\) are hyperparameters that fix the relative importance of the simplices of various dimensionalities, \(\mathcal{L}_i\) are loss functions (e.g. \(L^2\)), and we used abbreviations of the form \(w_{ij} = w_X(\{x_i,x_j\})\) and \(\tilde{w}_{ij}=w_Y(\{y_i,y_j\})\).

Further Ideas
• For future purposes we want to implement Hamiltonian dynamics on a given geometry. For this we need tensors and differential forms.
• Parametric gNE, i.e. use \((X, d_X) \subset (M,g_M)\) to learn a map \(f:M \to N\) such that one can calculate \(f(x)\) for any \(x \in M\) that was not originally in \(X\). As far as I understand it, this is usually done with neural nets and relies on a completely different Ansatz. Currently I do not plan to follow up on this idea.
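The simplex-wise loss above can be illustrated numerically for the simplest case: both metrics Euclidean and only the \(\lambda_1\) (edge-length) term retained. The function names and the toy data are my own; this is a sketch of the loss structure, not the package's actual implementation.

```python
import numpy as np

def edge_weights(points, edges):
    """Euclidean lengths of the 1-simplices (edges) of a complex."""
    return np.array([np.linalg.norm(points[i] - points[j]) for i, j in edges])

def gne_loss(X, Y, edges, lam1=1.0):
    """L2 comparison of edge weights: the lambda_1 term of the simplex-wise loss.

    X: high-dimensional points, Y: candidate embedding, edges: shared 1-simplices.
    """
    w = edge_weights(X, edges)        # w_ij from (K_X, w_X)
    w_tilde = edge_weights(Y, edges)  # tilde-w_ij from (K_Y, tilde-w_Y)
    return lam1 * np.sum((w - w_tilde) ** 2)

# A perfect isometric embedding has zero loss on the edge term:
X = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
Y = X[:, :2]                          # drop the unused third coordinate
edges = [(0, 1), (0, 2), (1, 2)]
print(gne_loss(X, Y, edges))          # 0.0
```

Minimizing this over the coordinates of Y (e.g. by gradient descent) would give the edge-preserving special case of the embedding; higher \(\lambda_i\) terms add triangle areas and tetrahedron volumes in the same pattern.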
Michael Fekete - Biography

Quick Info
Born: 19 July 1886, Zenta, Bácska, Austria-Hungary (now Senta, Serbia)
Died: 13 May 1957, Jerusalem, Israel

Michael Fekete was a Hungarian mathematician and set theorist who worked on the transfinite diameter of a set.

Michael Fekete's given name was Mihály but he later adopted the name Michael. He was born into a Jewish-Hungarian family in Zenta which was in Hungary, but it was part of the Austro-Hungarian Empire at the time of his birth. Today Zenta is in Serbia. His parents, Alexander and Emma Fekete, owned a bookshop in Zenta and they also edited a local newspaper. Michael, one of his parents' four children, attended elementary school in Zenta and then moved on to study at the local Gymnasium. While he was at the Gymnasium he helped his parents both with editing the newspaper and also by contributing short stories which were published in the newspaper. After graduating from the Gymnasium, Fekete entered the University of Budapest to study mathematics. He was taught by Lipót Fejér and greatly influenced by him - this is hardly surprising, since Fejér inspired a whole generation of Hungarian mathematicians. Just a little note at this point; in Hungarian 'Fekete' means 'black' and 'Fejér' means 'white', so the student and his advisor were 'black and white'! Fekete was awarded a doctorate by the University of Budapest in 1909 but already he had a number of papers related to his doctoral thesis in print: A general treatment of linear congruence systems (Hungarian) (1908), Über die additive Darstellung einiger zahlentheoretischer Funktionen Ⓣ (1908), and On the additive representation of some number theoretical functions (Hungarian) (1909). After the award of his doctorate, Fekete went to Göttingen to undertake postdoctoral studies in 1909-10 at the Georg-August University of Göttingen.
There he worked with Edmund Landau and during his year in Göttingen wrote a number of further papers including: On necessary and sufficient conditions for the summability of power series (Hungarian) (1910), Sur les séries de Dirichlet Ⓣ (1910), and Sur un théorème de M Landau Ⓣ (1910). However, it was Fejér, not Edmund Landau, who was the greatest influence on Fekete as Rogosinski explains [2]:- Fekete as a mathematician was very typically an analyst of the school of L Fejér. From him he inherited the delight in a particular isolated problem. From him too he learned the elegant simplicity of his analytical technique and style. Very little influence of his second teacher Edmund Landau is seen in this. After his year in Germany, Fekete returned to Budapest where he taught in secondary schools for eighteen years. He met the mathematics teacher Dora Lenk and, in 1914, they married. Michael and Dora Fekete had two sons but, sadly, Dora died in 1922. During these years in which he was a school teacher, Fekete also taught at the university as a docent. One of the university students he taught was John von Neumann and they published the joint paper Über die Lage der Nullstellen gewisser Minimum Polynome Ⓣ in 1922. In fact Fekete had taught von Neumann while he was still at school as he had been employed as a private tutor. By this time Fekete had already published about 20 papers, but this was von Neumann's first paper published only one year after he completed his studies at the Gymnasium. This paper looked at the transfinite diameter of a set, a concept which Fekete worked on throughout the rest of his career. Fekete emigrated in 1928 when he became a lecturer at the Hebrew University of Jerusalem. It was a new university, founded on Mount Scopus three years before Fekete took up the lectureship there. 
After working as a lecturer for a year, he was made a professor and appointed as Director of the University's Einstein Institute of Mathematics [2]:- At the university he played an important role in the administration, was Dean of Science, and later the Rector from 1945-1948. His period as rector was important for the Hebrew University for in 1948 Jerusalem was divided into Israeli and Jordanian sectors with Mount Scopus in the east part which became Jordanian. The university was then moved to Giv'at Ram in the Israeli part. Fekete retired in 1955 and in the year of his retirement he attended the International Congress of Mathematicians in Amsterdam. He gave his lecture Transfinite diameter and Fourier series to the congress on Monday, 6 September, to Section IId with Jean Leray in the chair. Also in the year in which he retired he received his greatest honour when presented with the Israel Prize for Exact Sciences. Retirement did not mean that Fekete gave up research but continued working on ideas that had fascinated him throughout his career. In 1958, one year after his death, his paper New methods of summability was published by the London Mathematical Society. V F Cowling begins a review (which explains how it came to be written) as follows:- In 1916 the author showed that analytic continuation of a power series can be represented as a matrix-transformation of the partial sums by [a specific] upper triangular matrix .... This appeared in a Hungarian textbook by Beke (1916) an English summary of which is to be found in a paper of Vermes (1949). This method of summability has subsequently been called the 'Taylor method'. In the present note (a summary of a lecture given by the author in 1954 as compiled by P Vermes) two new methods of summability are introduced. 
In [2] Fekete's last few years are described; note that he married Erna Baruch during these late years:-

Fekete was a genuine and enthusiastic mathematician and a very fertile one, as can be seen from the long list of his publications [containing 77 items]. Even as an old man he had preserved his youthful enthusiasm and his capacity for work - in fact, he died over his desk doing mathematics. He travelled widely in his late years, both in Europe and in the United States of America, and loved lecturing on his problems wherever he could. The small energetic man with fiery eyes and an unruly Einstein mane of white hair on his fine large head was a well-known visitor and speaker at mathematical conferences and seminars everywhere.

Fekete advised several doctoral students who went on to become world-leading mathematicians, perhaps the most famous being Aryeh Dvoretzky and Menahem Max Schiffer. Let us end this short biography by recording Rogosinski's debt to Fekete [2]:-

Fekete's genuine love of mathematics showed also in his keen interest in the work of other, and in particular younger, mathematicians. I myself met him first in 1923 at a conference at Innsbruck when his interest in some early work of my own was so encouraging to me at the beginning of my career. My case is not isolated, and in this way he has made himself many life-long friends.

1. J Balázs, The scientific work of the late Michael Fekete (Hungarian), Mat. Lapok 9 (1958), 197-224.
2. W W Rogosinski, Obituary : Michael Fekete, J. London Math. Soc. 33 (1958), 496-500.
3. The list of the papers of the late M Fekete, Mat. Lapok 9 (1958), 1-5.
4. S Agmon, Prof Fekete on his 70th birthday (Hebrew), Riveon Lematematika 10 (1956), 1-7.

Additional Resources
Other websites about Michael Fekete
Honours awarded to Michael Fekete

Written by J J O'Connor and E F Robertson
Last Update March 2011
Giancoli 7th "Global" Edition, Chapter 16, Problem 5 Giancoli Answers, including solutions and videos, is copyright © 2009-2024 Shaun Dychko, Vancouver, BC, Canada. Giancoli Answers is not affiliated with the textbook publisher. Book covers, titles, and author names appear for reference purposes only and are the property of their respective owners. Giancoli Answers is your best source for the 7th and 6th edition Giancoli physics solutions.
Samacheer Kalvi 9th Maths Guide Chapter 4 Geometry Ex 4.5

Students can download Maths Chapter 4 Geometry Ex 4.5 Questions and Answers, Notes, Samacheer Kalvi 9th Maths Guide Pdf helps you to revise the complete Tamilnadu State Board New Syllabus, helps students complete homework assignments and to score high marks in board exams.

Tamilnadu Samacheer Kalvi 9th Maths Solutions Chapter 4 Geometry Ex 4.5

Question 1.
Construct the ΔLMN such that LM = 7.5 cm, MN = 5 cm and LN = 8 cm. Locate its centroid.
Steps for construction:
Step 1: Draw the ΔLMN using the given measurements LM = 7.5 cm, MN = 5 cm and LN = 8 cm.
Step 2: Construct the perpendicular bisectors of any two sides LM and MN, intersecting LM at P and MN at Q respectively.
Step 3: Draw the medians LQ and PN; they meet at G. The point G is the centroid of the given ΔLMN.

Question 2.
Draw and locate the centroid of the triangle ABC with a right angle at A, AB = 4 cm and AC = 3 cm.
Steps for construction:
Step 1: Draw the ΔABC using the given measurements AB = 4 cm, AC = 3 cm and ∠A = 90°.
Step 2: Construct the perpendicular bisectors of any two sides AB and AC to find the mid-points P and Q of AB and AC.
Step 3: Draw the medians PC and BQ, which intersect at G. The point G is the centroid of the given ΔABC.

Question 3.
Draw the ΔABC, where AB = 6 cm, ∠B = 110° and AC = 9 cm, and construct the centroid.
Steps for construction:
Step 1: Draw the ΔABC using the given measurements AB = 6 cm, AC = 9 cm and ∠B = 110°.
Step 2: Construct the perpendicular bisectors of any two sides AB and BC to find the mid-points P and Q of AB and BC.
Step 3: Draw the medians PC and AQ, which intersect at G. The point G is the centroid of the given ΔABC.

Question 4.
Construct the ΔPQR such that PQ = 5 cm, PR = 6 cm and ∠QPR = 60°, and locate its centroid.
Steps for construction:
Step 1: Draw ΔPQR using the given measurements PQ = 5 cm, PR = 6 cm and ∠P = 60°.
Step 2: Construct the perpendicular bisectors of any two sides PQ and QR to find their mid-points M and N respectively.
Step 3: Draw the medians PN and MR and let them meet at G. The point G is the centroid of the given ΔPQR.

Question 5.
Draw ΔPQR with sides PQ = 7 cm, QR = 8 cm and PR = 5 cm and construct its orthocentre.
Steps for construction:
Step 1: Draw the ΔPQR with the given measurements.
Step 2: Construct altitudes from any two vertices P and Q to their opposite sides QR and PR respectively.
Step 3: The point of intersection of the altitudes, H, is the orthocentre of the given ΔPQR.

Question 6.
Draw an equilateral triangle of side 6.5 cm and locate its orthocentre.
Steps for construction:
Step 1: Draw the ΔABC with the given measurements.
Step 2: Construct altitudes from any two vertices A and C to their opposite sides BC and AB respectively.
Step 3: The point of intersection of the altitudes, H, is the orthocentre of the given ΔABC.

Question 7.
Draw ΔABC, where AB = 6 cm, ∠B = 110° and BC = 5 cm, and construct its orthocentre.
Steps for construction:
Step 1: Draw the ΔABC with the given measurements.
Step 2: Construct altitudes from any two vertices B and C to their opposite sides AC and AB respectively.
Step 3: The point of intersection of the altitudes, H, is the orthocentre of the given ΔABC.

Question 8.
Draw and locate the orthocentre of a right triangle PQR where PQ = 4.5 cm, QR = 6 cm and PR = 7.5 cm.
Steps for construction:
Step 1: Draw the ΔPQR with the given measures.
Step 2: Construct altitudes from any two vertices Q and R to their opposite sides PR and PQ respectively.
Step 3: The point of intersection of the altitudes, H, is the orthocentre of the given ΔPQR.
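The centroid constructions above can be cross-checked numerically: the centroid is the average of the three vertex coordinates, and it lies two-thirds of the way along each median from the vertex. The sketch below places Question 1's triangle (LM = 7.5, MN = 5, LN = 8) in the plane; the coordinate placement is my own choice, not part of the compass-and-ruler construction.

```python
from math import isclose, sqrt

# Place Question 1's triangle LMN: L at the origin, M on the x-axis.
LM, MN, LN = 7.5, 5.0, 8.0
L = (0.0, 0.0)
M = (LM, 0.0)
nx = (LN**2 - MN**2 + LM**2) / (2 * LM)   # intersect circles of radii LN and MN
N = (nx, sqrt(LN**2 - nx**2))

def centroid(*pts):
    """Centroid G = average of the vertices; it is where the medians meet."""
    return tuple(sum(c) / len(pts) for c in zip(*pts))

G = centroid(L, M, N)

# G divides the median LQ (Q = midpoint of MN) in the ratio 2:1 from L,
# which is equivalent to 3G = L + 2Q.
Q = ((M[0] + N[0]) / 2, (M[1] + N[1]) / 2)
assert isclose(3 * G[0], L[0] + 2 * Q[0]) and isclose(3 * G[1], L[1] + 2 * Q[1])
print(G)
```

The same check works for the other centroid questions; only the three side lengths (or the two sides and the included angle) change.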
ElGamal Cryptographic System

In 1984, T. Elgamal announced a public-key scheme based on discrete logarithms, closely related to the Diffie-Hellman technique [ELGA84, ELGA85]. The ElGamal cryptosystem is used in some form in a number of standards including the digital signature standard (DSS), which is covered in Chapter 13, and the S/MIME e-mail standard (Chapter 18).

As with Diffie-Hellman, the global elements of ElGamal are a prime number q and a, which is a primitive root of q. User A generates a private/public key pair as follows:

1. Generate a random integer XA, such that 1 < XA < q - 1.
2. Compute YA = a^XA mod q.
3. A's private key is XA; A's public key is {q, a, YA}.

Any user B that has access to A's public key can encrypt a message as follows:

1. Represent the message as an integer M in the range 0 ≤ M ≤ q - 1. Longer messages are sent as a sequence of blocks, with each block being an integer less than q.
2. Choose a random integer k such that 1 ≤ k ≤ q - 1.
3. Compute a one-time key K = (YA)^k mod q.
4. Encrypt M as the pair of integers (C1, C2) where

   C1 = a^k mod q;  C2 = KM mod q

User A recovers the plaintext as follows:

1. Recover the key by computing K = (C1)^XA mod q.
2. Compute M = (C2 K^-1) mod q.

These steps are summarized in Figure 10.3. It corresponds to Figure 9.1a: Alice generates a public/private key pair; Bob encrypts using Alice's public key; and Alice decrypts using her private key.

Let us demonstrate why the ElGamal scheme works. First, we show how K is recovered by the decryption process:

K = (YA)^k mod q              K is defined during the encryption process
K = (a^XA mod q)^k mod q      substitute using YA = a^XA mod q
K = a^(k·XA) mod q            by the rules of modular arithmetic
K = (C1)^XA mod q             substitute using C1 = a^k mod q

Next, using K, we recover the plaintext as

C2 = KM mod q
(C2 K^-1) mod q = K M K^-1 mod q = M mod q = M

We can restate the ElGamal process as follows, using Figure 10.3.

1. Bob generates a random integer k.
2. Bob generates a one-time key K using Alice's public-key components YA, q, and k.
3. Bob encrypts k using the public-key component a, yielding C1. C1 provides sufficient information for Alice to recover K.
4. Bob encrypts the plaintext message M using K.
5. Alice recovers K from C1 using her private key.
6. Alice uses K^-1 to recover the plaintext message from C2.

Thus, K functions as a one-time key, used to encrypt and decrypt the message.

For example, let us start with the prime field GF(19); that is, q = 19. It has primitive roots {2, 3, 10, 13, 14, 15}, as shown in Table 8.3. We choose a = 10. Alice generates a key pair as follows:

1. Alice chooses XA = 5.
2. Then YA = a^XA mod q = 10^5 mod 19 = 3 (see Table 8.3).
3. Alice's private key is 5; Alice's public key is {q, a, YA} = {19, 10, 3}.

Suppose Bob wants to send the message with the value M = 17. Then,

1. Bob chooses k = 6.
2. Then K = (YA)^k mod q = 3^6 mod 19 = 729 mod 19 = 7.
3. So

   C1 = a^k mod q = 10^6 mod 19 = 11
   C2 = KM mod q = 7 × 17 mod 19 = 119 mod 19 = 5

4. Bob sends the ciphertext (11, 5).

For decryption:

1. Alice calculates K = (C1)^XA mod q = 11^5 mod 19 = 161051 mod 19 = 7.
2. Then K^-1 in GF(19) is 7^-1 mod 19 = 11.
3. Finally, M = (C2 K^-1) mod q = 5 × 11 mod 19 = 55 mod 19 = 17.

If a message must be broken up into blocks and sent as a sequence of encrypted blocks, a unique value of k should be used for each block. If k is used for more than one block, knowledge of one block M1 of the message enables the user to compute other blocks as follows. Let

C1 = a^k mod q;  C2,1 = K M1 mod q
C1 = a^k mod q;  C2,2 = K M2 mod q

Then C2,1 / C2,2 = M1 / M2 mod q, so if M1 is known, M2 is easily computed as M2 = (C2,1)^-1 C2,2 M1 mod q.

The security of ElGamal is based on the difficulty of computing discrete logarithms. To recover A's private key, an adversary would have to compute XA = dlog_a,q(YA). Alternatively, to recover the one-time key K, an adversary would have to determine the random number k, and this would require computing the discrete logarithm k = dlog_a,q(C1). [STIN06] points out that these calculations are regarded as infeasible if q is at least 300 decimal digits and q - 1 has at least one "large" prime factor.
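The scheme and the worked example above can be verified with a short Python sketch. The function names are my own; Python's built-in three-argument pow handles the modular exponentiation, and pow(x, -1, q) (Python 3.8+) gives the modular inverse.

```python
# ElGamal over GF(19), following the worked example: q = 19, a = 10.
q, a = 19, 10          # global elements: prime q and primitive root a

# Key generation (Alice): private key XA, public component YA = a^XA mod q.
XA = 5
YA = pow(a, XA, q)     # = 3, so Alice's public key is {19, 10, 3}

def encrypt(M, k):
    """Encrypt block M with public key {q, a, YA} and one-time integer k."""
    K = pow(YA, k, q)      # one-time key K = YA^k mod q
    C1 = pow(a, k, q)      # C1 = a^k mod q
    C2 = (K * M) % q       # C2 = K*M mod q
    return C1, C2

def decrypt(C1, C2):
    """Recover M using Alice's private key XA."""
    K = pow(C1, XA, q)                 # K = C1^XA mod q
    return (C2 * pow(K, -1, q)) % q    # M = C2 * K^-1 mod q

C1, C2 = encrypt(17, 6)    # Bob: M = 17, k = 6
print((C1, C2))            # (11, 5), as in the text
print(decrypt(C1, C2))     # 17

# Danger of reusing k: with K fixed, C2,1/C2,2 = M1/M2 mod q, so knowing
# the first plaintext block M1 = 17 reveals the second block.
C1b, C2b = encrypt(12, 6)              # second block M2 = 12, same k = 6
M2 = (pow(C2, -1, q) * C2b * 17) % q   # M2 = (C2,1)^-1 * C2,2 * M1 mod q
print(M2)                              # 12
```

With a realistic modulus, q would be hundreds of digits and XA and k would be drawn from a cryptographically secure random source; the tiny q = 19 here only mirrors the textbook example.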
Which L_p norm is the fairest? Approximations for fair facility location across all "p"

Friday, March 31, 2023 - 1:00pm for 1 hour (actually 50 minutes)

Jai Moondra – Georgia Tech CS – jmoondra3@gatech.edu – https://jaimoondra.github.io/

The classic facility location problem seeks to open a set of facilities to minimize the cost of opening the chosen facilities and the total cost of connecting all the clients to their nearby open facilities. Such an objective may induce an unequal cost over certain socioeconomic groups of clients (i.e., total distance traveled by clients in such a group). This is important when planning the location of socially relevant facilities such as emergency rooms and grocery stores. In this work, we consider a fair version of the problem by minimizing the L_p-norm of the total distances traveled by clients across different socioeconomic groups and the cost of opening facilities, to penalize high access costs to open facilities across r groups of clients. This generalizes classic facility location (p = 1) and the minimization of the maximum total distance traveled by clients in any group (p = infinity). However, it is often unclear how to select a specific "p" to model the cost of unfairness. To get around this, we show the existence of a small portfolio of at most (log2 r + 1) solutions for r (disjoint) client groups, such that for any L_p-norm, at least one of the solutions is a constant-factor approximation in that norm. We also show that such a dependence on r is necessary by exhibiting instances where at least ~sqrt(log2 r) solutions are required in such a portfolio. Moreover, we give efficient algorithms to find such a portfolio of solutions. Additionally, we introduce the notion of refinement across the solutions in the portfolio. This property ensures that once a facility is closed in one of the solutions, all clients assigned to it are reassigned to a single facility and not split across open facilities.
We give a poly(exp(sqrt(r)))-approximation for refinement in general metrics and an O(log r)-approximation for the line and tree metrics. This is joint work with Swati Gupta and Mohit Singh.
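To make the objective concrete, the following small sketch evaluates "opening cost plus L_p-norm of per-group total distances" for two hand-made candidate solutions (all numbers are invented for illustration; this is not from the talk):

```python
# Illustrative only: evaluate the fair facility-location objective
# (opening cost + L_p norm of per-group total distances) for two
# invented candidate solutions.

def objective(open_cost, group_dists, p):
    """Opening cost plus the L_p norm of the per-group distance totals."""
    if p == float("inf"):
        return open_cost + max(group_dists)
    return open_cost + sum(d**p for d in group_dists) ** (1.0 / p)

# Solution A: cheap to open, but one group travels far.
# Solution B: pricier to open, but travel is balanced across groups.
A = (10.0, [2.0, 3.0, 30.0])
B = (18.0, [9.0, 10.0, 11.0])

for p in (1, 2, float("inf")):
    best = min(("A", objective(*A, p)), ("B", objective(*B, p)),
               key=lambda t: t[1])
    print(p, best)
# p = 1 (total distance) favors A, while p = inf (max group distance)
# favors B: no single solution is best for every p, which is exactly
# what motivates keeping a small portfolio of solutions.
```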
pub struct NonZeroU128(/* private fields */); Expand description An integer that is known not to equal zero. This enables some memory layout optimization. For example, Option<NonZeroU128> is the same size as u128: use std::mem::size_of; assert_eq!(size_of::<Option<core::num::NonZeroU128>>(), size_of::<u128>()); NonZeroU128 is guaranteed to have the same layout and bit validity as u128 with the exception that 0 is not a valid instance. Option<NonZeroU128> is guaranteed to be compatible with u128, including in FFI. Thanks to the null pointer optimization, NonZeroU128 and Option<NonZeroU128> are guaranteed to have the same size and alignment: use std::num::NonZeroU128; assert_eq!(size_of::<NonZeroU128>(), size_of::<Option<NonZeroU128>>()); assert_eq!(align_of::<NonZeroU128>(), align_of::<Option<NonZeroU128>>()); Creates a non-zero without checking whether the value is non-zero. This results in undefined behaviour if the value is zero. The value must not be zero. Creates a non-zero if the given value is not zero. Returns the value as a primitive type. 1.53.0 (const: 1.53.0) · source Returns the number of leading zeros in the binary representation of self. On many architectures, this function can perform better than leading_zeros() on the underlying integer type, as special handling of zero can be avoided. Basic usage: let n = std::num::NonZeroU128::new(u128::MAX).unwrap(); assert_eq!(n.leading_zeros(), 0); 1.53.0 (const: 1.53.0) · source Returns the number of trailing zeros in the binary representation of self. On many architectures, this function can perform better than trailing_zeros() on the underlying integer type, as special handling of zero can be avoided. Basic usage: let n = std::num::NonZeroU128::new(0b0101000).unwrap(); assert_eq!(n.trailing_zeros(), 3); 1.64.0 (const: 1.64.0) · source Adds an unsigned integer to a non-zero value. Checks for overflow and returns None on overflow. As a consequence, the result cannot wrap to zero. 
let one = NonZeroU128::new(1)?; let two = NonZeroU128::new(2)?; let max = NonZeroU128::new(u128::MAX)?; assert_eq!(Some(two), one.checked_add(1)); assert_eq!(None, max.checked_add(1)); 1.64.0 (const: 1.64.0) · source Adds an unsigned integer to a non-zero value. Return NonZeroU128::MAX on overflow. let one = NonZeroU128::new(1)?; let two = NonZeroU128::new(2)?; let max = NonZeroU128::new(u128::MAX)?; assert_eq!(two, one.saturating_add(1)); assert_eq!(max, max.saturating_add(1)); 🔬This is a nightly-only experimental API. (nonzero_ops #84186) Adds an unsigned integer to a non-zero value, assuming overflow cannot occur. Overflow is unchecked, and it is undefined behaviour to overflow even if the result would wrap to a non-zero value. The behaviour is undefined as soon as self + rhs > u128::MAX. let one = NonZeroU128::new(1)?; let two = NonZeroU128::new(2)?; assert_eq!(two, unsafe { one.unchecked_add(1) }); 1.64.0 (const: 1.64.0) · source Returns the smallest power of two greater than or equal to n. Checks for overflow and returns None if the next power of two is greater than the type’s maximum value. As a consequence, the result cannot wrap to zero. let two = NonZeroU128::new(2)?; let three = NonZeroU128::new(3)?; let four = NonZeroU128::new(4)?; let max = NonZeroU128::new(u128::MAX)?; assert_eq!(Some(two), two.checked_next_power_of_two() ); assert_eq!(Some(four), three.checked_next_power_of_two() ); assert_eq!(None, max.checked_next_power_of_two() ); Returns the base 2 logarithm of the number, rounded down. This is the same operation as u128::ilog2, except that it has no failure cases to worry about since this value can never be zero. assert_eq!(NonZeroU128::new(7).unwrap().ilog2(), 2); assert_eq!(NonZeroU128::new(8).unwrap().ilog2(), 3); assert_eq!(NonZeroU128::new(9).unwrap().ilog2(), 3); Returns the base 10 logarithm of the number, rounded down. 
This is the same operation as u128::ilog10, except that it has no failure cases to worry about since this value can never be zero. assert_eq!(NonZeroU128::new(99).unwrap().ilog10(), 1); assert_eq!(NonZeroU128::new(100).unwrap().ilog10(), 2); assert_eq!(NonZeroU128::new(101).unwrap().ilog10(), 2); 🔬This is a nightly-only experimental API. (num_midpoint #110840) Calculates the middle point of self and rhs. midpoint(a, b) is (a + b) >> 1 as if it were performed in a sufficiently-large signed integral type. This implies that the result is always rounded towards negative infinity and that no overflow will ever occur. let one = NonZeroU128::new(1)?; let two = NonZeroU128::new(2)?; let four = NonZeroU128::new(4)?; assert_eq!(one.midpoint(four), two); assert_eq!(four.midpoint(one), two); 1.64.0 (const: 1.64.0) · source Multiplies two non-zero integers together. Checks for overflow and returns None on overflow. As a consequence, the result cannot wrap to zero. let two = NonZeroU128::new(2)?; let four = NonZeroU128::new(4)?; let max = NonZeroU128::new(u128::MAX)?; assert_eq!(Some(four), two.checked_mul(two)); assert_eq!(None, max.checked_mul(two)); 1.64.0 (const: 1.64.0) · source Multiplies two non-zero integers together. Return NonZeroU128::MAX on overflow. let two = NonZeroU128::new(2)?; let four = NonZeroU128::new(4)?; let max = NonZeroU128::new(u128::MAX)?; assert_eq!(four, two.saturating_mul(two)); assert_eq!(max, four.saturating_mul(max)); 🔬This is a nightly-only experimental API. (nonzero_ops #84186) Multiplies two non-zero integers together, assuming overflow cannot occur. Overflow is unchecked, and it is undefined behaviour to overflow even if the result would wrap to a non-zero value. The behaviour is undefined as soon as self * rhs > u128::MAX. let two = NonZeroU128::new(2)?; let four = NonZeroU128::new(4)?; assert_eq!(four, unsafe { two.unchecked_mul(two) }); 1.64.0 (const: 1.64.0) · source Raises non-zero value to an integer power. 
Checks for overflow and returns None on overflow. As a consequence, the result cannot wrap to zero. let three = NonZeroU128::new(3)?; let twenty_seven = NonZeroU128::new(27)?; let half_max = NonZeroU128::new(u128::MAX / 2)?; assert_eq!(Some(twenty_seven), three.checked_pow(3)); assert_eq!(None, half_max.checked_pow(3)); 1.64.0 (const: 1.64.0) · source Raises non-zero value to an integer power. Returns NonZeroU128::MAX on overflow. let three = NonZeroU128::new(3)?; let twenty_seven = NonZeroU128::new(27)?; let max = NonZeroU128::new(u128::MAX)?; assert_eq!(twenty_seven, three.saturating_pow(3)); assert_eq!(max, max.saturating_pow(3)); 1.59.0 (const: 1.59.0) · source Returns true if and only if self == (1 << k) for some k. On many architectures, this function can perform better than is_power_of_two() on the underlying integer type, as special handling of zero can be avoided. Basic usage: let eight = std::num::NonZeroU128::new(8).unwrap(); let ten = std::num::NonZeroU128::new(10).unwrap(); assert!(eight.is_power_of_two()); assert!(!ten.is_power_of_two()); The smallest value that can be represented by this non-zero integer type, 1. assert_eq!(NonZeroU128::MIN.get(), 1u128); The largest value that can be represented by this non-zero integer type, equal to u128::MAX. assert_eq!(NonZeroU128::MAX.get(), u128::MAX); The size of this non-zero integer type in bits. This value is equal to u128::BITS. assert_eq!(NonZeroU128::BITS, u128::BITS); Trait Implementations§ The resulting type after applying the | operator. The resulting type after applying the | operator. The resulting type after applying the | operator. This operation rounds towards zero, truncating any fractional part of the exact result, and cannot panic. The resulting type after applying the / operator. Converts a NonZeroU128 into an u128 Converts NonZeroU16 to NonZeroU128 losslessly. Converts NonZeroU32 to NonZeroU128 losslessly. Converts NonZeroU64 to NonZeroU128 losslessly. Converts NonZeroU8 to NonZeroU128 losslessly. The associated error which can be returned from parsing.
Parses a string to return a value of this type. Compares and returns the maximum of two values. Compares and returns the minimum of two values. Restrict a value to a certain interval. This method tests for self and other values to be equal, and is used by ==. This method tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason. This method returns an ordering between values if one exists. This method tests less than (for self and other) and is used by the < operator. This method tests less than or equal to (for self and other) and is used by the <= operator. This method tests greater than (for self and other) and is used by the > operator. This method tests greater than or equal to (for self and other) and is used by the >= operator. This operation satisfies n % d == n - (n / d) * d, and cannot panic. The resulting type after applying the % operator. Attempts to convert NonZeroI128 to NonZeroU128. The type returned in the event of a conversion error. Attempts to convert NonZeroI16 to NonZeroU128. The type returned in the event of a conversion error. Attempts to convert NonZeroI32 to NonZeroU128. The type returned in the event of a conversion error. Attempts to convert NonZeroI64 to NonZeroU128. The type returned in the event of a conversion error. Attempts to convert NonZeroI8 to NonZeroU128. The type returned in the event of a conversion error. Attempts to convert NonZeroIsize to NonZeroU128. The type returned in the event of a conversion error. Attempts to convert NonZeroU128 to NonZeroI128. The type returned in the event of a conversion error. Attempts to convert NonZeroU128 to NonZeroI16. The type returned in the event of a conversion error. Attempts to convert NonZeroU128 to NonZeroI32. The type returned in the event of a conversion error. Attempts to convert NonZeroU128 to NonZeroI64. The type returned in the event of a conversion error. Attempts to convert NonZeroU128 to NonZeroI8.
The type returned in the event of a conversion error. Attempts to convert NonZeroU128 to NonZeroIsize. The type returned in the event of a conversion error. Attempts to convert NonZeroU128 to NonZeroU16. The type returned in the event of a conversion error. Attempts to convert NonZeroU128 to NonZeroU32. The type returned in the event of a conversion error. Attempts to convert NonZeroU128 to NonZeroU64. The type returned in the event of a conversion error. Attempts to convert NonZeroU128 to NonZeroU8. The type returned in the event of a conversion error. Attempts to convert NonZeroU128 to NonZeroUsize. The type returned in the event of a conversion error. Attempts to convert NonZeroUsize to NonZeroU128. The type returned in the event of a conversion error. Attempts to convert u128 to NonZeroU128. The type returned in the event of a conversion error. Auto Trait Implementations§ Blanket Implementations§ Returns the argument unchanged. Calls U::from(self). That is, this conversion is whatever the implementation of From<T> for U chooses to do. The type returned in the event of a conversion error. Performs the conversion. The type returned in the event of a conversion error. Performs the conversion.
Categorical data
AP Statistics – Chapter 14 Practice Test: The Chi-Square Distributions
Part II, Free Response – Show all work and communicate completely and clearly.
1. Computer software generated 500 random numbers that should look like they are from the uniform distribution on the interval 0 to 1. They are categorized into five groups: (1) less than or equal to 0.2, (2) greater than 0.2 and less than or equal to 0.4, (3) greater than 0.4 and less than or equal to 0.6, (4) greater than 0.6 and less than or equal to 0.8, and (5) greater than 0.8. The counts in the five groups are 113, 95, 108, 99, and 85, respectively.
a. The probabilities for these five intervals are all the same. What is this probability?
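A quick computational check (not part of the original test): with five equal intervals each probability is 1/5 = 0.2, so each expected count is 500 × 0.2 = 100, and the chi-square statistic follows directly.

```python
# Check for part (a) and the natural follow-up: with five equal
# intervals the probability of each is 1/5 = 0.2, so each expected
# count is 500 * 0.2 = 100, and the chi-square statistic is
# sum((O - E)^2 / E) with 5 - 1 = 4 degrees of freedom.
observed = [113, 95, 108, 99, 85]
n = sum(observed)                      # 500
p_each = 1 / 5                         # 0.2
expected = [n * p_each] * 5            # five expected counts of 100
chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(p_each, chi_sq)                  # p_each = 0.2, chi_sq ≈ 4.84
```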
FIP-0059 (Synthetic PoRep)

fip: 0059
title: Synthetic PoRep
author: @Kubuxu @Luca @ @Nicola @Irene
discussion-to: #649
status: draft
type: technical
category: core
created:

Simple Summary

This proposal presents a new PoRep protocol (Synthetic PoRep) that reduces the size of the temporary data stored between PreCommit and ProveCommit (150 epochs) from ~400GiB to ~25GiB, with no impact on security. Synthetic PoRep achieves this reduction in space by shrinking the set of challenges that might be chosen during the interactive Commit step from all possible challenges to some predetermined number that is feasible to precompute.

• A Storage Provider can complete the challenge generation and vanilla proof computation before performing PreCommit on-chain, thus removing layers data before the sector is pre-committed on-chain;
• The GPU cost for SNARK generation during Commit is not significantly increased.

The current interactive PoRep protocol (PreCommit + ProveCommit) requires sealing and keeping a buffer of 12 layers (11 SDR layers + 1 data layer) between PreCommit and ProveCommit. This is the cost that this FIP targets to reduce.

Protocol Overview

Differences between the currently deployed PoRep and Synthetic PoRep are limited to challenge generation and additional capabilities for the Storage Provider.

1. Starting point
   a. We assume there is a sector S for which the Storage Provider completed the PreCommit1 and PreCommit2 computations.
   b. This sector is not listed on-chain.
   c. The Storage Provider possesses knowledge of CommR and CommD of that sector (acquired in the PreCommit2 step) and the layers needed to generate CommR.
2. Storage Provider generates “synthetic” challenges from CommR
   a. Based on CommR, the Storage Provider generates a list of N_syn challenges.
   b. The Storage Provider computes responses for all the N_syn challenges, which take the form of N_syn vanilla proofs, and saves them for future use.
3. Storage Provider can remove layers data
   a.
As the Storage Provider knows responses to all possible challenges that will be asked in the interactive step, the layers data needed to respond to challenges can be removed.
4. Storage Provider publishes “PreCommitSector”
   a. Using the same flow as today, the Storage Provider submits the sector for PreCommit.
   b. This establishes when in the future the randomness for the interactive response will be known (PreCommitChallengeDelay).
5. Storage Provider generates and publishes the ProveCommitSectors proof
   a. The Storage Provider waits PreCommitChallengeDelay (150 epochs).
   b. The randomness revealed at PreCommitEpoch + PreCommitChallengeDelay selects N_verified challenges to be verified on-chain from the N_syn challenges generated in step 2.
   c. The Storage Provider takes the N_verified vanilla proofs generated earlier that correspond to the selected challenges and computes SNARK proofs of these challenges.
   d. The Storage Provider publishes the ProveCommit in either individual or aggregated form.
6. Chain verifies proof
   a. Using the interactive randomness as a seed, the chain generates the N_verified challenges by selecting N_verified indices out of N_syn and computing them.
   b. The generated challenges are fed into proof verification.

Actor changes
• Add two new proof types to the list of proof types that can be used when pre-committing a new sector:
  □ RegisteredSealProof_SynthStackedDrg32GiBV1
  □ RegisteredSealProof_SynthStackedDrg64GiBV1
• The allowable delay of the new proof types is the same as for the StackedDRG V1.1 proof types;
• No changes in the PreCommit and ProveCommit methods used today;
• The ProveCommit passes the new proof type to the proof verification syscall.
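The two-stage challenge flow described above (synthetic challenges derived from CommR before PreCommit; a verified subset selected by on-chain randomness at ProveCommit) can be sketched schematically. The constants N_syn = 2^18 and N_verified = 176 come from this FIP; the function names, the SHA-256 derivation, and the node-count constant are all invented for illustration, and the FIP itself notes the real challenge-generation function (e.g. ChaCha20 vs Sha256) is still being evaluated.

```python
# Schematic sketch of the two-stage challenge flow; derivation details
# are invented for illustration, not the Filecoin implementation.
import hashlib

NUM_NODES = 2**30        # toy constant: leaf-node count of a sector
N_SYN = 2**18            # synthetic challenge set size (from the FIP)
N_VERIFIED = 176         # challenges actually proven on-chain (from the FIP)

def derive(seed: bytes, i: int, modulus: int) -> int:
    """Hypothetical helper: hash (seed, i) down to an index < modulus."""
    h = hashlib.sha256(seed + i.to_bytes(8, "little")).digest()
    return int.from_bytes(h[:8], "little") % modulus

def synthetic_challenges(comm_r: bytes) -> list[int]:
    # Stage 1: before PreCommit, derive all N_syn node indices from CommR,
    # compute their vanilla proofs, then drop the layer data.
    return [derive(comm_r, i, NUM_NODES) for i in range(N_SYN)]

def verified_subset(randomness: bytes, synthetic: list[int]) -> list[int]:
    # Stage 2: 150 epochs after PreCommit, on-chain randomness picks
    # N_verified indices *into the synthetic set*; only these are SNARKed.
    return [synthetic[derive(randomness, i, N_SYN)] for i in range(N_VERIFIED)]

syn = synthetic_challenges(b"example-comm-r")
chal = verified_subset(b"chain-randomness-at-precommit+150", syn)
assert len(chal) == N_VERIFIED and set(chal) <= set(syn)
```

The key point the sketch captures is that every challenge the chain can ever ask for lies in the precomputed synthetic set, which is why the layer data can be deleted before PreCommit.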
• ~~In ProveCommitAggregate, verify that all precommits share the same proof type.~~

Proof changes
• New parameters:
  □ N_syn set to 2^18;
  □ N_verified set to 176 (same as N_porep_challenges/k);
• Two new registered seal proofs:
  □ RegisteredSealProof_SynthStackedDrg32GiBV1
  □ RegisteredSealProof_SynthStackedDrg64GiBV1
• New challenge generation functions (for example [1]). Note that we are evaluating ChaCha20 in place of Sha256 (we will update this FIP accordingly).
• Proof construction and verification can use the same functions as today.

Design Rationale

Synthetic PoRep is a PoRep optimization with essentially no downside with respect to the status quo. Indeed, it would allow for more than 90% storage cost savings between PreCommit and ProveCommit. Additionally, we would have:
• No impact on the current on-chain flow;
• No need for a new Trusted Setup;
• No proving overhead on the Storage Provider side;
• No impact on PoRep security.

- Point out that NI can be an alternative.
- More PoRep challenges gives a trade-off with a reduced number of synthetic challenges, but requires a trusted setup and more proving.

Backwards Compatibility
Synthetic PoRep would become a new proof type with the same on-chain flow as the current PoRep.

Test Cases
Will be included with the implementation to be presented.

Security Considerations
In the current PoRep protocol, if more than 3.9% of nodes in a layer are wrongly encoded, then the ProveCommit step will fail with large probability (larger than 1-2^(-10)). In the new protocol, the SP first samples a set of N_syn positions and then proceeds to sample the 176 challenges from there. In order to keep the same security as before, we need the distribution of errors in the synthetic challenge set to be as close as possible to the original distribution of errors in the layer. However, the adversary can try different sets to get one where the fraction of wrongly encoded nodes is smaller than 3.9%.
Say, for example, that the adversary wants a fraction of 3.49% (this will allow it to pass with probability (1-0.0349)^176 ≈ 2^{-9} > 2^{-10}); we can show that if N = N_syn is large enough, then this is not possible. In more detail, the probability that the number of wrongly encoded nodes in the synthetic set is ≤ 0.0349·N is given by the binomial probability: $P= \sum_{i=0}^{0.0349 N} \binom{N}{i}p^i (1-p)^{N-i} \text{ with } p \geq 0.039$ and with N = N_syn ≥ 225000, then P < 2^{-80}.

Incentive Considerations
This proposal does not affect the current incentive system of the Filecoin network.

Product Considerations
This proposal reduces the hardware usage for the PoRep and therefore represents a cost-saving opportunity for Storage Providers. Moreover, Synthetic PoRep can also be beneficial in terms of sealing throughput. Today SPs need to have ~500 GiB of SSD for sealing a sector. After PC1 and PC2, this storage capacity is mostly filled with the 11 layers of SDR, which need to stay there for 150 epochs before being proved at ProveCommit. With Synthetic PoRep, only a small buffer of less than 25 GiB needs to be kept around until ProveCommit. This means that with less than 5% more SSD storage available, SPs can start sealing a new sector right after completing PC1 and PC2 of the old sector, without needing to wait for ProveCommit to be over. Note that, assuming PC1 takes almost 3h and we have 150 epochs between PreCommit and ProveCommit, this results in a possible 25% additional sealing throughput.

Implementation
In progress.

Copyright Waiver
Copyright and related rights waived via CC0.
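Returning to the binomial bound in the Security Considerations: it can be checked numerically. The log-space summation below is my own sketch (not part of the FIP); it evaluates the tail for N = 225000 and p = 0.039 and confirms the probability is astronomically small.

```python
# Numerical check of the security bound: for N = 225000 and p = 0.039,
# the probability that at most 3.49% of the synthetic challenges land
# on wrongly encoded nodes is negligible. Summation is done term-by-term
# in log space via lgamma to avoid underflow in the binomial coefficients.
import math

N, p = 225_000, 0.039
k_max = int(0.0349 * N)                     # 7852

log_p, log_q = math.log(p), math.log1p(-p)

def log_choose(n: int, k: int) -> float:
    """log of the binomial coefficient C(n, k)."""
    return math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)

tail = sum(math.exp(log_choose(N, i) + i * log_p + (N - i) * log_q)
           for i in range(k_max + 1))
print(math.log2(tail))    # well below -70, consistent with the ~2^-80 claim
```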
EXPIRED - Used HGST 3.82TB U.2 SSD. Posted $135(updated). Accepts lower. HGST HUSPR3238ADP301 3.82TB / 3820GB 2.5" PCIe U.2 SSD Solid State Drive Grade A | eBay I offered $115 each for 2x. Accepted. I'm considering going back for 2x more... PCIe 3.0 x4. 5.5PBW(0.8 Drive Writes/day) I'm starting to see a bunch of these ~4TB U.2 drives on eBay stating 70-85% remaining use, which is PLENTY for a Lab. This one's price started lower than what I was going to offer some of the other ones. As long as it has 50% remaining use I'll be more than happy. This is less than the cost of a new 2TB M.2 drive with DRAM that will come rated for 500TB written. NOTE: Can someone explain how to get the little boxed eBay preview in a post here? I couldn't figure it out. Last edited: F3 = Key Functions Working (Appendix C – Test and Repair). A subset of the primary functions of the device that an ordinary user of the device expects to function are verified working through manual or software tests. dunno. I hit the preview button first, before posting this reply. Maybe that has something to do with it? Made a kind of lowball offer, seeing as I don't have anything to put them in. Good deal, but be aware these are power hogs with poor idle support. Last edited: card version is also available at 110$/offer. Month old listing with 2 price drops and just 1 sold, put a 60$/ea offer on some. Would love to buy a bunch but any recommendations on a high density cloud server option for U2 drives like the C6320 series? Can I just buy a C6400 off ebay and change the backplane to the nvme one? Last edited: Can I just buy a C6400 off ebay and change the backplane to the nvme one? You will 100% want to get one that is already setup for nvme, its not just replacing the backplane. Good find.
I have automated searches on ebay but not for that odd disk size (3.82TB vs the usual 3.84TB) Jul 25, 2018 There are ppl using boards with up to 4-6 PCI 16x slots and use pci cards that can take 4x nvme's each with bifurcation. Gives u 16+ drives w/o a backplane. From my experience, these specific drives have terrible write performance. If you just want to use them as part of a file server that is usually used for reading files only, fine. But other than that, avoid. From my experience, these specific drives have terrible write performance. If you just want to use them as part of a file server that is usually used for reading files only, fine. But other than that, avoid. How terrible? I had a similar bad experience with PM963, tops around 1GB/s sequential write. Might as well be a SAS drive for that speed. Read is about twice as fast. card version is also available at 110$/offer. Month old listing with 2 price drops and just 1 sold, put a 60$/ea offer on some. I did likewise. Anyone have thoughts on cooling needs for this? From my experience, these specific drives have terrible write performance. If you just want to use them as part of a file server that is usually used for reading files only, fine. How bad are we talking? I'm just about to replace my current 2x NAS (z2 4TBx10 each) with a new one(z2 10TBx12). I was planning on using some NVMe drives as a Special Metadata VDEV to speed up media library scans and torrent access times. I have 4x 1TB 970 Evos for a 2x2, but I was nervous that they would fill up too fast. (I also have 5x 118GB Optane drives from that last fire sale. But I'm CERTAIN those are going to be too small...probably.) Special Metadata is mostly about read performance and latency/IOPS anyway though right? I would think metadata is more about IOPS than sequential read/write speed. I am looking at my zfs special vdev which is a backup zfs pool, and the nvme are hardly doing any writes. 
There are ppl using boards with up to 4-6 PCI 16x slots and use pci cards that can take 4x nvme's each with bifurcation. Gives u 16+ drives w/o a backplane. From my experience, these specific drives have terrible write performance. If you just want to use them as part of a file server that is usually used for reading files only, fine. But other than that, avoid. Trying to replace a multi C6220 Ceph cluster so looking for density in the node area. 4-6x slot servers would be quite the bump up in rack space. Can you point me to one of those 4x NVMe cards btw? The ones I found have no cooling and I feel like even if I manage to fit all of them in one server they will burn out in no time Last edited: You will 100% want to get one that is already setup for nvme, its not just replacing the backplane. The ones setup for NVMe on fleabay are 4x the price of the normals For the PCIe card version, vendor accepted 2 @ $75 each. We'll see what data I'll get out of them on receipt. I just realized a potential issue for my plan. The datasheet lists power usage as 25w active / 8w idle. I was planning on using 2x of these directly on a dual U.2 PCIe 8x card like... ...since I know there have been issues using cheap cables, and shorter distances are better. (I believe the issue is more with PCIe 4.0 though. Still, less cables...) But the power budget on an x8 slot is 25w... May 3, 2023 I believe those dual U.2 adapters exist with aux power connections, but you'll still have an issue cooling them because there's no great way to get the airflow moving the right direction (note those holes in the ends of the drive, air probably needs to go through those.) I have had fine results with cables on PCIe 3.0, but past two of them it started getting really difficult to route them and convince them to stay attached to the drives. ...Can you point me to one of those 4x NVMe cards btw? I have 3x of these... They come with 80mm slim fans (10mm thick) The fans are nothing special but fine. 
However, You can always grab a regular fan and have it sitting on top of multiple cards blowing down. I believe those dual U.2 adapters exist with aux power connections... I've literally been searching Aliexpress for over an hour and haven't found any with external power. ...but you'll still have an issue cooling them because there's no great way to get the airflow moving the right direction... If I did manage to find a card with external power, I'd have to figure something out. Even if it meant taking the cases off and/or drilling more vent holes in them. ...but past two of them it started getting really difficult to route them and convince them to stay attached to the drives. • I have my servers in Rosewill RSV-L4000U 4U chassis. • Both are converted to the RSV-L4500U configuration by pulling out the 3x horizontal 5.25" bays and the horizontal front panel connector and remounting them vertically. • Chassis #1 donated a fan/drive bay to Chassis #2 to fill that 3x slot. • Chassis #1 is fitted with 3x RSV-SATA-Cage-34 4-in-3 Hot-Swap Enclosures So I simultaneously have too much room, and not enough room. Those are 25" deep chassis. Enough to give me a full 7" depth between the edge of an H11SSL-i Motherboard and the fan on the back of the hot-swap enclosure. On the other hand, There isn't anywhere else to mount more drives unless/until I rig up an internal cage to mount 2.5" drives. (I had also planned to add 4x more 10TB drives over the next 18-24 months, or 4x 2TB SATA SSDs and I didn't have a real plan where to mount those either. And I don't need the SATA SSDs now that I'm buying U.2 anyway.) I'm kinda in panic mode right now. I had the old plan stuff in my Ali cart. I wasn't expecting those drives to be here in like, 36 more hours. I don't actually have a way to test them unless I pull the one M.2-to-Sff-8639 adapter I own that is currently connecting the Optane system drive on this machine. 
I gotta buy SOMETHING and get it moving, but I hate-Hate-HATE buying single-use/disposable solutions. I don't want to have to keep buying different cables for the same drives at $25 a pop because I changed from a simple slot-to-port adapter, to a PCIe switch card or even a tri-mode HBA like a 9400-8i or something. AArrgggg. I hate being on a "student budget"... May 3, 2023 I've literally been searching Aliexpress for over an hour and haven't found any with external power. Not exactly cheap but I found this: PCIe x8 Gen 4 for Bifurcated U.2 NVME Dual Port AIC I'd get the cheap dual SFF-8643 adapter card and cables then start 3D printing a cage with a fan duct, but when I was a student it would have been zipties and cardboard.
NCERT Solutions for Class 11 Physics Chapter 10 Mechanical Properties of Fluids

NCERT Solutions for Class 11 Physics Chapter 10 Mechanical Properties of Fluids are part of Class 11 Physics NCERT Solutions.

Topics and Subtopics in NCERT Solutions for Class 11 Physics Chapter 10 Mechanical Properties of Fluids:
10.1 Introduction
10.2 Pressure
10.3 Streamline flow
10.4 Bernoulli’s principle
10.5 Viscosity
10.6 Reynolds number
10.7 Surface tension

Question 10.1. Explain why
(a) The blood pressure in humans is greater at the feet than at the brain.
(b) Atmospheric pressure at a height of about 6 km decreases to nearly half of its value at the sea level, though the height of the atmosphere is more than 100 km.
(c) Hydrostatic pressure is a scalar quantity even though pressure is force divided by area.

Answer: (a) The height of the blood column is more for the feet as compared to that for the brain. Consequently, the blood pressure in humans is greater at the feet than at the brain.
(b) The variation of air density with height is not linear, so pressure also does not reduce linearly with height. The air pressure at a height h is given by P = P0 e^(-αh), where P0 represents the pressure of air at sea level and α is a constant.
(c) Due to an applied force on a liquid, the pressure is transmitted equally in all directions inside the liquid. That is why there is no fixed direction for the pressure due to a liquid. Hence hydrostatic pressure is a scalar quantity.

Question 10.2. Explain why
(a) The angle of contact of mercury with glass is obtuse, while that of water with glass is acute.
(b) Water on a clean glass surface tends to spread out while mercury on the same surface tends to form drops.
(Put differently, water wets glass while mercury does not.)
(c) Surface tension of a liquid is independent of the area of the surface.
(d) Water with detergent dissolved in it should have small angles of contact.
(e) A drop of liquid under no external forces is always spherical in shape.
Answer: (a) Let a drop of a liquid L be poured on a solid surface S placed in air A. If T_SL, T_LA and T_SA are the surface tensions corresponding to the solid–liquid, liquid–air and solid–air interfaces respectively, and θ is the angle of contact between the liquid and the solid, then
T_LA cos θ + T_SL = T_SA
⇒ cos θ = (T_SA − T_SL)/T_LA
For the mercury–glass interface, T_SA < T_SL. Therefore cos θ is negative, so θ is an obtuse angle. For the water–glass interface, T_SA > T_SL. Therefore cos θ is positive, so θ is an acute angle.
(b) Water on a clean glass surface tends to spread out, i.e., water wets glass, because the force of cohesion between water molecules is much less than the force of adhesion to glass. In the case of mercury, the force of cohesion between mercury molecules is quite strong compared with the force of adhesion to glass. Consequently, mercury does not wet glass and tends to form drops.
(c) Surface tension of a liquid is the force acting per unit length on a line drawn tangentially to the liquid surface at rest. Since this force is independent of the area of the liquid surface, surface tension is also independent of the area of the liquid surface.
(d) We know that clothes have narrow pores or spaces which act as capillaries. Also, the rise of a liquid in a capillary tube is directly proportional to cos θ (here θ is the angle of contact). As θ is small for a detergent solution, cos θ will be large. Due to this, the detergent will penetrate more into the narrow pores of the clothes.
(e) We know that any system tends to remain in a state of minimum energy. In the absence of any external force, for a given volume of liquid its surface area, and consequently
surface energy, is least for a spherical shape. It is for this reason that a liquid drop, in the absence of an external force, is spherical in shape.
Question 10.3. Fill in the blanks using the words from the list appended with each statement:
(a) Surface tension of liquids generally ............ with temperature. (increases/decreases)
(b) Viscosity of gases ............ with temperature, whereas viscosity of liquids ............ with temperature. (increases/decreases)
(c) For solids with elastic modulus of rigidity, the shearing force is proportional to ............ , while for fluids it is proportional to ............ (shear strain/rate of shear strain)
(d) For a fluid in steady flow, the increase in flow speed at a constriction follows from ............ , while the decrease of pressure there follows from ............ (conservation of mass/Bernoulli's principle)
(e) For the model of a plane in a wind tunnel, turbulence occurs at a ............ speed than the critical speed for turbulence for an actual plane. (greater/smaller)
Answer: (a) decreases (b) increases; decreases (c) shear strain; rate of shear strain (d) conservation of mass; Bernoulli's principle (e) greater.
Question 10.4. Explain why
(a) To keep a piece of paper horizontal, you should blow over, not under, it.
(b) When we try to close a water tap with our fingers, fast jets of water gush through the openings between our fingers.
(c) The size of the needle of a syringe controls the flow rate better than the thumb pressure exerted by a doctor while administering an injection.
(d) A fluid flowing out of a small hole in a vessel results in a backward thrust on the vessel.
(e) A spinning cricket ball in air does not follow a parabolic trajectory.
Answer: (a) When we blow over the piece of paper, the velocity of air increases. As a result, the pressure on it decreases in accordance with Bernoulli's theorem, whereas the pressure below remains the same (atmospheric pressure). Thus, the paper remains horizontal.
(b) By doing so the area of the outlet of the water jet is reduced, so the velocity of water increases according to the equation of continuity, av = constant.
(c) For a constant height, Bernoulli's theorem is expressed as
P + (1/2)ρv² = constant
In this equation, the pressure P occurs with a single power whereas the velocity occurs with a square power. Therefore, the velocity has more effect than the pressure. It is for this reason that the needle of the syringe controls the flow rate better than the thumb pressure exerted by the doctor.
(d) This is because of the principle of conservation of momentum. While the flowing fluid carries forward momentum, the vessel gets a backward momentum.
(e) A spinning cricket ball would have followed a parabolic trajectory had there been no air. But because of air, the Magnus effect takes place. Due to the Magnus effect, the spinning cricket ball deviates from its parabolic trajectory.
Question 10.5. A 50 kg girl wearing high heel shoes balances on a single heel. The heel is circular with a diameter of 1.0 cm. What is the pressure exerted by the heel on the horizontal floor?
Question 10.6. Torricelli's barometer used mercury. Pascal duplicated it using French wine of density 984 kg m⁻³. Determine the height of the wine column for normal atmospheric pressure.
Question 10.7. A vertical off-shore structure is built to withstand a maximum stress of 10⁹ Pa.
Is the structure suitable for putting up on top of an oil well in the ocean? Take the depth of the ocean to be roughly 3 km, and ignore ocean currents.
Answer: Here, maximum stress = 10⁹ Pa, h = 3 km = 3 × 10³ m, ρ (water) = 10³ kg/m³ and g = 9.8 m/s².
The structure will be suitable for putting up on top of an oil well provided the pressure exerted by sea water is less than the maximum stress it can bear.
Pressure due to sea water, P = hρg = 3 × 10³ × 10³ × 9.8 Pa = 2.94 × 10⁷ Pa
Since the pressure of sea water is less than the maximum stress of 10⁹ Pa, the structure is suitable.
Question 10.8. A hydraulic automobile lift is designed to lift cars with a maximum mass of 3000 kg. The area of cross-section of the piston carrying the load is 425 cm². What maximum pressure would the smaller piston have to bear?
Answer: The pressure on the piston carrying the load is P = Mg/A = (3000 × 9.8)/(425 × 10⁻⁴) ≈ 6.92 × 10⁵ Pa. Since pressure is transmitted undiminished throughout a liquid (Pascal's law), this is also the maximum pressure that the smaller piston would have to bear.
Question 10.9. A U-tube contains water and methylated spirit separated by mercury. The mercury columns in the two arms are in level with 10.0 cm of water in one arm and 12.5 cm of spirit in the other. What is the relative density of spirit?
Answer: For the water column in one arm of the U-tube, h₁ = 10.0 cm, ρ₁ (density) = 1 g cm⁻³. For the spirit column in the other arm, h₂ = 12.5 cm, ρ₂ = ?
As the mercury columns in the two arms of the U-tube are in level, the pressure exerted by each is equal. Hence h₁ρ₁g = h₂ρ₂g, or ρ₂ = h₁ρ₁/h₂ = (10 × 1)/12.5 = 0.8 g cm⁻³.
Therefore, relative density of spirit = ρ₂/ρ₁ = 0.8/1 = 0.8.
Question 10.10. In Q.9, if 15.0 cm of water and spirit each are further poured into the respective arms of the tube, what is the difference in the levels of mercury in the two arms? (Relative density of mercury = 13.6)
Answer:
Height of the water column, h₁ = 10 + 15 = 25 cm
Height of the spirit column, h₂ = 12.5 + 15 = 27.5 cm
Density of water, ρ₁ = 1 g cm⁻³
Density of spirit, ρ₂ = 0.8 g cm⁻³
Density of mercury = 13.6 g cm⁻³
Let h be the difference between the levels of mercury in the two arms.
Pressure exerted by the height h of the mercury column: hρg = h × 13.6g … (i)
Difference between the pressures exerted by water and spirit: ρ₁h₁g − ρ₂h₂g = g(25 × 1 − 27.5 × 0.8) = 3g … (ii)
Equating equations (i) and (ii), we get:
13.6 hg = 3g
h = 0.220588 ≈ 0.221 cm
Hence, the difference between the levels of mercury in the two arms is 0.221 cm.
Question 10.11. Can Bernoulli's equation be used to describe the flow of water through a rapid in a river? Explain.
Answer: Bernoulli's theorem is applicable only to ideal fluids in streamline flow. Since the flow of water through a rapid is turbulent, it cannot be treated as streamline flow, so the theorem cannot be used.
Question 10.12. Does it matter if one uses gauge instead of absolute pressures in applying Bernoulli's equation? Explain.
Answer: No, it does not matter, provided the atmospheric pressures at the two points where Bernoulli's equation is applied are not significantly different.
Question 10.13. Glycerine flows steadily through a horizontal tube of length 1.5 m and radius 1.0 cm. If the amount of glycerine collected per second at one end is 4.0 × 10⁻³ kg s⁻¹, what is the pressure difference between the two ends of the tube? (Density of glycerine = 1.3 × 10³ kg m⁻³ and viscosity of glycerine = 0.83 Pa s). [You may also like to check if the assumption of laminar flow in the tube is correct.]
Question 10.14.
Question 10.15. Figures (a) and (b) refer to the steady flow of a (non-viscous) liquid. Which of the two figures is incorrect? Why?
Answer: Figure (a) is incorrect. At the kink the area of cross-section is small, so the velocity of flow of the liquid there is large and hence, by Bernoulli's theorem, the pressure is less. As a result, the water should not rise higher in the tube at the kink.
Question 10.16.
The cylindrical tube of a spray pump has a cross-section of 8.0 cm², one end of which has 40 fine holes each of diameter 1.0 mm. If the liquid flow inside the tube is 1.5 m min⁻¹, what is the speed of ejection of the liquid through the holes?
Question 10.17. A U-shaped wire is dipped in a soap solution, and removed. A thin soap film formed between the wire and a light slider supports a weight of 1.5 × 10⁻² N (which includes the small weight of the slider). The length of the slider is 30 cm. What is the surface tension of the film?
Answer: In the present case the force of surface tension is balancing the weight of 1.5 × 10⁻² N, hence the force of surface tension is F = 1.5 × 10⁻² N. The total length of the liquid film is l = 2 × 30 cm = 60 cm = 0.6 m, because the liquid film has two surfaces.
Surface tension, T = F/l = 1.5 × 10⁻² N / 0.6 m = 2.5 × 10⁻² N m⁻¹
Question 10.18. Figure (a) below shows a thin film supporting a small weight = 4.5 × 10⁻² N. What is the weight supported by a film of the same liquid at the same temperature in Figs. (b) and (c)? Explain your answer physically.
Answer: (a) Here, the length of the film supporting the weight = 40 cm = 0.4 m, and the total weight supported (or force) = 4.5 × 10⁻² N. The film has two free surfaces, so
Surface tension, S = 4.5 × 10⁻² / (2 × 0.4) = 5.625 × 10⁻² N m⁻¹
Since the liquid is the same for all the cases (a), (b) and (c), and the temperature is also the same, the surface tension for cases (b) and (c) will also be 5.625 × 10⁻² N m⁻¹. In Figs. (b) and (c), the length of the film supporting the weight is also the same as in (a), hence the total weight supported in each case is 4.5 × 10⁻² N.
Question 10.19. What is the pressure inside a drop of mercury of radius 3.0 mm at room temperature? Surface tension of mercury at that temperature (20 °C) is 4.65 × 10⁻¹ N m⁻¹. The atmospheric pressure is 1.01 × 10⁵ Pa. Also give the excess pressure inside the drop.
Answer: Excess pressure inside the mercury drop, ΔP = 2S/r = (2 × 4.65 × 10⁻¹)/(3.0 × 10⁻³) = 310 Pa. The total pressure inside is 1.01 × 10⁵ + 310 = 1.0131 × 10⁵ Pa. Since the data are correct up to three significant figures, we should write the total pressure inside the drop as 1.01 × 10⁵ Pa.
Question 10.20. What is the excess pressure inside a bubble of soap solution of radius 5.00 mm, given that the surface tension of soap solution at the temperature (20 °C) is 2.50 × 10⁻² N m⁻¹? If an air bubble of the same dimension were formed at a depth of 40.0 cm inside a container containing the soap solution (of relative density 1.20), what would be the pressure inside the bubble? (1 atmospheric pressure is 1.01 × 10⁵ Pa.)
Answer: Here the surface tension of the soap solution at room temperature is S = 2.50 × 10⁻² N m⁻¹ and the radius of the soap bubble is r = 5.00 mm = 5.00 × 10⁻³ m.
Question 10.21. A tank with a square base of area 1.0 m² is divided by a vertical partition in the middle. The bottom of the partition has a small hinged door of area 20 cm². The tank is filled with water in one compartment, and an acid (of relative density 1.7) in the other, both to a height of 4.0 m. Compute the force necessary to keep the door closed.
Question 10.22. A manometer reads the pressure of a gas in an enclosure as shown in Fig. (a). When a pump removes some of the gas, the manometer reads as in Fig. (b). The liquid used in the manometers is mercury and the atmospheric pressure is 76 cm of mercury.
(a) Give the absolute and gauge pressure of the gas in the enclosure for cases (a) and (b), in units of cm of mercury.
(b) How would the levels change in case (b) if 13.6 cm of water (immiscible with mercury) is poured into the right limb of the manometer? Ignore the small change in the volume of the gas.
Question 10.23. Two vessels have the same base area but different shapes. The first vessel takes twice the volume of water that the second vessel requires to fill up to a particular common height. Is the force exerted by the water on the base of the vessel the same in the two cases?
If so, why do the vessels filled with water to the same height give different readings on a weighing scale?
Answer: Pressure (and therefore force) on the two equal base areas is identical. But force is also exerted by the water on the sides of the vessels, and this has a non-zero vertical component when the sides of the vessel are not perfectly normal to the base. This net vertical component of the force exerted by the water on the sides is greater for the first vessel than for the second. Hence, the vessels weigh differently even though the force on the base is the same in the two cases.
Question 10.24. During blood transfusion, the needle is inserted in a vein where the gauge pressure is 2000 Pa. At what height must the blood container be placed so that blood may just enter the vein? Given: density of whole blood = 1.06 × 10³ kg m⁻³.
Answer: h = P/ρg = 2000/(1.06 × 10³ × 9.8) = 0.1925 m
The blood may just enter the vein if the height at which the blood container is kept is slightly greater than 0.1925 m, i.e., about 0.2 m.
Question 10.25. In deriving Bernoulli's equation, we equated the work done on the fluid in the tube to its change in potential and kinetic energy. (a) How does the pressure change as the fluid moves along the tube if dissipative forces are present? (b) Do the dissipative forces become more important as the fluid velocity increases? Discuss qualitatively.
Answer: (a) If dissipative forces are present, then some of the pressure energy of the liquid is spent in doing work against these forces, due to which the pressure drop along the tube becomes larger.
(b) The dissipative forces become more important with increasing flow velocity, because of turbulence.
Question 10.26. (a) What is the largest average velocity of blood flow in an artery of radius 2 × 10⁻³ m if the flow must remain laminar? (b) What is the corresponding flow rate? Take the viscosity of blood to be 2.084 × 10⁻³ Pa s. Density of blood is 1.06 × 10³ kg/m³.
Question 10.27.
A plane is in level flight at constant speed and each of its wings has an area of 25 m². If the speed of the air is 180 km/h over the lower wing and 234 km/h over the upper wing surface, determine the plane's mass. (Take the air density to be 1 kg/m³ and g = 9.8 m/s².)
Question 10.28. In Millikan's oil drop experiment, what is the terminal speed of an uncharged drop of radius 2.0 × 10⁻⁵ m and density 1.2 × 10³ kg m⁻³? Take the viscosity of air at the temperature of the experiment to be 1.8 × 10⁻⁵ Pa s. How much is the viscous force on the drop at that speed? Neglect buoyancy of the drop due to air.
Answer: Here the radius of the drop is r = 2.0 × 10⁻⁵ m, the density of the drop is ρ = 1.2 × 10³ kg/m³, and the viscosity of air is η = 1.8 × 10⁻⁵ Pa s. Neglecting the upward thrust due to air, we find the terminal speed.
Question 10.29. Mercury has an angle of contact equal to 140° with soda-lime glass. A narrow tube of radius 1.0 mm made of this glass is dipped in a trough containing mercury. By what amount does the mercury dip down in the tube relative to the liquid surface outside? Surface tension of mercury at the temperature of the experiment is 0.465 N m⁻¹. Density of mercury = 13.6 × 10³ kg m⁻³.
Question 10.30. Two narrow bores of diameters 3.0 mm and 6.0 mm are joined together to form a U-tube open at both ends. If the U-tube contains water, what is the difference in its levels in the two limbs of the tube? Surface tension of water at the temperature of the experiment is 7.3 × 10⁻² N m⁻¹. Take the angle of contact to be zero and the density of water to be 1.0 × 10³ kg m⁻³ (g = 9.8 m s⁻²).
Answer: Let r₁ be the radius of one bore and r₂ be the radius of the second bore of the U-tube. Then, if h₁ and h₂ are the heights of water on the two sides,
Question 10.31. (a) It is known that the density ρ of air decreases with height y as ρ = ρ₀e^(−y/y₀), where ρ₀ = 1.25 kg m⁻³ is the density at sea level and y₀ is a constant. This density variation is called the law of atmospheres.
Obtain this law assuming that the temperature of the atmosphere remains constant (isothermal conditions). Also assume that the value of g remains constant.
(b) A large He balloon of volume 1425 m³ is used to lift a payload of 400 kg. Assume that the balloon maintains constant radius as it rises. How high does it rise? [Take y₀ = 8000 m and ρ_He = 0.18 kg m⁻³.]
Answer: (a) We know that the rate of decrease of the density ρ of air with height is directly proportional to the density itself:
dρ/dy = −ρ/y₀
where y₀ is a constant and the negative sign signifies that the density decreases with increasing height. On integration, we get ρ = ρ₀e^(−y/y₀).
We hope the NCERT Solutions for Class 11 Physics Chapter 10 Mechanical Properties of Fluids help you. If you have any query regarding NCERT Solutions for Class 11 Physics Chapter 10 Mechanical Properties of Fluids, drop a comment below and we will get back to you at the earliest.
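Several of the numerical answers above (Questions 10.5, 10.6, 10.8, 10.16, 10.20, 10.26–10.28 and 10.31(b)) were carried by images on the original page and did not survive extraction. As an independent cross-check — not the site's own working — the missing results can be recomputed in a few lines of Python from the data given in the questions:

```python
import math

g = 9.8  # m/s^2, as used throughout the chapter

# Q10.5: 50 kg on a circular heel of diameter 1.0 cm
A_heel = math.pi * (0.5e-2) ** 2          # heel area, m^2
P_heel = 50 * g / A_heel                  # ~6.2e6 Pa

# Q10.6: wine barometer, density 984 kg/m^3 (P_atm = 1.013e5 Pa assumed)
h_wine = 1.013e5 / (984 * g)              # ~10.5 m

# Q10.8: hydraulic lift, 3000 kg on a 425 cm^2 piston
P_lift = 3000 * g / 425e-4                # ~6.9e5 Pa (same at the small piston, by Pascal's law)

# Q10.16: spray pump, tube 8.0 cm^2, 40 holes of diameter 1.0 mm, 1.5 m/min in the tube
v_tube = 1.5 / 60                                   # m/s
a_holes = 40 * math.pi * (0.5e-3) ** 2              # total hole area, m^2
v_jet = 8.0e-4 * v_tube / a_holes                   # ~0.64 m/s (continuity)

# Q10.20: soap bubble (two surfaces), r = 5.00 mm, S = 2.50e-2 N/m
dP_soap = 4 * 2.50e-2 / 5.00e-3                     # 20 Pa
# air bubble in the solution (one surface) at 0.40 m depth, relative density 1.20
P_air_bubble = 1.01e5 + 0.40 * 1.2e3 * g + 2 * 2.50e-2 / 5.00e-3   # ~1.06e5 Pa

# Q10.26: largest laminar speed for Re = 2000 in an artery of radius 2e-3 m
v_blood = 2000 * 2.084e-3 / (1.06e3 * 4e-3)         # ~0.98 m/s, using d = 2r
Q_blood = v_blood * math.pi * (2e-3) ** 2           # ~1.2e-5 m^3/s

# Q10.27: plane mass from the Bernoulli pressure difference over 2 x 25 m^2 of wing
v1, v2 = 180 / 3.6, 234 / 3.6                       # km/h -> m/s
m_plane = 0.5 * 1.0 * (v2 ** 2 - v1 ** 2) * 50 / g  # ~4.4e3 kg

# Q10.28: Stokes terminal speed and viscous force on the oil drop
r, rho, eta = 2.0e-5, 1.2e3, 1.8e-5
v_drop = 2 * r ** 2 * rho * g / (9 * eta)           # ~5.8e-2 m/s
F_drop = 6 * math.pi * eta * r * v_drop             # ~3.9e-10 N

# Q10.31(b): the balloon rises until the displaced-air weight balances payload + helium
rho_needed = (400 + 0.18 * 1425) / 1425             # kg/m^3
y_balloon = 8000 * math.log(1.25 / rho_needed)      # ~8.0e3 m
```

These values agree with the standard answers for this exercise set (about 6.2 × 10⁶ Pa for the heel, 10.5 m for the wine column, 6.92 × 10⁵ Pa for the lift, 0.64 m/s for the jets, 20 Pa and 1.06 × 10⁵ Pa for the bubbles, 0.98 m/s and 1.2 × 10⁻⁵ m³/s for the blood flow, 4.4 × 10³ kg for the plane, 5.8 cm/s and 3.9 × 10⁻¹⁰ N for the oil drop, and roughly 8 km for the balloon).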
Could the Atlantic Overturning Circulation 'shut down'? Posted on 6 April 2020 by Guest Author This is a re-post from Carbon Brief by Dr. Richard Wood and Dr. Laura Jackson Generally, we think of climate change as a gradual process: the more greenhouse gases that humans emit, the more the climate will change. But are there any "points of no return" that commit us to irreversible change? The "Atlantic Meridional Overturning Circulation", known as the "AMOC", is one of the major current systems in the world's oceans and plays a crucial role in regulating climate. [Tipping points: this article is part of a week-long special series on "tipping points", where a changing climate could push parts of the Earth system into abrupt or irreversible change.] The AMOC is driven by a delicate balance of ocean temperatures and salinity, which is at risk of being upset by a warming climate. The latest research suggests that the AMOC is very likely to weaken this century, but a collapse is very unlikely. However, scientists are some way from being able to define exactly how much warming might push the AMOC past a tipping point. The figure below shows an illustration of the AMOC. In the North Atlantic, warm water from the subtropics travels northwards near the surface and cold – and, hence, more dense – water travels southwards at depth, typically 2–4 km below the surface. In the north, the warm surface water is cooled by the overlying atmosphere, converted to cold, dense water, and sinks to supply the deep, southward branch. Elsewhere, the cold water upwells and is warmed, re-supplying the upper, warm branch and completing the circuit. Could the AMOC collapse? The AMOC is vulnerable to climate change. As the atmosphere warms due to increasing greenhouse gases, the ability of the ocean to lose heat from the North Atlantic surface is diminished and one of the driving factors of the AMOC is weakened.
Climate-model projections of global warming this century consistently point to a weakening of the AMOC. The most recent assessments of the Intergovernmental Panel on Climate Change (IPCC) – the fifth assessment report (AR5) and special report on oceans and cryosphere in a changing climate (SROCC) – both conclude that the AMOC is “very likely” to weaken over the 21st century. Such a weakening would have a cooling effect on climate around the North Atlantic region, as the northward heat supply is slowed down. This effect is included in the climate projections, but the direct warming effect from rising concentrations of greenhouse gases is stronger, so the net result is still warming over land regions. But more dramatic changes are theoretically possible. A “tipping point” may exist beyond which the current strong AMOC becomes unsustainable. Evidence for this goes back to a seminal paper published in 1961 by one of the fathers of modern oceanography, Henry Stommel. Stommel realised that the AMOC is a kind of competition between the effects of temperature and salinity, both of which influence the density of seawater. The figure below illustrates the different possible AMOC states. In today’s climate, temperature dominates and the cold, dense high latitude water drives a strong AMOC (red curve). But in other climate states it is possible for fresh water (from rainfall or ice melt) to freshen – and so lighten – the high-latitude water; in this case, the water is not dense enough to drive the AMOC, which collapses (blue curve). If the freshwater input to the Atlantic were strong enough – from rapid melting of the Greenland ice sheet, for example – the blue dot would move to the right in the figure. According to Stommel’s model, at some point the strong AMOC state (red) becomes unsustainable and the AMOC collapses to the “off” state (blue). 
Then, even if the driving climate change were later reversed (the blue dot moving back to the left on the figure), the AMOC would stay on the blue curve and would not switch back on again until the climate had overshot the present day conditions in the opposite direction. This phenomenon is known as “hysteresis”. Long-term projections Stommel’s idea has evolved over the years, but the fundamental insight is still relevant. There is evidence that AMOC changes may have played a role in some major climate shifts of the past – most recently around 8,200 years ago as the world was emerging from the last ice age. At that time, a huge lake in northwest Canada was being held back by an ice wall. As temperatures warmed the ice wall collapsed, depositing the fresh water from the lake into the North Atlantic and interrupting the AMOC. A major cooling at this time can be seen in palaeoclimatic records across North America, Greenland and Europe. Comprehensive climate models generally do not project a complete shutdown of the AMOC in the 21st century, but recently models have been run further into the future. Under scenarios of continued high greenhouse gas concentrations, a number of models project an effective AMOC shutdown by 2300. Model projections of the future AMOC do range widely, though. As a result, on the question of what level of global warming would result in an AMOC shutdown, it is unlikely that the scientific community will see any convergence in the near future. While the fundamental mechanism that destabilises the AMOC in Stommel’s original model appears to be important in climate models, there are other processes that are trying to stabilise the AMOC. Many of these processes are difficult to model quantitatively, especially with the limited resolution that is possible with current computing power. So our AMOC projections will continue to be subject to quite some uncertainty for some time to come. 
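Stommel's hysteresis argument can be made concrete with a toy calculation (an illustrative, dimensionless box model written for this summary — not code from the original study or the article). Scaling the thermal term to 1, the overturning strength is q = 1 − ΔS, and steady states satisfy F = |1 − ΔS|·ΔS, where F is the freshwater forcing into the North Atlantic. Counting the roots of this equation shows three equilibria (strong AMOC, collapsed state, and an unstable middle state) below a critical forcing, and only the collapsed state beyond it:

```python
# Illustrative, dimensionless Stommel-type box model (not code from the article).
# Overturning strength q = 1 - S, where S is the scaled north-south salinity
# difference and the thermal term is scaled to 1.  Steady states satisfy
#   F = |1 - S| * S,  with F the freshwater forcing into the North Atlantic.

def count_equilibria(F, n=200001):
    """Count sign changes of g(S) = F - |1 - S|*S on a fine grid over [0, 2]."""
    xs = [2.0 * i / (n - 1) for i in range(n)]
    g = [F - abs(1.0 - s) * s for s in xs]
    return sum(1 for a, b in zip(g, g[1:]) if a * b < 0)

weak = count_equilibria(0.20)    # below the critical forcing F = 1/4
strong = count_equilibria(0.30)  # beyond it

# weak == 3: strong-AMOC state, collapsed state, and an unstable middle state.
# strong == 1: only the collapsed ("off") state survives -- the tipping point.
```

In these units the critical forcing is F = 1/4; past it the strong-AMOC branch simply ceases to exist, which is the tipping (and hysteresis) behaviour sketched in the figure.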
Taking all the evidence into account, the IPCC’s AR5 and SROCC concluded that an AMOC collapse before 2100 was “very unlikely” (pdf). However, the impacts of passing an AMOC tipping point would be huge, so it is best viewed as a “low probability, high impact” scenario. What would be the impacts of a collapse? Climate models can be used to assess the impact on climate if the AMOC were to shut down completely. By adding large amounts of fresh water to the North Atlantic in a model, scientists artificially lighten the cold, dense water that forms the lower branch of the loop. This stops the AMOC and we can then look at the impact on climate. The figure below illustrates the changes that result in one such experiment. Shutdown of the AMOC results in a cooling (blue shading) of the whole northern hemisphere, particularly the regions closest to the zone of North Atlantic heat loss (the “radiator” of the North Atlantic central heating system). In these regions the cooling exceeds the projected warming due to greenhouse gases, so a complete shutdown in the 21st century, while very unlikely, could result in a net cooling in regions such as western Europe. Other impacts include major shifts in rainfall patterns, increases in winter storms over Europe and a sea level rise of up to 50cm around the North Atlantic basin. In many regions these effects would exacerbate the trends due to global warming. While such model experiments are artificial “what if?” scenarios, they illustrate the magnitude of the changes that could result from an AMOC collapse. The impacts on agriculture, wildlife, transport, energy demand and coastal infrastructure would be complex, but we can be certain that there would be major socioeconomic consequences. For example, one study showed a 50% reduction in grass productivity in major grazing regions of the western UK and Ireland. What can be done about the risk of a collapse? 
As explained above, scientists are some way from being able to confidently define a level of global warming at which the AMOC would be at risk of crossing a tipping point. However, it may be possible to manage the risk of AMOC collapse, even without knowing how likely it is. To take a domestic analogy: I know that it is possible, but unlikely, that my house will burn down – it is a low-probability, high-impact event. I don’t have much idea of how probable it is that I will have a fire, but I can manage the risk anyway by getting the electrical wiring checked and by installing smoke alarms. The wiring check reduces the chance of a fire, while the smoke alarm gives me early warning if a fire starts so that the impact can be reduced – by evacuating the house and calling the fire brigade. Recently, along with colleagues at the University of Exeter, we have been exploring the possibility of developing an early-warning system for AMOC tipping. Using a simple model, we have shown that the way the salinities of the subtropical and subpolar Atlantic evolve over time can give an early indication if the AMOC is on the path to a collapse, possibly decades before any major weakening has been seen in the AMOC itself. It is early days for this research, but by monitoring such an indicator it may be possible to give more time to prepare for the consequences of an AMOC collapse, or to adopt more aggressive climate change mitigation measures to get the AMOC onto a more stable pathway. Outstanding questions As the world gets to grips with the challenges of meeting the targets of the Paris climate accord, interest is increasing in climate pathways that temporarily overshoot the final target level. 
It is important that such overshoots do not cross any irreversible thresholds on the way to the final destination, so research on tipping points needs to link theoretical results to these more practical questions. Much of the modelling of AMOC tipping points to date has used idealised scenarios of freshwater input to the North Atlantic. This is relevant to some past AMOC changes, but to model future climate change we need to understand what happens when warming and freshening are taking place together. This is a more challenging problem because the number of relevant processes and feedbacks is increased. Some of these processes operate at small scales that models struggle to resolve with current computing power. Improving the modelling of key AMOC processes needs patience and long-term commitment, but will eventually pay dividends in more confident AMOC predictions. Research on early warning of AMOC collapse is in its infancy, but may be a fruitful way to respond to the risk. One thing is for sure: early warning will require continuous observations of key aspects of the AMOC. AMOC monitoring entered a new era in 2004 with RAPID-MOCHA, an array of moored instruments that spans the width of the Atlantic at latitude 26.5 degrees north and provides continuous monitoring of the AMOC. Before this there had only been five snapshots of the circulation spread over 47 years. Results have already changed our understanding of how the AMOC varies in time: for example, an unexpected dip in the AMOC – observed in autumn 2009 – is thought to have played a role in the unusually cold European winters of 2009-10 and 2010-11. More recently, a similar monitoring array has been installed further north in the subpolar Atlantic. Along with continuous measurements of temperature and salinity from drifting Argo floats, oceanographers now have an unprecedented database to study this crucial element of our climate system and give the world a chance to prepare for any nasty surprises. Comments 1 to 4: 1.
SirCharles at 09:07 AM on 8 April, 2020
More Evidence: The North Atlantic "Cold Spot" Human Caused
2. william5331 at 05:11 AM on 9 April, 2020
No mention is made of why sea level would rise '50cm around the Atlantic basin' if AMOC shut down. Am I correct in assuming this is due to Coriolis no longer trying to pull the water away from the Atlantic coast of America? If so, wouldn't this effect only be seen on the East Coast of North America and not on the west coast of Europe? In fact, there might be a slight decrease in sea level along the West Coast of Europe as water is released to rise along the Eastern Seaboard of the USA and Canada.
3. william5331 at 05:30 AM on 9 April, 2020
Not mentioned in the article on the destabilization of the WAIS is the effect of the 'ice pump'.
Moderator Response: [DB] Self-promotional advertising snipped.
4. MA Rodger at 06:39 AM on 9 April, 2020
william @2,
The SLR on the US East coast appears more often in the literature. The SLR on the West coast of Europe has been seen in models. See Kuhlbrodt et al (2009) 'An Integrated Assessment of Changes in the Thermohaline Circulation', which is likely the source of the 50cm figure.
Realizations of Infinite Products, Ruelle Operators and Wavelet Filters
Using the system theory notion of state-space realization of matrix-valued rational functions, we describe the Ruelle operator associated with wavelet filters. The resulting realization of infinite products of rational functions has the following four features: (1) It is defined in an infinite-dimensional complex domain. (2) Starting with a realization of a single rational matrix-function \(M\), we show that a resulting infinite product realization obtained from \(M\) takes the form of an (infinite-dimensional) Toeplitz operator whose symbol is a reflection of the initial realization for \(M\). (3) Starting with a subclass of rational matrix functions, including scalar-valued ones corresponding to low-pass wavelet filters, we obtain the corresponding infinite products that realize the Fourier transforms of generators of \(\mathbf{L}_2(\mathbb{R})\) wavelets. (4) We use both the realizations for \(M\) and the corresponding infinite product to obtain a matrix representation of the Ruelle-transfer operators used in wavelet theory. By "matrix representation" we refer to the slanted (and sparse) matrix which realizes the Ruelle-transfer operator under consideration.
• Filter banks
• Infinite products
• State space realization
• Wavelet filters
ASJC Scopus subject areas
• Analysis
• General Mathematics
• Applied Mathematics
Graph Algorithms: Cuts, Flows, and Network Design (Dagstuhl Seminar 23422)
This report documents the program and the outcomes of Dagstuhl Seminar 23422, "Graph Algorithms: Cuts, Flows, and Network Design". This seminar brought together 25 leading researchers in graph algorithms for a discussion of the recent progress and challenges in two areas: the design of fast algorithms for fundamental flow/cut problems and the design of approximation algorithms for basic network design problems. The seminar included several talks of varying lengths, a panel discussion, and an open problem session. In addition, sufficient time was set aside for research discussions and collaborations.
Malick Ndiaye - Mathematician of the African Diaspora
place: Senegal
Director of thesis: Professor Hector Giacomini. Topic: Dynamical Systems; the problem of the center of plane polynomial dynamical systems
current address: 31 Forbus St., Apt. B1; Poughkeepsie, NY 12601
1988-91: Teacher of Mathematics in Ivory Coast.
1994-95: Junior Lecturer at the University of Tours (France). Teaching: mechanics of the point, in the second year of the course; Fourier series, Fourier transform, distributions, Hilbert spaces, Laplace transform.
1996-99: Junior Lecturer at UCAD (University Cheikh Anta Diop). Teaching: mathematical analysis, algebra; operations research: linear programming, theory of games, graph theory, Boolean algebra; mathematics of decision: convex analysis, convex programming, optimization, calculus.
1999-00: Senior Lecturer at UCAD. Teaching: operations research, mathematics of decision, mathematical analysis.
4. M. Ndiaye, H. Giacomini, Quadratic systems equivalent by domain to a linear one: global phase portrait. Extracta Mathematicae, 15 (2000), no. 1.
3. M. Ndiaye, C. Michelot, A geometrical construction of the set of strictly efficient points in the polyhedral norm case. Proceedings of the 9th Meeting of the EURO Working Group on Locational Analysis (Birmingham, 1996). Stud. Locat. Anal. No. 11 (1997), 89-99.
2. H. Giacomini, M. Ndiaye, New sufficient conditions for a center and global phase portraits for polynomial systems. Publ. Mat., 40 (1996), 351-372.
1. H. Giacomini, M. Ndiaye, Sufficient conditions for the existence of a center in polynomial systems of arbitrary degree. Publ. Mat., 40 (1996), 205-214.
Applying Memoized to Recursive Function
Does anyone know how to apply a memoized function to a recursive function? Specifically the def f(xn) function in the following code? I am trying to improve the following code to be able to factor numbers. It seems to work well for values like 7331116 and 7331118, but 7331117 results in a recursion depth error and I can't figure out how to improve the code. Someone suggested to me to memoize the def f(xn) function, but all I can find online is that you add @CachedFunction right before the function is declared, and it doesn't seem to help.

def pollard_Rho(n):
    def f(xn):                      # This calculates f(x) = x^2+1 based on x0=2.
        return (1 + f(xn-1)**2) % n if xn else 2
    i = 0                           # Counting variable
    x = f(i)                        # calculating x of the pollard rho method.
    y = f(f(i))                     # calculating y of the pollard rho method.
    d = gcd(abs(x-y), n)            # calculating gcd to construct loop.
    while d == 1:                   # A loop looking for a non-1 interesting gcd.
        i = i + 1                   # counting iterations
        x = f(i)
        y = f(f(i))
        d = gcd(abs(x-y), n)
    print d                         # Printing d=gcd for debugging purposes.
    root1 = d                       # Yes! found a factor, now we can find the other one.
    root2 = n/d                     # Hey! Here is the other factor!
    print i + 1                     # debugging print out.
    return (root1, root2)           # Kick those factors out into the world.

print pollard_Rho(7331118)

Fixed indentation.

1 Answer
Your code is probably not well copied, since the function f directly returns something, so whatever comes after that will never be executed. Anyway, here is an example of how to memoize a recursive function:

sage: @cached_function
....: def fibo(n):
....:     if n == 0 or n == 1:
....:         return 1
....:     else:
....:         return fibo(n-1) + fibo(n-2)

You can see the difference by calling fibo(100) with and without the @cached_function decorator. Now, if you want to compute, say, fibo(1000), you may encounter a RuntimeError: maximum recursion depth exceeded.
So the trick is to fill the memory in the right order:

sage: for i in range(1000):
....:     a = fibo(i)
sage: fibo(1000)
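For what it's worth, the same bottom-up trick works in plain Python too, with `functools.lru_cache` standing in for Sage's `@cached_function` (a sketch, not Sage-specific code):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fibo(n):
    # Same recurrence as the Sage example: fibo(0) = fibo(1) = 1.
    if n == 0 or n == 1:
        return 1
    return fibo(n - 1) + fibo(n - 2)

# Fill the cache in increasing order so that no single call
# has to recurse deeply past the interpreter's recursion limit.
for i in range(1000):
    fibo(i)

print(fibo(1000))  # succeeds without a RecursionError
```

Calling `fibo(1000)` cold would blow the default recursion limit for exactly the reason described above; after the warm-up loop, every call finds its two predecessors already cached.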
Computational Geodynamics: Advection
We now move on to look at a different problem which also brings a few surprises when we attempt to find a straightforward numerical treatment: specifically, the equations that govern transport of a quantity by a moving fluid (advection). This seems pretty straightforward, as we simply have to account for a concentration being moved around by a velocity vector field. There are, however, multiple tricks involved in doing this accurately.
Generalizing from the Simplest Example
We have learned a number of things, in particular:
• We have to know something about the governing equations of the system before modeling can proceed (i.e. a conceptual model, and then a mathematical model).
• Before the equations can be solved in the computer, it is necessary to render the problem finite.
• The discretization method chosen may not be effective for a particular problem. Numerical modeling can be an art, since experience with different differential equations is often needed to avoid common pitfalls.
The Problem with Advection
Advection is a fundamental concept in fluid mechanics, yet it makes the lives of fluid dynamicists much more difficult. It can be particularly problematic in numerical modeling. This is worth having in mind when we discuss different numerical methods because, in application to solid Earth dynamics, advection will be a major issue with any method we choose. As we discussed previously in dealing with approximate analytic solutions, the solution to all our advection woes is to deal with a coordinate system locked to the fluid.
Unfortunately, while this approach works well in some situations – predominantly solid mechanics and engineering applications where total deformation is generally no more than a few percent strain – in fluids, the local coordinate system becomes quite hard to track. In the figure above, a small, square region of fluid has been tagged and is followed during the deformation induced by a simple convection roll. It is clear that a coordinate system based on initially orthogonal sets of axes rapidly becomes unrecognizably distorted. Advection, in the absence of any diffusion terms, represents a transport of information about the state of an individual parcel of fluid which is different from the state of its neighbours. For example, it might be a dye which tells us whether a parcel of fluid started in one half of the domain or the other. We are dealing with a chaotic system in the sense that parcels of material which start arbitrarily close together will wind up exponentially far apart as time progresses. Thus, the dye will become ever more stretched and filamented without ever being mixed (at least in laminar flow). If the dye can diffuse, then the finer scales of the tendrils will in fact be mixed, because they are associated with enormous spatial gradients (e.g. compare this with boundary layers). If the dye cannot diffuse, then the density of information needed to characterize the system increases without limit. Numerically this is impossible to represent since, at some stage, the stored problem has to be kept finite. This can be imagined as an effective diffusion process, although it has an anisotropic and discretization-dependent form. The rule of thumb that arises from this observation is that the real diffusion coefficient must be larger than the numerical one if the method is to give a true representation of the problem.
Numerical Example in 1D
Let us follow the usual strategy and consider the simplest imaginable advection equation:
\[ \frac{\partial \phi}{\partial t} = -v \frac{\partial \phi}{\partial x} \]
in which \(v\) is a constant velocity. Obviously we need to introduce some notation as a warm-up for solving the problem. We break up the spatial domain into a series of points separated by \(\delta x\) as shown above and, as we did in the earlier examples, break up time into a discrete set separated by \(\delta t\). The values of \(\phi\) at various times and places are denoted by
\[ \begin{matrix}
{}_{i-1}\phi_{n-1} & {}_{i}\phi_{n-1} & {}_{i+1}\phi_{n-1} \\
{}_{i-1}\phi_{n}   & {}_{i}\phi_{n}   & {}_{i+1}\phi_{n}   \\
{}_{i-1}\phi_{n+1} & {}_{i}\phi_{n+1} & {}_{i+1}\phi_{n+1}
\end{matrix} \]
where the \(i\) subscript is the \(x\) position and \(n\) denotes the timestep:
\[ \phi(x_i, n\Delta t) = {}_{i}\phi_{n} \]
A simple discretization gives
\[ \begin{split}
{}_{i}\phi_{n+1} &= \frac{\partial \phi}{\partial t} \Delta t + {}_{i}\phi_{n} \\
&= -v \Delta t \, \frac{{}_{i+1}\phi_{n} - {}_{i-1}\phi_{n}}{2 \delta x} + {}_{i}\phi_{n}
\end{split} \]
For simplicity, we set \(v \delta t = \delta x / 2\) and write
\[ {}_{i}\phi_{n+1} = {}_{i}\phi_{n} - \frac{1}{4} \left( {}_{i+1}\phi_{n} - {}_{i-1}\phi_{n} \right) \]

| scheme  | time                  | i-2 | i-1   | i     | i+1   | i+2    | i+3 | \(\int \phi \, dx\) |
|---------|-----------------------|-----|-------|-------|-------|--------|-----|---------------------|
| Centred | \(t = 0\)             | 1   | 1     | 1     | 0     | 0      | 0   | 3   |
| Centred | \(t = \delta x/2v\)   | 1   | 1     | 1.25  | 0.25  | 0      | 0   | 3.5 |
| Centred | \(t = \delta x/v\)    | 1   | 0.938 | 1.438 | 0.563 | 0.0625 | 0   | 4.0 |
| Upwind  | \(t = 0\)             | 1   | 1     | 1     | 0     | 0      | 0   | 3   |
| Upwind  | \(t = \delta x/2v\)   | 1   | 1     | 1     | 0.5   | 0      | 0   | 3.5 |
| Upwind  | \(t = \delta x/v\)    | 1   | 1     | 1     | 0.75  | 0.25   | 0   | 4.0 |

Table: Hand calculation of low order advection schemes

We compute the first few timesteps for a step function in \(\phi\) initially on the location \(x_i\) as shown in the diagram. These are written out in the first section of the table above. There are some oddities immediately visible from the table entries.
The step has a large overshoot to the left, and its influence gradually propagates in the upstream direction. However, it does preserve the integral value of \(\phi\) on the domain (allowing for the fact that the step is advancing into the domain). These effects can be minimized if we use "upwind differencing". This involves replacing the advection term with a non-centred difference scheme instead of the symmetrical term that we used above:
\[ \begin{split}
{}_{i}\phi_{n+1} &= \frac{\partial \phi}{\partial t} \Delta t + {}_{i}\phi_{n} \\
&= -v \Delta t \, \frac{{}_{i}\phi_{n} - {}_{i-1}\phi_{n}}{\delta x} + {}_{i}\phi_{n}
\end{split} \]
where we now take a difference only in the upstream direction. The results of this advection operator are clearly superior to the centred difference version. Now the step has no influence at all in the upstream direction, and the value does not overshoot the maximum. Again, the total quantity of \(\phi\) is conserved. Why does this apparently ad hoc modification make such an improvement to the solution? We need to remember that the fluid is moving. In the time it takes to make the update at a particular spatial location, the material at that location is swept downstream. Consider where the effective location of the derivative is computed at the beginning of the timestep – by the end of the timestep the fluid has carried this point to the place where the update will occur. This has some similarity to the implicit methods used earlier to produce stable results.
Node/Particle Advection
Contrary to the difficulty in advecting a continuum field, discrete particle paths can be integrated very easily. For example, a Runge-Kutta integration scheme can be applied to advance the old positions to the new based on the known velocity field. It is only when the information needs to be recovered back to some regular grid points that the interpolation degradation of information becomes important.
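The hand calculation in the table above can be reproduced in a few lines of code. The sketch below (plain Python; the function names are my own) applies the centred and upwind updates with \(v\,\delta t = \delta x/2\) to the step profile:

```python
def centred_step(phi):
    # phi_new[i] = phi[i] - (1/4)(phi[i+1] - phi[i-1]); interior points only
    new = list(phi)
    for i in range(1, len(phi) - 1):
        new[i] = phi[i] - 0.25 * (phi[i + 1] - phi[i - 1])
    return new

def upwind_step(phi):
    # phi_new[i] = phi[i] - (1/2)(phi[i] - phi[i-1]); difference taken upstream
    new = list(phi)
    for i in range(1, len(phi)):
        new[i] = phi[i] - 0.5 * (phi[i] - phi[i - 1])
    return new

phi0 = [1, 1, 1, 0, 0, 0]  # the step profile from the table, positions i-2 .. i+3
print(centred_step(phi0))  # matches the centred t = dx/2v row: overshoot at i
print(upwind_step(phi0))   # matches the upwind row: no upstream influence
```

Applying each step twice reproduces the \(t = \delta x/v\) rows, including the growing upstream ripple of the centred scheme.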
Courant condition
For stability, the maximum value of \(\delta t\) should not exceed the time taken for material to travel a distance \(\delta x\). This makes sense, as the derivatives come from local information (between a point and its immediate neighbour) and information cannot propagate faster than \(\delta x / \delta t\). If the physical velocity exceeds the maximum information velocity, then the procedure must fail. This is known as the Courant (or Courant-Friedrichs-Lewy) condition. In multidimensional applications it takes the form
\[ \delta t \le \frac{\delta x}{\sqrt{N} |v|} \]
where \(N\) is the number of dimensions, and a uniform spacing \(\delta x\) in all directions is presumed. The exact details of such maximum timestep restrictions for explicit methods vary from problem to problem. It is, however, important to be aware that such restrictions exist so as to be able to search them out before trouble strikes. One of the ugliest problems from advection appears when viscoelasticity is introduced. In this case we need to track a tensor quantity (the stress rate) without diffusion or other distorting effects. Obviously this is not easy, especially in a situation where very large deformations are being tracked elsewhere in the system – e.g. the lithosphere floating about on the mantle as it is being stressed and storing elastic stress.
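As a quick illustration of the Courant bound above, a minimal helper (the function name is my own) that returns the largest admissible explicit timestep:

```python
import math

def courant_max_dt(dx, v, ndim):
    """Largest explicit timestep allowed: dt <= dx / (sqrt(N) * |v|)."""
    return dx / (math.sqrt(ndim) * abs(v))

# For a 2D problem, halving the grid spacing halves the permissible timestep,
# which is one reason high-resolution explicit runs become expensive.
dt_coarse = courant_max_dt(2.0, 1.0, 2)
dt_fine = courant_max_dt(1.0, 1.0, 2)
```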
speed of quantum interactions
Inspired by another quantum thread, I am going to put some messy thoughts down on paper to try and make sense of them. IANA physicist, as you can tell. If one takes a multiple-slit photon experiment to get a diffraction pattern, the precise pattern one gets is determined by the total arrangement of slits. But some of those slits could be so far away that the photon "potential field" would have to travel faster than the speed of light to have been influenced by them (this is also true for photons hitting the middle of the screen, as the precise pattern can be influenced by further-away slits). This does not seem to be true of other potential fields, which propagate at the speed of light. (I know I am mixing up field concepts here, but there does seem to be some analogy.) Now imagine a photon in flight. If one changes just one of the slits and it affects the pattern, what's the latest that one can change it before having an effect on the photon passing through the collective? Is the idea of non-locality in the EPR experiment also relevant here? Does the fact that the photon appears to interact with the entire slit device at once imply some sort of universal time (now) for the device, or are quantum interactions governed by the speed of light too? I have probably mangled many fine concepts, and I suspect Heisenberg has something to say on it too.
A couple of nitpicks here: First, there's only one other field that we know of that we think travels at the speed of light, namely the gravitational field. And second, you can get interference patterns with electrons and other massive particles just as well as you can with photons; in fact, this is one of the ways that scientists can investigate molecular structure. Photons, in this regard, behave just like waves. If you drop a pebble into a pond, the pattern made by the waves as they interact with obstacles in the water is determined by what the obstacles look like when the waves actually get there.
So if I were to fire off a photon towards a double slit, and then close one of the slits before the photon could get there, then the photon would just "see" a single slit and wouldn't interfere with itself; there's nothing terribly out of the ordinary here, as long as you're familiar with how waves act. Now, where things get weird is in the fact that you can "erase" quantum information even after it's been collected and the wavefunction should have "collapsed", thereby restoring the quantum nature of the system. See the quantum eraser experiment and its even freakier cousin the delayed choice quantum eraser experiment, for details. The color field (responsible for the strong force) is also presumed to travel at the speed of light, though it's probably hard to measure at those short distances. And other fields (or particles, or whatever) which propagate slower than c don't have any particular speed (it depends on the energy compared to the mass of the particle). One way of phrasing the principle of Lorentz invariance is that c is the only inherent speed in fundamental physics. Wow, this has really been the week for questions about quantum mechanics. I need to figure out a way to turn this into a best-selling airplane novel before the trend dies out, and I'll be set for life. This is not a trivial or nescient question, and in fact the answer is at the heart of interpreting what is going on here. According to Heisenberg interpretations, there is no "realism": nothing exists until you look at it, and everything that seems to require particles doing strange things over vast distances is just an artifact of the probabilistic nature of QM. They try really, really hard not to think about this kind of thing and just crank through the equations, which give the right answers. Boosters of the Bohm interpretations would say that yes, the photon is nonlocally connected (i.e.
spread out), very much real whether you're looking at it or not, and communication from one point on it to another is instantaneous, or at least not dependent upon the spacetime distance between the points. The relative state/Many Worlds advocates will say there is an infinite number of photons and you happen to have ended up in the universe where this photon took that path. The consistent histories romantics believe that it was all statistically classical behavior all along, but doctored up by nature to look probabilistic and conforming to the Schrödinger equation. And nobody can prove anything; it's a bit like reading Murder On The Orient Express, in that all interpretations appear to satisfy the theory, but none more than the others. Another slight nitpick is that we expect that gravity moves at c, but we don't know for certain that it does. In general relativity, gravity is shown to be a property of spacetime (that is, its curvature) and propagation occurs at a single speed for all fields. As I understand it, this speed is not necessarily equal to c; however, if it goes slower then you have a potential for non-conservation of momentum, and if it goes faster than light, it violates causality. The former is a stock principle of physics which gives every evidence of applying on all scales, from the cosmological to the quantum. The latter (causality) is something we've always assumed to be true, and violating it would have serious problems not only for the grandfather you've always wanted to go back and assassinate, but also for thermodynamics and, more generally, for being able to make any kind of consistent rules about the mechanics of nature. Similar arguments exist for the color field, but the interactions are mediated by virtual gluons that, owing to color confinement, can't come out and play for hardly any time at all – certainly not enough to get a lock on them and measure anything.
This is an interesting question – one that the complementarity of the quantum world may make impossible to answer. In the double-slit experiment, the spacing between the interference fringes on the detector is approximately wL/d, with w the wavelength of the light, d the distance between the slits, and L the distance from the slits to the detector. As you make the slits farther and farther apart, increasing d, the fringes will get closer and closer together, and the pattern on the detector begins to look more and more classical. By the time d gets large enough for you to have any hope of "fooling" the photon by making a change to one of the slits on the time scale of d/c (with c = 3x10[sup]8[/sup] m/s, a microsecond corresponds to 30 kilometers), or to have any reasonable expectation that the field might not interact with itself, your system may be too large to make any precision measurement (too many "observer" atoms; I like the thermodynamic analogue you suggested in the other thread). A really nice, accessible piece on interference and complementarity is here. If you search google video for "phase space interferences of neutrons", you'll find a talk by a charming fellow describing how the interference patterns of neutrons vanish as you move the clusters farther apart (caveat: I think it's that one, but I haven't watched it in some time).
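For a sense of scale, here is the fringe-spacing estimate wL/d worked through with some made-up but typical classroom numbers (these values are not from the thread):

```python
# Illustrative only: greenish light through a double slit.
w = 500e-9   # wavelength: 500 nm
d = 1e-3     # slit separation: 1 mm
L = 1.0      # slit-to-detector distance: 1 m

spacing = w * L / d
print(spacing)  # about 5e-4 m, i.e. half a millimetre between fringes
```

Pushing d out to kilometres, as the light-travel-time argument requires, shrinks the spacing by six orders of magnitude, which is the point being made above.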
It’s certainly possible that someone will come up with a novel method of testing the premises of various interpretations, and if they do they’ll no doubt win one of those fancy awards from Sweden, but with what we have at our disposal we’re at a loss to say more than “Urrrow?” It’s also quite possible (and in my estimation, likely) that all interpretations are wrong and that something fundamentally stranger than we can possibly imagine is going on. And to think that Keats whined of the destruction of beauty and mystery by science “unweaving the rainbow.” Or Wordsworth (even less mellifluously) that “we murder to dissect.” Those consarned quantum particles just can’t be properly interrogated, but if they could, they’d be quivering in their socks right about now.
Surface of wall - math word problem (4082)
Find by what percentage the cube's surface will decrease if we reduce the surface of each of its walls by 12%.
Calculating GST & Service Charges in SG 2024
How to calculate GST in 2024: How the GST Rate Change Impacts You
In Singapore, when you buy something, you almost always pay more than the stated price due to GST and service charge. These charges, at 9% and 10% of the item's retail price respectively, are payable when you receive the bill at most restaurants and hotels. As many of us know, commencing from 1 January 2024, Singapore will implement the second tranche of its Goods and Services Tax (GST) hike, transitioning from 8% to 9%. Let's find out more in this post on calculating GST & service charges in SG 2024.
There are scenarios where one or more of the following events straddle 1 Jan 2024:
• Issuance of invoice
• Payment for the goods or services
• Receipt of goods or services
Easy Way to Add GST and Service Charge to Your Bill
No need for complicated calculators! Here's a super simple way to figure out your bill with extra charges:
• GST (9%): Multiply your bill by 1.09.
• Service Charge (10%): Multiply your bill by 1.10.
• Both GST and Service Charge: Multiply your bill by 1.199.
Here is an example. It is easy to calculate Singaporean GST at the 9% rate:
1. If $100 is the GST-exclusive value, then $100 * 0.09 = $9 is the GST amount.
2. To get the GST-inclusive amount, multiply the GST-exclusive value by 1.09. For a $100 GST-exclusive value, $100 * 1.09 = $109 is the GST-inclusive amount.
3. To extract the GST part from a GST-inclusive amount, divide the GST-inclusive amount by 109 and multiply by 9. If $109 is the GST-inclusive value, then ($109/109) * 9 = $9 is the GST value.
Calculating GST & service charges in SG 2024 can be daunting. Here's more on things you need to know about GST in 2024.
Read more: What is GST and how it works!
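For readers who prefer code, the multipliers above can be wrapped in a small helper. This is just a sketch: the function names are our own, and the rates are the 2024 figures quoted above.

```python
def add_charges(amount, gst_rate=0.09, service_rate=0.10):
    """Bill total with the 10% service charge, then 9% GST on top (x 1.199 overall)."""
    return round(amount * (1 + service_rate) * (1 + gst_rate), 2)

def extract_gst(inclusive, gst_rate=0.09):
    """GST portion hidden inside a GST-inclusive amount: inclusive / 109 * 9."""
    return round(inclusive * gst_rate / (1 + gst_rate), 2)

print(add_charges(100))   # 119.9 -- the 1.199 multiplier in action
print(extract_gst(109))   # 9.0
```

Note the order: GST is applied to the service-charge-inclusive amount, which is why the combined multiplier is 1.10 × 1.09 = 1.199 rather than 1.19.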
55 Quantitative Aptitude Job Interview Questions And Answers
Sharpen your Quantitative Aptitude interview expertise with our handpicked 55 questions. These questions will test your expertise and readiness for any Quantitative Aptitude interview scenario. Ideal for candidates of all levels, this collection is a must-have for your study plan. Access the free PDF to get all 55 questions and give yourself the best chance of acing your Quantitative Aptitude interview. This resource is perfect for thorough preparation and confidence building.
55 Quantitative Aptitude Questions and Answers:
1 :: 36 people {a1, a2, ..., a36} meet and shake hands in a circular fashion. In other words, there are totally 36 handshakes involving the pairs {a1, a2}, {a2, a3}, ..., {a35, a36}, {a36, a1}. Find the size of the smallest set of people such that the rest have shaken hands with at least one person in the set?
Pick every third person around the circle, so that everyone else is adjacent to someone in the set: the answer is 36/3 = 12. (If the people were arranged in a line rather than a cycle, the count would be based on n-1 instead.)
2 :: In a group of 15, 7 have studied Latin, 8 have studied Greek, and 3 have not studied either. How many of these studied both Latin and Greek?
A. 0 B. 3 C. 4 D. 5
Total number of students U = 15; the number of students who studied neither, N = 3. Draw a simple Venn diagram of the situation; we need to find L ∩ G. We have:
L ∪ G + N = 15, so L ∪ G = 12
L ∪ G = L + G - (L ∩ G) = 7 + 8 - (L ∩ G)
L ∩ G = 7 + 8 + N - 15 = 7 + 8 + 3 - 15 = 3
3 :: Explain the series: 2, 7, 24, 77, ...
Each term is three times the previous term plus the next odd number: 2×3+1 = 7, 7×3+3 = 24, 24×3+5 = 77. Therefore the next term is 77×3+7 = 238.
4 :: What is the largest prime no. stored in 8bit/6bit/9bit/7bit memory? A, B, C are the mechanisms used separately?
For 6 bits the maximum value is 63; the largest prime is 61.
For 7 bits the maximum value is 127; the largest prime is 127.
For 8 bits the maximum value is 255; the largest prime is 251 (253 = 11 × 23 is not prime).
For 9 bits the maximum value is 511; the largest prime is 509 (511 = 7 × 73 is not prime).
5 :: A circular dart board of radius 1 foot is at a distance of 20 feet from you. You throw a dart at it and it hits the dartboard at some point X in the circle. What is the probability that X is closer to the center of the circle than the periphery?
X is closer to the center than to the periphery exactly when it lands inside the concentric circle of radius 1/2 foot, so the probability is the ratio of areas: π(1/2)² / π(1)² = 1/4.
6 :: 1/3 of a number is 3 more than 1/6 of the same number. What is the number?
Let x be the number. Then x/3 = x/6 + 3, so x/6 = 3 and x = 18.
7 :: A person who decided to go on a weekend trip should not exceed 8 hours driving in a day. The average speed of the forward journey is 40 m/h. Due to traffic on Sundays, the return journey average speed is 30 m/h. How far away can he select a picnic spot?
a) 120 miles
b) Between 120 and 140 miles
c) 160 miles
Solution: let t1 be the time for the forward journey, t2 the time for the return journey, and x the distance to the destination, with t = t1 + t2 = 8 hours. Then t1 + t2 = x/40 + x/30 = 8, so x(7/120) = 8 and x = 8 × 120/7 ≈ 137 miles, i.e. the answer is (b).
8 :: How many 5 digit numbers can be formed using the digits 1, 2, 3, 4, 5 (but with repetition) that are divisible by 4?
The divisibility rule for 4 is that the last two digits must form a number divisible by 4, so in a 5-digit number the last two digits must be one of 12, 24, 32, 44, 52: 5 possibilities. Since repetition is allowed, each of the remaining places (the 3rd, 4th and 5th from the right) can be any of 1, 2, 3, 4, 5, giving 5 ways each. So the total is 5 × 5 × 5 × 5 = 625.
9 :: If A is traveling at 72 km per hour on a highway and B is traveling at a speed of 25 meters per second on a highway, what is the difference in their speeds in m/sec?
The answer is 5 m/s.
converting 72 km/h to m/s gives 20 m/s, so the difference is 25 - 20 = 5 m/s. 10 :: The cost of one pencil, two pens and four erasers is Rs.22 while the cost of five pencils, four pens and two erasers is Rs.32. How much will three pencils, three pens and three erasers cost? Let a, b, and c denote the prices of a pencil, pen and eraser respectively. The first condition gives a + 2b + 4c = 22 ..........(1) and the second gives 5a + 4b + 2c = 32 ......... (2) Adding (1) and (2), we get 6a + 6b + 6c = 54. We need the value of 3a + 3b + 3c, so dividing by 2 gives 3a + 3b + 3c = 27. Hence, the cost of 3 pencils, 3 pens, and 3 erasers is Rs.27. 11 :: A candidate appearing for an examination has to secure 40% marks to pass paper I. But he secured only 40 marks and failed by 20 marks. What is the maximum mark for paper I? Let x be the maximum mark. The pass mark equals the secured marks plus the shortfall: 40 + 20 = 60. Since the pass mark is 40% of x, 0.4x = 60 and the maximum mark for paper I is x = 150. 12 :: After the typist writes 12 letters and addresses 12 envelopes, he inserts 1 letter per envelope randomly into the envelopes. What is the probability that exactly 1 letter is inserted in an improper envelope? Zero: a single letter cannot be in the wrong envelope on its own, because if 11 letters are in their proper envelopes the twelfth must be as well. 13 :: A moves 3 kms east from his starting point. He then travels 5 kms north. From that point he moves 8 kms to the east. How far is A from his starting point? A finishes 3 + 8 = 11 km east and 5 km north of the start, so by the Pythagorean theorem the distance is sqrt(11^2 + 5^2) = sqrt(146), about 12.08 km. 14 :: Find the right number, from the given options, at the place marked by the question mark: 2, 4, 8, 32, 256, ?
Each term is the product of the two preceding terms: 2*4 = 8, 4*8 = 32, 8*32 = 256, so the next term in the series is 32*256 = 8192. 15 :: Given 3 lines in the plane such that the points of intersection form a triangle with sides of length 20, 20 and 30, what is the number of points equidistant from all the 3 lines? Four: the incenter of the formed triangle and its three excenters are all equidistant from the three lines (the particular side lengths do not change this count). 16 :: An amount doubles itself in 3 years. When can this amount become 8 times itself? Doubling three times gives 2^3 = 8, so it takes 3 * 3 = 9 years. For example, Rs 100 becomes 200 after 3 years, 400 after 6 years, and 800 after 9 years. 17 :: Two pipes A and B fill at a certain rate; B is filled at 10, 20, 40, 80, ... If 1/16 of B is filled in 17 hours, what time will it take to get completely filled? The filled amount doubles every hour, so after reaching 1/16 at hour 17 it reaches 1/8, 1/4, 1/2 and the full tank at hours 18, 19, 20 and 21. Hence it is completely filled in 21 hours. 18 :: A moves 3 km east from his starting point, he then travels 5 km north, and from that point he moves 8 km to the east. How far is A from his starting point? Let A be the starting point and D the final point. The total displacement is 3 + 8 = 11 km east and 5 km north, so AD = sqrt(11^2 + 5^2) = sqrt(146), about 12.08 km. 19 :: If 8 men can reap 80 hectares in 24 days, then how many hectares can 36 men reap in 30 days? By direct proportion (more men, more hectares; more days, more hectares): 8/36 : 24/30 = 80/x, so x = (36 x 30 x 80) / (8 x 24) = 450 hectares. 20 :: The program size is N. The memory occupied by the program is M=4000sqrt(N). If the program size is increased by 1%.
Then what is the percentage increase in memory? Since M is proportional to sqrt(N), the new memory is 4000 * sqrt(1.01N) = sqrt(1.01) * M, approximately 1.005M, an increase of about 0.5%. 21 :: Two pencils cost 8 cents; how much do 5 pencils cost? 2 pencils cost 8 cents, so by cross-multiplying, 5 pencils cost 8*5/2 = 40/2 = 20 cents. 22 :: A man walks at the speed of 4 km per hour from point A to B and comes back from point B to A at the speed of 8 km per hour. What is the ratio between the time taken by the man from A to B and from B to A? With s1 = 4 km/h from A to B and s2 = 8 km/h from B to A over the same distance, speed = distance/time gives t1/t2 = s2/s1 = 8/4, so the ratio is 2:1. 23 :: a = 100100011, b = 000110010, c = 100000110; find a-(buc) and au(b-c). (i) We know that (a-b) = a intersection b', therefore a-(buc) = a intersection (buc)'. Here buc = 100110110, so (buc)' = 011001001 and a-(buc) = 000000001. (ii) Similarly, b-c = b intersection c' = 000110000, so au(b-c) = 100110011. 24 :: What is the probability that 4 numbers selected from 1 to 40 are not consecutive? From 1 to 40 there are 37 possible runs of 4 consecutive numbers, and the total number of possible selections is 40C4 = 91390. The probability of a consecutive outcome is 37/(40C4) = 1/2470, so the probability of a non-consecutive outcome is 1-(1/2470) = 2469/2470. 25 :: The low temperature at night in a city is 1/3 more than 1/2 the high temperature in the day. The sum of the low and high temperatures is 100 degrees. What is the low temperature? Let n be the night (low) and d the day (high) temperature: n = (1/2)d + (1/3)(1/2)d = (2/3)d. Then (2/3)d + d = 100, so d = 60 and the low temperature is n = 40 degrees.
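Several of the counting answers above can be verified by brute force. The short script below is an illustrative addition (not part of the original page); it checks the divisibility-by-4 count from question 8 and the non-consecutive probability from question 24:

```python
from fractions import Fraction
from itertools import product
from math import comb

# Q8: 5-digit numbers over the digits 1..5 (repetition allowed) divisible by 4
count = sum(1 for digits in product("12345", repeat=5)
            if int("".join(digits)) % 4 == 0)
assert count == 625  # 5 valid two-digit endings * 5^3 leading combinations

# Q24: probability that 4 numbers chosen from 1..40 are NOT 4 consecutive values
total = comb(40, 4)                       # 91390 equally likely selections
consecutive = 37                          # runs {1..4}, {2..5}, ..., {37..40}
p_not = 1 - Fraction(consecutive, total)
assert p_not == Fraction(2469, 2470)
```

Both assertions pass, agreeing with the worked answers of 625 and 2469/2470.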
Mathematics: Preparing for College and Apprenticeship 12 Student Resources TOC Mathematics Prerequisites for Selected Programs Disclaimer: This page is intended as a general guide only. Universities vary in their requirements, and sometimes prerequisites are changed. Check the specific web site for your desired university for detailed and updated information. Accounting: Calculus & Advanced Functions or Geometry & Discrete Mathematics Applied Science: One Grade 12 mathematics course Architecture: See Science Aviation Management: Calculus & Advanced Functions, one other Grade 12U mathematics course Commercial Studies: Calculus & Advanced Functions, one other Grade 12U mathematics course Computer Science: Geometry & Discrete Mathematics, one other Grade 12 mathematics course Dentistry: See Science Earth Science: One Grade 12U mathematics course Engineering: Calculus & Advanced Functions, Geometry & Discrete Mathematics Environmental Science: One Grade 12U mathematics course Health Science: Calculus & Advanced Functions or Geometry & Discrete Mathematics Human Ecology: Functions and Relations Grade 11U Information Technology: One Grade 12U or 12M mathematics course Justice Studies: One Grade 12U or 12M mathematics course Kinesiology: Calculus & Advanced Functions Landscape Architecture: One Grade 12 mathematics course Mathematics: Calculus & Advanced Functions, Geometry & Discrete Mathematics Music Administration: Calculus & Advanced Functions Nuclear Science: Calculus & Advanced Functions Nursing: Functions and Relations Grade 11U or Functions Grade 11U/C Oenology & Viticulture: One Grade 12U mathematics course Optometry: See Science Science: Calculus & Advanced Functions, one other Grade 12U mathematics course Veterinary Medicine: See Science
"Digital Signal Processing" - scientific and technical journal “Digital Signal Processing” No. 4-2021 Digital image processing In the issue: - Radar imaging - hyperspectral images analysis - noisy radar images classification - object image contours rediscretization - corresponding objects search algorithm - accuracy estimation of images stitching - organization of remote sensing imaging systems calibration - neural network polyp detector on video images - design quantized pulse-shaping FIR filters - analysis of the codes soft decoding method Design quantized pulse-shaping FIR filters for digital communication systems Mingazin A.T., e-mail: alexmin@radis.ru RADIS Ltd, Russia, Moscow Keywords: quantized pulse-shaping linear-phase FIR filters, square-root raised cosine filters, weighted Chebyshev approximation, inter-symbol interference, variation of initial parameters. Three significantly different design methods of matched quantized pulse-shaping linear-phase FIR filters of direct structure for digital communication systems are considered. The first is based on obtaining a frequency characteristic of the square-root raised cosine form, and the second and third are based on a weighted Chebyshev approximation of a magnitude response. The third method corresponds to a two-step approximation for a half-band filter and an amplitude corrector, the cascade connection of which forms a pulse-shaping filter. The design by each method is aimed at achieving the required values of stopband attenuation of magnitude response for the pulse-shaping filter and the ISI level at the output of the cascade connection of a pair of such filters at minimum values of the order and coefficient wordlength. The problem of coefficient quantization in each of the methods is solved by using the technique of variation of initial parameters, which is illustrated graphically. 
Results of the filter design are presented for two roll-off factors, 0.35 and 0.05, at an oversampling factor of 2. All the results obtained by the three methods satisfy the specified requirements for stopband attenuation (≥50 dB) and ISI (≤-25 dB), with coefficient wordlengths differing by no more than 1 bit for each of the two roll-off factors. Compared with the first method, the second and third result in a significantly smaller number of multipliers and adders in the filter structures. The second also yields smaller filter orders and therefore the smallest group delay values. In terms of the peak-to-average power ratio of the modulated signal, the results span 4.2-5.5 dB and 7.8-9.9 dB for roll-off factors 0.35 and 0.05, respectively; the lower limits of this parameter are achieved by the second method. How well the obtained quantized pulse-shaping FIR filters perform in combination with possible additional interpolation/decimation stages in specific digital communication systems, in the presence of timing jitter, noise, interference and non-linear distortions, can be ascertained by mathematical and/or physical modeling. 1. Beaulieu N. C., Damen M. O. Parametric construction of Nyquist-I pulses// IEEE Trans. Commun., 2004, vol. 52, no. 12, pp. 2134-2142. 2. Assalini A., Tonello A. M. Improved Nyquist pulses// IEEE Commun. Letters, 2004, vol. 8, no. 2, pp. 87-89. 3. Bobula M., Prokes A., Danek K. Nyquist filters with alternative balance between time- and frequency-domain parameters// EURASIP J. Adv. in Signal Processing, vol. 2010, Article ID 903980, 11p. 4. Assimonis S. D., Matthaiou M., Karagiannidis G. K., Nossek J. A. Improved parametric families of intersymbol interference-free Nyquist pulses using inner and outer functions// IET Signal Processing, 2011, vol. 5, no. 2, pp. 157–163. 5. Samueli H.
On the design of optimal equiripple FIR digital filters for data transmission applications// IEEE Trans. on CAS, 1988, vol. 35, no. 12, pp. 1542-1546. 6. Ramachandran R. P., Kabal P. Minimax design of factorable Nyquist filters for data transmission systems// Signal Processing, 1989, vol. 18, no. 3, pp. 327-339. 7. Samueli H. On the design of FIR digital data transmission filters with arbitrary magnitude specifications//IEEE Trans. on CAS, 1991, vol. 38, no. 12, pp. 1563-1567. 8. Farhang-Boroujeny B., Mathew G. Nyquist filters with robust performance against timing jitter//IEEE Trans. on SP, 1998, vol. 46, no. 12, pp. 3427-3431. 9. Ashrafi A., Harris F. J. A novel square-root Nyquist filter design with prescribed ISI energy// Signal Processing, 2013, vol. 93, no. 9, pp. 2626-2635. 10. Siohan P., Moreau de Saint-Martin F. New designs of linear-phase transmitter and receiver filters for digital transmission systems// IEEE Trans. on CAS-II.1999, vol. 46, no. 4, pp. 428-433. 11. Boonyanant P., Tantaratana S. Design and hybrid realization of FIR Nyquist filters with quantized coefficients and low sensitivity to timing jitter// IEEE Trans. on SP, 2005, vol. 53, no. 1, pp. 12. Farhang-Boroujeny B. A square-root Nyquist (M) filter design for digital communication systems// IEEE Trans on SP, 2008, vol. 56, no. 5, pp. 2127-2132. 13. Yao C.-Y., Willson A. N. The design of hybrid symmetric-FIR/analog pulse-shaping filters// IEEE Trans. on SP, 2012, vol. 60, no. 4, pp. 2060-2065. 14. Ashrafi A. Optimized linear phase square-root Nyquist FIR filters for CDMA IS-95 and UMTS standards// Signal Processing, 2013, vol. 93, no. 4, pp. 866-873. 15. Traverso S. A family of square-root Nyquist filter with low group delay and high stopband attenuation// IEEE Commun. Letters, 2016, vol. 20, no. 6, pp. 1136-1139. 16. Yao C.-Y., Wang S.-C. A QCQP design method of the symmetric pulse-shaping filters against receiver timing jitter// ISCAS, 2017, 4p. 17. Xiao R., Lei Q., Guo X., Du W., Zhao Y. 
A design of two sub-stage square-root Nyquist matched filter// IEEE Access, 2018, vol. 6, may, pp. 23292-23302. 18. Renfors M., Saramaki T., Pulse-shaping filters for digital transmission systems// GLOBECOM, 1992, pp. 467-471. 19. Vaisanen K., Renfors M. Efficient digital filters for pulse-shaping and jitter-free frequency error detection and timing recovery// Signal Processing, 2001, vol. 81, no. 4, pp. 829-844. 20. Samueli H. The design of multiplierless digital data transmission filters with powers-of-two coefficients// Proc. IEEE Int. Telecomm. Symp., 1990, pp. 425-429. 21. Kim H. Computer simulation results and analysis for a root-raised cosine filter design using canonical signed digits// NASA Technical Memorandum 107327, 1996, 16p. 22. Bonnaud A., Feltrin E., Barbiero L. DVB-S2 extension: end-to-end impact of sharper roll-off factor over satellite link// SPACOMM, 2014, pp. 36-41. 23. Lim Y. C., Yu Y. J. A width-recursive depth-first tree search approach for the design of discrete coefficient perfect reconstruction lattice filter bank// IEEE Trans. on CAS: II, 2003, vol. 50, no. 6, pp. 257-266. 24. Yli-Kaakinen J., Saramaki T., Bregovic R. An algorithm for the design of multiplierless two-channel perfect reconstruction orthogonal lattice filter banks// ISCCSP, 2004, pp. 415-418. 25. Mingazin A. T. Two examples of multiplierless perfect reconstruction lattice filter bank design //11-th Int. Conf. Digital Signal Processing and its Applications (DSPA-2009), vol.1, pp. 99-102. 26. Vaidyanathan P. P. Multirate digital filters, filter banks, polyphase networks, and applications: a tutorial //Proceedings of the IEEE, 1990, vol. 78, no 1. pp. 56-93. 27. Mingazin A. T. Variation of initial parameters in design FIR digital filters with finite wordlength coefficients// 3-th Int. Conf. Digital Signal Processing and its Applications (DSPA-2000), vol.1, pp. 162-166. 28. Mingazin A. T. 
Variation of initial parameters of weighted Chebyshev approximation in multiplierless FIR filter design (DSPA-2005)//7-th Int. Conf. Digital Signal Processing and its Applications (DSPA-2005), vol. 1, pp. 54-56. 29. Mingazin A. T. Three-dimensional graphics in analysis problem of quantized FIR filters// Digital Signal processing. Russian Scientific and Technical Journal, 2020, no. 2, pp. 46-51. Increasing the efficiency of the signals processing in case of continuous wave interference by choosing the function of the preliminary weighting for frequency notch E.V. Kuzmin, e-mail: ekuzmin@sfu-kras.ru Siberian Federal University (SibFU), Russia, Krasnoyarsk Keywords: signals searching, continuous wave interference, frequency notch, weight function, discrete Fourier transform. The efficiency of the spread spectrum signal delay searching is investigated for the suppression of an intense additive continuous wave interference due to frequency rejection based on the discrete Fourier transform. To reduce the influence of the "pedestal effect" on the quality of processing, the weight (window) functions of Hann, Blackman, Parzen (de la Valle-Poussin), Henning and some others are considered. Statistical experiments have established and demonstrated an increase in the efficiency of spread spectrum signal searching under these conditions when using power-law variations of the Henning window (in comparison with others considered). The article presents curves of dependences of the probability of correct signal searching for various reception conditions and typical variants of the produced coherent accumulations. 1. Statisticheskaya teoriya priema slozhnykh signalov (Statistical theory of complex signals receiving) / G.I. Tuzov. M.: Sov. Radio. 1977. 400 p. 2. Davidovici S., Kanterakis E.G. Narrow-Band Interference Rejection Using Real-Time Fourier Transforms // IEEE Transactions on Communications, Jul. 1989. V.37. no 7. pp.713–722. 3.
Adaptivnaja obrabotka signalov (Adaptive signal processing) / B. Widrow, S. Stearns. M.: Radio i svjaz'. 1989. 440 p. 4. Cifrovaja chastotnaja selekcija signalov (Digital frequency selection of signals) / V.V. Vityazev. M.: Radio i svjaz', 1993. 240 p. 5. Shilov A.I., Bakit'ko R.V., Pol'shhikov V.P., Hackelevich Ja.D. Predvaritel'naja obrabotka shumopodobnyh signalov pri nalichii sil'nyh interferencionnyh pomeh (Preprocessing of spread spectrum signals in the presence of strong interference) // Radiotehnika. 2005. no 7. pp. 31–35. 6. Perov A.I. Sintez optimal'nogo algoritma obrabotki signalov v prijomnike sputnikovoj navigacii pri vozdejstvii garmonicheskoj pomehi (Synthesis of an optimal signal processing algorithm in a satellite navigation receiver under the influence of harmonic interference) // Radiotehnika. 2005. no 7. pp. 36–42. 7. Bakit'ko R.V., Pol'shhikov V.P., Shilov A.I., Hackelevich Ja.D., Boldenkov E.N. Ispol'zovanie vesovyh funkcij dlja predvaritel'noj obrabotki shumopodobnyh signalov pri nalichii sil'nyh interferencionnyh pomeh (Using weighting functions for preprocessing spread spectrum signals in the presence of strong interference) // Radiotehnika. 2006. no 6. pp. 13–17. 8. Perov A.I., Boldenkov E.N. Issledovanie adaptivnyh transversal'nyh fil'trov dlja prijomnikov sputnikovoj navigacii pri vozdejstvii uzkopolosnyh pomeh (Investigation of adaptive transversal filters for satellite navigation receivers under the influence of narrow-band interference) // Radiotehnika. 2006. no 7. pp. 98–105. 9. GLONASS. Printsipy postroeniya i funktsionirovaniya (GLONASS. Design Principles and Functioning) / ed. by A.I. Perov, V.N. Kharisov. M.: Radiotekhnika. 2010. 800 p. 10. Kuzmin E.V., Zograf F.G.
Povyshenie verojatnosti pravil'nogo poiska shumopodobnogo signala po vremeni zapazdyvanija na fone tonal'noj pomehi (Enhancement of the probability of spread-spectrum signal correct searching in case of narrow-band interference) // Uspekhi sovremennoi radioelektroniki (Achievements of Modern Radioelectronics). 2016. no 11. pp. 137–140. 11. Kulikov G.V., Nesterov A.V., Leljuh A.A. Pomehoustojchivost' priema signalov s kvadraturnoj amplitudnoj manipuljaciej v prisutstvii garmonicheskoj pomehi (Noise immunity of receiving signals with quadrature amplitude shift keying in the presence of harmonic interference) // Zhurnal radiojelektroniki [jelektronnyj zhurnal]. 2018. no 11. URL: http://jre.cplire.ru/jre/nov18/9/text.pdf. 12. Kuzmin E.V. O vlijanii kvantovanija po urovnju na jeffektivnost' procedury poiska shumopodobnogo signala po zaderzhke na fone shuma i garmonicheskoj pomehi (Efficiency of the spread spectrum signal searching procedure in case of continuous wave interference and quantization effect) // Cifrovaja obrabotka signalov (Digital signal processing). 2020. no 2. pp. 41–45. 13. Shahtarin B.I. Analiz fazovoj avtopodstrojki pri vozdejstvii garmonicheskoj pomehi i shuma (Phase-locked analysis for harmonic interference and noise) // Radiotehnika i jelektronika. 2021. V. 66. no 8. pp. 782–790. 14. Kuzmin E.V. Analiz chastotnyh harakteristik procedur kvadraturnoj korreljacionnoj obrabotki kompleksnyh signalov (Analysis of the frequency responses of the quadrature correlation processing of complex signals) // Cifrovaja obrabotka signalov (Digital signal processing). 2020. no 4. pp. 13–20. 15. Harris F.J. On the Use of Windows for Harmonic Analysis with the Discrete Fourier Transform // Proceedings of the IEEE, Jan. 1978. V.66. pp.51–83. 16. Cifrovoj spektral'nyj analiz i ego prilozhenija (Digital Spectrum Analysis and its Applications) / S.L. Marpl.-ml. Per. s angl. M.: Mir. 1990. 584 p. 17.
Metody i tehnika obrabotki signalov pri fizicheskih izmerenijah: v 2-h tomah (Methods and techniques for signal processing in physical measurements: in 2 volumes) / Zh. Maks. M.: Mir, 1983. V. 1. 312 p. 18. Okonnye funkcii dlja garmonicheskogo analiza signalov (Window functions for harmonic analysis of signals) / V.P. Dvorkovich, A.V. Dvorkovich. M.: Tehnosfera, 2016. 208 p. 19. Cifrovye radiopriemnye sistemy: Spravochnik (Digital Radio Receiving Systems: A Handbook) / ed. by M.I. Zhodzishskii. M.: Radio i svjaz', 1990. 208 p. 20. Kuzmin E.V. Issledovanie jeffektivnosti besporogovoj procedury poiska psevdosluchajnogo signala pri ogranichenii razrjadnosti vhodnyh nabljudenij (Efficiency of the non-threshold spread spectrum signal searching procedure in case of quantization of the incoming observations) // Cifrovaja obrabotka signalov (Digital signal processing). 2020. no 1. pp. 9–12. Analysis of the method of soft decoding of error-correcting codes M.A. Bykhovskiy, e-mail: bykhmark@gmail.com Moscow Technical University of Communications and Informatics, Russia, Moscow Keywords: Noise-immune codes, hard decoding, soft decoding, Shannon's threshold, energy efficiency of communication systems, two-dimensional and multi-dimensional signal ensembles. The paper proposes a new method of soft decoding of a code combination of an error-correcting code and a method for determining the reliability of communication with a hard (HDD) and soft (SDD) method for decoding an error-correcting code (PC), which depends on the marginal speed of signal transmission (R[f]) belonging to AS with QAM, and the code speed (R[c]) of the PC. It is demonstrated that for small values of (R[c]) application of the SDD method provides energy gain equal to 2 dB as compared to the HDD method. This gain diminishes to 1 dB at high (R[c]) values.
The author provides recommendations on how to choose the energy parameters of the communication system so as to reduce the length of the code for a given reliability of message reception, essentially without increasing the energy power of the communication line. Possible energy losses of communication systems with a PC as compared to the Shannon limit are introduced. It is demonstrated that these losses can be insignificant only for low-speed communication systems. For high-speed communication systems, they turn out to be rather substantial, especially when using a PC with a low code speed. It is noted that in promising communication systems intended for the transmission of messages with high speed and high energy efficiency, it is advisable to use multidimensional AS that are optimal according to Shannon, which make it possible to ensure high reliability of message reception without the application of error-correcting codes. 1. Shannon C.E. (1949) Communication in the Presence of Noise. Proceedings of the IRE, 37, 10-21. 2. John G. Proakis Digital Communications. McGraw-Hill Publishing Company, 1989, p. 608. 3. Georg C. Clark, Jr., Bibb J. Cain Error-Correcting Coding for Digital Communication. Springer, Boston, New York, 1981, P. 422 4. Chase D. A class of algorithms for decoding block codes with channel measurement information, IEEE Trans. Inf. Theory, vol. IT-18, Jan. no 1, 1972, pp. 170–182 5. Peterson W. Wesley, Weldon E.J. Error-Correcting Codes. The M.I.T. Press, Cambridge, Second Edition, 1972, P. 576 6. Uryvsky L., Osypchuk S. The analytical description of regular LDPC codes correcting ability. Transport and Telecommunication, vol.15, no 3, 2014, pp. 177–184 7. A.A. Frolov, V.V. Zyablov, Granitsy minimal'nogo kodovogo rasstoyaniya dlya nedvoichnykh kodov na dvudol'nykh grafakh, Problemy peredachi informatsii, (Boundaries of the minimum code distance for nonbinary codes on bipartite graphs, Problems of Information Transmission), vypusk 4, 2011, pp.
27–42 8. Bykhovskiy M.A. Giperfazovaya modulyatsiya – optimal'nyy metod peredachi soobshcheniy v gaussovskikh kanalakh svyazi. (Hyperphase modulation is the optimal method for transmitting messages in Gaussian communication channels) M.: Tekhnosfera, 2018, P. 310 9. Vicente Torres, Javier Valls, Maria Jose Canet and Francisco Garcia-Herrero, Soft-Decision, Low-Complexity Chase Decoders for the RS (255,239) Code. Electronics, 2019, pp. 8-10 10. Yingquan Wu, Fast Chase Decoding Algorithms and Architectures for Reed–Solomon Codes, IEEE Trans. Inf. Theory. Vol. 58, January no 1, 2012, pp. 109-129 11. Siyun Tang and Xiao Ma, A New Chase-type Soft-decision Decoding Algorithm for Reed-Solomon Codes. Electronics and Communication Engineering, Sun Yatsen University, 2013, pp. 1-28 12. Nemirovskiy E.E., Romanenko G.V., Mikhaylovskaya L.G. Analiticheskiye otsenki kvazioptimal'nykh metodov priyema v tselom blochnykh kodov. Problemy peredachi informatsii, vyp. 4, 1981, str. 34-40. (Nemirovsky E.E., Romanenko G.V., Mikhailovskaya L.G. Analytical estimates of quasi-optimal reception methods for block codes in general. Problems of Information Transmission, vol. 4, 1981, pp. 13. G. David Forney Concatenated Codes. Research Monograph (no 37), M.I.T. Press, 1966, P.176 14. Berrou C., Glavieux A., Thitimajshima P. Near Shannon limit error-correcting coding and decoding: Turbo-codes. Proc. IEEE Int. Conf. Communications, Geneva, Switzerland, 1993, pp. 1064-1070 15. MacKay D.J.C., Neal R.M. Near Shannon limit performance of low density parity check codes. Electronics Letters, 13th March, Vol. 33, no 6, 1997, pp. 1645 – 1646 16. Shannon, C. E. Probability of error for optimal codes in a Gaussian channel. The Bell System Technical Journal, vol. 38, May no 3, 1959, pp. 611–656 Algorithm for generating detailed radar images with compensation of flight's trajectory instabilities of the SAR's carrier by the leeway N.P. Muraviev, e-mail: nikitamuraviev10@gmail.com L.B.
Ryazantsev, e-mail: kernel386@mail.ru MESC «Zhukovsky–Gagarin Air Force Academy», Russia, Voronezh Keywords: synthetic aperture radar, trajectory instability, leeway, radar images. Recently, there has been an active use of radar equipment on small-sized unmanned aerial vehicles (UAVs) to solve problems of aerial monitoring, mapping, communications control and other tasks in the military and civil sphere. Existing technologies of digital signal processing through the use of methods of synthesizing the antenna aperture make it possible to realize the formation of radar images with high spatial resolution in units of decimeters and in close to real time, and miniaturization technologies make it possible to reduce the weight and size characteristics of the equipment to several kilograms, which allows them to be installed on small-sized UAVs, including multicopters. Despite the high potential capabilities of such radars, obtaining detailed radar images is associated with significant computational costs, which significantly increase in the presence of trajectory instability of the radar carrier flight. So, with a rigidly fixed antenna and the lack of control over the position of the beam of the directional pattern in the azimuthal plane, which is typical when installing radar on small-sized UAVs, the presence of a crosswind leads to a change in the angle of drift and deviation of the beam of the antenna pattern from the direction perpendicular to the carrier velocity vector. This leads to the fact that when implementing algorithms for the formation of radar images, which are very demanding on the performance of on-board computers, it is necessary to significantly increase the size of the image frame along the travel distance.
Since the brightness of the radar image resolution elements is calculated within the entire frame, including those that do not fall into the beam of the radar radiation pattern, this leads to a proportional increase in computational costs and time for the formation of the radar image frame. In addition, the constant change in the position of the beam of the directional pattern during the flight of the carrier significantly complicates the implementation of the strip mode of shooting, and non-zero brightness values of resolution elements due to side lobes and ambiguity zones located outside the main beam of the radar directional pattern have a negative impact on the quality of the automatic focusing algorithms of the radar images. Thus, the article is devoted to the development of an algorithm that reduces the computational costs of the on-board computer by a factor of up to two to three when generating radar images, by excluding from the calculation image elements that are not included in the main beam of the antenna pattern. The calculation of the elements is carried out taking into account the presence of trajectory instabilities in the drift angle caused by a crosswind during the flight of a small-sized unmanned aerial vehicle. The leeway is determined based on an estimate of the average Doppler frequency in the signal at the output of the radar receiver. The results of the algorithm study showed that the proposed algorithm provides a two- to three-fold reduction in the time spent on the formation of a radar image in the presence of trajectory flight instability along the drift angle. Moreover, the greater the value of the drift angle, the greater the gain in the time of formation of the radar image. This is due to an increase in the size of the frame at large values of the drift angle.
Along with reducing the time of radar image formation, the use of geometric correction procedures makes it possible to improve the quality of automatic focusing algorithms, simplify the procedure for further georeferencing images to digital terrain maps, and also improve the perception of images by the decoder operator. 1. Kupryashkin I.F., Lihachev V.P., Ryazancev L.B. Malogabaritnye mnogofunkcional'nye RLS s nepreryvnym chastotno-modulirovannym izlucheniem (Small-sized multifunctional radars with continuous frequency-modulated radiation). Monografiya. M.: Radiotekhnika, 2020. 280 s. 2. Kupryashkin I.F., Lihachev V.P., Ryazancev L.B. Kratkij opyt sozdaniya i pervye rezul'taty prakticheskoj s"emki poverhnosti malogabaritnoj RLS s sintezirovaniem apertury antenny s borta mul'tikoptera. (A brief experience of creating and the first results of practical shooting of the surface of a small-sized radar with synthesizing the antenna aperture from the side of a multicopter). Zhurnal radioelektroniki [elektronnyj zhurnal], 2019. no 4. Rezhim dostupa: http://jre.cplire.ru/jre/apr19/12/text.pdf. 3. Dmitriev A.V., Zharkov D.S., Yarcev I.M., Polovinkina A.S. Maket malogabaritnoj programmno-opredelyaemoj RLS s sintezirovaniem apertury antenny na mul'tikoptere (Layout of a small-sized software-defined radar with synthesizing the antenna aperture on a multicopter) // Sbornik trudov XXV Mezhdunarodnoj nauchno-tekhnicheskoj konferencii, posvyashchennoj 160-letiyu so dnya rozhdeniya A.S. Popova. Voronezh: VGU, 2019. S. 164-180. 4. Brajtkrajc S.G., Il'in E.M., Polubekhin A.I., Prishchep D.V., Yurin A.D., Homyakov K.A. Problemy i puti sozdaniya radiolokacionnyh sistem dlya bespilotnyh letatel'nyh apparatov takticheskogo i operativno-takticheskogo naznacheniya (Problems and ways of creating radar systems for unmanned aerial vehicles for tactical and operational-tactical purposes) // Izvestiya Tul'skogo gosudarstvennogo universiteta. Tula: TGU, 2018. S. 303-313. 5.
Gnezdilov M.V., Kupryashkin I.F., Lihachev V.P., Ryazancev L.B. Algoritm formirovaniya radiolokacionnyh izobrazhenij s submetrovym razresheniem v malogabaritnyh RLS s sintezirovannoj aperturoj (Algorithm for generating radar images with a submeter resolution in small-sized synthetic aperture radars) // Cifrovaya obrabotka signalov, 2018. No. 2. S. 53-58. 6. Gulyaev G.A., Ivannikova M.V., Ryazancev L.B., Unkovskij A.V. Issledovanie vliyaniya traektornyh nestabil'nostej poleta nositelya malogabaritnoj RLS s sintezirovannoj aperturoj na kachestvo formiruemyh radiolokacionnyh izobrazhenij (Investigation of the influence of trajectory instabilities of the flight of a carrier of a small-sized radar with a synthesized aperture on the quality of the generated radar images) // Cifrovaya obrabotka signalov, 2021. No. 2. S. 25-31. 7. Aviacionnye sistemy radiovideniya. Monografiya (Aviation radio vision systems) / Pod red. G.S. Kondratenkova. M.: Radiotekhnika, 2015. 648 s. 8. Kolchinskij V.E., Mandurovskij I.A., Konstantinovskij M.I. Avtonomnye doplerovskie ustrojstva i sistemy navigacii letatel'nyh apparatov (Autonomous Doppler devices and aircraft navigation systems) / Pod red. V.E. Kolchinskogo. M.: Sov. radio, 1975. 432 s. 9. Ryazancev L.B. Mnogomodel'noe bajesovskoe ocenivanie vektora sostoyaniya manevrennoj vozdushnoj celi v diskretnom vremeni (Multimodel Bayesian estimation of the state vector of a maneuverable aerial target in discrete time) // Vestnik TGTU, 2009. No. 4. S. 729-739. Correlation and structural analysis of the gradient spectrum of hyperspectral images in the problem of spectral selection of contours of given objects V.V. Shipko, e-mail: shipko.v@bk.ru MESC AF «N.E. Zhukovsky and Y.A. Gagarin Air Force Academy», Russia, Voronezh Keywords: hyperspectral images, gradient, correlation function, structural function, random functions with stationary increments.
As is known, an important intermediate stage in many final tasks of digital image processing is the selection of the contours of objects. The use of contour images can significantly reduce the computational costs of various algorithms for subsequent analysis and recognition, which is especially important for processing multicomponent hyperspectral images. There are many methods and algorithms for contour selection on single-component images; however, the classical approach of selecting contours in each spectral component and analyzing them component by component is ineffective for multicomponent hyperspectral images. This is mainly due to the impossibility of taking into account the relationship between spectral components. Sequentially analyzing the contours of each spectral channel is a laborious and inefficient task, and averaging the results obtained leads to the loss of valuable information about the spectral relationship. Therefore, it is of interest to obtain an extended (functional) relationship of each component of the gradients with respect to all other components for more flexible contour selection. Taking into account that the gradients of each spectral component of the hyperspectral image are in themselves a spatial characteristic of brightness differences, we consider the correlation and structural functions as their functional relationship. The efficiency of correlation and structural functions in the task of selecting the contours of spectral-selective objects on hyperspectral images has been studied. The results obtained indicate a higher noise immunity and informativeness of the structural function compared to the correlation function. An approach to the synthesis of an optimal algorithm for the selection of contours of spectral-selective objects based on the distribution densities of the values of the structural function of spectral gradient images is proposed. 1.
Vinogradov A.N., Egorov V.V., Kalinin A.P., Rodionov A.I., Rodionov I.D. Line of aviation hyperspectrometers of ultraviolet, visible and near infrared ranges // Optical Journal. 2016. Vol. 88. No. 4. pp. 54-62. 2. Pozhar V.E., Balashov A.A., Bulatov M.F. Modern spectral optical devices of STC UP RAS // Scientific instrumentation. 2018. Vol. 28. No. 4. pp. 49-57. 3. Gonzalez R., Woods R. Digital image processing. Moscow: Technosphere, 2019. 1104 p. 4. Kim N.V. Image processing and analysis in technical vision systems: Textbook. M.: Publishing House MAI, 2014. 144 p. 5. Image processing in aviation vision systems / Edited by L.N. Kostyashkin, M.B. Nikiforov. M.: Fizmatlit, 2016. 240 p. 6. Antonushkina S.V., Eremeev V.V., Makarenkov A.A., Moskovitin A.E. Peculiarities of analysis and processing of information from hyperspectral survey systems of the Earth's surface // Digital signal processing. 2010. No. 4. pp. 38-43. 7. Modern technologies for processing Earth remote sensing data / Edited by V.V. Eremeev. M.: Fizmatlit, 2015. 460 p. 8. Sheremetyeva T.A., Filippov G.N., Malov A.M. Application of the target visualization method for hyperspectral image processing // Optical Journal. 2015. Vol. 82. No. 1. pp. 32-36. 9. Rytov S.M. Introduction to statistical radiophysics. Part 1. Random processes. M.: Nauka, 1976. 496 p. 10. Prokhorov S.A., Grafkin V.V. Structural and spectral analysis of random processes. Samara: SNC RAS, 2010. 128 p. 11. Levin B.R. Theoretical foundations of statistical radio engineering. Moscow: Sovetskoe radio, 1968. 504 p. Visualization of convolutional neural network patterns in the noisy radar images classification problem Kupryashkin I.F., e-mail: ifk78@mail.ru Mazin A.S., e-mail: mazinant@rambler.ru Military Educational and Scientific Center of the Air Force “N.E. Zhukovsky and Y.A. Gagarin Air Force Academy”, Russia, Voronezh Keywords: synthetic aperture radar, deep convolutional neural network, object classification.
It is known that deep convolutional neural networks (DCNN) are successfully used for object recognition on radar images. However, to date, insufficient attention has been paid to studying the accuracy of object mark classification under the influence of radar interference. The article describes the results of a study of the efficiency of object mark classification by a deep convolutional neural network under the influence of intentional radar interference. The article presents the training data preparation procedure, a description of the input images (patterns) providing maximum activation of the convolution layer filters at different interference (noise) intensities, as well as an evaluation of the classification accuracy of ground object marks on radar images at different interference-to-signal ratios on the training and test sets. The training data preparation procedure includes elimination of surface marks (terrain background) in the MSTAR radar image set, cropping the images to the size of the objects, and addition of image noise. A comparison of the input images (patterns) providing maximum activation of the convolutional network filters, for a network trained on a set of images without interference and on a set with interference-to-signal ratio q = 0 dB, shows that under the influence of interference the texture diversity of the features in the higher layers becomes much smaller, and the patterns themselves become more homogeneous. This demonstrates the decreased sensitivity of the network to the classification features of a particular set of images under the influence of interference. The influence of interference, quite expectedly, manifests itself in a decrease in the accuracy of classification of object marks on the radar images.
The maximum accuracy in the interference-free condition is 97.91%; at an interference level comparable to the average level of object marks (q = 0 dB) it remains quite high at 86.13%, but with a further increase in the interference-to-signal ratio it decreases rapidly. For example, at q = 5 dB the network operates correctly in about half of the cases (55.01%), and at q = 15 dB and above the accuracy is 13.18%, which practically comes down to simple guessing (for ten-alternative classification, chance accuracy is about 10%). 1. Zhu X., Montazeri S., Ali M., Hua Yu., Wang Yu., Mou L., Shi Yi., Xu F., Bamler R. Deep Learning Meets SAR. arXiv:2006.10027v2 [eess.IV] 5 Jan 2021. 2. Wang H., Chen S., Xu F., Jin Y.-Q. Application of Deep-Learning Algorithms to MSTAR Data. 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2015, pp. 3743-3745. DOI: 10.1109/IGARSS.2015.7326637. 3. Chen S., Wang H., Xu F., Jin Y.-Q. Target Classification Using the Deep Convolutional Networks for SAR Images. IEEE Transactions on Geoscience and Remote Sensing, 2016, vol. 54, no. 8, pp. 4806-4817. DOI: 10.1109/TGRS.2016.2551720. 4. Anas H., Majdoulayne H., Chaimae A., Nabil S.M. Deep Learning for SAR Image Classification. 2020. DOI: 10.1007/978-3-030-29516-5_67. 5. Chen S., Wang H. SAR Target Recognition Based on Deep Learning. 2014 International Conference on Data Science and Advanced Analytics (DSAA), 2014, pp. 541-547. DOI: 10.1109/DSAA.2014.7058124. 6. Coman C., Thaens R. A Deep Learning SAR Target Classification Experiment on MSTAR Dataset. 2018 19th International Radar Symposium (IRS), 2018, pp. 1-6. DOI: 10.23919/IRS.2018.8448048. 7. Furukawa H. Deep Learning for End-to-End Automatic Target Recognition from Synthetic Aperture Radar Imagery. arXiv:1801.08558v1 [cs.CV] 25 Jan 2018. 8. Profeta A., Rodriguez A., Clouse H.S. Convolutional Neural Networks for Synthetic Aperture Radar Classification. Proc.
SPIE 9843, Algorithms for Synthetic Aperture Radar Imagery XXIII, 98430M (14 May 2016). https://doi.org/10.1117/12.2225934. 9. Borodinov A.A., Myasnikov V.V. Comparison of radar image classification algorithms for various preprocessing methods based on MSTAR data. IV International Conference and Youth School "Information Technologies and Nanotechnologies" (ITNT-2018). 10. Zhang C., Li P., Sun G., Guan Y., Xiao B., Cong J. Optimizing FPGA-based accelerator design for deep convolutional neural networks. Proceedings of the 2015 ACM/SIGDA International Symposium on Field Programmable Gate Arrays, 2015. pp. 161-170. DOI: 10.1145/2684746.2689060. 11. Zoev I.V., Markov N.G., Beresnev A.P., Yagunov T.A. FPGA-based hardware implementation of convolution neural networks for images recognition. GraphiCon, 2018. pp. 200-203. 12. Web-site www.sdms.afrl.af.mil/index.php?collection=mstar. 13. Chollet F. "Deep Learning with Python". Saint Petersburg: Piter, 2018. 400 p. Rediscretization of the contours of digital images of objects Okhotnikov S.A., e-mail: OhotnikovSA@volgatech.net Khafizov D.G., e-mail: HafizovDG@volgatech.net Egoshina I.L., e-mail: EgoshinaIL@volgatech.net Khafizov R.G., e-mail: HafizovRG@volgatech.net The Volga State Technological University (VSUT, Volgatech), Russia, Yoshkar-Ola Keywords: rediscretization, equalization, spectrum shape consistency, alignment of the contour dimensions. The article deals with the issues of changing the dimensionality of the contours of digital images of objects. Contour image analysis is one of the effective methods of image processing, which has a number of advantages over stream processing, including the ability to switch from two-dimensional to one-dimensional processing. The use of the normalized scalar product of image contours as a measure of similarity makes it possible to construct processing algorithms invariant to linear transformations of images.
However, the calculation of the normalized scalar product requires the alignment of the dimensions of the analyzed contours. When evaluating resampling methods, continuous complex-valued contours on the complex plane were specified as reference images, followed by their discretization. In the next step the dimension alignment procedure was performed. The normalized scalar product of the contours was used as the similarity measure. Simulation results show that the method of maximum preservation of the contour spectrum, realized by the scheme "interpolation - filtering - thinning", yields a value of the normalized scalar product of the contours close to that of the original continuous contours. In addition, the value of the normalized scalar product of contours when decreasing the dimensionality is higher than when resampling to a higher dimensionality. 1. Gonsales R. Cifrovaya obrabotka izobrazhenij (Digital image processing) / edited by R. Gonzales, R. Woods. – Moscow: Tehnosfera, 2005. – 1072 p. ISBN 5-94836-028-8. 2. Karim S. A. A. Rational Bi-Quartic Spline with Six Parameters for Surface Interpolation with Application in Image Enlargement // IEEE Access, 2020. – Vol. 8. – PP. 115621-115633. 3. Pyavchenko A.O., Petrenko E.V. Analiz i vybor cifrovyh fil'trov dlya perediskretizacii kadrov cifrovogo izobrazheniya dlya rezhima vosproizvedeniya «kartinka-v-kartinke» (Analysis and choice of digital filters for digital image frame resampling for displaying in “picture-in-picture” mode) // IZVESTIYA SFedU. Engineering sciences, 2012. – no. 5 (130). – PP. 185-189. 4. Malakhin V.A., Tereshin A.A., Goncharov S.N., Pisetskiy V.V., Goncharov E.S. Issledovanie i modelirovanie algoritmov vosstanovleniya cifrovogo signala mezhdu otschetami (Research and modeling of algorithms for recovering a digital signal between samples) // Trudy mezhdunarodnogo simpoziuma «Nadezhnost' i kachestvo», 2018. – Vol. 1. – PP. 344-349.
5. Mikheev S. E., Morozov P. D. Primenenie kvaziermitovyh kubicheskih splajnov dlya perediskretizacii zvukovyh fajlov (Application of quasihermitian cubic splines for oversampling of audio files) // Transactions of Karelian Research Centre of Russian Academy of Science. No. 4. Mathematical Modeling and Information Technologies. 2014. Pp. 106-115. 6. Spazhakin M. I. Primenenie mnogokanal'nyh resemplerov farrou v zadachah radiomonitoringa (Application of multichannel Farrow resamplers in radio monitoring tasks) // RadioEngineering, 2018. No. 7. Pp. 29-34. DOI: 10.18127/j00338486-201807-06. 7. Petukhov K. Yu., Shayakhmetov M. R. Perediskretizaciya kak metod bor'by s shumom (Rediscretization as a method of anti-noise) // Vestnik KIGIT, 2012, no. 7 (25), pp. 4-8. 8. Cheng X., et al. Efficient L0 resampling of point sets. Comput. Aided Geom. Des. (2019), 101790. DOI: https://doi.org/10.1016/j.cagd.2019.101790. 9. Vvedenie v konturnyj analiz i ego prilozhenie k obrabotke izobrazhenij i signalov (Contour analysis introduction and its image and signal processing application) / edited by Ya.A. Furman. Moscow: Fizmatlit, 2002. 592 p. 10. Sergienko A.B. Cifrovaya obrabotka signalov: ucheb. posobie (Digital Signal Processing: Tutorial). 3rd ed. SPb.: BHV-Peterburg, 2011. 11. Osnovy teorii obrabotki nepreryvnyh konturov izobrazhenij (Fundamentals of the theory of processing continuous contours of images: monograph) / edited by R.G. Khafizov. Yoshkar-Ola: VSUT, Volga Tech, 2015, 171 p. ISBN 978-5-8158-1606-0. 12. Khafizov R.G., Okhotnikov S.A. Raspoznavanie nepreryvnyh kompleksnoznachnyh konturov izobrazhenij (Recognition of continuous complex-valued image contours) // Izvestiya vysshikh uchebnykh zavedeniy. Priborostroenie, 2012, no. 5, pp. 3-9. 13. Khafizov R.G., Okhotnikov S.A.
Diskretizaciya nepreryvnyh konturov izobrazhenij, zadannyh v kompleksnoznachnom vide (Discretization of continuous contours of images, defined in a complex-valued form) // Computer Optics, 2018, vol. 36, no. 2, pp. 274-278. Corresponding objects search algorithm in Earth remote sensing images aware of low-informative areas A.E. Kuznetsov, e-mail: foton@rsreu.ru A.S. Ryzhikov, e-mail: foton@rsreu.ru The Ryazan State Radio Engineering University (RSREU), Russia, Ryazan Keywords: corresponding objects, Earth remote sensing images, low-informative areas, GPU, CPU. Increasing the performance of the algorithm for corresponding-object search on analyzed and reference images by preliminary rejection of low-informative areas is investigated. Computationally simple methods for revealing textured homogeneous fragments in an image are considered. The results of the experimental use of the modified search algorithm for corresponding objects are presented. Detection of low-informative areas is carried out at the preliminary stage of the algorithm; the search for corresponding objects is then carried out only in informative areas. In terms of the ratio of calculation speed to the proportion of excessively rejected corresponding objects, the algorithm for constructing a mask of low-informative areas based on a difference of Gaussians proved to be optimal. The sample size of successfully identified corresponding objects is in most cases sufficient to solve the problem; it is comparable with the standard algorithm but provides significantly better performance. Unfortunately, all the considered algorithms for determining low-informative areas showed unsatisfactory results in the problem of detecting cloud objects. In this regard, it is recommended to use specialized solutions for detecting cloud objects. The issue of fast rejection of cloud objects is planned to be considered in future work. 1. Kuznetsov A.E., Poshehonov V.I., Ryzhikov A.S.
Tehnologija avtomaticheskogo kontrolja tochnosti geoprivjazki sputnikovyh izobrazhenij po opornym snimkam ot KA «Landsat-8» // Cifrovaya obrabotka signalov, 2015, no. 3, pp. 37-42. 2. Asmus V.V., Bunchev A.A., Pjatkin V.P. Klasternyj analiz v obrabotke dannyh distancionnogo zondirovanija Zemli // Interjekspo GEO-Sibir', 2015, pp. 71-78. 3. Vetrov A.A., Kuznetsov A.E. Avtomaticheskaja segmentacija oblachnyh obektov na snimkah zemnoj poverhnosti vysokogo prostranstvennogo razreshenija // Issledovanija Zemli iz kosmosa, 2014. 4. Astafurov V.G., Skorohodov A.V. Segmentacija sputnikovyh snimkov oblachnosti po teksturnym priznakam na osnove nejrosetevyh tehnologij // Issledovanie Zemli iz kosmosa, 2011, no. 6, pp. 10-20. 5. Gonzalez R., Woods R. Cifrovaja obrabotka izobrazhenij. Tehnosfera, 2000, 1072 s. 6. Bay H., Tuytelaars T., Van Gool L. SURF: Speeded Up Robust Features / Lecture Notes in Computer Science, 2006, vol. 3951. Springer, Berlin, Heidelberg. 7. Chandelier L., Coeurdeve L., Bosc S., Fave P., Gachet R., Orsoni A., Tilak T., Barot A. A worldwide 3d GCP database inherited from 20 years of massive multi-satellite observations // ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., V-2-2020, pp. 15-23. A priori estimation of accuracy of TDI CCD images stitching based on spacecraft measurement equipment data O.A. Presnyakov, e-mail: foton@rsreu.ru The Ryazan State Radio Engineering University (RSREU), Russia, Ryazan Keywords: remote sensing, stitching accuracy, staggered TDI CCD. This article presents a method for a priori estimation of geometric accuracy in the task of stitching satellite images obtained by staggered TDI CCDs. The method can be applied in the design of spacecraft cameras to check the feasibility of automatic image stitching based on spacecraft measurement equipment data.
The method uses the following parameters: Earth parameters; apogee and perigee of the satellite orbit; look angle; spacecraft measurement equipment accuracy (the number, precision and placement of star trackers, precision of angular velocity sensors); interior orientation parameters (focus, pixel size, distance between rows of even and odd CCDs, photozone width, CCD position estimation accuracy); accuracy of the DEM; satellite vibration amplitude; and geometric processing accuracy. The error in determining the satellite's spatial position is assumed to be too small to affect the stitching. The displacement of images from neighboring CCDs (scans) has a complex form. It is caused by the fact that one ground point is imaged by neighboring matrices with a certain time interval; during this time, the orientation of the spacecraft changes, the Earth rotates, and the spacecraft moves in orbit. The movement along the orbit gives rise, among other things, to parallactic distortions between scans. To estimate the total stitching error, the orbital velocity of the satellite and the interval between imaging one point by neighboring CCDs are first calculated. Using this interval, the influence of the angular velocity measurement random error on the stitching is determined. Then, the total accuracy of measuring the spacecraft orientation angles using star trackers is found, taking into account the relative position of the star trackers. Using the orientation measurement error, the "drift" error of the angular velocity sensors and the stitching error caused by it are found. Next, the stitching errors due to DEM error, CCD placement error and geometric processing error are calculated. The stitching error due to vibrations is then evaluated. Stitching error due to vibrations of a known frequency and amplitude, which may not be foreseen in advance, is considered separately; such distortions were detected in images from the “Aist-2D” small spacecraft using spectral analysis.
Finally, the errors from all considered factors are summarized. The method has been tested on real images from the “Aist-2D” small spacecraft. The test results confirmed the adequacy of the proposed model. 1. Baklanov A.I. Sistemy nablyudeniya i monitoringa: uchebnoe posobie [Observation and Monitoring Systems]. Moscow: BINOM. Laboratoriya znanij [BINOM. Knowledge Lab], 2009. 234 p. 2. Kuznetcov A.E., Presniakov O.A., Myatov G.N. Stitching of remote sensing images from staggered TDI CCD. “Digital Signal Processing”. 2015. No. 3. P. 29-36. 3. Tang X., Hu F., Wang M., Pan J., Jin S., Lu G. Inner FoV Stitching of Spaceborne TDI CCD Images Based on Sensor Geometry and Projection Plane in Object Space. Remote Sens. 2014, 6, 6386-6406. 4. Kirilin A.N., Akhmetov, Shakhmatov E.V. et al. Technology small AIST-2D spacecraft. Samara: Publishing house SamNZ RAN, 2017. 324 p. 5. Kuznetcov A.E., Poshekhonov V.I. Structural and parametric synthesis of cartographic small spacecraft components. RSREU journal. 2019. No. 4 (issue 69). 6. Akhmetov R.N., Eremeev V.V., Kuznetcov A.E., Myatov G.N., Poshekhonov V.I., Stratilatov N.R. Organization of high-precision geolocation of Earth surface images from the Spacecraft “Resurs-P”. Issledovanie Zemli iz Kosmosa. 2017, No. 1. 7. Draper N.R., Smith H. Applied regression analysis. 2nd edition. Vol. 1. [Russian translation] Moscow: Finance and statistics, 1986. 366 p. 8. Eremeev V.V. (ed.) Sovremennye tekhnologii obrabotki dannyh distancionnogo zondirovaniya Zemli [Modern technologies for Earth remote sensing data processing]. Moscow: Fizmatlit, 2015. 9. Igolkin A.A., Safin A.I., Filipov A.G. Modal analysis of the dynamic mockup of “AIST-2D” small spacecraft. Vestnik Samarskogo universiteta. Aerokosmicheskaya tekhnika, tekhnologii i mashinostroenie [Bulletin of the Samara University. Aerospace engineering, technology and mechanical engineering], 2018, vol. 17, No. 2. P. 100-108. 10. Shortridge A., Messina J.
Spatial structure and landscape associations of SRTM error. Remote Sens. Environ., 2011, vol. 115, no. 6, pp. 1576-1587. doi: 10.1016/j.rse.2011.02.017. Organization of remote sensing imaging systems geometric calibration A.E. Kuznetsov, e-mail: foton@rsreu.ru V.I. Poshekhonov, e-mail: foton@rsreu.ru The Ryazan State Radio Engineering University (RSREU), Russia, Ryazan Keywords: remote sensing camera, star tracker, ground control points, exterior and interior orientation parameters. The technological scheme of the in-flight geometric calibration of Earth remote sensing systems is considered. The technology includes two stages of calibration activities. The first stage is performed during the satellite flight tests and is associated with the refinement of the interior orientation parameters and the mounting angles of the camera. Formulas are given to refine the relative orientation of star trackers based on a set of their measurements. The task of the second stage is to monitor the mounting angles of the camera. It is shown that the recalibration process should be performed if the misalignment of star tracker measurements of the camera roll, pitch and yaw orientation angles exceeds the permissible level. A model justifying the number of calibration routes required during the calibration process is given. A conclusion is given on the practical use of the considered technological process and the ways of its improvement. 1. Ahmetov R.N., Eremeev V.V., Kuznetsov A.E., Myatov G.N., Poshekhonov V.I., Stratilatov N.R. Vysokotochnaya geodezicheskaya privyazka izobrazhenij zemnoj poverhnosti ot KA «Resurs-P» // Issledovanie Zemli iz kosmosa. 2017. No. 1. pp. 44-53. 2. Eremeev V.V., Zinina I.I., Kuznetsov A.E., Myatov G.N., Poshekhonov V.I., Filatov A.V., Yudakov A.A. Tekhnologiya potokovoj obrabotki dannyh DZZ vysokogo razresheniya // Sovremennye problemy distancionnogo zondirovaniya Zemli iz kosmosa. 2021. Vol. 18. No. 1. pp. 11-18. 3.
Ahmetov R.N., Zinina I.I., Yudakov A.A., Eremeev V.V., Kuznetsov A.E., Poshekhonov V.I., Presnyakov O.A., Svetelkin P.N. Tochnostnye harakteristiki vyhodnoj produkcii vysokogo razresheniya KA «Resurs-P» // Sovremennye problemy distancionnogo zondirovaniya Zemli iz kosmosa. 2020. Vol. 17. No. 3. pp. 41-47. 4. Greslou D., de Lussy F., Delvit J.M., Dechoz C., Amberg V. Pleiades-HR innovative techniques for Geometric Image Quality Commissioning // ISPRS Melbourne. 2012.
Handling Missing Data with Code

A missing data solution has two goals:
• minimal loss of statistical power
• unbiased estimates and standard errors.

Which missing data solution you take depends in part on the distribution of missing data and the analyses you'll be doing. The default “solution” is listwise deletion, which means you drop any case with a missing value. Listwise deletion and complete case analysis are two names for the same concept. For listwise deletion not to cause bias, all missing data must be missing completely at random.

2 Recommended Solutions for Missing Data: Multiple Imputation and Maximum Likelihood

Maximum likelihood works very well for missing data and requires no extra work, but can only be used in certain models. If you're running a linear or log-linear model (like a regression or linear mixed model), maximum likelihood techniques give the same great, unbiased, uninflated, full-power results that multiple imputation does. Mean substitution is about the worst thing you can do for missing data. A data analyst worth his or her salt will report which missing data technique was used and why.

Handle missing data with SQL, R and Python

You can leave the data as is and go for a model which can handle missing data (such as XGBoost or Random Forest). For some machine learning algorithms, such as Linear Discriminant Analysis (LDA), having missing values in a dataset can cause errors. In SQL, NULL represents a missing or unknown value. You can check for NULL values using the expression IS NULL.
For example, to count the number of missing birth dates in the people table:

SELECT COUNT(*) FROM people WHERE birthdate IS NULL;

There is an R package dealing with missing data named Amelia (yes, named after the famous missing aviator):

install.packages("Amelia", repos="http://r.iq.harvard.edu", type = "source")

In Python with pandas, placeholder zeros can be marked as missing (NaN):

from pandas import read_csv
import numpy
dataset = read_csv('pima-indians-diabetes.csv', header=None)
# mark zero values as missing or NaN
dataset[[1,2,3,4,5]] = dataset[[1,2,3,4,5]].replace(0, numpy.NaN)
# print the first 20 rows of data
print(dataset.head(20))
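To make the contrast between the techniques above concrete, here is a minimal, self-contained sketch comparing listwise deletion with mean substitution in pandas. The data frame is invented for illustration (it is not the Pima dataset referenced above):

```python
import numpy as np
import pandas as pd

# Toy data invented for illustration: two columns, one missing value each.
df = pd.DataFrame({
    "glucose": [148.0, 85.0, np.nan, 89.0, 137.0],
    "bmi":     [33.6, 26.6, 23.3, np.nan, 43.1],
})

# Listwise deletion (complete case analysis): drop any row with a missing
# value. This costs statistical power -- here we lose 2 of 5 rows.
complete_cases = df.dropna()

# Mean substitution: fills the gaps but shrinks the variance and biases
# standard errors, which is why it is discouraged above.
mean_filled = df.fillna(df.mean())

print(len(df), len(complete_cases), int(mean_filled.isna().sum().sum()))
```

Running this prints `5 3 0`: five original rows, three complete cases after listwise deletion, and zero remaining NaNs after mean substitution.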
An Alternating Minimization Method for Matrix Completion Problem In this paper, we focus on solving the matrix completion problem arising from applications in the fields of information theory, statistics, engineering, etc. However, the matrix completion problem involves nonconvex rank constraints which make this type of problem difficult to handle. Traditional approaches use a nuclear norm surrogate to replace the rank constraints. The relaxed model is convex, and hence can be solved by a number of existing algorithms. However, these algorithms need to compute the costly singular value decomposition (SVD), which makes them impractical for handling large-scale problems. We retain the rank constraints in the optimization model, and propose an alternating minimization method for solving it. The resulting algorithm does not need SVD computation, and shows satisfactory speed performance. As a nonconvex algorithm, the new algorithm has better theoretical properties than competing algorithms. Tech report 1801, Nanjing University of Finance & Economics, 01/2018
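The abstract gives no pseudocode, so here is a rough illustration of the general idea it describes: a hypothetical NumPy sketch of plain alternating least squares on a fixed-rank factorization (my own toy example, not the authors' exact algorithm). With one factor fixed, each row of the other factor is a small least-squares problem over that row's observed entries, so no SVD is ever computed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ground truth: a rank-2 matrix, with roughly half the entries observed.
m, n, r = 20, 15, 2
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
mask = rng.random((m, n)) < 0.5  # True where an entry is observed

# Alternating minimization: with V fixed, each row of U solves a small
# least-squares problem over that row's observed entries, and symmetrically
# for V. No singular value decomposition is required at any step.
U = rng.standard_normal((m, r))
V = rng.standard_normal((n, r))
for _ in range(50):
    for i in range(m):
        obs = mask[i]
        U[i] = np.linalg.lstsq(V[obs], M[i, obs], rcond=None)[0]
    for j in range(n):
        obs = mask[:, j]
        V[j] = np.linalg.lstsq(U[obs], M[obs, j], rcond=None)[0]

rel_err = np.linalg.norm(U @ V.T - M) / np.linalg.norm(M)
print(f"relative reconstruction error: {rel_err:.2e}")
```

On this noiseless toy problem the factorization typically recovers the full matrix to high accuracy; practical variants add regularization and a convergence-based stopping criterion rather than a fixed iteration count.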