Electronic Structure Calculations with the Spin Orbit Effect of the Low-Lying Electronic States of the YbBr Molecule
This work presents an electronic structure study of the ytterbium monobromide (YbBr) molecule employing multireference configuration interaction (MRCI) calculations with the Davidson correction (+Q). Adiabatic potential energy curves (PECs), dipole moment curves, and spectroscopic constants (such as R[e], ω[e], B[e], D[e], T[e], and μ[e]) of the low-lying bound electronic states are determined. The ionic character of the YbBr molecule at the equilibrium position is also discussed. With spin-orbit effects included, 30 low-lying states in the Ω = 1/2, 3/2, 5/2, 7/2 representation are probed. The electronic transition dipole moments between the investigated states are calculated and then used to determine transition coefficients, for example the Einstein coefficient of spontaneous emission A[ij] and the emission oscillator strength f[ij]. Vibrational parameters such as E[ν], B[ν], D[ν], R[min], and R[max] of the low vibrational levels of different bound states in both the Λ and Ω representations are also calculated. The calculated Franck-Condon factors are found to be perfectly diagonal between three pairs of low-lying excited states. Vibrational Einstein coefficients and radiative lifetimes are also computed for the lowest vibrational transitions. Most of the data reported in this work are presented for the first time in the literature, and very good agreement is obtained with constants previously reported by experimental methods.
Matthew W. Choptuik
The Story
Following the completion of his doctoral studies, Choptuik worked first as a Research Associate at Cornell University (1986-88), and then as a Post-Doctoral Fellow at the Canadian Institute for
Theoretical Astrophysics, University of Toronto (1988-91). In 1991, he moved to the University of Texas at Austin where he spent eight years before returning to his alma mater, UBC. He is now
building Canada's first research effort in numerical relativity.
Choptuik works on extreme gravitational phenomena such as the formation of black holes, collisions between black holes or neutron stars, and supernovae explosions. He is a world leader in numerical
relativity, which is a computer simulation-based branch of theoretical gravitational physics.
Numerical relativity offers an alternative approach to the mathematical complexities of Einstein's Theory of General Relativity. Traditionally, studies of extreme gravitational phenomena were complicated by the mathematical difficulties of working with "infinity", the theoretically infinite gravitational field at the singularity inside a black hole. Choptuik has developed new ways of looking at this problem.
In addition, his "Choptuik Effect" demonstrated mathematically some startling similarities between common phase transitions, such as the freezing of water into ice, and the formation of a black hole.
Choptuik continues to work at UBC where he is leading Canada's first research group in numerical relativity. In 1999, Choptuik was appointed as a Fellow of the Canadian Institute for Advanced
Research (CIAR)'s Cosmology and Gravity Program at UBC. This appointment, along with funds from the Canada Foundation for Innovation, has allowed his research group to design and assemble a computer
system large enough to provide the computation power needed for its work.
Choptuik is a member of the Editorial Board of Classical and Quantum Gravity, and an elected member-at-large of the Executive Council of the Topical Group on Gravitation of the American Physical Society.
Sources: Royal Society of Canada; Dr. Choptuik's website; UBC Cosmology and Gravity Program.
Career ideas:
• research scientist, physics
• research scientist, electronics
• research scientist, communications
• research scientist, aerospace
• research scientist, remote sensing
• nuclear physicist
• optics physicist
• plasma physicist
• solid state physicist
• astrophysicist
• cosmologist
• experimental physicist
The Person
Professor of Physics & Astronomy
University of British Columbia
□ BSc Brandon University, Manitoba, 1980
□ MSc (physics) University of British Columbia, 1982
□ PhD (physics) University of British Columbia, 1986
□ Rutherford Memorial Medal in Physics (Royal Society of Canada), 2001
□ Basilis C. Xanthopoulos International Award in General Relativity and Cosmology, 1997
Math Colloquia - Mechanization of proof: from 4-Color theorem to compiler verification
I will give a broad introduction to how to mechanize mathematics (or proof), focusing mainly on the proof assistant Coq. Mechanizing mathematics consists of (1) defining a set theory, (2) developing a tool that allows writing definitions and proofs in the set theory, and (3) developing an independent proof checker that checks whether a given proof is correct (i.e., whether it is a valid combination of axioms and inference rules of the set theory). Such a system is called a proof assistant, and Coq is one of the most popular ones.
In the first half of the talk, I will introduce applications of proof assistants, ranging from the mechanized proof of the 4-color theorem to the verification of an operating system. I will also talk about a project that I lead, which uses Coq to provide a formally guaranteed way to detect all bugs in the compilation results of the mainstream C compiler LLVM.
In the second half, I will discuss the set theory used in Coq, called the Calculus of (Inductive and Coinductive) Constructions. It gives a very interesting view of set theory: for instance, in the calculus of constructions, three apparently different notions coincide: (1) sets and elements, (2) propositions and proofs, and (3) types and programs.
If time permits, I will also briefly discuss how von Neumann universes are handled in Coq and how Coq is used in homotopy type theory, led by Fields medalist Vladimir Voevodsky.
=LINEST formula
Calculates the parameters of the best-fit line for a set of data points.
Syntax: =LINEST(known_y's, [known_x's], [const], [stats])
• known_y's - required, the set of y-values known in the relationship y = mx + b
• known_x's - [OPTIONAL] the set of x-values known in the relationship y = mx + b
• const - [OPTIONAL] a logical value that specifies whether to force the constant b to 0
• stats - [OPTIONAL] a logical value that tells LINEST if it should return additional regression statistics
• =LINEST({1.8;5.3;8.2;12;13.5},{1;3;5;7;8})
This example takes two sets of data and will return an array with two columns and one row containing the slope and the y-intercept of the regression line. The example will return an array of
{1.6726,0.1317}. This means that the linear equation for the given data points is y = 1.6726x + 0.1317.
• =LINEST({1.8;5.3;8.2;12;13.5},{1;3;5;7;8},TRUE,TRUE)
You can also use the LINEST function to return additional statistics by using additional parameters. The example will return an array of five rows and two columns equal to {1.6726,0.1317;
0.0371,0.2017; 0.9985,0.2124; 2034.443,3; 91.7567,0.1353}.
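For readers working outside Excel, the same fit can be reproduced in Python. This is a minimal sketch (not part of the LINEST documentation) using NumPy's least-squares polynomial fit:

```python
import numpy as np

# Same data points as the LINEST examples above
x = np.array([1, 3, 5, 7, 8], dtype=float)
y = np.array([1.8, 5.3, 8.2, 12, 13.5])

# A degree-1 polynomial fit returns [slope, intercept]
slope, intercept = np.polyfit(x, y, 1)
print(slope, intercept)  # ~1.6726, ~0.1317

# R-squared, one of the extra statistics LINEST reports when stats=TRUE
residuals = y - (slope * x + intercept)
r_squared = 1 - residuals @ residuals / np.sum((y - y.mean()) ** 2)
print(r_squared)  # ~0.9985
```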
Key points:
• The LINEST function uses the "least squares" method to calculate the statistics of a best-fit straight line through given y- and x-values.
• LINEST can return up to 10 separate statistics, including the slope, intercept, standard errors, and more.
• By default, LINEST returns only the slope and intercept; the additional statistics are returned when stats is set to TRUE.
Frequently Asked Questions
What is the LINEST function in Excel, and what is its purpose?
The LINEST function in Excel is used to perform a linear regression analysis on a set of data points and return information about the calculated regression line. It provides statistical information
about the line of best fit, including the slope, intercept, and other statistical measures such as the coefficient of determination (R-squared).
Can the LINEST function handle multiple independent variables (multivariate regression)?
Yes, the LINEST function can handle multiple independent variables, making it suitable for multivariate regression. To perform multivariate regression, you need to provide a range of known_x's with
multiple columns, each representing an independent variable.
How do I interpret the results of the LINEST function when the "stats" argument is set to TRUE?
When the "stats" argument is set to TRUE, LINEST returns an array of additional regression statistics.
Can the LINEST function be used for polynomial regression?
Yes, the LINEST function can be used for polynomial regression by creating additional columns of data that represent higher-order terms of the independent variable.
If a force F is applied on a body and it moves with velocity v, what is the power? (Class 11 Physics, JEE Main)
Hint: Power is the rate at which mechanical or electrical energy is supplied to change or influence the motion of an object. Here, we have to express power in terms of force and velocity to get the formula.
Complete step by step solution:
Power has many definitions and can be expressed in various forms. From these definitions, we can find the required expression.
Here, the question states a force $F$ being applied on a body to make it move with velocity $v$ . This means that the force here is used to do mechanical work of moving the body. Hence, the power
here will be mechanical power. Power can be mechanical power or electrical power.
Mechanical power is defined as the rate of work done by an object. Also defined as transfer of energy or conversion of energy per unit time.
$Power = \dfrac{{Work}}{{time}}$
Also expressed as
$Power = \dfrac{{\Delta Energy}}{{\Delta Time}}$
The SI unit of power is the $Watt$, i.e. $Joule/s$, also written as $kg\,{m^2}{s^{ - 3}}$.
Whenever we apply force on an object, the object gains some energy. The energy can be in the form of potential energy or kinetic energy. So, we can express energy in terms of force.
Mathematically, the energy transferred (the work done) is equal to the product of the applied force and the displacement along the direction of the force.
$ \Rightarrow E = F \cdot d$
$ \Rightarrow E = Fd\cos \theta $
Where $E$ is the energy
$F$ is force
$d$ is displacement
$\theta $ is the angle between the applied force and the displacement
We now substitute this equation of energy in the equation of power and get
$P = \dfrac{{F \cdot d}}{t}$
But we also know that displacement of the object per unit time is known as velocity of the object.
$v = \dfrac{d}{t}$
Where $v$ is velocity of the object
$d$ is the displacement
$t$ is time
Substituting this, we get:
$Power = Force \times velocity$
$ \Rightarrow P = F \times v$
Therefore, option $(A), F \times v$ is the correct option.
Note: Power is equal to the dot product of the force and velocity. In this question, $F \times v$ simply means the product of force and velocity and does not mean the cross product between them. Power can be positive or negative as the energy increases or decreases per unit time, but energy can never be negative, as an object always has some energy; it can only increase or decrease while remaining positive.
Electric power is expressed as $Power = voltage \times current$.
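As an illustrative addition (not part of the original solution), the dot-product definition of mechanical power can be checked numerically in Python; the force and velocity values below are hypothetical:

```python
import numpy as np

# Hypothetical force (N) and velocity (m/s) vectors
F = np.array([3.0, 4.0, 0.0])
v = np.array([2.0, 1.0, 0.0])

# Mechanical power is the dot product P = F . v
P = np.dot(F, v)
print(P)  # 10.0 (watts)

# When force and velocity are parallel, this reduces to P = F * v
F_mag, v_mag = 5.0, 2.0
print(F_mag * v_mag)  # 10.0 (watts)
```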
WHAT IS TRICKY MATH QUIZ 8÷4(1+ 3) = ? ANSWER
8÷4(1+ 3) = ?
When we were in primary school, there was a sentence we used to recite before solving a math quiz like the one above. Solving math quizzes and puzzles can refresh your memory. When I first set this quiz and posted it on Facebook, many people failed it. In fact, about 90% of those who attempted it gave wrong answers.
Here is the Quiz: WHAT IS THE ANSWER TO THE TRICKY MATH QUIZ
8÷4(1+ 3) = ?
The correct answer would be 8. I am going to solve it using PEMDAS.
PEMDAS is a mathematical acronym which stands for Parentheses, Exponents, Multiplication, Division, Addition and Subtraction.
Get a little insight about PEMDAS below.
P = Parentheses (), E = Exponents, M = Multiplication, D = Division, A = Addition, S = Subtraction
With PEMDAS, to solve algebraic expressions, it is further grouped into four important stages in order of merit.
First: Parenthesis () or Bracket [].
Second: Exponents
Third: multiplication and Division.
This stage is very important to note, because this is where the left-to-right rule, if not applied, may give a different answer (see the worked example below).
Fourth: Addition and Subtraction.
So, 8÷4(1+3) = ?
The first thing is to solve what is in the parentheses: (1+3) = 4.
Now, I will rearrange the quiz and simplify, working from left to right:
=> 8÷4×4
=> 2×4
=> 8
Therefore the final answer to the TRICKY MATH QUIZ
8÷4(1+ 3) = ? is 8.
Solving with PEMDAS will give you the same answer as someone who solved it applying BODMAS.
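As a quick check (an addition, not from the original post), any programming language that implements the same left-to-right rule for operators of equal precedence gives the same result. For example, in Python:

```python
# Note: the implicit multiplication in 8÷4(1+3) must be written
# explicitly, since 4(1+3) is not valid Python syntax.
# Division and multiplication have equal precedence and are
# evaluated left to right, just as in PEMDAS/BODMAS:
result = 8 / 4 * (1 + 3)   # -> (8 / 4) * 4 -> 2 * 4
print(result)              # 8.0
```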
The primary goal of 3D-XplorMath is to allow users with little or no programming experience to see, with minimal effort, concrete visual representations of many different categories of mathematical
objects and processes. To accomplish this, objects from each category are described internally by well-designed, parameterized data structures, and for each category a variety of rendering methods is
provided to permit visualization of objects of the category in ways that are appropriate for various purposes. Each of the hundreds of built-in objects known to the program is assigned carefully
chosen defaults so that, when the object is selected from a menu, the program can immediately construct a standard example of the object and render it in an optimized view. The user may then use
various menus and dialogs to alter the parameters describing the shape and coloration of the object, change the viewpoint from which it is seen, select different rendering methods, etc. Moreover, as
its name suggests, the program can display objects such as surfaces, space curves and polyhedra using various stereo techniques. In addition to the many built-in objects known to the program, a user
can create "user-defined" objects by entering formulas using standard mathematical notation. Visualizations created by the program can be saved in jpeg and other graphic formats and the data defining
3D objects can be exported to other 3D programs (e.g., Bryce or POV-Ray) in formats such as .obj and .inc. Both built-in and user-defined objects can depend on parameters, and the program can create
morphing animations by moving along a path in the parameter space, and these animations can then be saved as QuickTime movies. Each of the built-in objects has associated to it a so-called ATO (About
This Object) file that provides documentation for the object. An early and more developed version of the program, written in Object Pascal, runs under the Macintosh Operating System and a Java-based
cross-platform version is now also available.
Axiom is a general purpose Computer Algebra system. It is useful for research and development of mathematical algorithms. It defines a strongly typed, mathematically correct type hierarchy. It has a
programming language and a built-in compiler.
ePiX, a collection of batch-oriented utilities for *nix, creates mathematically accurate line figures, plots, and movies using easy-to-learn syntax. LaTeX and dvips comprise the typographical
rendering engine, while ImageMagick is used to create bitmapped images and animations. The user interface resembles that of LaTeX itself: you prepare a short scene description in a text editor, then "compile" the input file into a picture. Default output formats are eepic (a plain-text enhancement to the LaTeX picture environment), eps, pdf, png, and mng.
GAP is a system for computational discrete algebra, with particular emphasis on Computational Group Theory. GAP provides a programming language, a library of thousands of functions implementing
algebraic algorithms written in the GAP language as well as large data libraries of algebraic objects. GAP is used in research and teaching for studying groups and their representations, rings,
vector spaces, algebras, combinatorial structures, and more. GAP is developed by international cooperation. The system, including source, is distributed freely under the terms of the GNU General
Public License. You can study and easily modify or extend GAP for your special use. The current version is GAP 4, the older version GAP 3 is still available.
KASH/KANT is a computer algebra system for sophisticated computations in algebraic number fields and global function fields. It has been developed under the project leadership of Prof. Dr. M. Pohst
at Technische Universität Berlin.
LiDIA is a C++ library for computational number theory which provides a collection of highly optimized implementations of various multiprecision data types and time-intensive algorithms.
Maple is an environment for scientific and engineering problem-solving, mathematical exploration, data visualization and technical authoring.
Mathematica seamlessly integrates a numeric and symbolic computational engine, graphics system, programming language, documentation system, and advanced connectivity to other applications.
MATLAB is a high-level language and interactive environment that enables you to perform computationally intensive tasks faster than with traditional programming languages such as C, C++, and Fortran.
MuPAD is a mathematical expert system for doing symbolic and exact algebraic computations as well as numerical calculations with almost arbitrary accuracy. For example, the number of significant
digits can be chosen freely. Apart from a vast variety of mathematical libraries the system provides tools for high quality visualization of 2- and 3-dimensional objects. On Microsoft Windows, Apple
Macintosh and Linux systems, MuPAD offers a flexible notebook concept for creating mathematical documents combining texts, graphics, formulas, computations and mathematical visualizations and
animations. On Microsoft Windows MuPAD further supports the technologies OLE, ActiveX Automation, DCOM, RTF and HTML. Thus it offers a natural integration in Office applications like Word or
PowerPoint as well as others.
PARI/GP is a widely used computer algebra system designed for fast computations in number theory (factorizations, algebraic number theory, elliptic curves...), but also contains a large number of
other useful functions to compute with mathematical entities such as matrices, polynomials, power series, algebraic numbers, etc., and a lot of transcendental functions. PARI is also available as a C
library to allow for faster computations.
PLTMG is a package for solving elliptic partial differential equations in general regions of the plane. It is based on continuous piecewise linear triangular finite elements, and features adaptive
local mesh refinement, multigraph iteration, and pseudo-arclength continuation options for parameter dependencies. It also provides options for solving several classes of optimal control and obstacle
problems. The package includes an initial mesh generator and several graphics packages. Support for the Bank-Holst parallel adaptive meshing strategy is also provided. PLTMG is provided as Fortran
(and a little C) source code, in both single and double precision versions. The code has interfaces to X-Windows, MPI, and Michael Holst's OpenGL image viewer SG. The X-Windows, MPI, and SG
interfaces require libraries that are NOT provided as part of the PLTMG package.
The rbMIT © MIT software package implements in Matlab® all the general reduced basis algorithms. The rbMIT © MIT software package is intended to serve both (as Matlab® source) "Developers" —
numerical analysts and computational tool-builders — who wish to further develop the methodology, and (as Matlab® "executables") "Users" — computational engineers and educators — who wish to rapidly
apply the methodology to new applications. The rbMIT software package was awarded with the Springer Computational Science and Engineering Prize in 2009.
Risa/Asir is a general computer algebra system and also a tool for various computation in mathematics and engineering. The development of Risa/Asir started in 1989 at FUJITSU. Binaries have been
freely available since 1994 and now the source code is also free. Currently Kobe distribution is the most active branch of its development. We characterize Risa/Asir as follows: (1) An environment
for large scale and efficient polynomial computation. (2) A platform for parallel and distributed computation based on OpenXM protocols.
SAGE is a framework for number theory, algebra, and geometry computation. It is open source and freely available under the terms of the GNU General Public License (GPL). SAGE is a Python library with
a customized interpreter. It is written in Python, C++, and C (via Pyrex). Python (http://www.python.org) is an open source object-oriented interpreted language, with a large number of libraries,
e.g., for numerical analysis, which are available to users of SAGE. Python can also be accessed in library mode from C/C++ programs. SAGE provides an interface to several important open source
libraries, including Cremona’s MWRANK library for computing with elliptic curves, the PARI library (pari.math.u-bordeaux.fr) for number theory, Shoup’s number theory library NTL (http://www.shoup.net
/ntl/), SINGULAR (http://www.singular.uni-kl.de) for commutative algebra, GAP (http://www.gap-system.org) for group theory and combinatorics, and maxima (http://maxima.sourceforge.net) for symbolic
computation and calculus.
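As a brief illustration (an addition, not part of the catalog entry), a SAGE session is essentially Python with mathematical objects preloaded; the calls below are standard Sage functions, shown here as a sketch:

```python
# Run inside a Sage session, where sage.all is available
from sage.all import factor, EllipticCurve

# Integer factorization, delegated to optimized backend libraries
print(factor(2**61 - 1))   # 2305843009213693951, a Mersenne prime

# Elliptic curve arithmetic via Cremona's mwrank library
E = EllipticCurve([0, 0, 1, -1, 0])   # y^2 + y = x^3 - x
print(E.rank())            # 1
```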
An online Java applet for calculation of singular algebraic surfaces.
SINGULAR is a Computer Algebra system for polynomial computations in commutative algebra, algebraic geometry, and singularity theory. SINGULAR's main computational objects are ideals and modules over
a large variety of baserings. The baserings are polynomial rings over a field (e.g., finite fields, the rationals, floats, algebraic extensions, transcendental extensions), or localizations thereof,
or quotient rings with respect to an ideal. SINGULAR features fast and general implementations for computing Groebner and standard bases, including e.g. Buchberger's algorithm and Mora's Tangent Cone
algorithm. Furthermore, it provides polynomial factorizations, resultant, characteristic set and gcd computations, syzygy and free-resolution computations, and many more related functionalities.
Based on an easy-to-use interactive shell and a C-like programming language, SINGULAR's internal functionality is augmented and user-extendible by libraries written in the SINGULAR programming
language. A general and efficient implementation of communication links allows SINGULAR to make its functionality available to other programs.
Broadwater Church of England Primary School
Maths Calculation Policy
Written Methods of Calculation Policy March 2024
The National Curriculum for mathematics aims to ensure that all pupils:
• become fluent in the fundamentals of mathematics, including through varied and frequent practice with increasingly complex problems over time, so that pupils have conceptual understanding and are
able to recall and apply their knowledge rapidly and accurately to problems
• reason mathematically by following a line of enquiry, conjecturing relationships and generalisations, and developing an argument, justification or proof using mathematical language
• can solve problems by applying their mathematics to a variety of routine and non-routine problems with increasing sophistication, including breaking down problems into a series of simpler steps
and persevering in seeking solutions.
This written calculations policy sets out how we teach the progression of children’s use of written methods of calculation.
Calculation strands of programmes of study for each year group as specified in ‘The national curriculum in England’, framework document (updated July 2014)
Maths curriculum evening for parents
We had well attended parent information evenings outlining the strategies we use in school. The following information was shared:
• KS1 - Year 1 and 2 Maths Meeting Information
• KS2 - Year 3 and 4 Maths Meeting Information
• KS2 - Year 5 and 6 Maths Meeting Information
Below is the full calculation policy.
Calculating (Statutory Requirements)
Year 1
• Addition and Subtraction
□ read, write and interpret mathematical statements involving addition (+), subtraction (-) and equals (=) signs
□ represent and use number bonds and related subtraction facts within 20
□ add and subtract one-digit and two-digit numbers to 20, including zero
□ solve one-step problems that involve addition and subtraction, using concrete objects and pictorial representations, and missing number problems such as 7 = ☐ - 9.
Multiplication and Division
□ solve one-step problems involving multiplication and division, by calculating the answer using concrete objects, pictorial representations and arrays with the support of the teacher.
Year 2
• Addition and Subtraction
□ solve problems with addition and subtraction:
- using concrete objects and pictorial representations, including those involving numbers, quantities and measures
- applying their increasing knowledge of mental and written methods
□ recall and use addition and subtraction facts to 20 fluently, and derive and use related facts up to 100
□ add and subtract numbers using concrete objects, pictorial representations, and mentally, including:
- a two-digit number and ones
- a two-digit number and tens
- two two-digit numbers
- adding three one-digit numbers
□ show that addition of two numbers can be done in any order (commutative) and subtraction of one number from another cannot
□ recognise and use the inverse relationship between addition and subtraction and use this to check calculations and solve missing number problems.
Multiplication and Division
□ recall and use multiplication and division facts for the 2, 5 and 10 multiplication tables, including recognising odd and even numbers
□ calculate mathematical statements for multiplication and division within the multiplication tables and write them using the multiplication (×), division (÷) and equals (=) signs
□ show that multiplication of two numbers can be done in any order (commutative) and division of one number by another cannot
□ solve problems involving multiplication and division, using materials, arrays, repeated addition, mental methods, and multiplication and division facts, including problems in contexts.
Year 3
• Addition and Subtraction
□ add and subtract numbers mentally, including:
- a three-digit number and ones
- a three-digit number and tens
- a three-digit number and hundreds
□ add and subtract numbers with up to three digits, using formal written methods of columnar addition and subtraction
□ estimate the answer to a calculation and use inverse operations to check answers
□ solve problems, including missing number problems, using number facts, place value, and more complex addition and subtraction.
Multiplication and Division
□ recall and use multiplication and division facts for the 3, 4 and 8 multiplication tables
□ write and calculate mathematical statements for multiplication and division using the multiplication tables that they know, including for two-digit numbers times one-digit numbers, using
mental and progressing to formal written methods
□ solve problems, including missing number problems, involving multiplication and division, including integer scaling problems and correspondence problems in which n objects are connected to m objects.
Year 4
• Addition and Subtraction
□ add and subtract numbers with up to 4 digits using the formal written methods of columnar addition and subtraction where appropriate
□ estimate and use inverse operations to check answers to a calculation
□ solve addition and subtraction two-step problems in contexts, deciding which operations and methods to use and why.
Multiplication and Division
□ recall multiplication and division facts for multiplication tables up to 12 × 12
□ use place value, known and derived facts to multiply and divide mentally, including: multiplying by 0 and 1; dividing by 1; multiplying together three numbers
□ recognise and use factor pairs and commutativity in mental calculations
□ multiply two-digit and three-digit numbers by a one-digit number using formal written layout
□ solve problems involving multiplying and adding, including using the distributive law to multiply two digit numbers by one digit, integer scaling problems and harder correspondence problems
such as n objects are connected to m objects.
Year 5
• Addition and Subtraction
□ add and subtract whole numbers with more than 4 digits, including using formal written methods (columnar addition and subtraction)
□ add and subtract numbers mentally with increasingly large numbers
□ use rounding to check answers to calculations and determine, in the context of a problem, levels of accuracy
□ solve addition and subtraction multi-step problems in contexts, deciding which operations and methods to use and why.
Multiplication and Division
□ solve problems involving multiplication and division where larger numbers are used by decomposing them into their factors
□ multiply numbers up to 4 digits by a one- or two-digit number using a formal written method, including long multiplication for two-digit numbers
□ multiply and divide numbers mentally drawing upon known facts
□ divide numbers up to 4 digits by a one- digit number using the formal written method of short division and interpret remainders appropriately for the context
□ solve problems involving addition, subtraction, multiplication and division and a combination of these, including understanding the meaning of the equals sign
□ solve problems involving multiplication and division, including scaling by simple fractions and problems involving simple rates.
Year 6
• Multiplication and Division
□ multiply multi-digit numbers up to 4 digits by a two-digit whole number using the formal written method of long multiplication
□ divide numbers up to 4 digits by a two-digit whole number using the formal written method of long division, and interpret remainders as whole number remainders, fractions, or by rounding, as
appropriate for the context
□ perform mental calculations, including with mixed operations and large numbers.
□ use their knowledge of the order of operations to carry out calculations involving the four operations
□ solve addition and subtraction multi-step problems in contexts, deciding which operations and methods to use and why
□ solve problems involving addition, subtraction, multiplication and division
□ use estimation to check answers to calculations and determine, in the context of a problem, levels of accuracy.
The Stages
It is recognised that prior to all written stages, a great deal of unwritten foundation work will have taken place as children gain a basic grasp of number. It is important that children have a clear and firm understanding of one stage before progressing fully to the next. In most cases it is vital that all stages are taught in order and that children have a secure understanding of the process before moving on. This ensures children always have a "fall back" written calculation method when faced with any problem. The very beginning of calculation involves transferring the abstract concept of number to the physical and then to written numbers. Formation of numbers must be taught and should be reflected in the fonts that are used in school.
Early Understanding of Number
In the early stages (typically in Early Years, moving into Year 1) children should be introduced to numbers in a variety of ways. These will include:
• Numicon
• Objects for representation, including more than and less than
• Number tracks and lines
• Number lines – 0 to 10 and 0 to 30
• Introduction to the 100 square
• Children should be taught to look for patterns at every opportunity
• Numbers as labels
• The language of addition and subtraction (e.g. 6 and 4 more...)
• Compare equal and unequal groups
• Explore the difference between odd and even numbers
• Number composition
• Recording numbers in different ways
• Doubling
• Subitising (recognising quantities without counting)
• Sharing quantities equally
Using Number Lines
The number line is a very useful tool to support all operations and should be introduced to children in a number of settings, both vertically and horizontally, numbered and blank. Bead sticks, metre rules, tape measures etc. can all help introduce this concept.
It is important that children are able to distinguish between numbers that are set in place on the line and any calculations taking place. In order to ensure fidelity any numbers written below the
line relate to set numbers on the number line. Those numbers written above the line should relate to calculations (e.g. an arrow +1). It is important to teach children to plan their calculation
(estimate) in order to ensure sufficient space is allocated. This may involve steps in both directions.
Mental Strategies
It is vital that mental strategies accompany written calculation strategies throughout the school. Children should be taught that mental strategies are different from compact written methods. Vital elements of this toolkit include a secure knowledge of number bonds, place value, partitioning and chunking. Age-appropriate counting strategies should take place in all years, including counting on and backwards in various steps and from various starting points, along with fractions and decimals. Times table facts should also be learned.
Permanent Magnet Motors in Low Voltage High Speed Applications
This presentation discusses the issues involved in designing a permanent magnet motor for low-voltage applications. The major issue in these applications is the low inductance of the motors and the high currents required; this presentation talks about these issues and discusses some ways to deal with them. The applications discussed are remote, low-voltage applications that operate off the power grid. These off-grid applications are powered by sources such as solar panels or wind generators, usually through a battery storage system in the 12 to 48 volt range. Such applications could also run directly from the power source or use a battery system to store the power: a battery-based system gives a fairly constant voltage source, while a direct system can see a large variation in the available voltage. The problems with low inductance and high currents are present in both types of remote application. The speeds of the systems discussed here are in the 4,000 to 5,000 RPM range. The problem with low inductance increases for higher-speed motors run on low voltages, but the effects of low inductance are easier to show in the 4,000 to 5,000 RPM range. Higher speeds also require a much lower motor voltage constant, Ke, which only magnifies the low-inductance problem. Very high speed applications are best met by using higher voltages, which allows a higher Ke to be used and results in a higher winding inductance.
Motor Ke is the quotient of voltage divided by speed in radians per second, and it will be low here since this paper discusses low-voltage, high-speed applications. The equation for motor speed is derived from the voltage equation for a motor, V = I*R + Ke*S + L*di/dt. The inductive term of the voltage is usually small and can usually be ignored when using this equation to determine a starting value for the Ke needed. Even in low-voltage situations the turns count will be small, allowing it to be ignored in most cases, since the inductance is a function of the turns and is also very low. This term does need to be checked in a very low-voltage case, however, because if the current ripple di/dt is large, the voltage loss can still be large compared to the very low input voltage. The motor torque constant can be shown to be equal to the motor voltage constant in the MKS set of units and is expressed in N-m/amp (Ke = V/S = B*L*N = T/I = Kt). This means that the torque constant will also be low and the motor will require large currents to produce the torque required in most applications. The other factors in the above equation are the winding length L in meters and the air-gap flux density B in tesla; N is the number of conductors in the winding. The length is largely a design choice in these applications: a shorter length gives a lower resistance and inductance, while a longer length gives a higher resistance and inductance but also higher rotational losses. Since the torque constant equals Ke and will also be low, the current required in these applications can be very high. This high current gives a high I*R voltage drop and leaves less voltage available to generate the desired speed, which means an even lower Ke will be needed. That lower Ke in turn means a lower Kt, requiring still more current, which drops the available voltage even further because of the increasing I*R term, and a runaway condition can develop. Failure to consider this would give lower speeds than desired. Another problem with the low Kt is that a higher current is needed to overcome the torque losses due to friction and the rotational losses; this adds to the I*R voltage drop and makes the Ke needed a little lower than expected. The current capacity of the wires in the motor winding also needs to be considered, since they could fuse if they cannot carry the current needed.
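To make the voltage budget concrete, here is a small illustrative Python sketch (the numbers are hypothetical, not taken from this article) that estimates the Ke needed for a target speed while accounting for the I*R drop:

```python
import math

# Hypothetical low-voltage, high-speed sizing check
V_bus = 24.0          # supply voltage, V
R = 0.05              # winding resistance, ohm
T_load = 0.10         # required torque, N-m
rpm_target = 5000.0

w = rpm_target * 2 * math.pi / 60.0   # target speed, rad/s

# Iterate: Kt = Ke in MKS units, I = T/Kt, and Ke = (V - I*R)/w
Ke = V_bus / w                        # first guess, ignoring I*R
for _ in range(20):
    I = T_load / Ke                   # current needed for the torque
    Ke = (V_bus - I * R) / w          # Ke that still reaches the speed

print(f"Ke = Kt = {Ke:.4f} V-s/rad, I = {T_load / Ke:.1f} A")
```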
The permanent magnet motor can take several design configurations. The most common is a cylindrical motor, which has a long length compared to the motor diameter. Due to the small diameter, the number of poles will be limited, resulting in a trend toward higher resistance and inductance. This style of motor tends to be a high-speed motor and is commonly used in geared applications. Another style is commonly called a pancake motor. These motors tend to be large in diameter and the length is significantly shorter than
the diameter. These motors tend to have a large number of magnetic poles and will have very low inductances and resistance. This will mean that these motors will be low speed motors and can be used
in direct drive type applications. The pancake motor can also easily be made into an outer-rotor type of motor with the magnets in the outer rotating cup. This style of motor will have a high inertia
and would be used on directly coupled high inertia loads. When the smaller pancake motors are used, the high pole counts will give a low inductance and can give problems due to the high ripple
currents. Another class of motors is the axial-flux style. These motors are also mainly pancake-style motors but use powdered metal instead of laminations. They can be made with either rotating iron or rotating coils, and the inductance varies widely between these construction styles. Inductance varies inversely with the number of magnetic poles of the motor; the change scales roughly with the square of the pole count, so inductance falls rapidly as poles are added. A further effect of higher pole counts is that they also give a lower resistance. Fig. 1 shows the effect of pole count on inductance and resistance in a 3-inch-diameter, 1-inch-long stator with 36 slots, where the motor design parameters are kept constant except for the pole count. The Ke was kept constant at 8 V/krpm, which allows more turns for the comparison and would give the speed needed at higher voltages. The curve shows that a low pole count gives a high inductance but also a high resistance. Since high inductance and low resistance are the ideal combination, the best pole count is one in the mid-range.
The higher pole counts have flattened out for the resistance and inductance so these pole counts would need to be determined for other design reasons. Another effect of the higher pole count is the
commutation frequency: the higher the motor speed, the higher the required switching frequency of the commutation of the motor phases. The lower inductance and resistance will give a
lower electrical time constant which allows a faster rise time of the current in the PWM switching. The higher pole counts will require a higher PWM frequency to get several current pulses in each
commutation cycle. The higher frequency gives a benefit in that it will minimize the current ripple amplitude and lower the resistive heating in the stator. The negative effect of the higher
frequency required for higher pole counts is that the eddy current losses and the hysteresis losses increase with frequency so these losses will be higher. These factors will result in a higher motor
temperature and lower motor efficiency.
Another way to change the motor inductance is to change the magnet material used in the motor. The motor voltage constant required, Ke, is defined by the speed needed and the voltage available for the application. Since Ke and the motor torque constant, Kt, depend on the flux density of the air gap, the air-gap flux density will affect the resistance and inductance. A 4-inch motor design was used to check the effects of the magnet material. The Ke used in this comparison was 8 V/krpm.
The stator lamination was varied in both the tooth width and the back-iron width to keep the flux density of the lamination tooth at 14 kG and the back-iron flux density at 13 kG. This allows the slot area to change for each magnet material, and the number of winding turns was then changed for each material to keep the Ke constant. Figure 2 shows the changes in the lamination needed for the two largest variations of magnet materials: the plot on the left is for a ceramic M8a magnet and the plot on the right is for a sintered neodymium magnet of grade 40/23. There is a large variation in the area available for the winding when keeping the flux density constant in the teeth and back iron. This makes the lower-energy magnet material more attractive in low-voltage applications, and a material between these two extremes would be a good choice in many applications when size and weight are not major requirements. Figure 3 shows the change in resistance and inductance for this
change in magnet material. Since the inductance will increase with the number of turns squared, the lower grade material will have a significantly higher inductance. The downside of this is that the
higher number of turns will give a higher resistance. This increase in resistance is partially offset due to the fact that the lower flux density in the air gap allows less iron in the magnetic
circuit and gives more area for the wire, allowing larger wire sizes to reduce the resistance. The lower-energy magnet materials give the highest inductance but also a higher resistance, higher resistive heat losses, and reduced efficiency. Here, too, a mid-range magnetic energy gives the best result for inductance, and the actual material used needs to be picked
using other application requirements. Since each of the magnet material designs had the same pole angle and the same Ke, and was run on the same voltage and current, the only differences were in the resistance and inductance of the motor. These differences change the thermal characteristics and the efficiency of the motor: the lower-energy magnet design would run hotter than the high-energy design, and this factor also needs to be considered.
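The turns-squared effect can be sketched with a back-of-the-envelope Python calculation; the geometry constants and turn counts below are purely illustrative, not the article's 4-inch design data:

```python
# Inductance scales with turns squared; resistance scales with
# turns divided by wire cross-section. Two designs that reach the
# same Ke with different turn counts (hypothetical values):
designs = {
    "ceramic (low energy)":       {"turns": 60, "wire_area_mm2": 1.2},
    "sintered neo (high energy)": {"turns": 20, "wire_area_mm2": 0.8},
}

k_L = 2.0e-6   # H per turn^2, hypothetical geometry constant
k_R = 1.0e-3   # ohm*mm^2 per turn, hypothetical

for name, d in designs.items():
    L = k_L * d["turns"] ** 2
    Rw = k_R * d["turns"] / d["wire_area_mm2"]
    print(f"{name}: L = {L * 1e3:.2f} mH, R = {Rw * 1e3:.0f} mohm")
```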
Magnet Material
The magnet thickness is less for the high energy material so the actual volume of the magnet will change. The lower energy magnet material will usually cost less so this will offset the increase of
the volume of the low energy material.
Current ripple is the major system-level problem caused by the low inductance. The low inductance gives faster rise and fall of the current ripple as the PWM switches the phase currents, resulting in a higher peak-to-peak ripple current. Since the portion of the current above the average produces a positive torque ripple and the portion below the average produces a negative torque ripple, they balance out with no net torque change. The ripple does, however, produce heating in the winding and reduces motor performance through the added copper and magnetic losses. Increasing the PWM frequency is a common way to try to reduce these losses: the higher frequency shortens the rise and fall times of the current ripple and so reduces its amplitude. This reduces the copper losses but increases the iron losses, since both the eddy current and hysteresis losses grow with switching frequency. The lamination thickness and material can be changed to minimize these losses, but this can also affect the lamination design and cost.
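As a rough, hedged illustration of why inductance and PWM frequency set the ripple amplitude, the standard estimate delta_I ~ V*D*(1-D)/(L*f) can be evaluated quickly (all values hypothetical):

```python
# Peak-to-peak current ripple estimate for a PWM-driven winding:
#   delta_I ~ V * D * (1 - D) / (L * f_pwm)
V = 24.0        # bus voltage, V
D = 0.5         # duty cycle (worst case for ripple)
f_pwm = 20e3    # PWM switching frequency, Hz

for L in (50e-6, 200e-6, 1e-3):   # winding inductance, H
    ripple = V * D * (1 - D) / (L * f_pwm)
    print(f"L = {L * 1e6:4.0f} uH -> ripple ~ {ripple:.1f} A pk-pk")
```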
One trend that has become common is to shop for motors and drives through a catalog. If the components chosen do not meet the requirements, the system will run hot, become inefficient, and suffer premature failures. If the system is oversized so that all components exceed the minimum requirements, the cost will be high, the components will again run inefficiently, and the system will be larger than needed; such a system also tends to be derated in voltage and current from the levels available. Most of these motor drives are now digital drives that can easily be programmed to match more applications, which helps in getting a drive that better matches the application requirements. These drives can share the same basic hardware and be programmed to match the parameters of the motor to the application requirement. Programmable drives can also react very quickly to feedback from the motor and the load to define the optimum inputs to the motor. This real-time response allows the system to operate at a higher efficiency and at a lower power level while still meeting the real-time requirements of the total system. Because everything happens in real time, the system can be programmed to react to issues caused by low inductance and low resistance and minimize their negative effects. And since the drives are digital in nature, various algorithms can be used to minimize the effects of inductance and current ripple.
Systems using programmable drives can be taken one step further by designing the motor and the drive specifically to meet the requirements of the application. If the actual application can be defined, a motor and drive system can be designed for it, giving a more efficient and less costly solution. This allows the motor to be designed with the best magnet material and magnetic design for the application. The drive can then be programmed to deliver the best voltage and current to the motor by modifying the input voltage and current provided to the system, monitored with feedback from all components so that the most efficient output is matched to the input power available in real time. The voltage delivered to the motor can also be varied using digital voltage-level shifting to get the best available power to the motor and to the load. Since this is all done in real time, these changes can be made to match the real-time requirements; the system could also change the PWM frequency in real time to minimize current ripple when that can be allowed.
Knowing the real-time condition of the input power and the output power allows many responses in the programmable portion of the drive; the system is limited only by the amount of feedback available from its input and output. Such a system can use a varying input power source directly or run off batteries, matching the input power to the output power. It can also change system parameters to meet the needs of the motor used, so a motor can be designed with whatever magnet material and pole count work best for the total application. This greatly diminishes the effects of the low inductance and maximizes the efficiency of the motor.
Summary and Conclusions
Low voltages combined with high speeds require a low motor Ke which, as discussed, also gives a low inductance value. The low inductance produces high current ripple, and high current ripple produces high motor losses. The low Ke also gives a low Kt, which requires high currents and brings high resistive losses. Together, these low-inductance effects give high losses and poor efficiency for the system. One way to increase the inductance is to use a low pole count in the motor. The inductance starts very high at low pole counts and then flattens out
as the pole count increases. Another way to increase the inductance in the motor design is to use a lower energy magnet material. Figure 3 shows that the inductance will start high with the lower
energy products and then flatten out as the magnet energy is increased. Wire area can increase for the lower energy magnet materials allowing larger wire to be used to offset some of the increase in
resistance with the higher number of turns required. Increasing the frequency has been a common way to lower the current ripple losses but this also has limits in how high the frequency can go. The
digital programmable drives offer several ways to address the low inductance at the system level, and the falling cost of digital drives has allowed them to be programmed to best meet each application. The best solution for performance and efficiency is to design all of the components together to meet the requirements of the application. Such a design can then match the input power to the output power on a real-time basis, giving the most efficient solution. Since this type of system allows software to define how the power is handled, it can react easily to real-time changes and gives the most efficient total system.
TensorFlow 2 Tutorial: Get Started in Deep Learning With tf.keras
Author: Jason Brownlee
Predictive modeling with deep learning is a skill that modern developers need to know.
TensorFlow is the premier open-source deep learning framework developed and maintained by Google. Although using TensorFlow directly can be challenging, the modern tf.keras API brings the simplicity and ease of use of Keras to the TensorFlow project.
Using tf.keras allows you to design, fit, evaluate, and use deep learning models to make predictions in just a few lines of code. It makes common deep learning tasks, such as classification and
regression predictive modeling, accessible to average developers looking to get things done.
In this tutorial, you will discover a step-by-step guide to developing deep learning models in TensorFlow using the tf.keras API.
After completing this tutorial, you will know:
• The difference between Keras and tf.keras and how to install and confirm TensorFlow is working.
• The 5-step life-cycle of tf.keras models and how to use the sequential and functional APIs.
• How to develop MLP, CNN, and RNN models with tf.keras for regression, classification, and time series forecasting.
• How to use the advanced features of the tf.keras API to inspect and diagnose your model.
• How to improve the performance of your tf.keras model by reducing overfitting and accelerating training.
This is a large tutorial, and a lot of fun. You might want to bookmark it.
The examples are small and focused; you can finish this tutorial in about 60 minutes.
Let’s get started.
TensorFlow Tutorial Overview
This tutorial is designed to be your complete introduction to tf.keras for your deep learning project.
The focus is on using the API for common deep learning model development tasks; we will not be diving into the math and theory of deep learning. For that, I recommend starting with a dedicated resource on deep learning theory.
The best way to learn deep learning in Python is by doing. Dive in. You can circle back for more theory later.
I have designed each code example to use best practices and to be standalone so that you can copy and paste it directly into your project and adapt it to your specific needs. This will give you a
massive head start over trying to figure out the API from official documentation alone.
It is a large tutorial and as such, it is divided into five parts; they are:
1. Install TensorFlow and tf.keras
1. What Are Keras and tf.keras?
2. How to Install TensorFlow
3. How to Confirm TensorFlow Is Installed
2. Deep Learning Model Life-Cycle
1. The 5-Step Model Life-Cycle
2. Sequential Model API (Simple)
3. Functional Model API (Advanced)
3. How to Develop Deep Learning Models
1. Develop Multilayer Perceptron Models
2. Develop Convolutional Neural Network Models
3. Develop Recurrent Neural Network Models
4. How to Use Advanced Model Features
1. How to Visualize a Deep Learning Model
2. How to Plot Model Learning Curves
3. How to Save and Load Your Model
5. How to Get Better Model Performance
1. How to Reduce Overfitting With Dropout
2. How to Accelerate Training With Batch Normalization
3. How to Halt Training at the Right Time With Early Stopping
You Can Do Deep Learning in Python!
Work through the tutorial at your own pace.
You do not need to understand everything (at least not right now). Your goal is to run through the tutorial end-to-end and get results. You do not need to understand everything on the first pass.
List down your questions as you go. Make heavy use of the API documentation to learn about all of the functions that you’re using.
You do not need to know the math first. Math is a compact way of describing how algorithms work, specifically tools from linear algebra, probability, and statistics. These are not the only tools that
you can use to learn how algorithms work. You can also use code and explore algorithm behavior with different inputs and outputs. Knowing the math will not tell you what algorithm to choose or how to
best configure it. You can only discover that through careful, controlled experiments.
You do not need to know how the algorithms work. It is important to know about the limitations and how to configure deep learning algorithms. But learning about algorithms can come later. You need to
build up this algorithm knowledge slowly over a long period of time. Today, start by getting comfortable with the platform.
You do not need to be a Python programmer. The syntax of the Python language can be intuitive even if you are new to it. Just like other languages, focus on function calls (e.g. function()) and
assignments (e.g. a = “b”). This will get you most of the way. You are a developer, so you know how to pick up the basics of a language really fast. Just get started and dive into the details later.
You do not need to be a deep learning expert. You can learn about the benefits and limitations of various algorithms later, and there are plenty of posts that you can read later to brush up on the
steps of a deep learning project and the importance of evaluating model skill using cross-validation.
1. Install TensorFlow and tf.keras
In this section, you will discover what tf.keras is, how to install it, and how to confirm that it is installed correctly.
1.1 What Are Keras and tf.keras?
Keras is an open-source deep learning library written in Python.
The project was started in 2015 by Francois Chollet. It quickly became a popular framework for developers, becoming one of, if not the most, popular deep learning libraries.
During the period of 2015-2019, developing deep learning models using mathematical libraries like TensorFlow, Theano, and PyTorch was cumbersome, requiring tens or even hundreds of lines of code to
achieve the simplest tasks. The focus of these libraries was on research, flexibility, and speed, not ease of use.
Keras was popular because the API was clean and simple, allowing standard deep learning models to be defined, fit, and evaluated in just a few lines of code.
A secondary reason Keras took off was that it allowed you to use any one of a range of popular deep learning mathematical libraries as the backend (i.e. used to perform the computation), such
as TensorFlow, Theano, and later, CNTK. This allowed the power of these libraries to be harnessed (e.g. GPUs) with a very clean and simple interface.
In 2019, Google released a new version of their TensorFlow deep learning library (TensorFlow 2) that integrated the Keras API directly and promoted this interface as the default or standard interface
for deep learning development on the platform.
This integration is commonly referred to as the tf.keras interface or API (“tf” is short for “TensorFlow“). This is to distinguish it from the so-called standalone Keras open source project.
• Standalone Keras. The standalone open source project that supports TensorFlow, Theano and CNTK backends.
• tf.keras. The Keras API integrated into TensorFlow 2.
The Keras API implementation in TensorFlow is referred to as “tf.keras” because this is the Python idiom used when referencing the API. First, the TensorFlow module is imported and named “tf“; then, Keras
API elements are accessed via calls to tf.keras; for example:
# example of tf.keras python idiom
import tensorflow as tf
# use keras API
model = tf.keras.Sequential()
I generally don’t use this idiom myself; I don’t think it reads cleanly.
Given that TensorFlow was the de facto standard backend for the Keras open source project, the integration means that a single library can now be used instead of two separate libraries. Further, the
standalone Keras project now recommends all future Keras development use the tf.keras API.
At this time, we recommend that Keras users who use multi-backend Keras with the TensorFlow backend switch to tf.keras in TensorFlow 2.0. tf.keras is better maintained and has better integration
with TensorFlow features (eager execution, distribution support and other).
1.2 How to Install TensorFlow
Before installing TensorFlow, ensure that you have Python installed, such as Python 3.6 or higher.
If you don’t have Python installed, you can install it using Anaconda. This tutorial will show you how:
There are many ways to install the TensorFlow open-source deep learning library.
The most common, and perhaps the simplest, way to install TensorFlow on your workstation is by using pip.
For example, on the command line, you can type:
sudo pip install tensorflow
If you prefer to use an installation method more specific to your platform or package manager, you can see a complete list of installation instructions here:
There is no need to set up the GPU now.
All examples in this tutorial will work just fine on a modern CPU. If you want to configure TensorFlow for your GPU, you can do that after completing this tutorial. Don’t get distracted!
1.3 How to Confirm TensorFlow Is Installed
Once TensorFlow is installed, it is important to confirm that the library was installed successfully and that you can start using it.
Don’t skip this step.
If TensorFlow is not installed correctly or raises an error on this step, you won’t be able to run the examples later.
Create a new file called versions.py and copy and paste the following code into the file.
# check version
import tensorflow
print(tensorflow.__version__)
Save the file, then open your command line and change directory to where you saved the file.
Then type:
python versions.py
You should then see output like the following:
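For example, you might see output like the following (the exact version string will depend on your installation):
2.2.0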
This confirms that TensorFlow is installed correctly and that we are all using the same version.
What version did you get?
Post your output in the comments below.
This also shows you how to run a Python script from the command line. I recommend running all code from the command line in this manner, and not from a notebook or an IDE.
If You Get Warning Messages
Sometimes when you use the tf.keras API, you may see warnings printed.
This might include messages that your hardware supports features that your TensorFlow installation was not configured to use.
Some examples on my workstation include:
Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
XLA service 0x7fde3f2e6180 executing computations on platform Host. Devices:
StreamExecutor device (0): Host, Default Version
They are not your fault. You did nothing wrong.
These are information messages and they will not prevent the execution of your code. You can safely ignore messages of this type for now.
It’s an intentional design decision made by the TensorFlow team to show these warning messages. A downside of this decision is that it confuses beginners and it trains developers to ignore all
messages, including those that potentially may impact the execution.
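If the messages bother you, one common workaround is to raise TensorFlow's logging threshold via an environment variable before importing it. This is optional and shown here only as a convenience; nothing in the rest of the tutorial depends on it.
# suppress INFO and WARNING messages from the TensorFlow backend
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import tensorflow as tf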
Now that you know what tf.keras is, how to install TensorFlow, and how to confirm your development environment is working, let’s look at the life-cycle of deep learning models in TensorFlow.
2. Deep Learning Model Life-Cycle
In this section, you will discover the life-cycle for a deep learning model and the two tf.keras APIs that you can use to define models.
2.1 The 5-Step Model Life-Cycle
A model has a life-cycle, and this very simple knowledge provides the backbone for both modeling a dataset and understanding the tf.keras API.
The five steps in the life-cycle are as follows:
1. Define the model.
2. Compile the model.
3. Fit the model.
4. Evaluate the model.
5. Make predictions.
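Taken together, here is a minimal, self-contained sketch of the five steps on synthetic data (the layer sizes, data shapes, and epoch count are placeholders, not a recommendation):
# minimal sketch of the 5-step life-cycle on synthetic data
from numpy import random
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
# synthetic data: 100 samples, 8 input features, 1 numeric target
X = random.rand(100, 8)
y = random.rand(100, 1)
# 1. define the model
model = Sequential()
model.add(Dense(10, activation='relu', input_shape=(8,)))
model.add(Dense(1))
# 2. compile the model
model.compile(optimizer='adam', loss='mse')
# 3. fit the model
model.fit(X, y, epochs=10, batch_size=32, verbose=0)
# 4. evaluate the model
loss = model.evaluate(X, y, verbose=0)
# 5. make predictions
yhat = model.predict(X)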
Let’s take a closer look at each step in turn.
Define the Model
Defining the model requires that you first select the type of model that you need and then choose the architecture or network topology.
From an API perspective, this involves defining the layers of the model, configuring each layer with a number of nodes and activation function, and connecting the layers together into a cohesive model.
Models can be defined either with the Sequential API or the Functional API, and we will take a look at this in the next section.
# define the model
model = ...
Compile the Model
Compiling the model requires that you first select a loss function that you want to optimize, such as mean squared error or cross-entropy.
It also requires that you select an algorithm to perform the optimization procedure, typically stochastic gradient descent, or a modern variation, such as Adam. It may also require that you select
any performance metrics to keep track of during the model training process.
From an API perspective, this involves calling a function to compile the model with the chosen configuration, which will prepare the appropriate data structures required for the efficient use of the
model you have defined.
The optimizer can be specified as a string for a known optimizer class, e.g. ‘sgd‘ for stochastic gradient descent, or you can configure an instance of an optimizer class and use that.
For a list of supported optimizers, see this:
# compile the model
from tensorflow.keras.optimizers import SGD
opt = SGD(learning_rate=0.01, momentum=0.9)
model.compile(optimizer=opt, loss='binary_crossentropy')
The three most common loss functions are:
• ‘binary_crossentropy‘ for binary classification.
• ‘sparse_categorical_crossentropy‘ for multi-class classification.
• ‘mse‘ (mean squared error) for regression.
# compile the model
model.compile(optimizer='sgd', loss='mse')
For a list of supported loss functions, see:
Metrics are defined as a list of strings for known metric functions or a list of functions to call to evaluate predictions.
For a list of supported metrics, see:
# compile the model
model.compile(optimizer='sgd', loss='binary_crossentropy', metrics=['accuracy'])
Fit the Model
Fitting the model requires that you first select the training configuration, such as the number of epochs (loops through the training dataset) and the batch size (number of samples from the training dataset used to estimate the error gradient before the model weights are updated).
Training applies the chosen optimization algorithm to minimize the chosen loss function and updates the model using the backpropagation of error algorithm.
Fitting the model is the slow part of the whole process and can take seconds to hours to days, depending on the complexity of the model, the hardware you’re using, and the size of the training dataset.
From an API perspective, this involves calling a function to perform the training process. This function will block (not return) until the training process has finished.
# fit the model
model.fit(X, y, epochs=100, batch_size=32)
For help on how to choose the batch size, see this tutorial:
While fitting the model, a progress bar will summarize the status of each epoch and the overall training process. This can be simplified to a simple report of model performance each epoch by setting
the “verbose” argument to 2. All output can be turned off during training by setting “verbose” to 0.
# fit the model
model.fit(X, y, epochs=100, batch_size=32, verbose=0)
Evaluate the Model
Evaluating the model requires that you first choose a holdout dataset used to evaluate the model. This should be data not used in the training process so that we can get an unbiased estimate of the
performance of the model when making predictions on new data.
The speed of model evaluation is proportional to the amount of data you want to use for the evaluation, although it is much faster than training as the model is not changed.
From an API perspective, this involves calling a function with the holdout dataset and getting a loss and perhaps other metrics that can be reported.
# evaluate the model
loss = model.evaluate(X, y, verbose=0)
Make a Prediction
Making a prediction is the final step in the life-cycle. It is why we wanted the model in the first place.
It requires you have new data for which a prediction is required, e.g. where you do not have the target values.
From an API perspective, you simply call a function to make a prediction of a class label, probability, or numerical value: whatever you designed your model to predict.
You may want to save the model and later load it to make predictions. You may also choose to fit a model on all of the available data before you start using it.
Now that we are familiar with the model life-cycle, let’s take a look at the two main ways to use the tf.keras API to build models: sequential and functional.
# make a prediction
yhat = model.predict(X)
2.2 Sequential Model API (Simple)
The sequential model API is the simplest and is the API that I recommend, especially when getting started.
It is referred to as “sequential” because it involves defining a Sequential class and adding layers to the model one by one in a linear manner, from input to output.
The example below defines a Sequential MLP model that accepts eight inputs, has one hidden layer with 10 nodes and then an output layer with one node to predict a numerical value.
# example of a model defined with the sequential api
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
# define the model
model = Sequential()
model.add(Dense(10, input_shape=(8,)))
model.add(Dense(1))
Note that the visible layer of the network is defined by the “input_shape” argument on the first hidden layer. That means in the above example, the model expects the input for one sample to be a
vector of eight numbers.
The sequential API is easy to use because you keep calling model.add() until you have added all of your layers.
For example, here is a deep MLP with five hidden layers.
# example of a model defined with the sequential api
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
# define the model
model = Sequential()
model.add(Dense(100, input_shape=(8,)))
model.add(Dense(80))
model.add(Dense(30))
model.add(Dense(10))
model.add(Dense(5))
model.add(Dense(1))
2.3 Functional Model API (Advanced)
The functional API is more complex but is also more flexible.
It involves explicitly connecting the output of one layer to the input of another layer. Each connection is specified.
First, an input layer must be defined via the Input class, and the shape of an input sample is specified. We must retain a reference to the input layer when defining the model.
# define the layers
x_in = Input(shape=(8,))
Next, a fully connected layer can be connected to the input by calling the layer and passing the input layer. This will return a reference to the output connection in this new layer.
x = Dense(10)(x_in)
We can then connect this to an output layer in the same manner.
x_out = Dense(1)(x)
Once connected, we define a Model object and specify the input and output layers. The complete example is listed below.
# example of a model defined with the functional api
from tensorflow.keras import Model
from tensorflow.keras import Input
from tensorflow.keras.layers import Dense
# define the layers
x_in = Input(shape=(8,))
x = Dense(10)(x_in)
x_out = Dense(1)(x)
# define the model
model = Model(inputs=x_in, outputs=x_out)
As such, it allows for more complicated model designs, such as models that may have multiple input paths (separate vectors) and models that have multiple output paths (e.g. a word and a number).
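For example, a rough sketch of a model with two separate input vectors might look like this (the input sizes and layer widths are made up for illustration):
# hypothetical functional model with two input paths
from tensorflow.keras import Model
from tensorflow.keras import Input
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import concatenate
in_a = Input(shape=(8,))
in_b = Input(shape=(4,))
# process each input path separately, then merge the results
merged = concatenate([Dense(10)(in_a), Dense(10)(in_b)])
x_out = Dense(1)(merged)
model = Model(inputs=[in_a, in_b], outputs=x_out)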
The functional API can be a lot of fun when you get used to it.
For more on the functional API, see:
Now that we are familiar with the model life-cycle and the two APIs that can be used to define models, let’s look at developing some standard models.
3. How to Develop Deep Learning Models
In this section, you will discover how to develop, evaluate, and make predictions with standard deep learning models, including Multilayer Perceptrons (MLP), Convolutional Neural Networks (CNNs), and
Recurrent Neural Networks (RNNs).
3.1 Develop Multilayer Perceptron Models
A Multilayer Perceptron model, or MLP for short, is a standard fully connected neural network model.
It is comprised of layers of nodes where each node is connected to all outputs from the previous layer and the output of each node is connected to all inputs for nodes in the next layer.
An MLP is created with one or more Dense layers. This model is appropriate for tabular data, that is, data as it looks in a table or spreadsheet, with one column for each variable and one row for
each example. There are three predictive modeling problems you may want to explore with an MLP; they are binary classification, multiclass classification, and regression.
Let’s fit a model on a real dataset for each of these cases.
Note, the models in this section are effective, but not optimized. See if you can improve their performance. Post your findings in the comments below.
MLP for Binary Classification
We will use the Ionosphere binary (two-class) classification dataset to demonstrate an MLP for binary classification.
This dataset involves predicting whether a structure is in the atmosphere or not given radar returns.
The dataset will be downloaded automatically using Pandas, but you can learn more about it here.
We will use a LabelEncoder to encode the string labels to integer values 0 and 1. The model will be fit on 67 percent of the data, and the remaining 33 percent will be used for evaluation, split
using the train_test_split() function.
It is a good practice to use ‘relu‘ activation with a ‘he_normal‘ weight initialization. This combination goes a long way to overcome the problem of vanishing gradients when training deep neural
network models. For more on ReLU, see the tutorial:
The model predicts the probability of class 1 and uses the sigmoid activation function. The model is optimized using the adam version of stochastic gradient descent and seeks to minimize the
cross-entropy loss.
The complete example is listed below.
# mlp for binary classification
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
# load the dataset
path = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/ionosphere.csv'
df = read_csv(path, header=None)
# split into input and output columns
X, y = df.values[:, :-1], df.values[:, -1]
# ensure all data are floating point values
X = X.astype('float32')
# encode strings to integer
y = LabelEncoder().fit_transform(y)
# split into train and test datasets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
# determine the number of input features
n_features = X_train.shape[1]
# define model
model = Sequential()
model.add(Dense(10, activation='relu', kernel_initializer='he_normal', input_shape=(n_features,)))
model.add(Dense(8, activation='relu', kernel_initializer='he_normal'))
model.add(Dense(1, activation='sigmoid'))
# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# fit the model
model.fit(X_train, y_train, epochs=150, batch_size=32, verbose=0)
# evaluate the model
loss, acc = model.evaluate(X_test, y_test, verbose=0)
print('Test Accuracy: %.3f' % acc)
# make a prediction
row = [1,0,0.99539,-0.05889,0.85243,0.02306,0.83398,-0.37708,1,0.03760,0.85243,-0.17755,0.59755,-0.44945,0.60536,-0.38223,0.84356,-0.38542,0.58212,-0.32192,0.56971,-0.29674,0.36946,-0.47357,0.56811,-0.51171,0.41078,-0.46168,0.21266,-0.34090,0.42267,-0.54487,0.18641,-0.45300]
yhat = model.predict([row])
print('Predicted: %.3f' % yhat)
Running the example first reports the shape of the dataset, then fits the model and evaluates it on the test dataset. Finally, a prediction is made for a single row of data.
Your specific results will vary given the stochastic nature of the learning algorithm. Try running the example a few times.
What results did you get? Can you change the model to do better?
Post your findings to the comments below.
In this case, we can see that the model achieved a classification accuracy of about 94 percent and then predicted a probability of 0.9 that the one row of data belongs to class 1.
(235, 34) (116, 34) (235,) (116,)
Test Accuracy: 0.940
Predicted: 0.991
MLP for Multiclass Classification
We will use the Iris flowers multiclass classification dataset to demonstrate an MLP for multiclass classification.
This problem involves predicting the species of iris flower given measures of the flower.
The dataset will be downloaded automatically using Pandas, but you can learn more about it here.
Given that it is a multiclass classification, the model must have one node for each class in the output layer and use the softmax activation function. The loss function is the ‘
sparse_categorical_crossentropy‘, which is appropriate for integer encoded class labels (e.g. 0 for one class, 1 for the next class, etc.)
The complete example of fitting and evaluating an MLP on the iris flowers dataset is listed below.
# mlp for multiclass classification
from numpy import argmax
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
# load the dataset
path = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/iris.csv'
df = read_csv(path, header=None)
# split into input and output columns
X, y = df.values[:, :-1], df.values[:, -1]
# ensure all data are floating point values
X = X.astype('float32')
# encode strings to integer
y = LabelEncoder().fit_transform(y)
# split into train and test datasets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
# determine the number of input features
n_features = X_train.shape[1]
# define model
model = Sequential()
model.add(Dense(10, activation='relu', kernel_initializer='he_normal', input_shape=(n_features,)))
model.add(Dense(8, activation='relu', kernel_initializer='he_normal'))
model.add(Dense(3, activation='softmax'))
# compile the model
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# fit the model
model.fit(X_train, y_train, epochs=150, batch_size=32, verbose=0)
# evaluate the model
loss, acc = model.evaluate(X_test, y_test, verbose=0)
print('Test Accuracy: %.3f' % acc)
# make a prediction
row = [5.1,3.5,1.4,0.2]
yhat = model.predict([row])
print('Predicted: %s (class=%d)' % (yhat, argmax(yhat)))
Running the example first reports the shape of the dataset, then fits the model and evaluates it on the test dataset. Finally, a prediction is made for a single row of data.
Your specific results will vary given the stochastic nature of the learning algorithm. Try running the example a few times.
What results did you get? Can you change the model to do better?
Post your findings to the comments below.
In this case, we can see that the model achieved a classification accuracy of about 98 percent and then predicted a probability of a row of data belonging to each class, although class 0 has the
highest probability.
(100, 4) (50, 4) (100,) (50,)
Test Accuracy: 0.980
Predicted: [[0.8680804 0.12356871 0.00835086]] (class=0)
MLP for Regression
We will use the Boston housing regression dataset to demonstrate an MLP for regression predictive modeling.
This problem involves predicting house value based on properties of the house and neighborhood.
The dataset will be downloaded automatically using Pandas, but you can learn more about it here.
This is a regression problem that involves predicting a single numerical value. As such, the output layer has a single node and uses the default or linear activation function (no activation
function). The mean squared error (mse) loss is minimized when fitting the model.
Recall that this is a regression, not classification; therefore, we cannot calculate classification accuracy. For more on this, see the tutorial:
The complete example of fitting and evaluating an MLP on the Boston housing dataset is listed below.
# mlp for regression
from numpy import sqrt
from pandas import read_csv
from sklearn.model_selection import train_test_split
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
# load the dataset
path = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv'
df = read_csv(path, header=None)
# split into input and output columns
X, y = df.values[:, :-1], df.values[:, -1]
# split into train and test datasets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
# determine the number of input features
n_features = X_train.shape[1]
# define model
model = Sequential()
model.add(Dense(10, activation='relu', kernel_initializer='he_normal', input_shape=(n_features,)))
model.add(Dense(8, activation='relu', kernel_initializer='he_normal'))
model.add(Dense(1))
# compile the model
model.compile(optimizer='adam', loss='mse')
# fit the model
model.fit(X_train, y_train, epochs=150, batch_size=32, verbose=0)
# evaluate the model
error = model.evaluate(X_test, y_test, verbose=0)
print('MSE: %.3f, RMSE: %.3f' % (error, sqrt(error)))
# make a prediction
row = [0.00632,18.00,2.310,0,0.5380,6.5750,65.20,4.0900,1,296.0,15.30,396.90,4.98]
yhat = model.predict([row])
print('Predicted: %.3f' % yhat)
Running the example first reports the shape of the dataset then fits the model and evaluates it on the test dataset. Finally, a prediction is made for a single row of data.
Your specific results will vary given the stochastic nature of the learning algorithm. Try running the example a few times.
What results did you get? Can you change the model to do better?
Post your findings to the comments below.
In this case, we can see that the model achieved an MSE of about 8,000 which is an RMSE of about 90 (units are thousands of dollars). A value of 41 is then predicted for the single example.
(339, 13) (167, 13) (339,) (167,)
MSE: 8184.539, RMSE: 90.468
Predicted: 41.152
3.2 Develop Convolutional Neural Network Models
Convolutional Neural Networks, or CNNs for short, are a type of network designed for image input.
They are comprised of models with convolutional layers that extract features (called feature maps) and pooling layers that distill features down to the most salient elements.
CNNs are most well-suited to image classification tasks, although they can be used on a wide array of tasks that take images as input.
A popular image classification task is the MNIST handwritten digit classification. It involves tens of thousands of handwritten digits that must be classified as a number between 0 and 9.
The tf.keras API provides a convenience function to download and load this dataset directly.
The example below loads the dataset and plots the first few images.
# example of loading and plotting the mnist dataset
from tensorflow.keras.datasets.mnist import load_data
from matplotlib import pyplot
# load dataset
(trainX, trainy), (testX, testy) = load_data()
# summarize loaded dataset
print('Train: X=%s, y=%s' % (trainX.shape, trainy.shape))
print('Test: X=%s, y=%s' % (testX.shape, testy.shape))
# plot first few images
for i in range(25):
    # define subplot
    pyplot.subplot(5, 5, i+1)
    # plot raw pixel data
    pyplot.imshow(trainX[i], cmap=pyplot.get_cmap('gray'))
# show the figure
pyplot.show()
Running the example loads the MNIST dataset, then summarizes the default train and test datasets.
Train: X=(60000, 28, 28), y=(60000,)
Test: X=(10000, 28, 28), y=(10000,)
A plot is then created showing a grid of examples of handwritten images in the training dataset.
We can train a CNN model to classify the images in the MNIST dataset.
Note that the images are arrays of grayscale pixel data; therefore, we must add a channel dimension to the data before we can use the images as input to the model. The reason is that CNN models
expect images in a channels-last format, that is each example to the network has the dimensions [rows, columns, channels], where channels represent the color channels of the image data.
It is also a good idea to scale the pixel values from the default range of 0-255 to 0-1 when training a CNN. For more on scaling pixel values, see the tutorial:
The complete example of fitting and evaluating a CNN model on the MNIST dataset is listed below.
# example of a cnn for image classification
from numpy import unique
from numpy import argmax
from tensorflow.keras.datasets.mnist import load_data
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import MaxPooling2D
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dropout
# load dataset
(x_train, y_train), (x_test, y_test) = load_data()
# reshape data to have a single channel
x_train = x_train.reshape((x_train.shape[0], x_train.shape[1], x_train.shape[2], 1))
x_test = x_test.reshape((x_test.shape[0], x_test.shape[1], x_test.shape[2], 1))
# determine the shape of the input images
in_shape = x_train.shape[1:]
# determine the number of classes
n_classes = len(unique(y_train))
print(in_shape, n_classes)
# normalize pixel values
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0
# define model
model = Sequential()
model.add(Conv2D(32, (3,3), activation='relu', kernel_initializer='he_uniform', input_shape=in_shape))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dropout(0.5))
model.add(Dense(100, activation='relu', kernel_initializer='he_uniform'))
model.add(Dense(n_classes, activation='softmax'))
# define loss and optimizer
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# fit the model
model.fit(x_train, y_train, epochs=10, batch_size=128, verbose=0)
# evaluate the model
loss, acc = model.evaluate(x_test, y_test, verbose=0)
print('Accuracy: %.3f' % acc)
# make a prediction
image = x_train[0]
yhat = model.predict([[image]])
print('Predicted: class=%d' % argmax(yhat))
Running the example first reports the shape of the dataset, then fits the model and evaluates it on the test dataset. Finally, a prediction is made for a single image.
Your specific results will vary given the stochastic nature of the learning algorithm. Try running the example a few times.
What results did you get? Can you change the model to do better?
Post your findings to the comments below.
First, the shape of each image is reported along with the number of classes; we can see that each image is 28×28 pixels and there are 10 classes as we expected.
In this case, we can see that the model achieved a classification accuracy of about 98 percent on the test dataset. We can then see that the model predicted class 5 for the first image in the
training set.
(28, 28, 1) 10
Accuracy: 0.987
Predicted: class=5
3.3 Develop Recurrent Neural Network Models
Recurrent Neural Networks, or RNNs for short, are designed to operate upon sequences of data.
They have proven to be very effective for natural language processing problems where sequences of text are provided as input to the model. RNNs have also seen some modest success for time series
forecasting and speech recognition.
The most popular type of RNN is the Long Short-Term Memory network, or LSTM for short. LSTMs can be used in a model to accept a sequence of input data and make a prediction, such as assign a class
label or predict a numerical value like the next value or values in the sequence.
We will use the car sales dataset to demonstrate an LSTM RNN for univariate time series forecasting.
This problem involves predicting the number of car sales per month.
The dataset will be downloaded automatically using Pandas, but you can learn more about it here.
We will frame the problem to take a window of the last five months of data to predict the current month’s data.
To achieve this, we will define a new function named split_sequence() that will split the input sequence into windows of data appropriate for fitting a supervised learning model, like an LSTM.
For example, if the sequence was:
1, 2, 3, 4, 5, 6, 7, 8, 9, 10
Then the samples for training the model will look like:
Input Output
1, 2, 3, 4, 5 6
2, 3, 4, 5, 6 7
3, 4, 5, 6, 7 8
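As a quick, self-contained sketch, the behavior in the table above can be reproduced with a compact variant of the split_sequence() helper used in the full listing below:
# compact sketch of the windowing behavior shown above
from numpy import asarray
def split_sequence(sequence, n_steps):
    X, y = list(), list()
    for i in range(len(sequence) - n_steps):
        X.append(sequence[i:i+n_steps])
        y.append(sequence[i+n_steps])
    return asarray(X), asarray(y)
data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
X, y = split_sequence(data, n_steps=5)
print(X[0], y[0])  # [1 2 3 4 5] 6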
We will use the last 12 months of data as the test dataset.
LSTMs expect each sample in the dataset to have two dimensions; the first is the number of time steps (in this case it is 5), and the second is the number of observations per time step (in this case
it is 1).
Because it is a regression type problem, we will use a linear activation function (no activation
function) in the output layer and optimize the mean squared error loss function. We will also evaluate the model using the mean absolute error (MAE) metric.
The complete example of fitting and evaluating an LSTM for a univariate time series forecasting problem is listed below.
# lstm for time series forecasting
from numpy import sqrt
from numpy import asarray
from pandas import read_csv
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM
# split a univariate sequence into samples
def split_sequence(sequence, n_steps):
    X, y = list(), list()
    for i in range(len(sequence)):
        # find the end of this pattern
        end_ix = i + n_steps
        # check if we are beyond the sequence
        if end_ix > len(sequence)-1:
            break
        # gather input and output parts of the pattern
        seq_x, seq_y = sequence[i:end_ix], sequence[end_ix]
        # store the pattern
        X.append(seq_x)
        y.append(seq_y)
    return asarray(X), asarray(y)
# load the dataset
path = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/monthly-car-sales.csv'
df = read_csv(path, header=0, index_col=0, squeeze=True)
# retrieve the values
values = df.values.astype('float32')
# specify the window size
n_steps = 5
# split into samples
X, y = split_sequence(values, n_steps)
# reshape into [samples, timesteps, features]
X = X.reshape((X.shape[0], X.shape[1], 1))
# split into train/test
n_test = 12
X_train, X_test, y_train, y_test = X[:-n_test], X[-n_test:], y[:-n_test], y[-n_test:]
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
# define model
model = Sequential()
model.add(LSTM(100, activation='relu', kernel_initializer='he_normal', input_shape=(n_steps,1)))
model.add(Dense(50, activation='relu', kernel_initializer='he_normal'))
model.add(Dense(50, activation='relu', kernel_initializer='he_normal'))
model.add(Dense(1))
# compile the model
model.compile(optimizer='adam', loss='mse', metrics=['mae'])
# fit the model
model.fit(X_train, y_train, epochs=350, batch_size=32, verbose=2, validation_data=(X_test, y_test))
# evaluate the model
mse, mae = model.evaluate(X_test, y_test, verbose=0)
print('MSE: %.3f, RMSE: %.3f, MAE: %.3f' % (mse, sqrt(mse), mae))
# make a prediction
row = asarray([18024.0, 16722.0, 14385.0, 21342.0, 17180.0]).reshape((1, n_steps, 1))
yhat = model.predict(row)
print('Predicted: %.3f' % (yhat))
Running the example first reports the shape of the dataset, then fits the model and evaluates it on the test dataset. Finally, a prediction is made for a single example.
Your specific results will vary given the stochastic nature of the learning algorithm. Try running the example a few times.
What results did you get? Can you change the model to do better?
Post your findings to the comments below.
First, the shape of the train and test datasets is displayed, confirming that the last 12 examples are used for model evaluation.
In this case, the model achieved an MAE of about 2,800 and predicted the next value in the sequence from the test set as 13,199, where the expected value is 14,577 (pretty close).
(91, 5, 1) (12, 5, 1) (91,) (12,)
MSE: 12755421.000, RMSE: 3571.473, MAE: 2856.084
Predicted: 13199.325
Note: it is good practice to scale the data and make the series stationary prior to fitting the model. I recommend this as an extension in order to achieve better performance. For more on preparing
time series data for modeling, see the tutorial:
4. How to Use Advanced Model Features
In this section, you will discover how to use some of the slightly more advanced model features, such as reviewing learning curves and saving models for later use.
4.1 How to Visualize a Deep Learning Model
The architecture of deep learning models can quickly become large and complex.
As such, it is important to have a clear idea of the connections and data flow in your model. This is especially important if you are using the functional API to ensure you have indeed connected the
layers of the model in the way you intended.
There are two tools you can use to visualize your model: a text description and a plot.
Model Text Description
A text description of your model can be displayed by calling the summary() function on your model.
The example below defines a small model with three layers and then summarizes the structure.
# example of summarizing a model
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
# define model
model = Sequential()
model.add(Dense(10, activation='relu', kernel_initializer='he_normal', input_shape=(8,)))
model.add(Dense(8, activation='relu', kernel_initializer='he_normal'))
model.add(Dense(1, activation='sigmoid'))
# summarize the model
model.summary()
Running the example prints a summary of each layer, as well as a total summary.
This is an invaluable diagnostic for checking the output shapes and number of parameters (weights) in your model.
Model: "sequential"
Layer (type) Output Shape Param #
dense (Dense) (None, 10) 90
dense_1 (Dense) (None, 8) 88
dense_2 (Dense) (None, 1) 9
Total params: 187
Trainable params: 187
Non-trainable params: 0
Model Architecture Plot
You can create a plot of your model by calling the plot_model() function.
This will create an image file that contains a box and line diagram of the layers in your model.
The example below creates a small three-layer model and saves a plot of the model architecture to ‘model.png‘ that includes input and output shapes.
# example of plotting a model
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.utils import plot_model
# define model
model = Sequential()
model.add(Dense(10, activation='relu', kernel_initializer='he_normal', input_shape=(8,)))
model.add(Dense(8, activation='relu', kernel_initializer='he_normal'))
model.add(Dense(1, activation='sigmoid'))
# summarize the model
plot_model(model, 'model.png', show_shapes=True)
Running the example creates a plot of the model showing a box for each layer with shape information, and arrows that connect the layers, showing the flow of data through the network.
4.2 How to Plot Model Learning Curves
Learning curves are a plot of neural network model performance over time, such as calculated at the end of each training epoch.
Plots of learning curves provide insight into the learning dynamics of the model, such as whether the model is learning well, whether it is underfitting the training dataset, or whether it is
overfitting the training dataset.
For a gentle introduction to learning curves and how to use them to diagnose learning dynamics of models, see the tutorial:
You can easily create learning curves for your deep learning models.
First, you must update your call to the fit function to include reference to a validation dataset. This is a portion of the training set not used to fit the model, and is instead used to evaluate the
performance of the model during training.
You can split the data manually and specify the validation_data argument, or you can use the validation_split argument and specify a percentage split of the training dataset and let the API perform
the split for you. The latter is simpler for now.
The fit function will return a history object that contains a trace of performance metrics recorded at the end of each training epoch. This includes the chosen loss function and each configured
metric, such as accuracy, and each loss and metric is calculated for the training and validation datasets.
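For example, continuing from a call to fit() like those in the listings below (the model, X, and y are assumed to be defined already), you can inspect which traces were recorded:
# the keys present depend on the configured loss and metrics
history = model.fit(X, y, epochs=10, verbose=0, validation_split=0.3)
print(history.history.keys())
# e.g. dict_keys(['loss', 'val_loss'])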
A learning curve is a plot of the loss on the training dataset and the validation dataset. We can create this plot from the history object using the Matplotlib library.
The example below fits a small neural network on a synthetic binary classification problem. A validation split of 30 percent is used to evaluate the model during training and the cross-entropy loss
on the train and validation datasets are then graphed using a line plot.
# example of plotting learning curves
from sklearn.datasets import make_classification
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD
from matplotlib import pyplot
# create the dataset
X, y = make_classification(n_samples=1000, n_classes=2, random_state=1)
# determine the number of input features
n_features = X.shape[1]
# define model
model = Sequential()
model.add(Dense(10, activation='relu', kernel_initializer='he_normal', input_shape=(n_features,)))
model.add(Dense(1, activation='sigmoid'))
# compile the model
sgd = SGD(learning_rate=0.001, momentum=0.8)
model.compile(optimizer=sgd, loss='binary_crossentropy')
# fit the model
history = model.fit(X, y, epochs=100, batch_size=32, verbose=0, validation_split=0.3)
# plot learning curves
pyplot.title('Learning Curves')
pyplot.ylabel('Cross Entropy')
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='val')
pyplot.legend()
pyplot.show()
Running the example fits the model on the dataset. At the end of the run, the history object is returned and used as the basis for creating the line plot.
The cross-entropy loss for the training dataset is accessed via the ‘loss‘ key and the loss on the validation dataset is accessed via the ‘val_loss‘ key on the history attribute of the history object.
4.3 How to Save and Load Your Model
Training and evaluating models is great, but we may want to use a model later without retraining it each time.
This can be achieved by saving the model to file and later loading it and using it to make predictions.
This can be achieved using the save() function on the model to save the model. It can be loaded later using the load_model() function.
The model is saved in H5 format, an efficient array storage format. As such, you must ensure that the h5py library is installed on your workstation. This can be achieved using pip; for example:
pip install h5py
The example below fits a simple model on a synthetic binary classification problem and then saves the model file.
# example of saving a fit model
from sklearn.datasets import make_classification
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD
# create the dataset
X, y = make_classification(n_samples=1000, n_features=4, n_classes=2, random_state=1)
# determine the number of input features
n_features = X.shape[1]
# define model
model = Sequential()
model.add(Dense(10, activation='relu', kernel_initializer='he_normal', input_shape=(n_features,)))
model.add(Dense(1, activation='sigmoid'))
# compile the model
sgd = SGD(learning_rate=0.001, momentum=0.8)
model.compile(optimizer=sgd, loss='binary_crossentropy')
# fit the model
model.fit(X, y, epochs=100, batch_size=32, verbose=0, validation_split=0.3)
# save model to file
model.save('model.h5')
Running the example fits the model and saves it to file with the name ‘model.h5‘.
We can then load the model and use it to make a prediction, or continue training it, or do whatever we wish with it.
The example below loads the model and uses it to make a prediction.
# example of loading a saved model
from sklearn.datasets import make_classification
from tensorflow.keras.models import load_model
# create the dataset
X, y = make_classification(n_samples=1000, n_features=4, n_classes=2, random_state=1)
# load the model from file
model = load_model('model.h5')
# make a prediction
row = [1.91518414, 1.14995454, -1.52847073, 0.79430654]
yhat = model.predict([row])
print('Predicted: %.3f' % yhat[0])
Running the example loads the model from file, then uses it to make a prediction on a new row of data and prints the result.
Predicted: 0.831
5. How to Get Better Model Performance
In this section, you will discover some of the techniques that you can use to improve the performance of your deep learning models.
A big part of improving deep learning performance involves avoiding overfitting by slowing down the learning process or stopping the learning process at the right time.
5.1 How to Reduce Overfitting With Dropout
Dropout is a clever regularization method that reduces overfitting of the training dataset and makes the model more robust.
This is achieved during training, where some number of layer outputs are randomly ignored or “dropped out.” This has the effect of making the layer look like – and be treated like – a layer with a
different number of nodes and connectivity to the prior layer.
Dropout has the effect of making the training process noisy, forcing nodes within a layer to probabilistically take on more or less responsibility for the inputs.
For more on how dropout works, see this tutorial:
You can add dropout to your models as a new layer prior to the layer that you want to have input connections dropped out.
This involves adding a layer called Dropout() that takes an argument specifying the probability of dropping each output from the previous layer. E.g. 0.4 means 40 percent of inputs will be dropped
on each update to the model.
You can add Dropout layers in MLP, CNN, and RNN models, although there are also specialized versions of dropout for use with CNN and RNN models that you might also want to explore.
The example below fits a small neural network model on a synthetic binary classification problem.
A dropout layer with 50 percent dropout is inserted between the first hidden layer and the output layer.
# example of using dropout
from sklearn.datasets import make_classification
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Dropout
from matplotlib import pyplot
# create the dataset
X, y = make_classification(n_samples=1000, n_classes=2, random_state=1)
# determine the number of input features
n_features = X.shape[1]
# define model
model = Sequential()
model.add(Dense(10, activation='relu', kernel_initializer='he_normal', input_shape=(n_features,)))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy')
# fit the model
model.fit(X, y, epochs=100, batch_size=32, verbose=0)
5.2 How to Accelerate Training With Batch Normalization
The scale and distribution of inputs to a layer can greatly impact how easy or quickly that layer can be trained.
This is generally why it is a good idea to scale input data prior to modeling it with a neural network model.
Batch normalization is a technique for training very deep neural networks that standardizes the inputs to a layer for each mini-batch. This has the effect of stabilizing the learning process and
dramatically reducing the number of training epochs required to train deep networks.
For more on how batch normalization works, see this tutorial:
You can use batch normalization in your network by adding a batch normalization layer prior to the layer that you wish to have standardized inputs. You can use batch normalization with MLP, CNN, and
RNN models.
This can be achieved by adding the BatchNormalization layer directly.
The example below defines a small MLP network for a binary classification prediction problem with a batch normalization layer between the first hidden layer and the output layer.
# example of using batch normalization
from sklearn.datasets import make_classification
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import BatchNormalization
from matplotlib import pyplot
# create the dataset
X, y = make_classification(n_samples=1000, n_classes=2, random_state=1)
# determine the number of input features
n_features = X.shape[1]
# define model
model = Sequential()
model.add(Dense(10, activation='relu', kernel_initializer='he_normal', input_shape=(n_features,)))
model.add(BatchNormalization())
model.add(Dense(1, activation='sigmoid'))
# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy')
# fit the model
model.fit(X, y, epochs=100, batch_size=32, verbose=0)
Also, tf.keras has a range of other normalization layers you might like to explore; see:
5.3 How to Halt Training at the Right Time With Early Stopping
Neural networks are challenging to train.
Too little training and the model is underfit; too much training and the model overfits the training dataset. Both cases result in a model that is less effective than it could be.
One approach to solving this problem is to use early stopping. This involves monitoring the loss on the training dataset and a validation dataset (a subset of the training set not used to fit the
model). As soon as loss for the validation set starts to show signs of overfitting, the training process can be stopped.
For more on early stopping, see the tutorial:
Early stopping can be used with your model by first ensuring that you have a validation dataset. You can define the validation dataset manually via the validation_data argument to the fit() function,
or you can use the validation_split and specify the amount of the training dataset to hold back for validation.
You can then define an EarlyStopping callback and instruct it on which performance measure to monitor, such as ‘val_loss‘ for loss on the validation dataset, and the number of epochs over which to observe overfitting before taking action, e.g. 5.
This configured EarlyStopping callback can then be provided to the fit() function via the “callbacks” argument that takes a list of callbacks.
This allows you to set the number of epochs to a large number and be confident that training will end as soon as the model starts overfitting. You might also like to create a learning curve to
discover more insights into the learning dynamics of the run and when training was halted.
The example below demonstrates a small neural network on a synthetic binary classification problem that uses early stopping to halt training as soon as the model starts overfitting (after about 50 epochs).
# example of using early stopping
from sklearn.datasets import make_classification
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.callbacks import EarlyStopping
# create the dataset
X, y = make_classification(n_samples=1000, n_classes=2, random_state=1)
# determine the number of input features
n_features = X.shape[1]
# define model
model = Sequential()
model.add(Dense(10, activation='relu', kernel_initializer='he_normal', input_shape=(n_features,)))
model.add(Dense(1, activation='sigmoid'))
# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy')
# configure early stopping
es = EarlyStopping(monitor='val_loss', patience=5)
# fit the model
history = model.fit(X, y, epochs=200, batch_size=32, verbose=0, validation_split=0.3, callbacks=[es])
The tf.keras API provides a number of callbacks that you might like to explore; you can learn more here:
In this tutorial, you discovered a step-by-step guide to developing deep learning models in TensorFlow using the tf.keras API.
Specifically, you learned:
• The difference between Keras and tf.keras and how to install and confirm TensorFlow is working.
• The 5-step life-cycle of tf.keras models and how to use the sequential and functional APIs.
• How to develop MLP, CNN, and RNN models with tf.keras for regression, classification, and time series forecasting.
• How to use the advanced features of the tf.keras API to inspect and diagnose your model.
• How to improve the performance of your tf.keras model by reducing overfitting and accelerating training.
Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.
Rotary Pendulum: Exploring the Classic Control Challenge with QUBE-Servo 2 - Quanser
The QUBE-Servo 2 is supplied with two modules: the inertia disc and the pendulum. In my previous blog posts, I went through the various DC motor-based labs that use the disc module. Here I discuss
the labs supplied with the QUBE-Servo 2 Pendulum system.
Why the pendulum?
The rotary pendulum system has been used in control labs for decades. The teaching benefits of the pendulum are tremendous. The dynamics of a pendulum are similar to many real-world systems. The methods
used to model a rotary pendulum system are the same ones used for robot manipulators and other multiple degree-of-freedom (DOF) systems. Using state-feedback and other control design approaches for
the pendulum apply to a wide range of systems as well. I touched on this topic a while ago in my post Why is the pendulum so popular?
Overview of QUBE-Servo 2 Pendulum labs
The QUBE-Servo 2 includes seven pendulum-based labs that cater to the needs of typical control systems courses, from teaching introductory physics like finding the moment of inertia of a link to more
advanced tasks like nonlinear control to swing up the pendulum. They are summarized below:
Moment of Inertia: Find the moment of inertia of a pendulum analytically through first-principle physics and experimentally from the free-oscillation response.
Pendulum Modelling: Make sure the system hardware matches the modelling conventions. This is an essential step to successful control system implementation.
State-space Modelling: Represent the linearized model of the rotary pendulum in state-space and perform model validation.
Balance Control: Implement a PD-based control to balance the pendulum in the inverted, upright position.
State-feedback Pole Placement Control: Design a state-feedback control to balance the pendulum using the pole placement technique.
State-feedback LQR-based Control: Design a state-feedback controller to balance the pendulum using the Linear Quadratic Regulator (LQR) optimal control method.
Swing-up Control: Learn how to use an energy-based nonlinear control to swing the pendulum up from its downward position to the inverted, upright position.
The free-body diagram of a pendulum
Moment of Inertia
The moment of inertia is one of the main parameters in rotary systems. Having an accurate value is important in order to have a model that represents the actual system. In general, there are several
ways to find the parameters of systems. In this case, we can find the inertia of the single-link pendulum analytically through the moment of inertia equation using the known mass and length of the link.
The other method is finding it experimentally by looking at the pendulum’s free-oscillation response and measuring its natural frequency.
A typical free oscillation response of the pendulum used to find the moment of inertia.
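As a rough sketch of both routes in Python (the mass, length, and measured frequency below are placeholders, not QUBE-Servo 2 specifications):
# moment of inertia of a uniform rod pivoted at one end
import math
m = 0.024   # hypothetical link mass (kg)
L = 0.129   # hypothetical link length (m)
f_n = 2.5   # hypothetical measured natural frequency (Hz)
g = 9.81
# analytic estimate from first principles
J_analytic = m * L**2 / 3.0
# experimental estimate from the small-angle oscillation model:
# omega_n = sqrt(m*g*(L/2) / J)  =>  J = m*g*(L/2) / omega_n**2
omega_n = 2.0 * math.pi * f_n
J_experimental = m * g * (L / 2.0) / omega_n**2
print(J_analytic, J_experimental)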
Pendulum Modelling
Modelling conventions used for the QUBE-Servo 2 Pendulum
Making sure the hardware matches the model conventions is a crucial step in a control implementation procedure. Otherwise, the system can go unstable.
For example, if the pendulum on your rotary pendulum system goes positive when it rotates clockwise, but the model defines positive when rotating counter-clockwise, then your balance controller,
which is based on the model of the system, will not be able to stabilize the system.
The Pendulum Modelling Lab requires that students actively test the system and ensure the sensor and actuator gains are set properly.
State-space Modelling
State-space representation was introduced as a new lab for the QUBE-Servo 2’s DC motor because it’s such an important technique to know in order to model more complex Multiple-Input Multiple-Output
(MIMO) systems and use more advanced control methods.
The rotary pendulum system is a single-input, multiple-output system: the DC motor is the input, and the rotary arm and pendulum angle positions and velocities are the outputs. As a result, rotary
pendulum systems are typically modeled in state-space. In the lab, the state-space model is derived and then validated by comparing the open-loop response of the model with the actual system.
State-space representation block diagram for QUBE-Servo 2
Balance Control
Balancing is a common control task found in many real-world systems, e.g., Segway or walking robots. While unconventional, this Pendulum Balance Lab uses a PD-based control to stabilize the pendulum
in the upright, inverted position. This lab is meant as an introduction to the different components involved in such a task-based control application, including the control switching logic and
measuring the inverted pendulum angle in order to balance it.
Rotary pendulum PD control block diagram
State-space modelling and control design are not needed for this lab, so students can learn how to balance a system using the more familiar PD control. However, the lab does highlight that
state-space control techniques (or other advanced control methods) are needed to find control gains that will obtain a certain desired response.
State-feedback Pole Placement Control
State-feedback is the most common way of stabilizing an inverted pendulum, and pole placement is a standard technique used to find the control gain K for the closed-loop system shown below.
General state-feedback block diagram
Pole placement finds a control gain that will drive the closed-loop poles of the system to the desired location. In this case, the desired pole locations are based on second-order time-domain
requirements (i.e., percent overshoot and settling time), similar to what we introduced in the DC Motor PD Control Labs. From that, you can find the location of the two dominant poles, shown below as
p[1] and p[2]. The other two poles of the pendulum system are placed far along the real axis in the left-half plane. This technique is used in the control design of high-order systems.
Desired pole locations for the QUBE-Servo 2 rotary pendulum.
The pole placement method is performed in the Matlab environment – both manually and using the Control System Toolbox function. Doing it manually exposes students to using companion matrices and
gives them an idea of how the pole placement method helps find the control gain K. Then, they implement the controller on an actual system to see if the pendulum can balance and if it satisfies the
controller requirements.
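A minimal sketch of the same workflow in Python (using the python-control package rather than Matlab); the state-space matrices below are illustrative placeholders, not the QUBE-Servo 2's actual linearized model:

import numpy as np
import control  # python-control package

# Placeholder linearized model, x = [theta, alpha, theta_dot, alpha_dot].
A = np.array([[0.0, 0.0,   1.0,   0.0],
              [0.0, 0.0,   0.0,   1.0],
              [0.0, 149.3, -0.01, 0.0],
              [0.0, 261.6, -0.01, 0.0]])
B = np.array([[0.0], [0.0], [49.7], [49.1]])

# Dominant pole pair from second-order specs (damping ratio, natural frequency),
# plus two fast poles placed far into the left-half plane.
zeta, w_n = 0.7, 4.0
dominant = np.roots([1, 2 * zeta * w_n, w_n**2])  # p1, p2
poles = np.concatenate([dominant, [-30.0, -40.0]])

K = control.place(A, B, poles)  # state-feedback gain, u = -K x
print("K =", K)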
State-feedback LQR-based Control
Another very common technique to obtain the control gain K for state-feedback control is the Linear-Quadratic Regulator (LQR) optimization method. While the pole placement generates the control gain
K needed to move the closed-loop poles to their desired locations, LQR minimizes a cost function that is based on the dynamics of the system and tuning matrices called weighting matrices. By placing
different weights on the matrices, different control gains are generated.
QUBE-Servo LQR Pendulum Balance Block Diagram
Since LQR is an optimal control method, it finds a control gain that will obtain the best performance based on the weighting matrices selected while minimizing the control effort. This can be
beneficial for systems with motor/actuator limitations (e.g., DC motor only allows +/- 5V), or for mobile systems that have limited battery power.
The method’s disadvantage is that finding the correct weighting matrices to satisfy the desired response is usually an iterative process. Pole placement is less of an iterative process as it finds
the poles directly, based on the requirement. However, compared to LQR, pole placement tends to find control gain K that generates larger control signals.
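Continuing the hypothetical model from the sketch above, the LQR gain comes from a single call; the weighting matrices are tuning choices you would iterate on:

# Continuing the sketch above with the same placeholder (A, B).
Q = np.diag([5.0, 1.0, 1.0, 1.0])  # state weights: larger entries penalize that state more
R = np.array([[1.0]])              # control-effort weight

K_lqr, S, E = control.lqr(A, B, Q, R)  # gain, Riccati solution, closed-loop poles
print("K_lqr =", K_lqr)
print("closed-loop poles:", E)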
Swing-up Control
Our Swing-up Control Lab introduces a simple nonlinear energy-based control. Similarly to a position or speed control where you have the desired position or speed setpoint, you can also have an
energy setpoint. In this case, the setpoint is the energy of the pendulum when it is in the upright vertical position.
QUBE-Servo Energy Control Block Diagram
Based on the desired energy and the nonlinear dynamics of the pendulum, the swing-up algorithm finds the acceleration of the rotary arm that is needed to swing up the pendulum to the vertical
position. The acceleration is converted into a motor voltage and then applied to the system. Once the pendulum swings up to a certain threshold about the vertical position, a state-feedback balance
controller is engaged.
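A hedged sketch of such an energy-based law, in the style of the Astrom-Furuta swing-up controller (not Quanser's exact implementation; the gain and limits are placeholders, and the convention here takes alpha = 0 at the upright position):

import numpy as np

def swingup_accel(alpha, alpha_dot, J_p, m_p, l_p, k_e=50.0, a_max=6.0):
    """Energy-based swing-up: command a rotary-arm acceleration that drives the
    pendulum's energy toward its value at the upright rest position (E_ref = 0)."""
    E = 0.5 * J_p * alpha_dot**2 + m_p * 9.81 * l_p * (np.cos(alpha) - 1.0)
    u = k_e * (E - 0.0) * np.sign(alpha_dot * np.cos(alpha))  # pump energy in phase
    return np.clip(u, -a_max, a_max)  # saturate, then convert to a motor voltage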
This is by no means a comprehensive nonlinear control design but it is a great way to introduce more advanced control techniques. You never know… it might motivate an undergraduate student to pursue
graduate studies.
Final Notes
The techniques used to model, balance, and swing up an inverted pendulum have tremendous carry-over to other applications. State-space modelling is a mainstay for modelling more complex MIMO systems.
State-feedback control is sometimes used in multi-DOF robot manipulators, quadrotor systems, aerospace devices, and so on. So, if you already have the QUBE-Servo 2 system in your lab, make sure to
download the new set of labs.
By the way, the QUBE-Servo 2 is now also available virtually – check out QLabs Controls or QLabs Virtual QUBE-Servo 2. And don’t forget about our free mobile textbook app. It covers many of the
topics that QUBE-Servo 2 labs focus on and is a great addition to your course and learning experience. | {"url":"https://www.quanser.com/blog/control-systems/rotary-pendulum-control-challenge-with-qube-servo/","timestamp":"2024-11-04T10:36:08Z","content_type":"text/html","content_length":"100389","record_id":"<urn:uuid:b96f16d6-5424-4667-a7e8-a49f3059bbe5>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00222.warc.gz"} |
Recursive Function
Hi. Would greatly appreciate any simple explanation on why the function below returns 56:
def fun(a):
    if a > 30:
        return 3
    return a + fun(a + 3)
Thank you so much.
There are many ways that people explore code to understand what it does.
These are two very common ways.
You can run the code in your mind and reason about it line by line.
Some people find it helpful to explain the code to another person.
That other person does not need to be a programmer.
For lots of people, even experienced programmers, the act of speaking out loud helps you understand code.
The other common idea is to ask what you want to know as the code runs and add print calls so that you can see how the code works. For example:
def fun(a):
    print("start fun a=", a)
    if a > 30:
        print("return from if")
        return 3
    print("return from else")
    return a + fun(a + 3)
You first: what do you think it should return instead, and why?
Thanks for the code showing the steps. Honestly, I am still a little bit confused.
I thought the answer would be 53… It is the "a + fun(a + 3)" that confuses me.
If you run the version of the code I posted, how many times is fun called, and with what value of a?
fun was called 3x.
But for example, why is the 2nd instance resulting in 28, when it is "a + fun(a + 3)"? It has an "a +" in front…
Well, what do you think the a part should have as a result? What do you think the fun(a + 3) part should have as a result? (Hint: what is a + 3? What does it mean if you write fun() and put something
inside the parentheses?) What should you get if you add those two things together?
Does this make it clearer:
def fun(a):
    if a > 30:
        return 3
    x = a + 3
    y = fun(x)
    z = a + y
    return z
If you got this function from a course or tutorial, then the writer may have a devious sense of humor.
It’s a pretty funky function, since its max is for fun(0) == fun(3) == 168, so it’s kind of difficult to get any intuition for how its behaving
Yes, I get the flow of "a + fun(a + 3)", as in your representation above. Now I know where I got it wrong, lol. So silly of me that I just focused on "a + fun(a + 3)", which made me frustrated because the function seemed never-ending: fun(28)… fun(31)… fun(33)… But I didn't really absorb the condition that above 30, it just returns 3. LOL. Thanks for taking the time out for the explanation.
2 Likes
Yes, I get it now. Thank you! | {"url":"https://discuss.python.org/t/recursive-function/34690","timestamp":"2024-11-10T12:44:06Z","content_type":"text/html","content_length":"30287","record_id":"<urn:uuid:5d01c7c7-add4-4f73-8331-cad5ebbdaa1d>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00506.warc.gz"} |
Free Concrete Calculator
FreeCalculator.net’s sole focus is to provide fast, comprehensive, convenient, free online calculators in a plethora of areas. Currently, we have over 100 calculators to help you “do the math”
quickly in areas such as finance, fitness, health, math, and others, and we are still developing more. Our goal is to become the one-stop, go-to site for people who need to make quick calculations.
Additionally, we believe the internet should be a source of free information. Therefore, all of our tools and services are completely free, with no registration required.
We coded and developed each calculator individually and put each one through strict, comprehensive testing. However, please inform us if you notice even the slightest error – your input is extremely
valuable to us. While most calculators on FreeCalculator.net are designed to be universally applicable for worldwide usage, some are for specific countries only. | {"url":"https://freecalculator.net/concrete-calculator","timestamp":"2024-11-09T00:27:52Z","content_type":"text/html","content_length":"50129","record_id":"<urn:uuid:7cdf9792-a416-4b2e-8c12-cfa49007615b>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00760.warc.gz"} |
Equation homework help
Equatio mathspace: alkanes and determining the desired variable, functions. Dantzig himself the sum is ch 2 - polynomials - online writing work from linear programming and. Click on about doing what
is to help by the steps of campus. Equatio mathspace: adding and solve several integration by a freshman in other than or reproduced without further behind. Like forming perfect square a total of 5.
Answer key bridges educator site secured by itself. Walk, here are not f is why a colorful yet mathematical contexts. Trig – 5 a christian growth water equation homework help exams. Fyi, solving a
photomath was initially rejected by step equations with the measurements raw so far! When you please join our front and has a negative exponents and password but my worst subject. As scheduling
airline sales, multiplication, best source for its presence can do my dashboard. What you will equations - 2x2 - we will. Wolfram education is without question and bond theory, mathematical symbols
different types of george bernard dantzig himself the opposite. In heidegger, you should be equation homework help to another that the same thing to explore ritabakshi2019's board features. Ferrari
was a bit different from the total number corner grade 4 practice problems eight problems. Tutoring i needed structural information is a way. More ideas about christianity primary history homework 2
2015 6-8 is an assortment of mixtures, descartes rule. Did i had solved exactly what you need assistance on building that the following problem. Chinese internet, least twelve linear equations in
mathematics online free math exam. Snap a ti-89 calculators won t f, matrices, calculus problem,. Requests for the example, x 2 linear systems of high school algebra and reduce the minotaur homework
help. Also included for solving a rectangle a balance sheet perfectly. Geometry, which formula to work your mistake or accounting equation, equation homework help exponents. No later than 5 7 and i
could use and division negative aspect. Teachers and an alternative to pass the same thing temperature! Our site sooner or make money for me http://needfood.info/creative-writing-course-dublin/ time
with the best for mathematical contexts. Place; education experts extensive list of doing homework forms. Start center or the one equation to the math grade 6. Multi-Step equations in 8th grade level
up a page. My degree, set topic, h: middle math 380 homework would have everything you can read these topics included. Factoring algebra fb 4 from cpalms formative assessments that you implement in
algebra topics covered. Like a bh, and can use decimals as the first two variables. Again, and the same reasoning for a bit equation homework help point depression, subtract, or dad. This program was
writing homework fractions to add. This free download maths formulas, general there are accessible to place and worksheets. Interested in this algebra 1; this is a wall chart. Functions,
substitution, and been struggling to ensure that type of the handwritten. For granted would accept them instead of the classroom. The problems, 2nd grade 6 and other violations of additional
resources website. Hazewinkel 1988, and control crabgrass - printables answer key - basic addition, formulas available. Suppose the adventures of subjects, eighth grade math help quadratic equations
linear equations. Lesson on this primer, the same rate equations. Microsoft word problem equations works on tuesday, in mathematics vision project where you can have a model. | {"url":"https://fnpworld.com/equation-homework-help/","timestamp":"2024-11-06T13:50:29Z","content_type":"text/html","content_length":"54733","record_id":"<urn:uuid:683b3d95-a86a-4884-9b72-c891846f2dda>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00688.warc.gz"} |
Statistical potential - Wikiwand
In protein structure prediction, statistical potentials or knowledge-based potentials are scoring functions derived from an analysis of known protein structures in the Protein Data Bank (PDB).
Example of interatomic pseudopotential, between β-carbons of isoleucine and valine residues, generated by using MyPMFs.^[1]
The original method to obtain such potentials is the quasi-chemical approximation, due to Miyazawa and Jernigan.^[2] It was later followed by the potential of mean force (statistical PMF ^[Note 1]),
developed by Sippl.^[3] Although the obtained scores are often considered as approximations of the free energy—thus referred to as pseudo-energies—this physical interpretation is incorrect.^[4]^[5]
Nonetheless, they are applied with success in many cases, because they frequently correlate with actual Gibbs free energy differences.^[6]
Possible features to which a pseudo-energy can be assigned include interatomic distances, torsion angles, solvent exposure, and hydrogen bond geometry.
The classic application is, however, based on pairwise amino acid contacts or distances, thus producing statistical interatomic potentials. For pairwise amino acid contacts, a statistical potential
is formulated as an interaction matrix that assigns a weight or energy value to each possible pair of standard amino acids. The energy of a particular structural model is then the combined energy of
all pairwise contacts (defined as two amino acids within a certain distance of each other) in the structure. The energies are determined using statistics on amino acid contacts in a database of known
protein structures (obtained from the PDB).
Initial development
Many textbooks present the statistical PMFs as proposed by Sippl ^[3] as a simple consequence of the Boltzmann distribution, as applied to pairwise distances between amino acids. This is incorrect,
but a useful start to introduce the construction of the potential in practice. The Boltzmann distribution applied to a specific pair of amino acids, is given by:
${\displaystyle P\left(r\right)={\frac {1}{Z}}e^{-{\frac {F\left(r\right)}{kT}}}}$
where ${\displaystyle r}$ is the distance, ${\displaystyle k}$ is the Boltzmann constant, ${\displaystyle T}$ is the temperature and ${\displaystyle Z}$ is the partition function, with
${\displaystyle Z=\int e^{-{\frac {F(r)}{kT}}}dr}$
The quantity ${\displaystyle F(r)}$ is the free energy assigned to the pairwise system. Simple rearrangement results in the inverse Boltzmann formula, which expresses the free energy ${\displaystyle
F(r)}$ as a function of ${\displaystyle P(r)}$:
${\displaystyle F\left(r\right)=-kT\ln P\left(r\right)-kT\ln Z}$
To construct a PMF, one then introduces a so-called reference state with a corresponding distribution ${\displaystyle Q_{R}}$ and partition function ${\displaystyle Z_{R}}$, and calculates the
following free energy difference:
${\displaystyle \Delta F\left(r\right)=-kT\ln {\frac {P\left(r\right)}{Q_{R}\left(r\right)}}-kT\ln {\frac {Z}{Z_{R}}}}$
The reference state typically results from a hypothetical system in which the specific interactions between the amino acids are absent. The second term involving ${\displaystyle Z}$ and ${\
displaystyle Z_{R}}$ can be ignored, as it is a constant.
In practice, ${\displaystyle P(r)}$ is estimated from the database of known protein structures, while ${\displaystyle Q_{R}(r)}$ typically results from calculations or simulations. For example, ${\
displaystyle P(r)}$ could be the conditional probability of finding the ${\displaystyle C\beta }$ atoms of a valine and a serine at a given distance ${\displaystyle r}$ from each other, giving rise
to the free energy difference ${\displaystyle \Delta F}$. The total free energy difference of a protein, ${\displaystyle \Delta F_{\textrm {T}}}$, is then claimed to be the sum of all the pairwise
free energies:
${\displaystyle \Delta F_{\textrm {T}}=\sum _{i<j}\Delta F(r_{ij}\mid a_{i},a_{j})=-kT\sum _{i<j}\ln {\frac {P\left(r_{ij}\mid a_{i},a_{j}\right)}{Q_{R}\left(r_{ij}\mid a_{i},a_{j}\right)}}}$
where the sum runs over all amino acid pairs ${\displaystyle a_{i},a_{j}}$ (with ${\displaystyle i<j}$) and ${\displaystyle r_{ij}}$ is their corresponding distance. In many studies ${\displaystyle
Q_{R}}$ does not depend on the amino acid sequence.^[7]
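As an illustrative sketch (not taken from any particular implementation), the construction above can be written in a few lines; the count arrays here are fabricated stand-ins for real PDB and reference-state statistics:

import numpy as np

kT = 0.593  # kcal/mol at ~298 K

# Fabricated pair-count histograms over distance bins (stand-ins for real data).
r = np.linspace(2.0, 15.0, 27)                          # bin centers (Angstrom)
counts_obs = 1000.0 * np.exp(-((r - 5.0) ** 2) / 4.0) + 10.0  # P(r): peaked at a contact distance
counts_ref = r ** 2                                     # Q_R(r): shell-volume-like reference state

P = counts_obs / counts_obs.sum()
Q_R = counts_ref / counts_ref.sum()

# Delta F(r) = -kT ln(P(r) / Q_R(r)); the constant -kT ln(Z / Z_R) is dropped, as in the text.
delta_F = -kT * np.log(P / Q_R)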
Conceptual issues
Intuitively, it is clear that a low value for ${\displaystyle \Delta F_{\textrm {T}}}$ indicates that the set of distances in a structure is more likely in proteins than in the reference state.
However, the physical meaning of these statistical PMFs has been widely disputed, since their introduction.^[4]^[5]^[8]^[9] The main issues are:
1. The wrong interpretation of this "potential" as a true, physically valid potential of mean force;
2. The nature of the so-called reference state and its optimal formulation;
3. The validity of generalizations beyond pairwise distances.
Controversial analogy
In response to the issue regarding the physical validity, the first justification of statistical PMFs was attempted by Sippl.^[10] It was based on an analogy with the statistical physics of liquids.
For liquids, the potential of mean force is related to the radial distribution function ${\displaystyle g(r)}$, which is given by:^[11]
${\displaystyle g(r)={\frac {P(r)}{Q_{R}(r)}}}$
where ${\displaystyle P(r)}$ and ${\displaystyle Q_{R}(r)}$ are the respective probabilities of finding two particles at a distance ${\displaystyle r}$ from each other in the liquid and in the
reference state. For liquids, the reference state is clearly defined; it corresponds to the ideal gas, consisting of non-interacting particles. The two-particle potential of mean force ${\
displaystyle W(r)}$ is related to ${\displaystyle g(r)}$ by:
${\displaystyle W(r)=-kT\log g(r)=-kT\log {\frac {P(r)}{Q_{R}(r)}}}$
According to the reversible work theorem, the two-particle potential of mean force ${\displaystyle W(r)}$ is the reversible work required to bring two particles in the liquid from infinite separation
to a distance ${\displaystyle r}$ from each other.^[11]
Sippl justified the use of statistical PMFs—a few years after he introduced them for use in protein structure prediction—by appealing to the analogy with the reversible work theorem for liquids. For
liquids, ${\displaystyle g(r)}$ can be experimentally measured using small angle X-ray scattering; for proteins, ${\displaystyle P(r)}$ is obtained from the set of known protein structures, as
explained in the previous section. However, as Ben-Naim wrote in a publication on the subject:^[5]
[...] the quantities, referred to as "statistical potentials," "structure based potentials," or "pair potentials of mean force", as derived from the protein data bank (PDB), are neither
"potentials" nor "potentials of mean force," in the ordinary sense as used in the literature on liquids and solutions.
Moreover, this analogy does not solve the issue of how to specify a suitable reference state for proteins.
Bayesian probability
Baker and co-workers ^[15] justified statistical PMFs from a Bayesian point of view and used these insights in the construction of the coarse grained ROSETTA energy function. According to Bayesian
probability calculus, the conditional probability ${\displaystyle P(X\mid A)}$ of a structure ${\displaystyle X}$, given the amino acid sequence ${\displaystyle A}$, can be written as:
${\displaystyle P\left(X\mid A\right)={\frac {P\left(A\mid X\right)P\left(X\right)}{P\left(A\right)}}\propto P\left(A\mid X\right)P\left(X\right)}$
${\displaystyle P(X\mid A)}$ is proportional to the product of the likelihood ${\displaystyle P\left(A\mid X\right)}$ times the prior ${\displaystyle P\left(X\right)}$. By assuming that the
likelihood can be approximated as a product of pairwise probabilities, and applying Bayes' theorem, the likelihood can be written as:
${\displaystyle P\left(A\mid X\right)\approx \prod _{i<j}P\left(a_{i},a_{j}\mid r_{ij}\right)\propto \prod _{i<j}{\frac {P\left(r_{ij}\mid a_{i},a_{j}\right)}{P(r_{ij})}}}$
where the product runs over all amino acid pairs ${\displaystyle a_{i},a_{j}}$ (with ${\displaystyle i<j}$), and ${\displaystyle r_{ij}}$ is the distance between amino acids ${\displaystyle i}$ and $
{\displaystyle j}$. Obviously, the negative of the logarithm of the expression has the same functional form as the classic pairwise distance statistical PMFs, with the denominator playing the role of
the reference state. This explanation has two shortcomings: it relies on the unfounded assumption that the likelihood can be expressed as a product of pairwise probabilities, and it is purely qualitative.
Probability kinematics
Hamelryck and co-workers ^[6] later gave a quantitative explanation for the statistical potentials, according to which they approximate a form of probabilistic reasoning due to Richard Jeffrey and
named probability kinematics. This variant of Bayesian thinking (sometimes called "Jeffrey conditioning") allows updating a prior distribution based on new information on the probabilities of the
elements of a partition on the support of the prior. From this point of view, (i) it is not necessary to assume that the database of protein structures—used to build the potentials—follows a
Boltzmann distribution, (ii) statistical potentials generalize readily beyond pairwise differences, and (iii) the reference ratio is determined by the prior distribution.
Reference ratio
The reference ratio method. ${\displaystyle Q(X)}$ is a probability distribution that describes the structure of proteins on a local length scale (right). Typically, ${\displaystyle Q(X)}$ is embodied in a fragment library, but other possibilities are an energy function or a graphical model. In order to obtain a complete description of protein structure, one also needs a probability distribution ${\displaystyle P(Y)}$ that describes nonlocal aspects, such as hydrogen bonding. ${\displaystyle P(Y)}$ is typically obtained from a set of solved protein structures from the PDB (left). In order to combine ${\displaystyle P(Y)}$ with ${\displaystyle Q(X)}$ in a meaningful way, one needs the reference ratio expression (bottom), which takes the signal in ${\displaystyle P(Y)}$ with respect to ${\displaystyle Q(Y)}$ into account.
Expressions that resemble statistical PMFs naturally result from the application of probability theory to solve a fundamental problem that arises in protein structure prediction: how to improve an
imperfect probability distribution ${\displaystyle Q(X)}$ over a first variable ${\displaystyle X}$ using a probability distribution ${\displaystyle P(Y)}$ over a second variable ${\displaystyle Y}$,
with ${\displaystyle Y=f(X)}$.^[6] Typically, ${\displaystyle X}$ and ${\displaystyle Y}$ are fine and coarse grained variables, respectively. For example, ${\displaystyle Q(X)}$ could concern the
local structure of the protein, while ${\displaystyle P(Y)}$ could concern the pairwise distances between the amino acids. In that case, ${\displaystyle X}$ could for example be a vector of dihedral
angles that specifies all atom positions (assuming ideal bond lengths and angles). In order to combine the two distributions, such that the local structure will be distributed according to ${\
displaystyle Q(X)}$, while the pairwise distances will be distributed according to ${\displaystyle P(Y)}$, the following expression is needed:
${\displaystyle P(X,Y)={\frac {P(Y)}{Q(Y)}}Q(X)}$
where ${\displaystyle Q(Y)}$ is the distribution over ${\displaystyle Y}$ implied by ${\displaystyle Q(X)}$. The ratio in the expression corresponds to the PMF. Typically, ${\displaystyle Q(X)}$ is
brought in by sampling (typically from a fragment library), and not explicitly evaluated; the ratio, which in contrast is explicitly evaluated, corresponds to Sippl's PMF. This explanation is
quantitive, and allows the generalization of statistical PMFs from pairwise distances to arbitrary coarse grained variables. It also provides a rigorous definition of the reference state, which is
implied by ${\displaystyle Q(X)}$. Conventional applications of pairwise distance statistical PMFs usually lack two necessary features to make them fully rigorous: the use of a proper probability
distribution over pairwise distances in proteins, and the recognition that the reference state is rigorously defined by ${\displaystyle Q(X)}$.
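A minimal sketch, with hypothetical stand-in functions, of how the ratio is typically used in practice: samples of ${\displaystyle X}$ drawn from ${\displaystyle Q(X)}$ are reweighted by the reference ratio evaluated at ${\displaystyle Y=f(X)}$.

def reference_ratio_weights(x_samples, f, p_y, q_y):
    # Each x is drawn from Q(X); y = f(x) is its coarse-grained value.
    # The weight P(y)/Q(y) tilts the sampled ensemble toward
    # P(X, Y) = [P(Y)/Q(Y)] Q(X), as in the expression above.
    return [p_y(f(x)) / q_y(f(x)) for x in x_samples]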
Statistical potentials are used as energy functions in the assessment of an ensemble of structural models produced by homology modeling or protein threading. Many differently parameterized
statistical potentials have been shown to successfully identify the native state structure from an ensemble of decoy or non-native structures.^[16] Statistical potentials are not only used for
protein structure prediction, but also for modelling the protein folding pathway.^[17]^[18] | {"url":"https://www.wikiwand.com/en/articles/Statistical_potential","timestamp":"2024-11-05T19:20:25Z","content_type":"text/html","content_length":"500293","record_id":"<urn:uuid:5ea09468-8f3d-4842-875b-580a039b36fd>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00377.warc.gz"} |
Miles to Hands
Miles to Hands Converter
How to use this Miles to Hands Converter
Follow these steps to convert given length from the units of Miles to the units of Hands.
1. Enter the input Miles value in the text field.
2. The calculator converts the given Miles into Hands in real time using the conversion formula, and displays the result under the Hands label. You do not need to click any button. If the input changes, the Hands value is re-calculated, just like that.
3. You may copy the resulting Hands value using the Copy button.
4. To view a detailed step by step calculation of the conversion, click on the View Calculation button.
5. You can also reset the input by clicking on the reset button present below the input field.
What is the Formula to convert Miles to Hands?
The formula to convert given length from Miles to Hands is:
Length[(Hands)] = Length[(Miles)] × 15840
(Since 1 mile = 63,360 inches and 1 hand = 4 inches, 1 mile is exactly 15,840 hands.)
Substitute the given value of length in miles, i.e., Length[(Miles)] in the above formula and simplify the right-hand side value. The resulting value is the length in hands, i.e., Length[(Hands)].
Consider that a luxury sports car can travel 300 miles on a full tank of gas.
Convert this distance from miles to Hands.
The length in miles is:
Length[(Miles)] = 300
The formula to convert length from miles to hands is:
Length[(Hands)] = Length[(Miles)] × 15840
Substitute the given length Length[(Miles)] = 300 in the above formula.
Length[(Hands)] = 300 × 15840
Length[(Hands)] = 4752000
Final Answer:
Therefore, 300 mi is equal to 4752000 hand.
The length is 4752000 hand, in hands.
Consider that a private jet can fly 1,500 miles without refueling.
Convert this range from miles to Hands.
The length in miles is:
Length[(Miles)] = 1500
The formula to convert length from miles to hands is:
Length[(Hands)] = Length[(Miles)] × 15840
Substitute the given length Length[(Miles)] = 1500 in the above formula.
Length[(Hands)] = 1500 × 15840
Length[(Hands)] = 23760000
Final Answer:
Therefore, 1500 mi is equal to 23760000 hand.
The length is 23760000 hand, in hands.
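For anyone scripting this conversion, a one-line sketch of the same formula:

HANDS_PER_MILE = 15840  # 1 mile = 63,360 inches; 1 hand = 4 inches

def miles_to_hands(miles: float) -> float:
    return miles * HANDS_PER_MILE

print(miles_to_hands(300))   # 4752000.0
print(miles_to_hands(1500))  # 23760000.0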
Miles to Hands Conversion Table
The following table gives some of the most used conversions from Miles to Hands.
Miles (mi) Hands (hand)
0 mi 0 hand
1 mi 15840 hand
2 mi 31680 hand
3 mi 47520 hand
4 mi 63360 hand
5 mi 79200 hand
6 mi 95040 hand
7 mi 110880 hand
8 mi 126720 hand
9 mi 142560 hand
10 mi 158400 hand
20 mi 316800 hand
50 mi 792000 hand
100 mi 1584000 hand
1000 mi 15840000 hand
10000 mi 158400000 hand
100000 mi 1584000000 hand
A mile (symbol: mi) is a unit of length commonly used in the United States and the United Kingdom. One mile is equal to 1.60934 kilometers.
The mile originated from the Roman mile, which was 1,000 paces. The current definition of a mile is based on the international agreement and equals exactly 1,609.344 meters.
Miles are mainly used to measure distances in the United States and the United Kingdom, especially for road systems. While most of the world uses kilometers, the mile remains prevalent in these
A hand is a unit of length used primarily to measure the height of horses. One hand is equivalent to 4 inches or approximately 0.1016 meters.
The hand is defined as 4 inches, providing a standardized measurement for assessing horse height, ensuring consistency across various contexts and practices.
Hands are used in the equestrian industry to measure the height of horses, from the ground to the highest point of the withers. The unit offers a convenient and traditional method for expressing
horse height and remains in use in equestrian competitions and breed standards.
Frequently Asked Questions (FAQs)
1. What is the formula for converting Miles to Hands in Length?
The formula to convert Miles to Hands in Length is:
Miles × 15840
2. Is this tool free or paid?
This Length conversion tool, which converts Miles to Hands, is completely free to use.
3. How do I convert Length from Miles to Hands?
To convert Length from Miles to Hands, you can use the following formula:
Miles × 15840
For example, if you have a value in Miles, you substitute that value in place of Miles in the above formula, and solve the mathematical expression to get the equivalent value in Hands. | {"url":"https://convertonline.org/unit/?convert=miles-hands","timestamp":"2024-11-03T19:07:15Z","content_type":"text/html","content_length":"90683","record_id":"<urn:uuid:b37cf31d-c656-44cd-8223-b720e8e28f15>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00452.warc.gz"} |
The New Quantum Era | Transcript: Black hole physics and new states of quantum matter with John Preskill
Black hole physics and new states of quantum matter with John Preskill
Sebastian Hassinger 0:30
Welcome back to our podcast. We're very excited about our guest today, and we think you'll enjoy this episode: we're joined by Dr. John Preskill. He's a professor of physics at the California Institute of Technology, and quite famous in the quantum computing field for having coined the term NISQ, noisy intermediate scale quantum computers. He wrote a paper in 2018 that really characterized the types of devices, the more sophisticated devices being produced, but also called out a lot of the challenges that those technologies are still trying to overcome. And his work has continued to be really at that intersection of what the art of the possible is, from both a technology and a scientific perspective, with the devices that are made available today. It should be a fun ride, because he's also one of the ringleaders of the so-called It from Qubit intellectual movement, I guess, still studying the intersection of quantum gravity theory and quantum information sciences. So it's gonna be a good time. Yeah, what I find really interesting about John is that, in a way, his interest in quantum computing is motivated by the ways in which it can help progress our understanding of quantum mechanics, specifically quantum gravity and quantum matter. And as you said, sort of that broader set of topics that are known as It from Qubit. We will dig into the information paradox in black holes, for example, and what error correction might be able to tell us about that phenomenon. So it'll be great. So let's jump in. Here we go.
Hello, and thanks for joining us again, we've got a really special guest with us today, we're extremely excited to be speaking with John Preskill. He's the Richard P. Feynman professor of theoretical
physics at the California Institute of Technology, or Caltech, where he's also the director, and I think founder, of the Institute for Quantum Information and Matter. He's played an instrumental
role in the foundation of the field, the development of quantum information science, famously coining the term NISQ, or noisy intermediate scale quantum computers to describe the devices that we're
building today. In a paper that was published in 2018. It's still, I think, critical reading for understanding the field as it is today. So we're very excited to have with us, John Preskill. Welcome,
John. Oh, I'm excited to be here with you guys. Welcome, John. Thank you. And we always like to start off just by giving our guests a chance to sort of tell their story a little bit because the the
path to quantum Information Science is so varied, you, I think, began as a particle physicist. So you know, how did you sort of make your way into this, this interest in quantum information?
John Preskill 3:49
Well, actually, like a lot of kids of my generation, I got excited about science because of the space program. I remember the day that Yuri Gagarin orbited the Earth. It was an electrifying moment,
and then followed very avidly all the missions of Mercury and Gemini and Apollo and so on. And so then I would learn about rockets and things like that. Amazing days. I got interested in math. And, you know, in high school, I decided the greatest achievement of the human intellect was Gödel's theorem; the idea that there are things that are true, but which we can never prove, I thought was fascinating. So I thought, okay, I should be a mathematician, do set theory and logic. And that's what I had in mind when I went to college, after a year at Princeton, which I had gone to because somehow I thought that was the right place to do math. I think I kind of wised up that maybe I wasn't going to, that I wasn't cut out to be a mathematician. But meanwhile, physics is even better, right? We use math, and we can understand nature by using mathematical ideas and methods.
John Preskill 5:13
Now, this was a time when particle physics was really taking off; I was an undergrad in the early 70s. And so the standard model was being established, and chromodynamics was discovered, and charm was discovered. And there was something very deeply appealing about studying nature at the most fundamental level. And so that's what I wanted to do; I went to Harvard for graduate school. Now, the sad thing is, for my generation, we came along just a little bit too late to take part in discovering the standard model and establishing those truths. But it was okay, we were going to do the beyond-the-standard-model physics, right. So I got interested in cosmology, because we had all these speculations about physics at really short distances that we couldn't explore in accelerators. But maybe by studying the early universe, we could find out about physics at very high energies that we can't directly access on Earth. And that was one thing I was interested in. And the other was, you know, ideas about beyond-the-standard-model physics that we could explore in accelerators. We were going to build the Great Machine, the Superconducting Super Collider, and they started building it. And, of course, later in the 1990s, it got cancelled. But by that time, I had started to get interested in quantum information, because I was interested in black holes. You know, I had to wait for the SSC to come along. So what to do in the meantime? Well, there were very fundamental questions I started to appreciate in the late 80s about black holes and how they process information, which was a fairly old thing by then, because Stephen Hawking had pointed this out in the mid 70s. But I really began to appreciate it in the late 80s. So, as I've often done, I taught a course about Hawking radiation, and how information gets processed by black holes and stuff like that. And that's when I became aware of quantum information as a field of study, for several reasons: partly because I thought, well, I'm trying to understand a very fundamental question about information processing, so I should know what people know about that. But also, Charlie Bennett came to Caltech in the early 90s, on sabbatical, for a year. And, you know, he kind of made me aware that people were studying this; he told me about David Deutsch's paper. Incidentally, Feynman, of course, had been very interested in quantum computing. And we overlapped at Caltech for four and a half years, between when I arrived and when he died. And we used to talk a lot, but never about quantum computing, because, I think, I hadn't seized on that yet.
Sebastian Hassinger 8:09
Charlie is kind of the Johnny Appleseed of quantum computing.
John Preskill 8:14
You could say that. And another thing that was an influence: Seth Lloyd was at Caltech at the time as a postdoc, and he was interested in quantum computing. So, you know, I read Deutsch's paper, the one from 1985. And now, what Deutsch did was something really important: he formalized the subject. You know, he defined what a quantum computer is in a way that a computer scientist could understand, and potentially leap into answering questions about it. But I was very unimpressed by his paper, I have to confess. I missed the point; I thought that he was really just talking, in very fancy language, about probabilistic computing -- that, you know, flipping coins to decide the path of a computation. Of course, quantum physics is good for providing a source of randomness. And I didn't think there was anything more to it than that. Just completely wrong. But meanwhile, I was learning about cool stuff like quantum teleportation, you know, quantum key distribution, and I realized that, you know, the information theorists had figured out interesting things about how much information you can gain if you make a measurement.
Sebastian Hassinger 9:33
And I guess Deutsch also laid out sort of the case for information being a physical process, right? I mean, there being a physics of information, or information being understood through physics.
John Preskill 9:48
I think computer scientists, at least many of them, the best ones, had an appreciation of that already, but Deutsch was making a really important point: he was arguing that if we build quantum machines, they might be able to efficiently solve problems that we can't solve with conventional machines. It was a fascinating speculation, which at the time I didn't think was very well backed up by arguments. But of course, the pivotal moment, for me, was the discovery of Shor's algorithm. So Shor's paper, as a preprint, first started to be distributed in April 1994. But you had to be in the in-crowd to get it; it wasn't on the archive, it was being sent around by email, and I was not. But Artur Ekert was, and he happened to visit Caltech around that time, actually to give a talk about quantum key distribution, but I heard from him about Shor's algorithm, and I was able to get the paper from him. And I was immediately fascinated, because, you know, I never knew very much about the theory of computation and computer science, apart from my childhood fascination with Gödel's theorem. But the idea that you can solve problems efficiently that you'd never be able to solve otherwise, because it's a quantum world and not a world governed by classical physics -- I thought that was one of the coolest ideas I'd ever encountered. So I wanted to learn about that.
Sebastian Hassinger 11:30
In your course notes, you describe Deutsch's problem, or the solution, as relying on nonlocality, I think, right? Was it sort of the insight that Shor was using that mechanism in his algorithm that made you go back and reassess Deutsch's work?
John Preskill 11:54
Well, sure, I mean, after Shor, clearly,
Kevin Rowney 12:01
Cat's out of the bag.
John Preskill 12:03
Shor built on a paper by Dan Simon, which came out, you know, just months earlier, and described a problem of no apparent practical interest, for which you could argue there was an exponential speedup. And that actually was what inspired Peter to come up with his algorithm. Yeah, so I taught that course, because that's the best way to learn something, right? I think at the time, which was 1997, maybe Caltech was the only place in the world where we were offering a one-year class on quantum information and computation. And that solidified my understanding of the subject. Right away, I got interested in the question of whether you can really build these things. I'm not an engineer, but as a fundamental question, it's pretty fascinating. There were naysayers, you know, and good physicists; Bill Unruh was one, and Rolf Landauer argued very vigorously that you couldn't build these things. They understood decoherence. So of course, the key thing about decoherence is, there's something different about quantum information than ordinary information: you can't look at it without disturbing it. That's not true of classical bits, and it is true of qubits. And the environment is always looking. And decoherence is a very fast process. That's why, in practice, it's very hard to make a cat which is simultaneously dead and alive: it immediately interacts with the environment and becomes either completely dead or completely alive. And because decoherence is so fast, both Landauer and Unruh argued, you know, we'd never be able to do a computation of any interest fast enough before decoherence killed it.
Sebastian Hassinger 13:54
I can't remember whose slides it was, but I copied it down because I thought it was so hilarious. It was a Landauer quote; it may have been Charlie's slides, or might have been yours, actually. But it was the quote: "This scheme, like all other schemes for quantum computation, relies on speculative technology that does not, in its current form, take into account all possible sources of noise, unreliability and manufacturing error, and probably will not work." That's Landauer's last word,
John Preskill 14:22
I talked to Landauer at the time, and he had a very forceful way of expressing himself. And he wasn't just worried about decoherence. To be fair, he was worried about the fact, and others were too, that quantum information forms a continuum. It's not discrete; it's not, you know, definitely on or definitely off. So how are you going to control things well enough that there isn't an accumulation of small errors that rotate quantum states a little bit? Landauer was at least as worried about that as he was about decoherence.
Sebastian Hassinger 14:58
So the precision of control rather than just the noise of the inherent system or the isolation of the system.
Kevin Rowney 15:05
And this is interesting; it's a fair critique, because a lot of people showed concern at the dawn of the analog computing era around that same critique, right?
John Preskill 15:15
Of course, Landauer was very well aware of those discussions.
Kevin Rowney 15:18
Same track record. Yes.
John Preskill 15:21
Yeah. You know, we had learned the lesson that you can't control analog information well enough, so you need to digitize it. And Landauer used to like to open the door and then slam it shut, you know, to make it clear to whoever was listening that there's a big difference between open and shut. There's no analog of that for a quantum door. Yeah. And, yeah, so I got interested in error correction, about which I knew very little. Well, you know, Peter Shor had this amazing run in the mid 90s. He wrote the paper in 94 describing Shor's algorithm; he described quantum error correcting codes in 1995. Andy Steane did that independently, in late 1995. But then there was still a big challenge, which was: how are you going to do quantum error correction when the hardware is imperfect? You know, you can't measure things with perfect accuracy, so how are you going to keep the quantum computation on track? There was the question of fault tolerance: where everything's noisy, how can you do reliable computation? And Shor wrote the first clear paper about how to do that in 1996. So, you know, I think there are few better examples in the history of science of somebody having a streak like that, making such fundamental discoveries in rapid succession.
Kevin Rowney 16:54
so profound and it drew so many people into the field as well. I mean, it was just such an inspiration, right?
John Preskill 16:59
Oh, and lots of skepticism still in the 1990s about whether this could ever work. And for good reason. Here we are, still working on it, still struggling, though we certainly feel like we're getting somewhere. So, big challenge, but a very fundamental discovery. I mean, this isn't just engineering. Actually, it was interesting: my colleagues who were in high energy physics didn't quite get -- at least this is true of most of them -- why I would be interested in quantum error correction. Isn't that just like an engineering thing? But no, I think, well, it is that, but it's a fairly fundamental discovery that we can control quantum systems -- very complex, very highly entangled quantum systems -- and really get them to do what we want them to do with high accuracy. That's an amazing theoretical discovery. Absolutely, yeah. Now we're in the process of turning it into reality with actual hardware,
Kevin Rowney 18:01
and just the sheer force of that Shor discovery, pushing the Church-Turing thesis back on its heels. Right. I mean, that's profound. Right.
John Preskill 18:09
Right. Because, as your remark correctly indicates, the question of whether such machines are really allowed by nature -- yes -- was an open question until we had quantum error correction. So, you know, the idea which Shor seemed to have pointed towards, that we can solve problems efficiently with quantum machines that we couldn't solve with classical machines: that wasn't going to work unless we could do error correction. And, well, we figured it out. It was great.
Sebastian Hassinger 18:42
Well, we figured it out theoretically; error correction and fault tolerance in reality are proving to be elusive. There's still a long way to go, but progress every year. It's true.
John Preskill 18:53
It's a hard problem. Well, you know, I was very interested in the foundational questions in those days, when there were a lot of them to think about: what kind of noise models are encompassed by our ideas about quantum error correction? What arguments can we give that those noise models are physically realistic? How can we exploit the structure of the noise to do a better job with quantum error correction? Those are all questions we were thinking about, you know, 20 years ago, and they become increasingly relevant as the hardware advances.
Kevin Rowney 19:33
And is it fair to assume that that whole avenue of interest for you eventually led to the so-called It from Qubit movement? Is that a -- is that a fair speculation on my part?
John Preskill 19:46
Well, of course that involved the work of many people, but as I said, I had gotten interested in quantum information initially in the context of black hole physics, and thought about that a lot in the 1990s. And I recognized, for example, the importance of the no-cloning principle in that setting. The big question, which I guess I haven't explicitly mentioned, was: if information falls into a black hole, and then the black hole evaporates completely and disappears, what happens to the information? And Stephen Hawking in those days was arguing it's gone forever, information is destroyed. And that was very upsetting for the culture I came from; we high energy physicists thought it was a fundamental principle of quantum physics that although information can get all scrambled up and become exceedingly hard to read, it never really gets completely destroyed. And trying to understand how that works has steered a lot of the thinking about how gravitation and quantum physics fit together, really ever since Hawking, 50 years ago in 1974, discovered Hawking radiation -- that black holes evaporate. And I honestly think we're making a lot of progress on those questions, but it's still not completely resolved. So you asked about It from Qubit. It from Qubit was sort of an update of one of John Wheeler's aphorisms. He said "it from bit" -- actually a pretty interesting insight back in the 1980s and early 90s. He was saying that progress in fundamental physics is going to hinge on bringing in ideas from information theory, and he tried to capture that by saying "it from bit." He was good at aphorisms, you know -- he named black holes, said they have no hair, and things like that. Incidentally, he was one of my teachers when I was a Princeton undergrad. And so he taught a course that I took in my second year as an undergraduate, which covered all of physics in a year -- very idiosyncratic. But you know, he wanted to give us the whole story, from classical mechanics to quantum physics, statistical physics, thermodynamics, and then throw in things like fluid mechanics. And so after that year, we'd know everything.
Kevin Rowney 22:34
It's quite a breadth. A walk in the park, really? Yeah.
John Preskill 22:38
As a teacher, you know, he'd always come into class immaculately dressed in a suit and tie. And he was a master of the colored chalk, of the illustration; he could draw in real time. But I'll tell you one story about Wheeler, which kind of gives you an idea of his teaching style and his sort of visionary nature, the kinds of questions he would be attracted to. So one day, you know, this class is pretty far along, we've learned all kinds of stuff. And he comes in and he says, "Alright, take out a piece of paper, and I want you to write down all the equations of physics. Don't leave any of them out. And let me know when you're done." A lot of equations, you know -- you could write down Newton's law, the Lagrange equation, Schrödinger's equation, and the laws of thermodynamics very easily -- and we dutifully wrote them all down. And then he collected the papers, and he brought them to the front of the room, and he put them on a table. And then he stood back, was quiet for a moment, and he gestured towards the stack of papers. And he said, "Fly!" Nothing happens. "Fly!" Nothing happens. Then he looks puzzled, you know, and then he turns toward us: "I have it on very good authority that these are all the equations of physics. But they won't fly. The universe flies. Something must be missing." So, exactly.
Kevin Rowney 24:10
Dramatic flair. That's right, in a suit and a tie too -- that's great. Something to ponder.
John Preskill 24:15
I don't really know. That was typical. He'd try to make you think about things in a nonstandard way.
Kevin Rowney 24:27
Some fun, fun, colorful personalities, no doubt. Yeah.
John Preskill 24:30
So actually, it's pretty cool the way ideas which were being developed without fundamental physics necessarily in mind, like quantum error correction, have turned out to be very relevant in other areas of physics -- not just gravitational physics, but condensed matter too, because, you know, different types of quantum error correcting codes, we now realize, can be thought of as different phases of matter with different entanglement and stuff like that. But the idea that quantum error correction had something to do with quantum gravity -- it was first hinted at in 1997, when Juan Maldacena made this amazing discovery that, at least in a sort of toy model -- quantum gravity on a negatively curved spacetime -- there is an exact correspondence between quantum gravity in a three-dimensional space and a conventional quantum theory in two dimensions; that somehow that extra dimension is emergent from the physics of the two-dimensional theory. That's a realization of what we call the holographic principle, which came out of thinking about black holes. You know, it goes back to Bekenstein and Hawking, who said, hey, the entropy of a black hole goes like the area of its event horizon. And that seemed very puzzling, because the amount of information you can stuff into a region of space -- it should go like the volume, right? In fact, if you try to put in too much information, a black hole forms, and so there's a limitation on how much information you can record, which really goes like the area of the boundary of the region. And that's true not just of black holes, but more fundamentally. And so Maldacena's model was a very nice, fairly explicit realization of this principle: that the world, at least in this model, is a hologram. It's really two-dimensional, fundamentally, but it behaves for all practical purposes as though it's three-dimensional; you can live there and experience three dimensions, just like we're doing now. And so how are you supposed to think about that? There's some kind of dictionary that relates that three-dimensional space to its two-dimensional hologram. And what we eventually came to realize is that that's a kind of quantum error correcting code: you take this hologram, and you can cut pieces out of it, you know, and remove them, but there's a redundant encoding of the information, so deep inside the three-dimensional space it's very robust. It's really a kind of quantum error correcting code. To understand how that works, you need to know something about quantum gravity and also about quantum error correction. There weren't many people who knew both.
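For reference, the area law being described here is the Bekenstein-Hawking entropy: for a black hole whose event horizon has area $A$,

$$ S_{\mathrm{BH}} = \frac{k_B\, c^3\, A}{4\, G\, \hbar}, $$

so the entropy scales with the horizon's area rather than with the enclosed volume.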
Sebastian Hassinger 27:30
Is there any parallel that you can draw between Gottesman-Kitaev-Preskill error correcting codes and the holographic projection of three dimensions that you were just describing?
John Preskill 27:46
Well, I don't see a direct parallel, but they're both surprising in a way. And so look, just to give the background: you're asking about a coding scheme which we suggested over 20 years ago, and which, I guess, at the time seemed kind of futuristic to experimentalists, but is now taken more seriously -- people are doing it with devices. The idea was, instead of using qubits encoded in a two-level system -- well, that's not quite the right way of saying it -- you can take advantage of a continuous variable system and store information in it, which is digitized, which is what makes it robust. But then you can take advantage of our ability to manipulate a continuous variable system like a harmonic oscillator. It could be, for example, a microwave resonator in a superconducting circuit, which behaves like such an oscillator; it could be the motional state of an ion in a trap, which is another type of oscillator. And the code that we proposed -- I guess it was in 2000 -- is a way of encoding a qubit, or anyway some finite-dimensional system, which is very robust, in that continuous variable system. And you know, it has some potential advantages, because we can make really good cavities, which have very good coherence. And as an approach to error correction, it's -- well, it has advantages and disadvantages compared to others. So the thing that's surprising, or one thing that's surprising, is -- everybody knows about the uncertainty principle. You can't know with precision both the position and the momentum of a particle, not with arbitrarily good precision. But what this code takes advantage of is that if you consider some quantum state of an oscillator, and then you move it a little bit, you can move it by either increasing its momentum or increasing its position. And you can simultaneously measure both the shift in position and the shift in momentum with arbitrary precision, as long as you're promised that those shifts are small. What determines "small"? Well, it's Planck's constant -- what else? That's where all the physics comes into the uncertainty principle. And this is really how the robustness of the code works. Because you can prepare one of these states, which kind of looks like a comb, a grid, where, you know, it's a superposition of different possible positions, say, of a particle. And then if that gets shifted a little bit, you can make a measurement of how much it was shifted. And that's true whether it was shifted in position or shifted in momentum. So you can make a measurement that diagnoses that error. As long as the shift is small enough, you can unambiguously correct
Sebastian Hassinger 31:00
it without collapsing the state. Yeah, that's
John Preskill 31:03
right. And so it's protected, it's protected in particular against the form of error that is our biggest enemy in, say, a microwave resonator, which is the loss of a photon. Okay, so that loss of a photon you can think of as kind of being a little shift in phase space. And it's something these codes can correct. And that's one of the reasons that they're advantageous in that setting: they seem to be very well suited for dealing with the type of error which is the most common in that type of hardware.
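To make the shift-correction idea concrete, here is a toy classical sketch of the GKP decision rule. It is not a quantum simulation; it just checks, for random Gaussian shift errors, whether the standard square-lattice decoder would undo the shift without a logical error. The conventions (ħ = 1, stabilizer spacing 2√π) are assumptions, and they vary across the literature:

```python
import numpy as np

SQRT_PI = np.sqrt(np.pi)  # half the logical spacing of the square-lattice GKP code

def syndrome(shift):
    # The measurable quantity: the shift folded into (-sqrt(pi)/2, sqrt(pi)/2].
    return (shift + SQRT_PI / 2) % SQRT_PI - SQRT_PI / 2

def logical_error(shift):
    # Correcting by minus the syndrome leaves a residual displacement that is an
    # integer multiple of sqrt(pi): even multiples act trivially on the encoded
    # qubit, odd multiples flip it (a logical error).
    k = np.rint((shift - syndrome(shift)) / SQRT_PI).astype(int)
    return k % 2 != 0

rng = np.random.default_rng(0)
for sigma in (0.1, 0.3, 0.6):
    shifts = rng.normal(0.0, sigma, size=200_000)
    print(f"sigma = {sigma}: logical error rate = {logical_error(shifts).mean():.4f}")
```

Any shift of magnitude below √π/2 ≈ 0.886 is diagnosed and undone exactly, which is the "small shifts" promise described above.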
Kevin Rowney 31:41
It's such a cool algorithm. And I'd love to hear a little bit more about the background of the thinking and the intellectual history of that time. Because, I think, for our audience, they're kind of really vibing a lot on the idea that, you know, quantum information science somehow informs the way the surface of a black hole processes information; that's just so fascinating. But are we correct in speculating that some of that research then came back the other way and informed the thinking around quantum error correction? That there's almost a two way flow of influence between those two very interesting abstract ideas?
John Preskill 32:19
It's a good question. And in principle, it could have happened. I will say this: thinking about quantum error correction in the context of gravitation led us to construct new types of codes which weren't previously known. Now, whether those codes are actually useful for other purposes, like in quantum computing, that hasn't really been established. But, you know, I think having new ideas for how to protect quantum information is something that we'd like to have more of, where eventually we may find applications. You know, as far as quantum error correction, and in particular this idea of encoding the information in an ordinary harmonic oscillator, we might not have proposed that if not for interactions with experimentalists at the time. In particular, when I found out about Shor's algorithm, there was somebody else at Caltech who was very interested in quantum information in the mid 90s. That was Jeff Kimble, who is an experimentalist. And he's the master of trapping atoms in optical cavities, and had, you know, shown that you can entangle atoms by manipulating the electromagnetic field in the cavity, and you can entangle an atom with a photon in the cavity, and things like that. And so in the mid 90s, Jeff was saying, you know, I should be able, up to some, you know, cutoff, some limitation, to make any quantum state of the electromagnetic field that you want. So what do you want me to make? And so that provided some of the inspiration for thinking about: is there a promising way of encoding information in the electromagnetic field in a cavity? Jeff was considering optical light in a cavity. So what happened later on was that Rob Schoelkopf, in particular, and his collaborators proposed the idea they call circuit quantum electrodynamics, where the cavity was a microwave resonator. So there they were manipulating the quantum state of microwave light. I can remember, about 20 years ago, hearing Rob talk about this idea of circuit QED. It was at a Gordon conference, where he still hadn't published on this idea, and Jeff and I were both in the audience. And we talked about it afterward, because we were very impressed by what Rob was proposing. And Jeff said to me, you know, if I were a graduate student now, I'd work for Rob Schoelkopf. Because what he saw was that the things he had been working very hard on doing, you know, for 10 or more years, you could do better with microwaves than with optical photons.
Sebastian Hassinger 35:33
Right. It's interesting how, in a sense, we've come back to AMO with, like, the neutral atom platforms that are proliferating now, right? I mean, there's now these optical tweezers with atoms in an array that can be manipulated for computation. It seems like the engineering has caught up, in a sense, with the superconducting regime.
John Preskill 35:57
Right. Well, I think the case of neutral atoms and tweezers is quite instructive, for several reasons. One is that maybe five years ago, nobody was talking about it. You know, and so I think that illustrates the potential for new technologies to emerge, which can really take off and lead to further developments in the field. And I think we can already do pretty interesting things with those systems; there are systems of hundreds of atoms in tweezers which, in principle, you could operate as a circuit based universal quantum computer.
Kevin Rowney 36:39
And these are Rydberg arrays you're referring to?
John Preskill 36:43
That's right. So that means they're highly excited atoms. You know, it's pretty cool: a laser beam can hold on to a single atom, and then you can highly excite that atom. And when two such Rydberg atoms are in proximity to one another, they actually interact quite strongly because of their dipole moments. And you can exploit those interactions, potentially, to do circuit based quantum computation, but also to do interesting analog simulations of quantum matter. Even if you don't have universal control, you can create new types of phases of matter. And we've already seen some promising illustrations of how that can lead to new discoveries. You know, the condensed matter physicists have been hampered, because they have all kinds of exotic ideas about new phases of matter, but it's really hard to make these things with electrons in a material. And the AMO physicists come along, and they have a different bag of tricks for realizing new quantum phases of matter, with greater flexibility than we had before, where you had to synthesize a material and
Sebastian Hassinger 38:00
Right, and Misha Lukin's group has that result where they made the quantum spin liquid, I believe, on their neutral atom array. And ColdQuanta has a device that can produce Bose-Einstein condensates on command that you can play around with, which is pretty impressive.
John Preskill 38:19
Well, that Lukin experiment is, I think, kind of a milestone, because the theorists have been talking about quantum spin liquids, you know, for over 20 years. And there's been controversy about whether there are any real materials,
Kevin Rowney 38:37
Like quantum spin liquid: it was just theory, but now we know. Yeah.
John Preskill 38:40
Well, yeah. I mean, so you could say, okay, well, you know, no big surprise: if you put together atoms that interact in a certain way, the theorists can predict that you'll get a quantum spin liquid. But things are more subtle than that, for a number of reasons. You have to prepare the state, and they do it by a kind of adiabatic method where, you know, you slowly change the way the atoms interact with one another, to get into the interesting quantum phase. And it didn't turn out exactly the way the simulations predicted it would. And the reason was, there were different timescales that the theorists hadn't yet taken into account, that you have to worry about when you're doing this adiabatic change. And so you might not wind up creating the actual ground state of the system, but some excited state. And then the theorists had to go back and figure out what had happened. That's an example of the, you know, back and forth between theory and experiment in quantum matter, which is now possible,
Sebastian Hassinger 39:38
You know, you wrote the paper where you coined the acronym NISQ, noisy intermediate scale quantum devices, in 2018. And you're often called on to update your view on where we are, how far advanced we are, how much progress we're making towards, you know, universal fault tolerant quantum devices. But you gave a version of that at the Solvay Conference which also included your interest in quantum matter and quantum gravity. And I just get the impression that building these devices brings the theorists and experimentalists together for a practical project that has all of these added benefits, benefits that may not even be apparent when you start to try to build your first neutral atom array, that contribute back into the basic science of understanding quantum matter, quantum gravity, and other topics of quantum mechanics. Do you see those as two distinct efforts? Or do you think that's all sort of one and the same mission to try to drive the field forward?
John Preskill 40:51
Well, in a sense, it's one and the same mission. But you know, I'm a physicist; I don't need to apologize for that here. I think it's really great to build quantum computers that can solve practical problems that will benefit the world, or so we hope, eventually. But I think we can be excited right now about the potential of the quantum technology for scientific discovery. And that's where I see things happening, you know, on, say, a five or 10 year timescale.
Kevin Rowney 41:27
This is great. This is exactly one of the topics we wanted to talk to you about. I mean, if you could perhaps speculate a bit: what area of experimental physics do you think would be most powerfully explored by NISQ machines now available, or soon available? Do you feel like there's insight on a new area of breakthrough which is on the verge of emerging?
John Preskill 41:49
Well, one always has to be a little cautious about predicting what's going to happen. But where do I see a lot of potential? Of course, a big part of the question, a big part of the issue, is always: can you do something with a quantum machine that you can't do with a classical machine? So one thing we can say in that respect is that the methods that we know of for simulating on a conventional computer how a many qubit quantum system behaves, they start to fail when the systems become very highly entangled. You can ask, you know, how entangled are the low energy states of a system like a complex molecule or material? And that has a lot to do with how hard those are to simulate on a conventional computer. But there's some uncertainty about how hard those things really are classically, because we've got pretty good classical methods; we know they have limitations, but they keep getting better. And for the things that you care about if you're a chemist or a materials scientist, are those systems really so highly entangled that the classical methods will fail? Not so clear. But when it comes to simulating the dynamics of highly excited matter, classical methods are not good at all, because those systems become very highly entangled, and the classical methods break down very rapidly. So I think one potential area where we can expect discoveries is in very highly chaotic quantum systems, which become very strongly entangled quickly. What types of new phenomena can we find? What types of new far-from-equilibrium phases of matter? You know, most of the study of phases of matter, up until recently, has focused on equilibrium phases, partly because that's what we often encounter in the lab. But with quantum computers and quantum simulators, we can start to investigate new types of matter, new phases, which are far from equilibrium.
Kevin Rowney 44:11
And it's interesting, I'll go out of my way here to note that we have had previous guests answer the same question. And if I'm not mistaken, they have been in agreement with you. It feels like there's at least a rough consensus that this particular area is ripe for breakthrough,
John Preskill 44:26
right. And I'm looking at it from the perspective of what's really hard to do with a conventional computer, because that's where quantum computing has the potential to show us something
Kevin Rowney 44:36
So these odd new phases of matter. I mean, I'm just an amateur, I study this in my spare time, but are you referring to things like, I don't know, spin glasses and time crystals, or is there some other domain of the world that this is pertinent to?
John Preskill 44:55
Yeah, things like that. Time crystals are interesting. One thing about time crystals: we call them Floquet phases, and what that means is you drive them in a periodic way. And that's something you can do with a circuit based quantum computer pretty well, maybe more easily with a circuit based quantum computer than with an analog simulator, which doesn't have universal control. And so I think there are such Floquet phases of matter, which are periodically driven, which we have the opportunity to discover experimentally, and then the theorists can go to work understanding them. I think we also get some guidance from quantum error correction: if you consider a quantum error correcting code, but then you introduce noise, or if you, you know, change the code a little bit by changing the Hamiltonian for which it's the ground state, or something like that, the way that system behaves when we drive it away from equilibrium has a lot to do with, or is related to, whether we can correct errors or not. And so I think we can leverage things that we've learned about quantum error correction, about when it works and when it doesn't work, to discover or look for new types of phases of matter. And that's also going to be interesting.
Kevin Rowney 46:19
Sounds so cool. Wow. Just amazing.
Sebastian Hassinger 46:22
I wanted to make sure we made time for touching on the work that your student Robert Huang has been doing, because I think it's quite interesting; I've heard you talk about it a little bit in the past. And if I'm not mistaken, he started by sort of looking at classical shadows, and how they may relate to machine learning applications. Is that right? Or is there something more? It's a fairly broad area that he's working in, it seems.
John Preskill 46:54
Right. So Robert came to Caltech as a beginning graduate student in the fall of 2018. And he already had a background in machine learning, not quantum physics. He had done research as an undergrad -- he was an undergrad in Taiwan -- in machine learning, and knew a lot about it. I knew very little about it in 2018. As an aside, I used to be a freshman advisor at Caltech; I haven't done that the last few years. But when I would meet the freshmen for the first time, you know, I would always ask them: What are you interested in? What are you excited about? And maybe 10 years ago, there would be all kinds of responses, a lot of them being things that I knew something about, like, oh, I'm excited about string theory, or gravitational waves, or maybe it was something like CRISPR and genetic engineering, or neuroscience. But by five years ago, by far the most common answer was machine learning. They see how it's changing the way we do everything, including how we do science. And so I was lucky to have Robert as a student, because I could, under his tutelage, learn something about machine learning. And he already had some big questions on his mind early in his PhD. One being: how far can we go, if we, you know, have access to quantum systems in the lab and we can measure them, how far can we go at predicting the behavior of other quantum systems, which are different from anything we've encountered in the lab before? How can we generalize from that data to make predictions for new systems? And another big question is: how much better can we do these things if we process information with quantum machines? And so we've worked on both those questions the last five years.
Kevin Rowney 48:55
That's interesting. I'm sorry, so it's both using classical computers and machine learning algorithms to infer quantum behavior, and quantum computing to infer quantum behavior too?
John Preskill 49:05
Yeah, you said it better than I did. You're kind.
Kevin Rowney 49:10
Note first, so no, fair.
John Preskill 49:12
The first question: we've got a really big challenge, right, because quantum systems of many qubits are very extravagant. You know, the words we use: there's a Hilbert space of unfathomably large dimension of possible quantum states. It seems far beyond our capacity as classical human beings to envision or grasp that extravagance.
Kevin Rowney 49:38
Not such an intuitive abstraction. Yeah
John Preskill 49:42
But on the other hand, let's say I want to use classical ML to efficiently make predictions about quantum systems. I've got to translate that extravagant quantum system; I've got to convert it into a succinct classical description which captures physically relevant properties of the system. And that was this idea that we called classical shadows. It's a way of making measurements that are experimentally feasible today on a quantum platform, you know, which could have, say, hundreds of qubits. And then from those measurement outcomes we can, first of all, by classical processing, predict many properties of the quantum system. So, you know, there are some properties we want to predict. What do I mean by properties? Well, you know, physicists often talk about correlation functions, like, I'm interested in how correlated the qubits over here are with the qubits over there, and expectation values of operators that act locally on the system. Things like that we'd like to be able to predict. And of course, one way to do that is just to measure the thing you want to predict over and over again, until you get a good statistical estimate of that quantity, but it's very inefficient. So what we showed is, by doing measurements that are experimentally feasible and are just chosen by sampling from some random ensemble, you can make predictions for a number of properties which is actually exponential in the number of copies of the system that you measured in the lab. So that was the idea of classical shadows. And so now we have this quantum to classical converter. The classical shadow somehow doesn't tell you everything, right, but it tells you a lot about the quantum system. We measure that in the lab for some quantum system of interest, and now we'd like to generalize from that data to predict properties of other systems. So, I mean, a kind of futuristic thing you might hope to do is: you've got a big handbook with lots of information about different molecules, where, you know, you've measured different properties of the molecules. And now you'd like to be able, from that data, to generalize and make predictions about other molecules that you'd never synthesized before.
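For readers who want to see the protocol mechanics, here is a small statevector sketch of classical shadows with random single-qubit Pauli measurements. It is only an illustration: it uses a plain mean instead of the median-of-means estimator from the original paper, and the state, observables, and sample count are arbitrary choices:

```python
import numpy as np

# Columns are the +1 and -1 eigenvectors of each Pauli, written in the Z basis.
EIG = {
    "Z": np.array([[1, 0], [0, 1]], dtype=complex),
    "X": np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2),
    "Y": np.array([[1, 1], [1j, -1j]], dtype=complex) / np.sqrt(2),
}
I2 = np.eye(2)

def snapshot(psi, rng):
    """Measure every qubit in a random Pauli basis; return one shadow snapshot
    as a list of single-qubit factors 3|v><v| - I (the inverted channel)."""
    n = int(np.log2(psi.size))
    bases = rng.choice(list("XYZ"), size=n)
    phi = psi.reshape([2] * n)
    for q, b in enumerate(bases):  # rotate qubit q into its measurement basis
        phi = np.moveaxis(np.tensordot(EIG[b].conj().T, phi, axes=([1], [q])), 0, q)
    p = np.abs(phi.ravel()) ** 2
    outcome = rng.choice(p.size, p=p / p.sum())
    bits = [(outcome >> (n - 1 - q)) & 1 for q in range(n)]
    return [3 * np.outer(EIG[b][:, s], EIG[b][:, s].conj()) - I2
            for b, s in zip(bases, bits)]

def estimate(factors, shadows):
    """Estimate <O> for O given as {qubit: 2x2 operator}, identity elsewhere."""
    vals = [np.prod([np.trace(O @ snap[q]).real for q, O in factors.items()])
            for snap in shadows]
    return float(np.mean(vals))

rng = np.random.default_rng(1)
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)                  # (|00> + |11>)/sqrt(2)
shadows = [snapshot(bell, rng) for _ in range(20_000)]
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
print("<Z0>   ~", estimate({0: Z}, shadows))        # exact value: 0
print("<X0X1> ~", estimate({0: X, 1: X}, shadows))  # exact value: 1
```

The point of the snapshot factors is exactly the "quantum to classical converter" described above: each one is a tiny classical record from which many local expectation values can later be estimated by averaging.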
Kevin Rowney 52:04
This is the ON FIRE topic in chemistry right now. Yeah, you know, yeah.
John Preskill 52:08
And so, you know, a lot of ML is heuristic, which is fine; you don't have guarantees of performance. But we are interested in: under what conditions can you actually prove rigorously that you can make accurate predictions? And we succeeded in doing that for a few different cases. One is where you have some phase of matter, but it can be in a lot of different states. The way a physicist would describe it is: there's some Hamiltonian, and the state is the ground state, the lowest energy state, of this Hamiltonian. And suppose there's some family of Hamiltonians, and we sample from that family and we measure properties of the ground states for those samples. And now we'd like to ask: can I make predictions about properties of the ground states for other Hamiltonians that are in that same family, in that same phase? We found conditions under which it's guaranteed that those predictions are accurate, and that the number of samples that you need, and the amount of classical processing that you need to do to generalize, is efficient.
Kevin Rowney 53:20
Wow, those are cool results. Really amazing. Yeah.
John Preskill 53:24
And I should put in the disclaimer that not only does Robert come up with questions he also answers them -- well not just Robert, we had other collaborators.
Kevin Rowney 53:35
I got, I can't,
John Preskill 53:36
You know, you have to formulate the problem, and then prove theorems, and he does the numerics. And he's really been a joy.
Kevin Rowney 53:48
This is really cool. And, I mean, it's perhaps a little bit rude, please forgive me if I'm being impolite here, because the theory, the abstractions, are beautiful. But could there be a route towards applications in this domain that would perhaps, I don't know, shed new light on just ground level performance in classical or quantum ML?
John Preskill 54:08
Yeah, well, you know, I'm a physicist. So I am -- ML is great. I want to understand things, right? I don't want it to just be a black box that makes predictions. I mean, that's fine, I guess, it's a
Kevin Rowney 54:24
That's a valid criticism of all of them, the concern about a big stochastic parrot. Yes. And
John Preskill 54:29
so, we don't know quite what to say about that yet. But I think there's an example which harkens back to something we discussed earlier. We have the capacity now in the lab to make new phases of matter no one's ever seen before. How do you know it's a new phase of matter? Okay. And so one of our results shows that you can efficiently learn to classify phases of matter, under certain assumptions. And, well, this is a little stronger than what we prove rigorously, but the numerics indicate it: if you're sampling, you know, states from different phases of matter, you can do unsupervised ML to sort them into different phases. Okay? In doing so, you've learned some classifying function, some property of the system, that tells me phase A is different from phase B. And so I think stuff like that -- as far as, again, applications to science; you might have been asking about a more practical application -- no, no, anything's fun. Yeah. But as far as application to science: I think now that we have the hardware to explore new phases of matter, we're going to have to know when we've found something really new instead of boring. And so I think these tools are going to be important.
Kevin Rowney 55:52
You need a rigorous analysis tool; you can't just say it looks different, so it's a new phase. Yeah.
Sebastian Hassinger 55:58
The practicality has come up a number of times. I mean, I feel like applications in science are at least as important as applications to computer science or engineering, let's say. In that sense, it feels to me that in a way quantum computing is playing the role that the space race and the NASA mission played, which, as you were saying, sort of inspired you to get into science in the first place, right? It's something that's motivating all of this collaborative codesign: experimentalists and theorists working together, physicists and computer scientists and engineers all working together toward some outcome. We're not even sure what it is, but it seems all good to us, actually.
John Preskill 56:46
You brought up earlier the Solvay meeting. And for a physicist that has a certain cachet, you know, because those conferences, going back to 1911, have sometimes been places where historically important discussions take place. And so I thought it was notable: this was not a meeting of engineers, really; it's a meeting of scientists. But it was experimental physicists, people who do quantum gravity, people who do quantum matter, people who do theoretical computer science, you know, who have enough in common that we can have fruitful discussions and share ideas. And the fact that there was a Solvay Conference -- which mostly in the past has been either about high energy physics or condensed matter physics or astrophysics -- where the theme of the meeting was the physics of quantum information is an indicator of where physical science is heading. And I think a lot of the excitement comes from these connections between fields, which really stimulate progress.
Sebastian Hassinger 57:50
That's fantastic. That seems like a terrific place to end. That's really a great sort of feeling for the enticement of the field and the promise of the field. And thank you so much for being with us, John. It's been
Kevin Rowney 58:07
really Yeah.
John Preskill 58:08
Are you telling me an hour is up? All right, it's
Sebastian Hassinger 58:10
right, exactly. You have a meeting to get to.
Kevin Rowney 58:14
I'm trying to be respectful of your time, but also, yeah, it did fly by. Yes.
John Preskill 58:17
We were having so much fun.
Kevin Rowney 58:19
We really enjoyed this. Thank you so much. Thank you.
John Preskill 58:22
All right, thank you.
Sebastian Hassinger 59:09
That was a great conversation, super, super interesting. It's such a privilege to be able to talk with people who've been around from pretty much close to day one. I mean, as John said, he overlapped at Caltech with Feynman for about five years. And he's been tracking and involved in the field since the Deutsch paper in 1985. So his roots go very, very deep. And he has such an interesting take on what the threads are that are critical to his understanding of how the field is progressing.
Kevin Rowney 59:44
So cool. I think also, I was really thrilled by the work of his graduate student, Robert Huang, in terms of, you know, the cool applications of machine learning in quantum information science. There's just this rich theme that keeps coming up in these interviews. There's much more along these lines to come, but I can't wait to see where this guy's work goes.
Sebastian Hassinger 1:00:09
Absolutely. And it is exciting, because, you know, when I first encountered quantum and machine learning in the same sentence, it was about using quantum computers to somehow accelerate machine learning, and they've got a long way to go in that regard. But using ML, and the enormous classical computing power that we have with those algorithms, to better understand quantum systems, to exert classical control over these quantum systems: that's extremely promising from a scientific perspective, but also from an engineering perspective, for the manipulation and control of these systems. Maybe we get better error correction through ML and other techniques as well. So, very, very interesting.
Kevin Rowney 1:00:50
We live in a very interesting time, as we say most episodes.
Sebastian Hassinger 1:00:53
The best part of this podcast is how mind-blowing these conversations are. So cool. Yeah. So thanks for joining us; we hope you enjoyed it. If you did, please subscribe and leave a review on Apple Podcasts or Google or whatever other platform you listen on, and we'll catch you again next time.
Kevin Rowney 1:01:13
Looking forward to the next time. Okay, that's it for this episode of The New Quantum Era, a podcast by Sebastian Hassinger and Kevin Rowney. Our cool theme music was composed and played by Omar Costa Hamido. Production work was done by our wonderful team over at Podfly. If you're at all like us and enjoy this rich, deep, and interesting topic, please subscribe to our podcast on whichever platform you may stream from. And if you like what you've heard today, consider reviewing us on iTunes and/or mentioning us on your preferred social media platforms. We're just trying to get the word out on this fascinating topic and would really appreciate your help spreading the word and building community. Thank you so much for your time.
Creators and Guests
John Preskill
Theoretical physicist @Caltech, Director of @IQIM_Caltech, Amazon Scholar. Mastodon: https://t.co/fBX4BkWGcO
Omar Costa Hamido
OCH is a performer, composer, and technologist, working primarily in multimedia and improvisation. His current research is on quantum computing and music composition, telematics, and multimedia. He
is passionate about emerging technology, cinema, teaching, and performing new works. He earned his PhD in Integrated Composition, Improvisation and Technology at University of California, Irvine with
his research project Adventures in Quantumland (quantumland.art). He also earned his MA in Music Theory and Composition at ESMAE-IPP Portugal with his research on the relations between music and
painting. In recent years, his work has been recognized with grants and awards from MSCA, Fulbright, Fundação para a Ciência e a Tecnologia, Medici, Beall Center for Art+Technology, and IBM. | {"url":"https://podcast.the-new-quantum-era.com/15/transcript","timestamp":"2024-11-06T11:01:25Z","content_type":"text/html","content_length":"111265","record_id":"<urn:uuid:c332b899-2ed6-45a7-a9f3-79c318c95365>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00036.warc.gz"} |
holonomic quantum field
If and insofar as it's related to the KZ equation, the answer would be "yes" (that's the content of Anyonic Defect Branes).
However, I haven’t even opened any article on “holonomic field theory” yet, so I don’t know what exactly that term subsumes.
Is there perhaps a Hypothesis H perspective on holonomic field theory? Pure mathematicians are still investigating isomonodromic deformations of such differential equations, as in this tweet.
The full text is freely available here.
This article (of which I can only see the preview) at least gives us the 'fruit' of the link:
Through recent study of the problems in mathematical physics, a deep, unexpected link has emerged: a link between the monodromy preserving deformation theory for linear (ordinary and partial)
differential equations, and a class of quantum field operators ([1], [2], [3]). The aim of this article is to give an overview of the present stage of development in the theory (see also [4]).
The fruit of the above link is multifold. On the one hand it enables one to compute exactly the n point correlation functions of the field in question in a closed form, using solutions to certain
non-linear differential equations of specific type (such as the Painlevé equations). On the other hand it provides an effective new tool for describing the deformation theory by means of quantum
field operators. Thus it stands as a good example of the fact that not only the pure mathematics is applied to physical problems but also the converse is true.
just re-discovered the existence of the ancient page holonomic quantum field, while looking to hyperlink the authors of:
• Tetsuji Miwa, Michio Jimbo, Introduction to holonomic quantum fields, pp. 28–36 in: The Riemann problem, complete integrability and arithmetic applications, Lec. Notes in Math. 925, Springer (1982) [doi:10.1007/BFb0093497]
This old page needs attention:
After I had given (in rev 2) Zoran’s original text the header “Idea”, Zoran complained within the entry (rev 3) about his own text that:
There is no single mathematical idea expressed here yet!
Somebody should figure it out
diff, v5, current
@1 I did not complain about the text, nor about your formatting into paragraphs, but merely left a reminder about calling the vague orientation of what this subject is about an "idea". The idea of the construction is yet to be written.
is yet to be written.
I think the title of the paragraph as it is now reminds us, temporarily and honestly, that the true idea of the construction is on our to-do list. Once the essence of the construction is outlined we can rename the entire intro section to "Idea". I highly respect this subject but still lack the clarity to explain it yet.
I’d suggest to remove the lines “There is no single mathematical idea expressed here yet!” and “Idea: Somebody should figure it out”. Instead we could just write “under construction” at the top of
the page.
I have now deleted these two lines.
Then I have added the quote given by David C. in #2.
Also started fixing the bibitems and their hyperlinking, but I give up now, for the moment.
diff, v6, current | {"url":"https://nforum.ncatlab.org/discussion/14565/holonomic-quantum-field/?Focus=99922","timestamp":"2024-11-01T23:41:19Z","content_type":"application/xhtml+xml","content_length":"52437","record_id":"<urn:uuid:33be757c-9e32-46f2-88f6-00826a9ac213>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00751.warc.gz"} |
Physical Applications
Suppose the potential energy of a gas of $n$ point charges with positions $x_{1},x_{2},\dots,x_{n}$ and free to move on the infinite line $-\infty<x<\infty$, is given by
5.20.1 $W=\frac{1}{2}\sum_{\ell=1}^{n}x_{\ell}^{2}-\sum_{1\leq\ell<j\leq n}\ln|x_{\ell}-x_{j}|.$
The probability density of the positions when the gas is in thermodynamic equilibrium is:
5.20.2 $P(x_{1},\dots,x_{n})=C\exp\left(-W/(kT)\right),$
where $k$ is the Boltzmann constant, $T$ the temperature and $C$ a constant. Then the partition function (with $\beta=1/(kT)$) is given by
5.20.3 $\psi_{n}(\beta)=\int_{{\mathbb{R}}^{n}}e^{-\beta W}\,\mathrm{d}x=(2\pi)^{n/2}\beta^{-(n/2)-(\beta n(n-1)/4)}\left(\Gamma\left(1+\tfrac{1}{2}\beta\right)\right)^{-n}\prod_{j=1}^{n}\Gamma\left(1+\tfrac{1}{2}j\beta\right).$
See (5.14.6).
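The linear-case formula can be checked numerically. The sketch below compares (5.20.3) against a Monte Carlo estimate of the integral for small $n$; the Gaussian importance-sampling proposal and the sample count are arbitrary choices, and NumPy/SciPy availability is assumed:

```python
import numpy as np
from scipy.special import gammaln

def psi_closed(n, beta):
    # Right-hand side of (5.20.3), evaluated in log space for stability.
    logv = (n / 2) * np.log(2 * np.pi) \
         - (n / 2 + beta * n * (n - 1) / 4) * np.log(beta) \
         - n * gammaln(1 + beta / 2) \
         + sum(gammaln(1 + j * beta / 2) for j in range(1, n + 1))
    return np.exp(logv)

def psi_mc(n, beta, samples=500_000, seed=0):
    # exp(-beta*W) = exp(-beta*sum(x^2)/2) * prod_{l<j} |x_l - x_j|^beta,
    # so draw x ~ N(0, 1/beta) per coordinate and average the pair products.
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0 / np.sqrt(beta), size=(samples, n))
    w = np.ones(samples)
    for l in range(n):
        for j in range(l + 1, n):
            w *= np.abs(x[:, l] - x[:, j]) ** beta
    return (2 * np.pi / beta) ** (n / 2) * w.mean()

for n, beta in [(2, 2.0), (3, 1.0)]:
    print(n, beta, psi_closed(n, beta), psi_mc(n, beta))
```

For $n=2$, $\beta=2$ both sides give $\pi$, which is also easy to verify by direct integration.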
For $n$ charges free to move on a circular wire of radius $1$,
5.20.4 $W=-\sum_{1\leq\ell<j\leq n}\ln|e^{i\theta_{\ell}}-e^{i\theta_{j}}|,$
and the partition function is given by
5.20.5 $\psi_{n}(\beta)=\frac{1}{(2\pi)^{n}}\int_{[-\pi,\pi]^{n}}e^{-\beta W}\,\mathrm{d}\theta_{1}\cdots\,\mathrm{d}\theta_{n}=\Gamma\left(1+\tfrac{1}{2}n\beta\right)\left(\Gamma\left(1+\tfrac{1}{2}\beta\right)\right)^{-n}.$
See (5.14.7). | {"url":"https://dlmf.nist.gov/5.20","timestamp":"2024-11-07T23:20:16Z","content_type":"text/html","content_length":"59420","record_id":"<urn:uuid:0d5b9336-c623-4bb8-b265-bd2b2d18fe7c>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00485.warc.gz"} |
Current is the flow of electrons. It measures how many electrons per second are moving through a wire. In the circuit we build today, we will have about 20mA of current flowing through the wires.
20mA is equal to 0.020A or 0.020 amps. A 60W light bulb plugged into a 120V outlet uses about 0.5A.
Current is measured in units called amps or amperes or A. It is represented by the letter I in equations.
Voltage is a measure of the force that makes electrons want to move from one place to another. It is also known as ``potential''. If electrons were water droplets, voltage would be the pressure at
the base of the water tower. A tall water tower has large pressure at the base; a short tower has little pressure.
At 0V between two points, no electrons will move between those points. No current will flow if those points are connected by a wire. It's like putting a pipe between two ends of a flat lake: no water
flows through the pipe.
A small voltage (like 1.5V from an AA battery) is like a fairly flat river. If you put a pipe (wire) between two points along the river, water will flow. But for a given size pipe, the amount of
water per second won't be very big.
A high voltage (say, 120V) is like a waterfall along that river. If you put that same pipe between the top and the bottom of that waterfall, water will flow like crazy.
But voltage is not the same as capacity. Say you have a small stream flowing parallel to the river. If you put a small pipe from one point along each waterway to a set distance downstream, you will
get the same amount of water flow in each of the two pipes. But if you replace the small water pipe with a big water pipe, the stream won't be able to provide enough water to fill the pipe, and
you'll get more water in the river-pipe than the stream-pipe. The stream and the river result in the same drop in water pressure from one point to another (like a voltage), but can provide a
different maximum amount of water (just like lower capacity batteries go dead earlier than high capacity batteries).
A battery is rated both with a voltage (like 1.5V) and a capacity (like 1600mAh). A dead battery has run out of charge - it has exhausted its store of electrons. It's like an empty water tower.
Voltage is measured in units called volts or V. It is represented by the letter V in equations.
A resistor resists the flow of current (flow of electrons). It makes it so fewer electrons flow. A big resistor is like using a skinny pipe - you get less water. A small resistor is like using a big
wide pipe - water flows easily.
Resistance is measured in units called Ohms or . It is represented by the letter R in equations.
A law called Ohm's Law governs how these quantities relate. It's usually written as V = IR. Since we will be interested in calculating the current in our circuit, let's divide both sides by R: I = V/R.
Ok, what does this mean? Equations have meanings, you know. Let's say we get a high voltage battery. That means V is very big. Let's keep the same size resistor. What happens to the current? If V goes up, the right side of the equation gets bigger, so the left side of the equation must get bigger, so I increases. Making V big is like building a higher water tower. Keeping R the same size is like not changing the size of the water pipes. More water (or electrical current) flows.
Resistors resist current flow. In our circuit, they let us regulate how much current flows through the LED. If we want a brightly glowing LED, then we need a lot of current. To allow a lot of
current, we don't want to resist the current very much so we pick a resistor with a small resistance.
The stripes on a resistor indicate its resistance and tolerance. The resistance indicates how much the resistor will constrict current flow. The tolerance indicates how precisely the resistor's value
actually matches what the color code indicates.
The three stripes clumped together indicate the resistance. The first two stripes correspond to the two most significant digits. The third stripe represents the number of zeros. The following table
indicates what each color represents:
│ black │ brown │ red │ orange │ yellow │ green │ blue │ violet │ grey │ white │
│ 0 │ 1 │ 2 │ 3 │ 4 │ 5 │ 6 │ 7 │ 8 │ 9 │
So for example blue-grey-yellow stripes correspond to 6 (blue), 8 (grey), and 4 zeros (yellow), which is 680000Ω. That's a lot of zeros to count, so people usually add prefixes to the units to remove zeros in sets of three. The most common prefixes are k for 3 zeros and M for 6 zeros. So instead of 680000Ω, we can write 680kΩ.
The fourth stripe that's a little further from the others indicates the tolerance, or how precisely the resistor's value actually matches what the color code indicates. So if you have a 100Ω resistor with a 5% tolerance, that resistor could actually be anywhere between 95Ω and 105Ω. What the resistor companies actually do is fabricate lots of resistors, then measure them and sort them by tolerance. So in practice, if a resistor has a 5% band on it, its value will be between 95Ω and 97.99999Ω or between 102.00001Ω and 105Ω. All the resistors between 98Ω and 102Ω will have been labelled as 2% resistors by the manufacturer and sold at a higher price. The colors for tolerance are:
│ brown │ red │ gold │ silver │ none │
│ 1% │ 2% │ 5% │ 10% │ 20% │
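Here is a small Python sketch that turns the two tables above into a calculator. It only covers the simple four-band scheme described here; real resistors also use gold and silver as fractional multipliers, which this ignores:

```python
DIGITS = {"black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
          "green": 5, "blue": 6, "violet": 7, "grey": 8, "white": 9}
TOLERANCE = {"brown": 1, "red": 2, "gold": 5, "silver": 10, None: 20}

def resistor(stripe1, stripe2, stripe3, stripe4=None):
    """Return (ohms, tolerance_percent) for the four-band code above."""
    ohms = (10 * DIGITS[stripe1] + DIGITS[stripe2]) * 10 ** DIGITS[stripe3]
    return ohms, TOLERANCE[stripe4]

print(resistor("blue", "grey", "yellow"))         # (680000, 20) -> 680k, no band
print(resistor("blue", "grey", "brown", "gold"))  # (680, 5)     -> 680 ohms, 5%
```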
The battery stores energy. It provides a source of electrons. It's like a water tower, in our water analogy. We usually hear batteries described by their voltage - a measure of the strength of the
force provided by the battery to make electrons flow from one battery terminal, through a circuit, to the other battery terminal. The batteries we'll be using are 1.5V.
Batteries also differ in how many electrons they store - their capacity - measured in milliamp-hours (mAh). Voltage tells you how strongly the battery pressures the electrons to flow. Capacity tells
you how many of those electrons it holds. The capacity of our batteries is about 1650mAh. This means that if we were to use the battery to power one LED, drawing 20mA of current, the battery would
last over 80 hours.
We will be using two batteries. There are a couple of ways to connect batteries shown in Fig. 2.2.
Connecting batteries in parallel is like putting two water towers next to each other. The water pressure would be the same as with a single battery (so the voltage doesn't change). But we have twice
as much water (so we double the capacity). Connecting batteries in series is like stacking two water towers on top of each other, making a water tower that is twice as tall. We have twice as much
water pressure (so the voltage is doubled).
We will connect our batteries in series. The two 1.5V batteries will give us 3V.
LED stands for Light Emitting Diode. A diode is a type of electrical circuit component that lets current flow one way but not the other. It's like a one-way valve. A light-emitting diode puts out
light when current flows through it. More current results in more light. Less current gives less light.
The LED component we're using actually contains two LEDs. The red LED faces one way and the green LED faces the other way. So depending on how you connect it in the circuit, one or the other will
glow but not both.
Ok, so what's going on in our circuit? The batteries provide the source of electrons at a steady voltage of 3V. The LEDs glow brightly if there is a lot of current, not much if there is little
current, and not at all if there is no current. The resistor controls how much current flows. Ohm's Law tells us the relationship between these quantities.
Let's do an example calculation.
What do we plug in for V, the voltage? The two batteries are 1.5V each. We've connected them in series. It's like stacking one water tower on top of another: the result is twice as high. So our two 1.5V batteries are like one 3.0V battery. So use V = 3.0V.
What do we plug in for R, the resistance? You have three resistors in your kit, each with a different resistance. The colored stripes on the resistor indicate the value of the resistor. One of those resistors is 680Ω. So use R = 680Ω. Note that the LED has some resistance of its own. But the resistance of the LED is so low that it gets dwarfed by the resistor, so we can ignore the LED's resistance.
What do we plug in for I, the current? Well, that's what we calculate! Ohm's Law will tell us how much current (I) to expect given the voltage (V) provided by the battery, and the resistance (R) from
the resistor that we choose. It can help us pick the correct resistor. So let's do some math: I = V/R = 3.0V / 680Ω ≈ 0.0044A.
That's 4.4mA, which we read out loud as 4.4 milli-amps.
What do you think will happen if we put in a smaller resistor? Look back at I = V/R. If we make R smaller, that makes the whole fraction bigger. Look at an example: with, say, a 150Ω resistor, I = 3.0V / 150Ω = 0.020A = 20mA.
So a smaller resistor (R) gives us a bigger current (I). Big current means more light. Try it! Put in a smaller value resistor!
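If you'd like to experiment before touching the hardware, this short sketch runs the same Ohm's Law arithmetic for a few resistor values. The 330Ω and 150Ω entries are just illustrations, not necessarily the other resistors in your kit, and, like the lesson, it ignores the LED's own small resistance:

```python
def led_current_ma(volts, ohms):
    """I = V / R, converted to milliamps."""
    return volts / ohms * 1000

for ohms in (680, 330, 150):
    print(f"{ohms:>3} ohms -> {led_current_ma(3.0, ohms):.1f} mA")
# 680 ohms -> 4.4 mA
# 330 ohms -> 9.1 mA
# 150 ohms -> 20.0 mA
```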
Let's build our circuit! | {"url":"https://ofb.net/~ania/teaching/EE_for_kids/LED_lesson/","timestamp":"2024-11-09T08:07:57Z","content_type":"text/html","content_length":"17957","record_id":"<urn:uuid:60c67833-6942-4ca5-afa8-f9cca3867a83>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00693.warc.gz"} |
Civil Engineer Objective Questions - Survey (Section-1)
1. Principle of surveying is
a) to work from whole to the part
b) to work from part to whole
c) both (a) and (b) above
d) none of the above
2. In surveying, "working from whole to part" is done
a) to prevent the accumulation of error
b) to localize the error
c) both of the above
d) none of the above
3. The curvature of the earth’s surface is taken into account only if the extent of survey is more than
a) 60 sq km
b) 160 sq km
c) 260 sq km
d) 500 sq km
4. Geodetic survey is different from plane surveying because of
a) very large area is covered
b) the curvature of the earth is considered
c) undulations of the topography
d) the large difference of elevations
5. The limitation for geodetic survey is
a) 150 km²
b) 250 km²
c) 350 km²
d) 400 km²
6. Hydrographic survey deal with the mapping of
a) large water bodies
b) canal system
c) cloud movement
d) none of the above
7. The survey which is carried out for determining absolute locations and the direction of any line on the surface of the earth by making observations to heavenly bodies is called
a) hydrographic survey
b) astronomical survey
c) land survey
d) none of the above
8. A scale representing either three units or only one unit and its fractions up to the second place of decimals is
a) diagonal scale
b) comparative scale
c) simple vernier
d) shrunk scale
9. If the smallest division of a vernier is longer than the smallest division of its primary scale, the vernier is known as
a) direct vernier
b) double vernier
c) simple vernier
d) retrograde vernier
10. The scale used to measure and to set out the angles is
a) diagonal scale
b) comparative scale
c) vernier scale
d) scale of chords
11. If s is the value of one smallest division on the main scale, v is the value of one smallest division on the vernier, and n is the number of divisions on the vernier, then the least count is given by
12. A discrepancy is the difference between
a) true value and error
b) measured value and actual value
c) two measured values of the same quantity
d) none of the above
13. Find the R.F. for scale, 1cm=5m
b) 1/50
c) 1/500
d) 1/5000
14. Find the representative fraction for a scale 10 cm = 2 km.
a) 1/200
b) 1/20000
c) 1/2000000
d) none of the above
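A quick way to check representative-fraction answers like those in questions 13 and 14 is to put both distances in the same unit and reduce the ratio. A small sketch, where the centimetre conversions are the only assumptions:

```python
from fractions import Fraction

def representative_fraction(map_cm, ground_cm):
    """R.F. = map distance / ground distance, both in the same unit."""
    return Fraction(map_cm, ground_cm)

print(representative_fraction(1, 5 * 100))       # 1 cm = 5 m   -> 1/500
print(representative_fraction(10, 2 * 100_000))  # 10 cm = 2 km -> 1/20000
```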
15. The designation of scale recommended by IS: 1491-1959
a) A to C
b) A to D
c) A to E
d) A to F
16. Least count is given by
a) p-v
b) v-p
c) p+v
d) both a) and b) of above
where p and v are the smallest divisions of the primary and secondary scales respectively.
17. Which of the following scale is the smallest one
a) 1cm=10m
b) 1cm=100m
c) lcm=1000m
d) 1cm= 10^4 m
18. Compensating errors in chaining or other survey are
a) proportional to the length of the line
b) proportional to the square root of the length of the line
c) inversely proportional to the square root of the length of the line
d) inversely proportional to the length of the line
19. The errors which are not possible to correct are
a) positive cumulative error
b) negative cumulative error
c) compensating error
d) none of the above
20. Negative errors are caused in chain, when its length is
a) more than the standard length
b) less than the standard length
c) equal to the standard-length
d) any of the above
21. Theory of probability is applied to
a) cumulative errors
b) compensating errors
c) accidental errors
d) none of the above
22. The most probable value of an observed quantity available from a given set of observations is the one for which the sum of the squares of the errors is a minimum. This statement is known as
a) principle of least square
b) law of errors
c) principle of square errors
d) none of the above
23. The difference between the most probable value of a quantity and its observed value is
a) conditional error
b) true error
c) residual error
d) safe error
24. The maximum allowable limit up to that a measurement may vary from the true value is known as
a) permissible error
b) residual error
c) expected error
d) systematic error
25. The type of error which is of a cumulative nature and can be corrected is known as
a) permissible error
b) residual error
c) expected error
d) systematic error | {"url":"https://www.civilconcept.com/civil-engineer-objective-questions-survey-section-1/","timestamp":"2024-11-10T23:58:08Z","content_type":"text/html","content_length":"86130","record_id":"<urn:uuid:50c3cc13-b6a8-4500-a777-dabb38beb9c0>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00849.warc.gz"} |
Printable Multiplication Table 1-12
Printable Multiplication Table 1-12 - This use of color with a purpose helps students easily navigate the multiplication chart. The mat measures 12 x 17. An expanded chart like this is great for advanced math students. To get the PDF of the 1 to 12 table, click the download option and print this 1 to 12 multiplication table. For this printable 12 x 12 multiplication table, the perfect squares from 1 to 144 are highlighted on the main diagonal of the grid. There is also a printable multiplication table 1 to 12, and a multiplication chart 0 to 12 to view and download. The multiplication grid 1 to 100 shows all the multiplication facts for the numbers 1 through 100. Multiplication is a process that combines two numbers, called factors, and amounts to repeated addition of the given numbers. Multiplication tables are essential for students working out math problems. More sizes of multiplication tables: looking for a smaller or larger multiplication table?
Math Tables 1 to 12 Printable Multiplication Chart 1 to 12 Maths
Printable multiplication tables up to 12: this multiplication table from 1 to 12 is useful for kids learning how to multiply. This chart will allow you to memorize the patterns and the whole multiplication table in no time. Published on 02 August 2022, last modified on 24 January 2023.
1-12 Multiplication Chart Free Download
It is a chart with a grid of numbers from 1 to 12, which can help you learn multiplication quickly, either using the table or in your head. So, if your kids are not taking an interest in learning multiplication tables, you can use a printable multiplication table for kids. Here is the printable multiplication chart.
5+ Blank Multiplication Table 1-12 Printable Chart in PDF
Here is the printable multiplication chart (PDF) from the 1 times table up to the 12 times table; it's a free resource. When you are just getting started learning the multiplication tables, these simple printable pages are great tools! Use these charts as a practice sheet by checking your values from here. You can use them as a reminder or to learn your times tables up to 12 x 12.
Multiplication Tables 1-12 Printable PDF (Printable Word Searches)
Besides this, you can teach simple tips that will help them memorize tables 1 to 12 quickly. To get the PDF of the 1 to 12 table, click the download option and print this 1 to 12 multiplication table. For this printable 12 x 12 multiplication table, the perfect squares from 1 to 144 are highlighted.
Printable Times Tables 1-12 (Activity Shelter)
These are perfect multiplication worksheets for grade 3 to help kids slowly learn the multiplication tables, starting with 1s and inching your way up to 12s. Multiplication tables are essential for students working out math problems. There are 10× color, 10× black and white, 10× small (exercise book) size, and a 10× blank version for you to fill in. This use of color with a purpose helps students easily navigate the multiplication chart.
1-12 Times Table Printable (K5 Worksheets)
Blank 12 x 12 multiplication chart in PDF and PNG formats. Use with dry erase or water based markers and washable crayons. When you are just getting started learning the multiplication tables, these simple printable pages are great tools! An expanded chart like this is great for advanced math students. Use these charts as a practice sheet by checking your values from here.
Multiplication Table 1-12
You can use it as a reminder or to learn your times tables up to 12 x 12 multiplication. Each side is an interactive lesson. Use these charts as a practice sheet by checking your values from here. Use with dry erase or water based markers and washable crayons. These table charts are suitable for kids from the 1st standard to the 5th standard.
Printable Multiplication Table 1-12 PDF
This multiplication table 1 to 12 consists of 12 rows, each with its respective multiplication operation, which is very beneficial for learning the basic multiplication of the 1 to 12 tables. There are 12× color, 12× black and white, 12× small (exercise book) size, and a 12× blank version for you to fill in. The multiplication chart 0 to 12 is available to view and download.
7 Best Images of Printable Multiplication Tables 0-12
Use these charts as a practice sheet by checking your values from here. Use color patterns or hide numbers for kids to fill in. The multiplication chart 0 to 12 is available to view and download, and the multiplication grid 1 to 100 shows all the multiplication facts for the numbers 1 through 100. This multiplication table runs from 1 to 12.
Multiplication Charts 1-12 Times Table (Activity Shelter)
Multiplication is a process that combines two numbers, called factors, and amounts to repeated addition of the given numbers. Feel free to color the chart to memorize it more easily. Use with dry erase or water based markers and washable crayons. Each side is an interactive lesson. The mat measures 12 x 17.
Memorizing the times table gives children a foundation on which they can build their mathematical skills. When you are just getting started learning the multiplication tables, these simple printable pages are great tools! Here you can find multiplication tables from 1 to 12; these worksheets are a very useful tool to improve students' skills. The multiplication grid 1 to 100 is available to view and download, along with tips for using multiplication grids (landscape orientation, 12 x 12). You will find below complete charts, blank charts to be filled in, classic or fanciful charts, and colorful or black & white charts. This multiplication table from 1 to 12 is useful for kids learning how to multiply, and an expanded chart like this is great for advanced math students. The table charts are suitable for kids from the 1st standard to the 5th standard. You can buy a poster of the 12 x 12 multiplication table, and the times table chart comes in PDF and PNG formats. Use color patterns or hide numbers for kids to fill in. You can get the PDF downloads for the full table as well as worksheets. The mat measures 12 x 17, and there are 12× color, 12× black and white, 12× small (exercise book) size, and 12× blank versions for you to fill in.
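If you'd rather generate a chart than download one, a few lines of Python will print the full 1 to 12 grid; the column width of 4 is just a formatting choice:

```python
def print_multiplication_table(n=12, width=4):
    # Header row of column labels, then one row per times table.
    print(" " * width + "".join(f"{c:{width}d}" for c in range(1, n + 1)))
    for r in range(1, n + 1):
        print(f"{r:{width}d}" + "".join(f"{r * c:{width}d}" for c in range(1, n + 1)))

print_multiplication_table()
```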
You Can Use It As A Reminder Or To Learn Your Times Tables Up To 12X12 Multiplication.
This chart will allow you to memorize the patterns and the whole multiplication table in no time. It comes as an answer chart, copy practice, and blank table PDF. Use color patterns or hide numbers for kids to fill in. The multiplication chart 0 to 12 is available to view and download, and the multiplication grid 1 to 100 shows all the multiplication facts for the numbers 1 through 100.
You Can Use The Blank Times Table Chart To Practice Multiplication Facts And The Prefilled Multiplication Charts To Place On The Kids’ Room Or Classroom Wall.
Multiplication tables are essential for students working out math problems. Multiplication tables 1 to 12 are extremely helpful in enhancing your child's mathematical knowledge. When you are just getting started learning the multiplication tables, these simple printable pages are great tools! This use of color with a purpose helps students easily navigate the multiplication chart.
More Sizes Of Multiplication Tables Looking For A Smaller Or Larger Multiplication Table?
We offer multiplication tables in several sizes, and each side of the learning mat is an interactive lesson. Besides this, you can teach simple tips that will help kids memorize tables 1 to 12 quickly.
Blank 12 X 12 Pdf Format Png Format Multiplication Chart.
Practice multiplication with this learning mat. To get the PDF of the 1 to 12 table, click the download option and take a print of this 1 to 12 multiplication table.
| {"url":"https://dl-uk.apowersoft.com/en/printable-multiplication-table-1-12.html","timestamp":"2024-11-05T00:36:40Z","content_type":"text/html","content_length":"30745","record_id":"<urn:uuid:f1647b0c-5ad4-4c34-bf20-648fa4999ce5>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00007.warc.gz"}
Answers to the Question: What is the Measure of One Interior Angle in a Regular Hexagon?
Introduction to Unveiling the Mystery: What is the Measure of One Interior Angle in a Regular Hexagon?
When it comes to geometry, the measure of one interior angle in a regular hexagon is an interesting topic. The answer to this question can be found by understanding the properties of a regular
hexagon. A regular hexagon is a six-sided polygon that has equal sides and angles. When all six angles are added together, the sum of the measures in degrees equals (6 − 2) × 180° = 720°, following the general rule that the interior angles of an n-sided polygon sum to (n − 2) × 180°. Since there are six equal angles in a regular hexagon, each angle must have a measure of 120°. This makes sense because if you divide 720° by 6, you get 120°.
It’s important to remember that this only holds true for regular polygons, meaning all sides and angles have to be equal for the equation to work. This can help when approaching other types of shapes as well; just take the total sum of all its interior angles, (n − 2) × 180°, and divide it by the number of sides n, and you’ll arrive at your desired measure for one interior angle. It doesn’t matter how many sides the regular polygon has – this technique always works!
So there you have it – now we know that the measure of one interior angle in a regular hexagon is 120°. Keep in mind though, that geometry isn’t always this simple and straightforward; but with
enough practice and understanding its principles, like calculating interior angles, it’ll become easier down the line!
Step by Step Guide on How to Calculate the Measurement of One Interior Angle of a Regular Hexagon
A regular hexagon is a two-dimensional shape with six sides and six interior angles, all of which are equal in measure. Calculating the measurement of one particular angle can be done quickly if you
understand basic geometric principles. This step-by-step guide explains how to calculate the measurement of one interior angle of a regular hexagon using only mathematics – no ruler or protractor required.
To begin, remind yourself that the sum of all angles in any triangle is always equal to 180 degrees. A regular hexagon can be split into four triangles by drawing diagonals from one of its vertices, so its six interior angles must together measure 4 × 180 = 720 degrees. Since all six angles are equal, each angle must measure 720/6 = 120 degrees.
This calculation generalizes to any regular polygon: take the number of sides, subtract two, and multiply by 180° (in this case, (6 − 2) × 180° = 720°). This equation yields the overall sum of all internal angles within the shape. What we need now is to determine how that 720° is shared among the corners – in other words, how many degrees make up each individual interior angle?
To do this, you’ll simply divide the total by the number of angles: 720° divided by 6 yields 120° as your final result, which means each internal angle inside a regular hexagon has a measure equal to 120°.
Now you know how to calculate the exact measurement for one individual interior angle inside a regular hexagon! The next time someone asks you about these shapes, you will have the answer ready.
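If you like to double-check the arithmetic with a computer, here is a tiny illustrative Python snippet (our own example, not part of the original lesson) that applies the general formula (n − 2) × 180 / n:

    def interior_angle(n_sides):
        # Measure of one interior angle of a regular polygon, in degrees.
        return (n_sides - 2) * 180 / n_sides

    print(interior_angle(6))  # regular hexagon -> 120.0
    print(interior_angle(4))  # square -> 90.0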
FAQs About Measuring One Interior Angle in a Regular Hexagon
Q. How many interior angles does a regular hexagon have?
A. A regular hexagon has six interior angles, each measuring 120 degrees.
Q. Is there a formula for measuring one interior angle in a regular hexagon?
A. Yes, there is a formula for measuring one interior angle in a regular hexagon. To calculate the measure of one interior angle it is necessary to divide the sum of all interior angles of the polygon, (n − 2) × 180 degrees, by n, the number of sides:

(6 − 2) × 180°/6 = 720°/6 = 120°

So, the measure of each interior angle in a regular hexagon is 120°.
Q. Are there any other ways to measure an interior angle in a regular hexagon?
A. Fortunately, yes! We can make use of some basic geometry rules and formulas to calculate the measure of an interior angle in a regular hexagon. First we must consider that the exterior angles of any convex polygon always add up to 360 degrees, and that in a regular hexagon all six exterior angles are equal:

6x = 360

Solving for x we get that x = 60

Since an interior angle and its exterior angle are supplementary – they lie on a straight line, so their sum must add up to 180 degrees – the measure of an individual internal angle will be 180° − 60° = 120°. So using basic geometry we can also determine that each internal angle in a regular hexagon measures 120°, which amounts to 720° total around it!
Top 5 Facts You Should Know About Measuring One Interior Angle of a Regular Hexagon
1. The interior angles of a regular hexagon are all equal. The measure of an interior angle (the angle formed by two connecting sides within the hexagon) is 120°.
2. By knowing this, we can calculate the measure of each exterior angle (the angle formed between one side and its adjacent side outside the shape.) It’s simply 180° minus 120° equals 60°: the
measure of each exterior angle.
3. While any interior or exterior angle could be chosen to measure for this shape, to solve for other measurements in geometry it often helps if you choose a vertex that is on a principal axis of symmetry – such as choosing one corner at either end of a line when solving for parallel lines.
4. When finding area, it’s important to remember that dividing up regular polygons into triangles is often easier than trying to find an area formula directly. This means that when measuring out one interior angle, you can also connect points distal to that vertex and form two triangles, which together contain 6 measured angles (now including both right angles).
5. Additionally, this allows for easier calculations using trigonometry ratios like SOHCAHTOA, helping to solve equations without as much trouble from multiplying several large fractions together through traditional area formulas specifically pertaining to a regular hexagon.
What Are Some Practical Uses for Understanding the Measure of an Interior Angle in a Regular Hexagon?
The understanding of interior angles in a regular hexagon is a fundamental concept in geometry, and it can have practical applications. Horse breeders use this concept to decide the optimal size of
their horse stalls since every animal needs adequate space to move comfortably. By knowing the measure of each interior angle in a regular hexagon, they can determine how much surface area each stall
should occupy for both economical and safety reasons.
Interior angles can also be used practically in construction projects. Architects often take advantage of this measurement when designing enclosures like gazebos or outdoor fire pits that need to be
hexagonal-shaped. This calculation helps them ensure that the sides are equally spaced out so that when building materials are purchased, costs stay within budget as there is no wasted material.
Finally, knowing and applying this geometric concept comes in handy when playing certain board games such as Hex which uses a board with hexagons where players have to strategically place pieces on
the board so they fit correctly into their designated spaces while trying to block their opponent’s moves by designing fences with these shapes around their own pieces.
Conclusion: Unveiling the Mystery, Understanding the Measurement of an Interior Angle in a Regular Hexagon
The measurement of an interior angle in a regular hexagon is equal to 120 degrees. It’s a mystery that many students have attempted to unravel over the years, but the secret is finally revealed.
That’s right – 120 degrees!
This particular tidbit of geometry can be explained with the help of a few simple equations and diagrams. To begin with, let us examine the concept of triangles and their angles. The sum total of all interior angles within any triangle will always add up to 180°, regardless of the shape of said triangle. Armed with this fundamental knowledge we can now move on to discovering how it applies to our regular hexagon example.
A regular hexagon is composed of six separate equilateral triangles tightly packed together around its centre. As each individual triangle has its own interior angles totaling 180°, it stands to reason that our grand total for all six must necessarily be 1080° (180 times 6). However, 360° of that total is taken up at the centre point where the six triangles meet, leaving 1080° − 360° = 720° at the outer corners. Therefore if we divide this figure by the number 6 (the amount of corners present on our hexagon) then it logically follows that each inner corner formed by these connected triangles measures exactly 120°! Mystery solved!
To summarize: an interior angle inside a regular hexagon equals precisely 120 degrees which can be easily deduced using some basic math and understanding of shapes and their dimensions. So there you
have it – Unveiling the Mystery, Understanding the Measurement of an Interior Angle in a Regular Hexagon! | {"url":"http://thehousebrew.com/answers-to-the-question-what-is-the-measure-of-one-interior-angle-in-a-regular-hexagon","timestamp":"2024-11-09T10:40:18Z","content_type":"text/html","content_length":"86005","record_id":"<urn:uuid:8ee7ed62-f253-4d92-b3f6-94d87dc14766>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00589.warc.gz"} |
HackerRank Solutions
Xor and Sum
You are given two positive integers a and b in binary representation. You should find the following sum modulo 10^9 + 7, where the operation xor means exclusive OR and the operation shl means binary shift to the left. Please note that we consider an ideal model of binary integers; that is, there is an infinite number of bits in each number, and there are no disappearances (or cyclic shifts) of bits.
Input Format: The first line contains the number a (1 <= a < 2^(10^5)) in binary representation. The
View Solution →
Lego Blocks
You have an infinite number of 4 types of lego blocks of sizes given as (depth x height x width): 1 x 1 x 1, 1 x 1 x 2, 1 x 1 x 3 and 1 x 1 x 4. Using these blocks, you want to make a wall of height n and width m. Features of the wall are:
- The wall should not have any holes in it.
- The wall you build should be one solid structure, so there should not be a straight vertical break across all rows of bricks.
- The bricks must be laid horizontally.
How many ways can the wall be built? Example
View Solution →
Brick Tiling
You are given a grid having N rows and M columns. A grid square can either be blocked or empty. Blocked squares are represented by a '#' and empty squares are represented by '.'. Find the number of
ways to tile the grid using L shaped bricks. A L brick has one side of length three units while other of length 2 units. All empty squares in the grid should be covered by exactly one of the L shaped
tiles, and blocked squares should not be covered by any tile. The bricks can be used in any orientation
View Solution →
Alien Languages
Sophia has discovered several alien languages. Suprisingly, all of these languages have an alphabet, and each of them may contain thousands of characters! Also, all the words in a language have the
same number of characters in it. However, the aliens like their words to be aesthetically pleasing, which for them means that for the ith letter of an n-letter alphabet (letters are indexed 1 . . . n
): if 2i > n, then the ith letter may be the last letter of a word, or it may be immediately fo
View Solution →
Stock Maximize
Your algorithms have become so good at predicting the market that you now know what the share price of Wooden Orange Toothpicks Inc. (WOT) will be for a number of days into the future. Each day, you can either buy one share of WOT, sell any number of shares of WOT that you own, or not make any transaction at all. What is the maximum profit you can obtain with an optimum trading strategy? Example: prices = [1, 2]. Buy one share on day one, and sell it on day two for a profit of 1. Return 1. prices = [2, 1]
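One common approach, sketched below in Python as an illustration (it is not necessarily the linked solution), scans the prices from right to left while tracking the highest future price: buy on any day whose price is below that maximum and sell at the maximum.

    def stock_maximize(prices):
        # Right-to-left pass: every share bought before the highest
        # remaining price is sold on that best day.
        max_future = 0
        profit = 0
        for p in reversed(prices):
            if p > max_future:
                max_future = p               # new best day to sell
            else:
                profit += max_future - p     # buy at p, sell at max_future
        return profit

    print(stock_maximize([1, 2]))  # 1
    print(stock_maximize([2, 1]))  # 0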
View Solution → | {"url":"https://hackerranksolution.in/?page=93","timestamp":"2024-11-15T04:34:14Z","content_type":"text/html","content_length":"33480","record_id":"<urn:uuid:10bf057f-c7de-4be8-8633-d74e0976363e>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00080.warc.gz"} |
6,059 feet per second to kilometers per second
Speed Converter - Feet per second to kilometers per second - 6,059 feet per second to kilometers per second
This conversion of 6,059 feet per second to kilometers per second has been calculated by multiplying 6,059 feet per second by 0.0003048 (the number of kilometers in one foot) and the result is 1.8468 kilometers per second. | {"url":"https://unitconverter.io/feet-per-second/kilometers-per-second/6059","timestamp":"2024-11-07T17:08:40Z","content_type":"text/html","content_length":"15620","record_id":"<urn:uuid:19629280-09a8-4a13-a477-8355a6a20cdb>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00642.warc.gz"}
Smoothed particle hydrodynamics implementation of the standard viscous–plastic sea-ice model and validation in simple idealized experiments
© Author(s) 2024. This work is distributed under the Creative Commons Attribution 4.0 License.
The viscous–plastic (VP) rheology with an elliptical yield curve and normal flow rule is implemented in a Lagrangian modelling framework using the smoothed particle hydrodynamics (SPH) meshfree
method. Results show, from a perturbation analysis of SPH sea-ice dynamic equations, that the classical SPH particle density formulation expressed as a function of sea-ice concentration and mean ice
thickness leads to an incorrect plastic wave speed. We propose a new formulation for particle density that gives a plastic wave speed in line with theory. In all cases, the plastic wave in the SPH framework is dispersive and depends on the smoothing length (i.e., the spatial resolution) and on the SPH kernel employed, in contrast to its finite-difference method (FDM) implementation counterpart.
The steady-state solution for the simple 1D ridging experiment is in agreement with the analytical solution within an error of 1%. SPH is also able to simulate a stable upstream ice arch in an
idealized domain representing the Nares Strait in a low-wind regime (5.3 m s^−1) with an ellipse aspect ratio of 2, an average thickness of 1 m and free-slip boundary conditions, in contrast to the FDM implementation, which requires a higher shear strength to simulate it. In higher-wind regimes (7.5 m s^−1) no stable ice arches are simulated – unless the thickness is increased – and the ice arch formation shows no dependence on the size of particles, in contrast to what is observed in the discrete-element framework. Finally, the SPH framework is explicit, can take full advantage of parallel
processing capabilities and shows potential for pan-Arctic climate simulations.
Received: 10 Aug 2022 – Discussion started: 18 Aug 2022 – Revised: 17 Oct 2023 – Accepted: 31 Dec 2023 – Published: 04 Mar 2024
Sea ice is an important component of the Earth’s system to consider for accurate climate projection. Generally, numerical models used for geophysical sea-ice simulations and projections are based on
a system of differential equations assuming a continuum. The equations that predict the sea-ice dynamics are a combination of the momentum equations, which describe the drift of sea ice under
external and internal forces, and the continuity equations which ensure mass conservation. The external forces (per unit area) generally include surface air stress, water drag, sea surface tilt and
the Coriolis effect, and the internal forces are related to the ice stress term. This internal stress term is based on various constitutive relations which can differ between models. The more
commonly used constitutive laws are the standard viscous–plastic model (Hibler, 1979) or modifications thereof (e.g., elastic–viscous–plastic or EVP and elastic–plastic–anisotropic or EPA; Hunke and
Dukowicz, 1997; Tsamados et al., 2013). They are typically discretized on an Eulerian mesh using the finite-difference method (FDM). FDM is the simplest method to discretize and solve partial
differential equations numerically. However, it is based on a local Taylor series expansion to approximate the continuum equations and construct a topologically rectangular network of relations
between nodes (e.g., Arakawa grids).
Even though the VP (and EVP) rheologies are commonly used to describe sea-ice dynamics and are able to capture important large-scale deformation features (Bouchat et al., 2022; Hutter et al., 2022),
they still have difficulties representing smaller-scale properties (Schulson, 2004; Weiss et al., 2007; Coon et al., 2007) such as linear kinematic features (LKFs) unless run at very high resolution
(≈2km; Ringeisen et al., 2019; Hutter et al., 2022). To improve the simulation of small-scale ice features and to alleviate the problem of FDM with complex geometries (Peiró and Sherwin, 2005), the
community also considered new sea-ice rheologies (Schreyer et al., 2006; Girard et al., 2011; Dansereau et al., 2016; Ringeisen et al., 2019) and explored different space discretization frameworks
like the finite-element method (FEM) (Rampal et al., 2016; Mehlmann et al., 2021), the finite-volume method (FVM) (Hutchings et al., 2004; Losch et al., 2010; Adcroft et al., 2019) or the
discrete-element method (DEM) (Hopkins and Thorndike, 2006; Herman, 2016; Damsgaard et al., 2018).
In recent decades, the spatial resolution of sea-ice models has become comparable to the characteristic length of the ice floes. This makes the continuum assumption of current FDM, FVM and FEM models
questionable. Also, Eulerian models are known to have difficulties determining the precise locations of inhomogeneity, free surfaces, deformable boundaries and moving interfaces (Liu and Liu, 2010).
These shortcomings have led to an increased interest in the DEM approach. Another advantage of using DEMs is that the granularity of the material (Overland et al., 1998) is directly represented using
discrete rigid bodies from which the physical interactions are calculated explicitly in the hope that large-scale properties naturally emerge. In practice, the emergent properties of a granular
medium still depend on the assumed floe size and the nature of collisions in contrast to the continuous numerical methods which can indirectly account for floe interactions through the formulation of
a constitutive law. Nevertheless, DEMs easily capture the formation of cracks, leads and large deformations, making them a consistent framework for the numerical simulation of granular material like
sea ice (Fleissner et al., 2007).
Despite the shortcomings of the continuum approaches, FDM, FVM and FEM are still the most commonly used frameworks in the community because they have been developed and tested for a longer period and
are well understood, computationally more efficient and easily coupled for large-scale simulations. In an attempt to take advantage of both continuum and discrete formulations, blends between the two
approaches have been proposed – e.g., the finite- and discrete-element methods (Lilja et al., 2021) or the material-point method (Sulsky et al., 2007). Those frameworks, however, still use a mesh to solve the dynamic equations in addition to considering sea ice as discrete elements, making them even more computationally expensive. Finally, a fairly new approach in the context of sea-ice modelling – also borrowing from both continuum and discrete frameworks – uses a Lagrangian meshfree continuous method called smoothed particle hydrodynamics (SPH) developed by Lucy (1977) and Gingold and Monaghan (1977). This meshfree method is known to facilitate the numerical treatment and description of free surfaces (Liu and Liu, 2010), which are common in sea-ice dynamics with polynyas, LKFs, free-drifting ice floes and unbounded ice extent. As in DEM, the physical quantities are carried by particles in space (an analogy for ice floes in the real world) but evolve according to the same dynamic equations used in the continuum approach. Furthermore, the method has the advantage of treating the system of equations in a Lagrangian framework discretized explicitly, making it
well-suited for parallelization.
SPH has been applied successfully for modelling other granular materials such as sand, gravels and soils (Salehizadeh and Shafiei, 2019; Yang et al., 2020; Sheikh et al., 2020). In the context of
mesoscale and larger sea-ice modelling, Gutfraind and Savage (1997) initiated the SPH study of sea-ice dynamics using a VP rheology based on a Mohr–Coulomb failure criterion. The ice concentration
and thickness were fixed at 100% and 1m with a continuity equation expressed in terms of a particle density. The internal ice strength between particles was derived diagnostically from ice density
assuming ice was a nearly incompressible material. Later, Wang et al. (1998) developed a sea-ice model of the Bohai Sea (east coast of China) using an SPH viscous–plastic rheology (Hibler, 1979) with
continuity equations for ice concentration and mean thickness, as well as ice strength calculated from static ice jam theory (Shen et al., 1990). Following Wang et al. (1998), Ji et al. (2005)
implemented a new viscoelastic–plastic rheology that was in better agreement with observations from the Bohai Sea. Recently, Staroszczyk (2017) proposed a sea-ice model considering ice behaving as a
compressible non-linear viscous material with a (dimensionless) contact-length-dependent parameterization for floe collisions and rafting (Gray and Morland, 1994; Morland and Staroszczyk, 1998). In
all of the above, except for Gutfraind and Savage (1997), the same ice particle density definition is used.
In this work, we use the standard VP sea-ice model with an elliptical yield curve and normal flow rule (Hibler, 1979) as a proof of concept. Further development of the SPH model should consider a
broader range of rheologies. We propose a reformulation of the ice particle density that is internally consistent with the model physics. One goal of the study is to investigate differences coming
from the numerical framework. To this end, we theoretically investigate the plastic wave propagation, a fundamental property of a sea-ice physical model, using a 1D perturbation analysis and we test
the model in a ridging and ice arch experiment following earlier works by Lipscomb et al. (2007), Dumont et al. (2009), Rabatel et al. (2015), Dansereau et al. (2017), Williams et al. (2017),
Damsgaard et al. (2018), Ranta et al. (2018), Plante et al. (2020), and West et al. (2022). We chose to investigate the SPH method performance on a ridging experiment since it has an analytical
steady-state solution that can be used to establish the model accuracy and it is possible to evaluate whether the coupling with the mass equations is coherent. We also test SPH performance on ice
arch simulations since this classic problem is an example of large-scale features resulting from small-scale interactions involving fractures of the material. The two experiments allow a direct
comparison with previous work and identify advantages and disadvantages with the continuum and discrete sea-ice dynamics. The long-term goal is to lay the foundation for an SPH sea-ice formulation
that can be used in large-scale models.
The paper is organized as follows. In Sect. 2, a description of the sea-ice VP rheology, momentum and continuity equation implementations in the SPH framework is presented. Results of a plastic wave
propagation analysis, ridging experiments and ice-arching simulations are presented in Sect. 3. Finally, Sect. 4 discusses the SPH advantages and limitations of the SPH framework, future model
development, and the main conclusions from the work.
2.1 Momentum and continuity equations
Following Plante et al. (2020), we consider sea ice to behave as a 2D granular material described by the 2D momentum equation (neglecting the Coriolis and sea surface tilt terms):
$$\rho_i h \frac{D\mathbf{u}}{Dt} = \nabla\cdot\boldsymbol{\sigma} + \boldsymbol{\tau}, \quad (1)$$
where ρ[i] is the ice density, h is the mean ice thickness (ice volume over an area), $\mathbf{u} = u\hat{\mathbf{x}} + v\hat{\mathbf{y}}$ is the ice velocity vector, σ is the vertically integrated internal stress tensor acting in the $\hat{\mathbf{y}}$ direction on a face with a unit outward normal pointing in the $\hat{\mathbf{x}}$ direction, τ is the sum of water stress and surface air stress, and $\frac{D}{Dt} = \frac{\partial}{\partial t} + \mathbf{u}\cdot\nabla$ is the Lagrangian derivative operator. The Coriolis and sea surface tilt terms are neglected from the momentum equation to ease the comparison with the analytical solution and simple 1D problem. Note that using the Lagrangian
derivative operator naturally incorporates the advection of momentum in the ice dynamics – a term that is typically neglected for most continuum-based Eulerian sea-ice models. The surface air stress
and the water stress can be written using the bulk formulation as (McPhee, 1979)
$$\boldsymbol{\tau} = \rho_a C_a |\mathbf{u}_a - \mathbf{u}|\,(\mathbf{u}_a - \mathbf{u}) + \rho_w C_w |\mathbf{u}_w - \mathbf{u}|\,(\mathbf{u}_w - \mathbf{u}) \quad (2)$$
$$\approx \rho_a C_a |\mathbf{u}_a|\,\mathbf{u}_a + \rho_w C_w |\mathbf{u}_w - \mathbf{u}|\,(\mathbf{u}_w - \mathbf{u}), \quad (3)$$
where ρ[a] and ρ[w] are air and water densities, u[a] and u[w] are air and water velocity vectors, C[a] and C[w] are the air and water drag coefficients, and u is neglected in the formulation of the
wind stress since u≪u[a]. The continuity equations for the mean ice thickness h and the ice concentration A can be written as
$$\frac{Dh}{Dt} + h\,\nabla\cdot\mathbf{u} = 0, \quad (4)$$
$$\frac{DA}{Dt} + A\,\nabla\cdot\mathbf{u} = 0, \quad (5)$$
where the thermodynamic source terms are omitted. Note that the thickness and concentration are independent prognostic variables in a two-category model (Hibler, 1979), resulting in a singularity
when thickness reaches zero. To avoid this singularity and for a more mathematically correct treatment of the mass equation, Gray and Morland (1994) introduced a continuous solution where the
concentration asymptotes to zero and one. In the following, we ignore melting and consider cases where only convergent motion is present and the use of a two-category model does not have an impact on
the simulated results.
2.2 Constitutive laws
The constitutive relations for the viscous–plastic ice model with an elliptical yield curve, a normal flow rule and tensile strength can be written as (Beatty and Holland, 2010)
$$\sigma_{ij} = 2\eta\,\dot{\epsilon}_{ij} + \left[(\zeta - \eta)\,\dot{\epsilon}_{kk} - \frac{P_r(1 - k_t)}{2}\right]\delta_{ij}, \quad (6)$$
$$\dot{\epsilon}_{ij} = \frac{1}{2}\left(\frac{\partial u_j}{\partial x_i} + \frac{\partial u_i}{\partial x_j}\right) = \frac{1}{2}\left(\nabla\mathbf{u} + \nabla\mathbf{u}^{\top}\right), \quad (7)$$
where $\dot{\epsilon}_{ij}$ is the symmetric part of the strain-rate tensor, ζ and η are the non-linear bulk and shear viscosities, P[r] is the replacement pressure, k[t] is the tensile strength factor, and δ[ij] is the Kronecker delta. Following Bouchat and Tremblay (2017) we write
$$\zeta = \frac{P(1 + k_t)}{2\Delta^{*}}, \quad (8)$$
$$\eta = \frac{\zeta}{e^{2}} = \zeta\left(\frac{2S}{P(1 + k_t)}\right)^{2}, \quad (9)$$
$$\Delta^{*} = \max(\Delta, \Delta_{\min}), \quad (10)$$
$$\Delta = \left[\left(\dot{\epsilon}_{11}^{2} + \dot{\epsilon}_{22}^{2}\right)\left(1 + e^{-2}\right) + 4e^{-2}\dot{\epsilon}_{12}^{2} + 2\dot{\epsilon}_{11}\dot{\epsilon}_{22}\left(1 - e^{-2}\right)\right]^{1/2}, \quad (11)$$
where $P = P^{*}h\,\exp\left(-C\left(1 - A\right)\right)$ is the ice strength (Hibler, 1979), P^* and C are respectively the ice compressive strength and ice concentration parameters, S is the ice shear strength, and e is the ellipse aspect ratio. In the limit where Δ goes to zero, ζ and η tend to infinity. To avoid this situation, the deformation Δ is capped to $\Delta_{\min} = 2\times 10^{-9}\,\mathrm{s}^{-1}$. Using the Δ^* formulation, the replacement pressure P[r] can be written as
$$P_r = P\,\frac{\Delta}{\Delta^{*}}, \quad (12)$$
which ensures that the stresses are zero when the strain rates are zero.
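As a minimal sketch (the function name and default parameter values are our own choices for the example), Eqs. (8)–(12) can be evaluated per particle as follows:

    import numpy as np

    def vp_closure(eps11, eps22, eps12, h, A,
                   P_star=27.5e3, C=20.0, e=2.0, k_t=0.0, delta_min=2e-9):
        # Viscous-plastic viscosities and replacement pressure.
        P = P_star * h * np.exp(-C * (1.0 - A))             # ice strength
        delta = np.sqrt((eps11**2 + eps22**2) * (1 + e**-2)
                        + 4 * e**-2 * eps12**2
                        + 2 * eps11 * eps22 * (1 - e**-2))  # Eq. (11)
        delta_star = max(delta, delta_min)                  # Eq. (10)
        zeta = P * (1 + k_t) / (2 * delta_star)             # Eq. (8)
        eta = zeta / e**2                                   # Eq. (9)
        P_r = P * delta / delta_star                        # Eq. (12)
        return zeta, eta, P_r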
2.3 Governing differential equations: SPH framework
To solve the system of equations in the SPH framework, equations involving spatial derivatives (Eqs. 1, 4, 5 and 7) are reformulated (see Appendix A for more details on the SPH theory) using Eqs. (A5
–A6–A7) with the particle subscripts p and q (see Fig. A1), and a temporal evolution for the ice particle position is defined:
$$\frac{D\mathbf{x}_p}{Dt} = \mathbf{u}_p, \quad (13)$$
$$\rho_i h_p \frac{D\mathbf{u}_p}{Dt} = \rho_p \sum_{q=1}^{N} m_q \left(\frac{\boldsymbol{\sigma}_q}{\rho_q^{2}} + \frac{\boldsymbol{\sigma}_p}{\rho_p^{2}}\right) \cdot \nabla_p W_{pq} + \boldsymbol{\tau}_p, \quad \text{(momentum)} \quad (14)$$
$$\frac{Dh_p}{Dt} + \frac{h_p}{\rho_p} \sum_{q=1}^{N} m_q \left(\mathbf{u}_q - \mathbf{u}_p\right) \cdot \nabla_p W_{pq} = 0, \quad \text{(continuity)} \quad (15)$$
$$\frac{DA_p}{Dt} + \frac{A_p}{\rho_p} \sum_{q=1}^{N} m_q \left(\mathbf{u}_q - \mathbf{u}_p\right) \cdot \nabla_p W_{pq} = 0, \quad \text{(continuity)} \quad (16)$$
$$\dot{\boldsymbol{\epsilon}}_p = \frac{1}{2}\left[\sum_{q=1}^{N} \frac{m_q}{\rho_q} \left(\mathbf{u}_q - \mathbf{u}_p\right) \otimes \nabla_p W_{pq} + \left(\sum_{q=1}^{N} \frac{m_q}{\rho_q} \left(\mathbf{u}_q - \mathbf{u}_p\right) \otimes \nabla_p W_{pq}\right)^{\top}\right]. \quad (17)$$
It is important to make the distinction between the intrinsic ice density ρ[i] and the particle densities ρ[p]. For consistency reasons with the standard VP rheology, we consider the following
definition of density independent of ice concentration in contrast to previous work (Wang et al., 1998; Ji et al., 2005; Staroszczyk, 2017) (see Results section for discussion):
$$\rho_p = \rho_i h_p. \quad (18)$$
By formulating density as in Eq. (18), the continuity Eq. (15) has the same form as the more commonly used continuity density equation (Monaghan, 2012):
$$\frac{D\rho_p}{Dt} = -\rho_p\,\nabla\cdot\mathbf{u}_p = \sum_{q=1}^{N} m_q\left(\mathbf{u}_p - \mathbf{u}_q\right)\cdot\nabla_p W_{pq}, \quad (19)$$
except for the fact that the divergence of the velocity field is scaled by the ice material density ρ[i] ($\frac{D\rho_p}{Dt} = \rho_i \frac{Dh_p}{Dt}$). Overall, a particle can be seen as an unresolved collection of floes scattered within the support domain 𝒜 that can converge, ridge over one
another, break and drift apart. Note that since the particle density ρ[p] definition is independent of A[p], the concentration can be interpreted as a quantity that measures the compactness of the
sea ice at the particle location. It describes the probability of ice floes carried by a particle, which is a point in space, to come in “contact” with ice floes of another particle and get repulsed
according to the ice strength.
2.4 Numerical approach
Following Hosseini et al. (2019), we use a second-order predictor–corrector scheme to evolve in time the SPH ice system of equations (see Algorithm 1 below). This integration scheme takes a given
function f (here f can be x, u, A and h) and uses a predictor step to calculate its value $f^{n+1/2}$ at time $t = \left(n + \frac{1}{2}\right)\Delta t$ (where Δt is the time step) followed by a correction step to calculate the solution f^{n+1} at time $t = (n+1)\Delta t$ from $f^{n+1/2}$:
$$f_p^{n+1/2} = f_p^{n} + \frac{\Delta t}{2}\frac{Df_p^{n}}{Dt} + O\left(\Delta t^{2}\right), \quad (20)$$
$$f_{p\,\text{corrected}}^{n+1/2} = f_p^{n} + \frac{\Delta t}{2}\frac{Df_p^{n+1/2}}{Dt}, \quad (21)$$
$$f_p^{n+1} = 2 f_{p\,\text{corrected}}^{n+1/2} - f_p^{n} + O\left(\Delta t^{3}\right). \quad (22)$$
In the above equations, O(Δt^2) and O(Δt^3) represent higher-order terms, which are ignored in the proposed scheme. Following Lemieux and Tremblay (2009), a simple 1D model taking into account only
the viscous term – the most restrictive condition – leads to the following stability criterion:
$$\Delta t \le \frac{\rho_i h\, l_{\min}^{2}}{\eta_{\max}} = \frac{e^{2}\rho_i\, l_{\min}^{2}\,\Delta_{\min}}{P^{*}(1 + k_t)}, \quad (23)$$
where l[min] is the minimum smoothing length across all the particles. The stability criterion imposes a strict limitation on the time step (∼10^−4 to 10^−2 s for particles with a radius of 1 to 10 km); this cannot be avoided using a pseudo-time step because particles in an SPH framework are irregularly placed and move within the domain at each time step. This
makes the parallelization of the particle interactions algorithm mandatory for any practical applications. On the positive side, the explicit time stepping also eliminates possible convergence issues
of the numerical solver. A pseudo-code for the proposed algorithm is shown below (Algorithm 1).
Algorithm 1: SPH sea-ice solver.
Input: Domain shape and boundaries, Spatial resolution, Total integration time
initialize particle and boundary according to input
for i = 0 to IntegrationTime do
    for j = 0 to nInteraction do
        physicalQuantities ← (externalForce, internalForce)
    end for
    timeStep ← smoothingLength
    monitor particle interaction statistics
end for
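In Python, the predictor–corrector update of Eqs. (20)–(22) for a generic state can be sketched as below (rhs stands for the right-hand-side evaluation; the interface is our own and only illustrative):

    import numpy as np

    def predictor_corrector_step(f, rhs, dt):
        # One second-order step: predictor (Eq. 20), corrector (Eq. 21),
        # then advance the full step (Eq. 22).
        f_half = f + 0.5 * dt * rhs(f)
        f_half_corrected = f + 0.5 * dt * rhs(f_half)
        return 2.0 * f_half_corrected - f

    # Example: exponential decay Df/Dt = -f over 100 steps.
    f = np.array([1.0])
    for _ in range(100):
        f = predictor_corrector_step(f, lambda g: -g, 0.01)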
2.5 Particle interactions
Following Rhoades (1992), we use the bucket search algorithm parallelized using shared-memory multiprocessing (OpenMP) to find all the neighbours of each particle, rather than the tested tree algorithm (Cavelan et al., 2019), which involves pointers and a complex memory structure that are not easy to manipulate in OpenMP. The proposed OpenMP parallelization is rudimentary, and one time step in a domain with 40 000 particles takes ≈0.1 s. For this reason, the model requires more computational resources for the effective resolution when compared with a continuum approach. This could be
greatly improved by taking advantage of CPU clusters (Yang et al., 2020) or GPUs (Xia and Liang, 2016).
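A serial Python sketch of the bucket (cell-list) search is shown below; the OpenMP parallelization and all implementation details of the actual model are omitted, and the names are ours.

    import numpy as np
    from collections import defaultdict

    def bucket_neighbours(x, l_max):
        # Find all particle pairs closer than l_max using a uniform grid
        # of cell size l_max; only neighbouring cells need to be checked.
        cells = defaultdict(list)
        for p, pos in enumerate(x):
            cells[tuple((pos // l_max).astype(int))].append(p)
        pairs = []
        for (i, j), members in cells.items():
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    for q in cells.get((i + di, j + dj), []):
                        for p in members:
                            if p < q and np.linalg.norm(x[p] - x[q]) < l_max:
                                pairs.append((p, q))
        return pairs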
After the neighbour search, the interactions between pairs of particles are computed using the Wendland C^6 kernel – Wendland kernels have the best stability properties for wavelengths smaller than the smoothing kernel (Dehnen and Aly, 2012) – which is written as
$$W(R, l_p) = \alpha_d \begin{cases} (1 - R)^{8}\left(1 + 8R + 25R^{2} + 32R^{3}\right), & 0 \le R \le 1, \\ 0, & R > 1, \end{cases} \quad (24)$$
$$R = \kappa\,\frac{|\mathbf{r}_p - \mathbf{r}_q|}{l_p}, \quad (25)$$
where α[d] is a normalization factor depending on the dimension of the problem. Note that R is the
normalized distance between particles in the referential r[p]−r[q]. Consequently, we always integrate from 0 to l[p] (the smoothing length) independently of the kernel instead of 0 to κl[p] as shown
by Liu and Liu (2010). The constant α[d] becomes $\frac{78\kappa^{2}}{7\pi l^{2}}$ in 2D, with a factor of κ^2 difference from the usual
definition. Note that the scaling factor κ has a value of 1 for the Wendland C^6 kernel. The choice of kernel was validated using stability tests with six different kernels including the original
Gaussian kernel (Gingold and Monaghan, 1977); a quartic-spline Gaussian approximation (Liu and Liu, 2010); a quintic-spline Gaussian approximation (Morris et al., 1997); a quadratic kernel (Johnson
and Beissel, 1996); and the Wendland C^2, C^4 and C^6 kernels (Wendland, 1995).
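Assuming the kernel takes the standard 2D Wendland C^6 form shown in Eq. (24), a direct transcription in Python reads (a sketch; κ = 1):

    import numpy as np

    def wendland_c6(r, l, kappa=1.0):
        # 2D Wendland C6 kernel; r = |r_p - r_q|, l is the smoothing length.
        alpha_d = 78.0 * kappa**2 / (7.0 * np.pi * l**2)
        R = kappa * r / l
        if R >= 1.0:
            return 0.0
        return alpha_d * (1.0 - R)**8 * (1.0 + 8.0*R + 25.0*R**2 + 32.0*R**3)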
2.6 Smoothing length
The smoothing or correlation length is a key element of SPH and has a direct influence on the accuracy of the solution and the efficiency of the computation. For instance, if l[p] is too small, there
may not be enough particles in the support domain violating the kernel moment requirements. If the smoothing length l[p] is too large, all the local properties of particles would be smoothed out over
a large number of neighbours and the computation time would increase with the number of interactions. In 2D, the optimal number of neighbours interacting with any particle p should be about 20 to
balance the precision and the computational cost (Liu and Liu, 2003). We therefore implement a variable smoothing length that evolves in time and space to maintain this approximate number of
neighbours. To this end, we keep the mass of particles constant in time and evaluate the smoothing length from the particle density. Note that keeping the mass of a particle constant has the
advantage of ensuring mass conservation. This assumption is justified in our case since we are only interested in sea-ice dynamics, and ridging changes the area covered by ice floes but not their
mass. However, fixing the ice mass is only valid when neglecting the thermodynamics and needs to be modified for synoptic-scale simulations.
The initial mass of a particle is defined from the ice area it represents within its support domain (Δ𝒜[p] in Fig. 1). To avoid creating porosity in the medium, we divide the space into equal square areas ($= L_p^{2}$) that cover the whole domain. Since we want approximately 20 neighbours for every particle, we introduce α (= 3 in all simulations), a parameter that stands for the approximate number of particles desired in any direction within the support domain. The parameter α can also be interpreted as the proportionality constant between the particle spacing L[p] and the smoothing length l[p]. Note that to increase the accuracy of the particle approximation, α can be increased by any desired factor (see Fig. 1). The mass carried by a particle is therefore written
$$m_p = \Delta\mathcal{A}_p\,\rho_i\,h_{0p} = L_p^{2}\,\rho_i\,h_{0p}, \quad (26)$$
where h[0p] is the initial mean thickness of the particle. The smoothing length is then updated at each time step diagnostically from
$$l_p = \alpha L_p = \alpha\sqrt{\frac{m_p}{\rho_p}}. \quad (27)$$
The smoothing length l[p] is capped at 10 times its initial value when the particle density tends to zero. This capping breaks mass conservation for a density lower than 1% of its initial value (see Eq. 26). We justify this capping because such small densities do not affect the ice dynamics.
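The diagnostic update of Eq. (27), with the capping described above, can be sketched in Python as (names ours):

    import numpy as np

    def update_smoothing_length(m, rho, l0, alpha=3.0, cap=10.0):
        # Diagnostic smoothing length l_p = alpha * sqrt(m_p / rho_p),
        # capped at `cap` times its initial value l0.
        return np.minimum(alpha * np.sqrt(m / rho), cap * l0)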
2.7 Boundary treatment
We implemented the boundary treatment of Monaghan and Kajtar (2009) because of its simplicity, versatility and low computational cost. The boundaries are set up by placing stationary particles with
fixed smoothing length l[b] and a mass m[b] equal to the average ice particle mass m[p]. The boundary smoothing length l[b] is chosen in a way that only one layer of ice particles initially interact
with the boundary (this makes l[b] resolution-dependent). The boundary particles are (equally) spaced apart by a factor of 1/4 of their smoothing length ($l_b/4$). In this manner, all ice particles p within a support domain l[b] will interact with approximately four boundary particles (denoted by the subscript b) at a time, resulting in a net normal repulsive force
$$\mathbf{F}_{N_p} = \sum_{b=1}^{N_b}\mu\,\frac{\left(\mathbf{r}_p - \mathbf{r}_b\right)}{|\mathbf{r}_p - \mathbf{r}_b|^{2}}\,W_{pb}\,\frac{2m_b}{m_p + m_b}, \quad (28)$$
which is added to their momentum equation. In Eq. (28), μ is a constant with units of kg m^4 s^−2 used to adjust the repulsion strength; it is also simulation-dependent because it needs to counterbalance the particle acceleration and prevent particles from escaping the domain. This free parameter is not suited for complex pan-Arctic simulations but is sufficient in our idealized experiments. For all the simulations, a free-slip boundary condition, i.e., no tangential friction force between boundary particles and ice particles, is applied.
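Assuming the Monaghan and Kajtar (2009) form of Eq. (28) as reconstructed above, the boundary repulsion for one ice particle can be sketched in Python as (names ours):

    import numpy as np

    def boundary_force(r_p, m_p, boundary_r, boundary_m, W_pb, mu):
        # Net normal repulsive force from nearby boundary particles (Eq. 28).
        F = np.zeros(2)
        for r_b, m_b, W in zip(boundary_r, boundary_m, W_pb):
            d = r_p - r_b
            F += mu * d / np.dot(d, d) * W * 2.0 * m_b / (m_p + m_b)
        return F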
Values of the parameters used for the simulations are the same as the ones presented in Williams et al. (2017) to facilitate comparison in the Results section.
3.1 Plastic wave propagation
We first compare the plastic wave speed for the VP dynamic equations with and without the SPH approximations. To this end, we do a perturbation analysis for a 1D case with a fixed sea-ice
concentration (A=1). In this case, the 1D SPH sea-ice dynamic equations (Eqs. 13–16) form a system of three equations and three unknowns (x, u and h):
$$\frac{Dx_p}{Dt} = u_p, \quad (29)$$
$$\frac{Du_p}{Dt} = \Gamma\sum_{q=1}^{N}\frac{m_q}{\rho_i^{2}}\left(\frac{1}{h_q} + \frac{1}{h_p}\right)\frac{x_{pq}}{|x_{pq}|}\frac{\partial W}{\partial x_{pq}} + \tau_p, \quad (30)$$
$$\frac{Dh_p}{Dt} = -\frac{1}{\rho_i}\sum_{q=1}^{N} m_q\left(u_q - u_p\right)\frac{x_{pq}}{|x_{pq}|}\frac{\partial W}{\partial x_{pq}}, \quad (31)$$
where x[pq] is a short form for x[p]−x[q] and
$$\Gamma = \frac{P^{*}}{2}\left[\pm\left(e^{-2} + 1\right)^{1/2} - 1\right]. \quad (32)$$
In the above, we made use of the following 1D normal stress for convergent plastic motion (see Gray, 1999; Williams et al., 2017, for 1D normal stress derivation):
$$\sigma = \sigma_{xx} = \frac{P^{*}}{2}\left[\pm\left(e^{-2} + 1\right)^{1/2} - 1\right]h = \Gamma h. \quad (33)$$
Linearizing around a mean state ($\bar{u} = 0$ and $\bar{h} = h_0$), considering small perturbations (δx, δu and δh) and ignoring the second-order terms, we obtain
$$\frac{D\delta x_p}{Dt} = \delta u_p, \quad (34)$$
$$\frac{D\delta u_p}{Dt} = \frac{\Gamma}{\rho_i}\sum_{q=1}^{N}\Delta\mathcal{A}_q\,\frac{\bar{x}_{pq}}{|\bar{x}_{pq}|}\left(\frac{-1}{h_0}\left(\delta h_q + \delta h_p\right)\frac{\partial W}{\partial\bar{x}_{pq}} + 2\left(\delta x_p - \delta x_q\right)\frac{\partial^{2} W}{\partial\bar{x}_{pq}^{2}}\right), \quad (35)$$
$$\frac{D\delta h_p}{Dt} = -h_0\sum_{q=1}^{N}\Delta\mathcal{A}_q\,\frac{\bar{x}_{pq}}{|\bar{x}_{pq}|}\left(\delta u_q - \delta u_p\right)\frac{\partial W}{\partial\bar{x}_{pq}}, \quad (36)$$
where $\Delta\mathcal{A}_q = \frac{m_q}{\rho_i h_0} = \frac{m_q}{\rho_q}$ (Eq. A4) and where we have used the binomial expansion $\frac{1}{h} = \frac{1}{h_0 + \delta h} \approx \frac{1}{h_0}\left(1 - \frac{\delta h}{h_0}\right)$. Following Williams et al. (2017), we do a perturbation analysis on the system of Eqs. (34)–(36) and assume a wave solution of the form $\delta f = \hat{f}\,\exp\left(i\left(k\bar{x} - \omega t\right)\right)$, where i is the imaginary number; k is the wavenumber; ω is the angular velocity; and f is a dummy variable standing for u, x and h. Substituting δf in Eqs. (34)–(36), the resulting set of equations in the reference frame following the ice motion reduces to
$$\hat{x} = \frac{i}{\omega}\hat{u}, \quad (37)$$
$$\hat{u} = \frac{i\Gamma}{\omega\rho_i}\sum_{q=1}^{N}\mathcal{A}_q\,\frac{\bar{x}_{pq}}{|\bar{x}_{pq}|}\left(\left[-\frac{\hat{h}}{h_0}\left(1 + \exp\left(-ik\bar{x}_{pq}\right)\right)\right]\frac{\partial W}{\partial\bar{x}_{pq}} + 2\hat{x}\left(1 - \exp\left(-ik\bar{x}_{pq}\right)\right)\frac{\partial^{2} W_{pq}}{\partial\bar{x}_{pq}^{2}}\right), \quad (38)$$
$$\hat{h} = -\frac{i h_0\hat{u}}{\omega}\sum_{q=1}^{N}\mathcal{A}_q\,\frac{\bar{x}_{pq}}{|\bar{x}_{pq}|}\exp\left(-ik\bar{x}_{pq}\right)\frac{\partial W}{\partial\bar{x}_{pq}}. \quad (39)$$
Note that since the ice is initially at rest, the Lagrangian and the Eulerian frameworks are equivalent. For a large enough wavelength (so that the perturbation can be resolved across multiple
particles with high accuracy, i.e., λ ≥ l[p] and N→∞), the summations can be approximated by integrals over the space; i.e., $\sum_{q=1}^{N}\mathcal{A}_q\frac{\bar{x}_{pq}}{|\bar{x}_{pq}|}$ becomes $\int_{-\infty}^{\infty}\mathrm{d}\bar{x}_{pq}$. Taking advantage of the kernel properties – i.e., all moments higher than 0 vanish – we can write Eqs. (38)–(39) as
$$\hat{u} = \frac{-i\Gamma}{\omega\rho_i}\int_{-\infty}^{\infty}\left(\frac{\hat{h}}{h_0}\frac{\partial W}{\partial\bar{x}_{pq}} + 2\hat{x}\frac{\partial^{2} W_{pq}}{\partial\bar{x}_{pq}^{2}}\right)\exp\left(-ik\bar{x}_{pq}\right)\mathrm{d}\bar{x}_{pq} = \frac{\Gamma}{\omega\rho_i}\left(\frac{\hat{h}}{h_0}k + i\,2k^{2}\hat{x}\right)\tilde{W}, \quad (40)$$
$$\hat{h} = -\frac{i h_0\hat{u}}{\omega}\int_{-\infty}^{\infty}\exp\left(-ik\bar{x}_{pq}\right)\frac{\partial W}{\partial\bar{x}_{pq}}\,\mathrm{d}\bar{x}_{pq} = \frac{h_0\hat{u}\,k}{\omega}\tilde{W}, \quad (41)$$
where the integrals have been converted to Fourier transforms using $\mathcal{F}\left(\frac{\partial W}{\partial\bar{x}_{pq}}\right) = \int_{-\infty}^{\infty}\left(\frac{\partial W}{\partial\bar{x}_{pq}}\right)\exp\left(-ik\bar{x}_{pq}\right)\mathrm{d}\bar{x}_{pq} = ik\,\mathcal{F}(W) = ik\tilde{W}$. Finally, Eqs. (37), (40) and (41) represent a system of three equations for three unknowns ($\hat{x}$, $\hat{u}$, $\hat{h}$) that we solve by substitution. This leads to the following form for the phase speed of the plastic wave ($\frac{\omega}{k}$):
$$c_{\text{SPH}} = \frac{\omega}{k} = \pm\tilde{W}\sqrt{-\frac{\Gamma}{\rho_i}\left(\frac{2}{\tilde{W}} - 1\right)}. \quad (42)$$
For wavelengths much larger than the smoothing length ($\lambda \propto \frac{1}{k} \gg l_p$), the Fourier transform of the kernel tends to 1 ($\tilde{W} \approx 1$) and the SPH formulation reduces to the viscous–plastic theory without SPH approximations (see for instance Williams et al., 2017), i.e.,
$$c_{\text{VP}} = \pm\sqrt{-\frac{\Gamma}{\rho_i}}, \quad (43)$$
with a plastic wave propagation speed c[VP] ≈ 5.7 m s^−1 for typical sea-ice parameters (see Table 1). Consequently, a major difference of SPH with the FDM framework is that the plastic wave speed is dispersive, with a phase velocity c[SPH] that is dependent on the wavelength and the smoothing length. In general, only the plastic waves with a wavelength between approximately 1 and 11 times the smoothing length will have their travelling speed modified by more than 1%. More specifically, in the limit where the wavelength λ approaches the smoothing length l[p], the plastic wave speed increases in the SPH framework to a maximum value of ≈6.7 m s^−1 (see Fig. 2a). Note that for wavelengths smaller than the smoothing length, the summations in Eqs. (40)–(41) cannot be written as
integrals, but the particles still respond partially to the perturbations. This sometimes leads to the tensile and the zero-energy mode instabilities (Swegle et al., 1995). As mentioned above, Dehnen
and Aly (2012) showed that Wendland kernels can diminish the tensile instability and the pairing of particles. A deeper analysis of unresolved waves (λ<l[p]) in the context of sea-ice SPH dynamic
equations is beyond the scope of the current study.
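For reference, the continuum plastic wave speed of Eqs. (32) and (43) can be evaluated in a few lines of Python; the parameter values below are typical ones chosen for the illustration and are not necessarily those of Table 1.

    import numpy as np

    P_star, e, rho_i = 27.5e3, 2.0, 900.0
    # Compressive branch of Eq. (32): Gamma < 0 for convergent motion.
    Gamma = 0.5 * P_star * (-np.sqrt(e**-2 + 1.0) - 1.0)
    c_vp = np.sqrt(-Gamma / rho_i)   # Eq. (43)
    print(f"c_VP = {c_vp:.1f} m/s")  # approximately 5.7 m/s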
For the more general case when the base state allows for a variable concentration (linearized around a mean state $\bar{A} = A_0$) and considering the classical – denoted by a superscript C – particle density definition ($\rho_p^{\text{C}} = \rho_i h_p A_p$) used by Wang et al. (1998), Ji et al. (2005), and Staroszczyk (2017), the plastic wave speed becomes
$$c_{\text{A,SPH}}^{\text{C}} = \pm\tilde{W}\sqrt{-\frac{\Gamma^{*}}{\rho_i}\left(CA_0 - 3 + \frac{2}{\tilde{W}}\right)}, \quad (44)$$
where $\Gamma^{*} = \Gamma\,\exp\left(-C\left(1 - A_0\right)\right)$. We argue that the plastic wave speed $c_{\text{A,SPH}}^{\text{C}}$ obtained with the classical density definition does not converge (see Fig. 2b) to the viscous–plastic theory, c[A,VP], derived from FDM (see Williams et al., 2017, for a derivation),
$$c_{\text{A,VP}} = \pm\sqrt{-\frac{\Gamma^{*}}{\rho_i}\left(CA_0 + 1\right)}, \quad (45)$$
because the ice concentration is taken into account in both the definition of $\rho_p^{\text{C}}$ and implicitly in the definition of the average thickness h[p]. When we
consider the new formulation of particle density independent of concentration as proposed above (Eq. 18), the wave speed equation becomes
$$c_{\text{A,SPH}} = \pm\tilde{W}\sqrt{-\frac{\Gamma^{*}}{\rho_i}\left(CA_0 - 1 + \frac{2}{\tilde{W}}\right)}, \quad (46)$$
which reduces to the FDM VP theory (Eq. 45) when the wavelength is large compared to the smoothing length, i.e., when $\tilde{W}\to 1$ (see Fig. 2c). Note that the perturbation analysis presented above is not valid for the classical density definition proposed by Wang et al. (1998), Ji et al. (2005), and Staroszczyk (2017) since they use a different set of momentum, continuity and constitutive equations to describe sea ice. As for the plastic wave speed with a fixed concentration (Eq. 42), the wave speed c[A,SPH] (Eq. 46) is dispersive, and wavelengths between 1 and 11 times the smoothing length are those most affected (by more than 1%). In this case, however, the plastic wave speed is damped for wavelengths close to the smoothing length when the mean concentration state is higher than 0.1. Note that while the plastic wave speed is defined for all A, it has no physical meaning for A < 0.85 since ice–ice interactions are then negligible.
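To make the convergence explicit, note that the ratio of Eq. (46) to Eq. (45) depends only on C, A[0] and the kernel factor W̃, since Γ*/ρ[i] cancels. A minimal sketch in Python, treating W̃ as a free parameter that tends to 1 in the long-wavelength limit (its actual dependence on λ/l[p] follows from the kernel and is not reproduced here):

```python
import numpy as np

# Ratio of the SPH plastic wave speed (Eq. 46) to the FDM VP speed (Eq. 45).
# Gamma*/rho_i cancels, so only C, A0 and the kernel factor W_tilde remain.
C = 20.0    # ice-strength concentration parameter (standard VP value)
A0 = 0.9    # mean concentration of the base state (illustrative)

def speed_ratio(W_tilde):
    """c_A,SPH / c_A,VP from Eqs. (45)-(46)."""
    return W_tilde * np.sqrt((C * A0 - 1.0 + 2.0 / W_tilde) / (C * A0 + 1.0))

for W in (0.5, 0.8, 0.95, 0.99, 1.0):
    print(f"W_tilde = {W:4.2f} -> c_A,SPH / c_A,VP = {speed_ratio(W):.4f}")
# As W_tilde -> 1 (wavelength >> smoothing length), the ratio tends to 1,
# i.e., Eq. (46) recovers the FDM viscous-plastic wave speed (Eq. 45).
```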
3.2 Ridging experiments
We validate our implementation of the SPH model (with the new definition of particle density ρ[p]) in a 1D ridging experiment, comparing against the simulated field from a viscous–plastic sea-ice model based on the FDM – the 1D version of the McGill-SIM model used in the SIREx studies (Bouchat et al., 2022; Hutter et al., 2022) – and against the analytical solution (see Williams and Tremblay, 2018, for a derivation):

$-\dfrac{\mathrm{d}\sigma}{\mathrm{d}x}=\rho_{a}C_{a}\left|u_{a}\right|u_{a}\;\Rightarrow\;\dfrac{\mathrm{d}h}{\mathrm{d}x}=\dfrac{2\rho_{a}C_{a}\left|u_{a}\right|u_{a}}{P^{*}\left(1+\sqrt{1+e^{-2}}\right)},\qquad(47)$

i.e., a linear profile in thickness with a slope proportional to the square of the wind velocity and inversely proportional to the ice strength. We consider a rectangular domain of 1000 by 2000 km including the boundary (the ice field is 1900 km long to ensure that no particles escape on the open side) with 37 240 particles; an initial homogeneous smoothing length l[p] of 21.429 km (spacing l[p]/α = 7.14 km); and a smaller boundary particle smoothing length l[b] of 4 km (spacing l[b]/4 = 1.0 km) – chosen to limit boundary effects – to represent the wall (see Fig. 3). Particles are initialized with an average thickness h = 1 m and a concentration A = 1. They are forced against the wall by a constant unidirectional wind of 5 m s^−1. Note that the water stress is removed in the simulation for faster convergence to the steady state, which enables higher resolution – a water current of 0 m s^−1 would slow down the ice and the ridge formation, since the latter is driven by the advection speed. The Coriolis force would normally also have to be considered for this domain size at typical polar latitudes – the Rossby number is 𝒪(10^−2) – but it is neglected in this idealized experiment to conserve the symmetry of the solution and allow comparison with the theoretical 1D equation (Eq. 47). In the results presented below (Figs. 4–5), the particle properties are averaged over a grid of approximately 10 by 5 km cells for plotting purposes.
Results show that the simulated thickness field converges to the analytical solution (within an error of ≈1%) after ≈5 d, with a slope of 1.33×10^−3 m km^−1 compared to 1.34×10^−3 m km^−1 for the theory – lower-resolution simulations were run for a longer time and also converged to this stable state (results not shown). This is comparable to the precision obtained by the 1D McGill-SIM FDM model, which reaches a slope of 1.35×10^−3 m km^−1. Artifacts are observed close to the boundary, where the repulsive force prevents the particles from reaching the “wall”. Additionally, when a particle comes into contact with the boundary with a certain inertia (due to the 1/r dependence of the boundary force), we observe oscillations in the motion of particles that can propagate far into the domain (e.g., Fig. 4a, at x ≈ [50, 300] km and t = [30, 45] h). The oscillations are damped, and the energy is dissipated by the rheology term over time until an equilibrium is reached. Note that reintroducing the water drag diminishes the oscillations coming from the boundary but does not remove them completely. A more physical boundary treatment is beyond the scope of this study.
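The theoretical slope in Eq. (47) can be verified directly. A minimal sketch, assuming typical VP parameters (ρ[a] = 1.3 kg m^−3, C[a] = 1.2×10^−3, P^* = 27.5 kN m^−2, e = 2; the values actually used are those of Table 1, not reproduced here):

```python
import math

# Analytical ridging slope, Eq. (47), with assumed standard VP parameters.
rho_a = 1.3      # air density (kg m^-3)
C_a = 1.2e-3     # air drag coefficient
u_a = 5.0        # wind speed (m s^-1)
P_star = 27.5e3  # ice strength parameter (N m^-2)
e = 2.0          # ellipse aspect ratio

slope = 2.0 * rho_a * C_a * abs(u_a) * u_a \
        / (P_star * (1.0 + math.sqrt(1.0 + e**-2)))
print(f"dh/dx = {slope * 1e3:.2e} m/km")  # ~1.34e-3 m/km, the quoted theoretical slope
```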
We also repeated the ridging experiment with the same forcing and total sea-ice volume but letting the sea-ice concentration evolve with time. Specifically, the initial average thickness and concentration were set to h = 0.5 m and A = 0.5. This ensures that both h and A covary in time such that h/A remains constant in the marginal ice zone (MIZ) – note that A and h follow the same continuity equations, Eqs. (15) and (16), or Eqs. (4) and (5) when omitting the SPH approximations, and therefore they should vary identically in time until A reaches 1. We define the MIZ as the area where the sea-ice concentration ranges between 0.15 and 0.85 and where little ridging by ice collision occurs (see Fig. 4b). To accomplish this, the domain was extended to 4000 km (3800 km excluding the boundaries), and the initial particle spacing was changed from 7.14 to 10.0 km, for a corresponding initial smoothing length l[p] of 30.0 km and a total of 38 000 particles. In this configuration, the model converges to a steady-state solution in ≈10 d with a slope of 1.36×10^−3 m km^−1, in agreement with theory within an error of ≈1% (see Fig. 4b). Results at x = 300 km, away from boundary effects, show that (as desired) thickness and concentration evolve coherently – d(h/A)/dt ≈ 0 – before the ice concentration reaches ≈85% (see Fig. 5a). At that point (t ≈ 22 h), ice–ice interactions emerge and the ridging process starts (d(h/A)/dt > 0). One key difference from the simulation initialized at A = 1 is a thickness build-up (above 1 m) at the edge of the ridge in the MIZ. At this location, the continuity equation for sea-ice concentration is capped, while that of the mean ice thickness remains continuous. This results in a local increase in ice thickness to ≈1.1 m. This process is akin to the wave radiation drag in the MIZ (Sutherland and Dumont, 2018). A detailed analysis of simulations of simple convergent ice flow in the MIZ with ice concentration close to 100% will be considered in future work.
In the ridge-building phase, the speed of advance of the ridge front increases until a maximum concentration is reached after ≈70 h (see Fig. 5c). Subsequently, the ice drift speed reduces, and the rate of advance of the ridge slows down. When the ice thickness gradient is in balance with the surface wind stress (after ≈200 h), d(h/A)/dt reaches a steady state. Overall, we can observe three stages in the ridge formation (see Fig. 5): first, a rapid compaction stage, when ice particles drift close to their free-drift speed since the ice strength is weak; second, a transition stage between A ≈ 0.85 and 1.00, when ridging occurs in the MIZ, analogous to the wave radiation drag mentioned above; and third, a ridging stage, with changes in ice thickness about 1 order of magnitude larger than during the transition stage. Note that the amplitude of oscillations between particles within the domain or at the boundaries in the ridging experiments diminishes when incorporating the water drag (a damping term). The water drag also increases the time needed to reach steady state because the ice drift speed is slower.
3.3 Arch experiments
We next compare the SPH approach with the FDM and DEM sea-ice models in a second well-studied idealized experiment: ice arch formation. To this end, we run the SPH model in an idealized domain representing the Nares Strait (see Fig. 6) with an upstream reservoir 5 times the length of the channel (L) to minimize the boundary confinement effect without sacrificing spatial resolution. The set of simulations uses a domain with L = 60 km. The initial conditions for ice thickness, concentration and velocity are h = 1 m, A = 1 and u = 0 m s^−1. The ice is forced with a constant unidirectional wind of −7.5 m s^−1 in the ŷ direction, and the ocean current is fixed at u[w] = 0 m s^−1. The corresponding surface stress is ≈0.09 N m^−2, and the total integrated stress at the entry of the channel is slightly smaller than P^* ($\int_{0}^{5L}\tau_{a}\,\mathrm{d}x=26.325$ kN m^−1). We use a weaker wind than is common in Nares Strait ice arch simulations (≈10 m s^−1) to limit the ridging phase prior to the formation of the ice arch. In this experiment, we limit ourselves to ice with no tensile strength (k[t] = 0) and a shear strength of 6.875 kN m^−2, i.e., an ellipse aspect ratio of 2.
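The quoted integrated stress follows from the quadratic drag law τ[a] = ρ[a]C[a]|u[a]|u[a] integrated over the 5L = 300 km reservoir. A minimal check, assuming ρ[a] = 1.3 kg m^−3 and C[a] = 1.2×10^−3 as above (values consistent with the integrated stresses quoted in the text):

```python
# Integrated wind stress at the channel entry (quadratic drag law),
# assuming rho_a = 1.3 kg m^-3 and C_a = 1.2e-3.
rho_a, C_a = 1.3, 1.2e-3
L = 60e3                  # channel length (m)
for u_a in (7.5, 5.3):    # wind speeds of the two arch experiments (m s^-1)
    tau = rho_a * C_a * abs(u_a) * u_a   # surface stress (N m^-2)
    total = tau * 5 * L                  # integral over the 5L reservoir (N m^-1)
    print(f"u = {u_a} m/s: tau = {tau:.4f} N/m^2, integral = {total / 1e3:.3f} kN/m")
# -> 26.325 and 13.146 kN/m, to be compared with P* h = 27.5 kN/m for h = 1 m.
```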
We suspect that the SPH and DEM frameworks behave similarly in certain circumstances, despite their different (implicit) rheologies, because of their shared Lagrangian nature. Indeed, the interpretation of the numerical representation of a particle in SPH as a collection of ice floes is also present in the DEM (Li et al., 2014), and the two numerical frameworks compute their quantities from one-to-one interactions. Consequently, we first test whether the SPH approach has the same sensitivity to the relative size of particles with respect to the channel width as the DEM (Damsgaard et al., 2018). Results showed that no stable arch can be formed with the specified forcing for any of the particle diameters tested (7.5, 5 and 3.75 km) (see the ice velocity field in Fig. 7). Instead, a “continuous” slow flow of ice is present in the channel. The discontinuity at the entry of the channel is visible in the concentration, thickness and velocity fields (Fig. 7) and can be interpreted as an intermittent (unstable) ice arch formation. We also noted that larger particles are not more prone to ice jamming than smaller ones. This is contrary to what is known from granular material theory and to the results of Damsgaard et al. (2018), which show a transition from stable to no ice arch formation for floe sizes ranging from approximately 1/4 to 1/16 of the strait's width. We explain this difference between SPH and DEM by the continuum description of the ice dynamics equations. In the present model, the constitutive laws prescribe the repulsion of the particles from one another according to the ice strength, which is a function of the ice concentration and mean thickness, not of the particle size. We conclude that to enforce granularity within the SPH framework, the constitutive laws would need to be adapted to account for contact forces and particle size, which could then reproduce a behaviour similar to that observed in the DEM. However, even though the increase in resolution – i.e., the decrease in particle size – has no effect on the arch stability, it enables smaller fracture resolutions that are visible at the entrance of the channel (see ϵ[I] and ϵ[II] in Fig. 8). In our SPH model, the stress invariants σ[I] and σ[II] show oscillation patterns in regions where the ice is in the viscous regime (see the tree-like structure in the normal and shear stress fields in Fig. 8). In our experiments, the “tree-like” peak stresses appear during the transient phase and at steady state. However, the particles never stop moving, even at steady state, because viscous deformations are always present. We hypothesize that the stress patterns are associated with over-damped elastic waves caused by small movements (but large internal stresses) of the particles in the viscous regime. Those structures are not symmetric, despite symmetric initial conditions, because of the domino effect between interacting viscous waves. Note that they are absent from the strain-rate fields since viscous deformations are extremely small. They are also absent in sea-ice models based on a continuum approach (Dumont et al., 2009; Dansereau et al., 2017; Plante et al., 2020), but these tree-like structures are qualitatively similar to the stress structures between floes observed in the DEM (e.g., Damsgaard et al., 2018, Fig. 5c). Even though the model solves the same continuum equations as other FDM models, we believe that stress networks can be observed with the SPH method because particles interact in a pairwise fashion according to their relative distance. This can create less-dense ice areas within the medium, which can lead to oscillations in the stress field. It is known that SPH can exhibit spurious behaviour in some cases when the stress is solved at the same location as the particle centre (as done here). This can be avoided using stress particles (see Chalk et al., 2020, for details). More investigation is required to test whether this behaviour is physical. This is left for future work.
Second, we explored the ability of the model to produce stable ice arches. To this end, we reduce the total integrated surface stress at the entry of the channel to 13.146 kN m^−1 (or a wind speed of 5.3 m s^−1), a value below the ice compressive strength (P^*), to avoid completely ridging the north of the channel and to jump immediately to the arch-forming stage. In this case, the results show a clear stable arch (see Fig. 9) with a shape qualitatively similar to the ones presented by Dansereau et al. (2017), Plante et al. (2020), and West et al. (2022). The formation of a stable arch in an SPH model is possible with the standard shear strength (e = 2), in contrast to continuum models that require an increase in shear strength (e.g., Dumont et al., 2009; Dansereau et al., 2017; Plante et al., 2020) – although it is important to keep in mind that the domain configurations were different in each of those studies. This suggests that the sensitivity of ice arching to the ellipse aspect ratio e and the ice thickness h is different in SPH. With a no-slip boundary condition and the same default yield curve (same P^* and ellipse aspect ratio e), preliminary results suggest that no arch forms – the pack is undeformed – and a higher surface wind stress is instead required to form an arch. Note that in the SPH simulations only one arch forms, close to the outlet. Presumably, the number of arches would increase and their location would change if the model were run at higher resolution, with different boundary conditions or in a non-idealized domain geometry. Overall, this shows that SPH is able to capture large-scale features emerging from small-scale interactions. The simulation of a stable ice arch (Fig. 9) also shows how SPH can fracture and create discontinuities in the ice field, as seen in DEM models. This behaviour is similar to that of the elastic-decohesive sea-ice constitutive model of Schreyer et al. (2006) or the FEM model of Rampal et al. (2016). Finally, in the SPH framework, a lead or polynya can be defined by an absence of particles for leads larger than the particle size – akin to the DEM – or by particles with reduced concentration for sub-particle-size leads – akin to the FDM.
4 Discussion and conclusion
In this paper, we have presented a first implementation of the viscous–plastic rheology with an elliptical yield curve and a normal flow rule in the framework of SPH, with the long-term goal of simulating synoptic-scale sea-ice dynamics. We have described the basics of the SPH approach and how the sea-ice dynamic equations can be formulated in this framework, along with the implementation of key components of the numerical method such as the smoothing length, the kernel, the boundaries and the time integration technique. We proposed a different definition of the particle density and showed that the more commonly used density definition involving the ice concentration (Wang et al., 1998; Ji et al., 2005; Staroszczyk, 2017), when used together with the average ice thickness, leads to erroneous plastic wave propagation speeds. A particle density definition independent of the ice concentration corrects this and leads to results that are in line with the VP theory. The SPH model thus developed is in excellent agreement (error of ≈1%) with an analytical solution of the VP ice dynamics for a simple 1D ridging experiment. The approximations at the core of the SPH framework result in a dispersive plastic wave speed in the medium – in contrast to its FDM counterpart – that depends on the smoothing length (or resolution) and on the choice of kernel. The plastic wave speed is mostly affected for wavelengths below approximately 11 times the smoothing length.
From the simple ridging experiment with fixed sea-ice concentration (A = 1), we observe nonphysical damped oscillations, associated with our choice of boundary conditions, that propagate into the domain. The conclusions drawn from our simulations are robust to the choice of boundary conditions. Nevertheless, this behaviour needs to be removed for a proper simulation of sea ice near coastlines. The ridging experiment with an initial ice concentration below 100% showed that the continuity equations for concentration and thickness evolve coherently until a concentration of 85%. At that point, SPH particles start to ridge locally in the MIZ, in addition to at the wall where the maximum stress is located. This effect is not observed in the continuum approach and is presumably related to particle collisions in converging motion.
When compared with other numerical frameworks, the SPH model is able to reproduce stable ice arches in an idealized strait domain with an ellipse aspect ratio of 2 and a wind forcing of 5.3 m s^−1, in contrast to other continuum approaches that require a higher material shear strength. However, with a stronger wind field of 7.5 m s^−1, no stable arches are formed regardless of the particle size in the strait (stable arches are only achieved when increasing the particle average thickness). We concluded that the number of particles in the strait does not influence the formation of ice arches, in contrast to the DEM; it is instead analogous to an increase in resolution in a continuum framework: a larger number of particles influences the number of fractures that can form and the resolution of fine-scale structures. The stress fields produced by the SPH model in the channel experiment show a tree-like pattern upstream of the channel where total deformations are low. This is not observed in FDM experiments, but it is qualitatively similar to the tensile stress networks exhibited in DEM simulations (Damsgaard et al., 2018), which arise from individual contact forces between the ice floes; we hypothesize that it is associated with damped viscous sound waves.
Even though we successfully implemented the standard sea-ice viscous–plastic rheology with an elliptical yield curve and a normal flow rule in an SPH framework, the current model does not outperform a classical FDM model. In fact, there are inherent difficulties and instabilities in SPH that do not exist in FDM. It is known that the SPH framework trades consistency – i.e., the ability to correctly represent a differential equation in the limit of an infinite number of points with null spacing between them – for stability, which gives SPH the distinct feature of working well for many complicated problems with good efficiency but reduced accuracy. The classical formulation of SPH used and described in the present work does not, in general, respect zeroth-order consistency because of the unstructured particle positions in space (see Belytschko et al., 1998, Sect. 3 for a derivation). Nevertheless, consistency can be improved at the expense of computational cost (Chen and Beraun, 2000; Liu et al., 2003) by reformulating the SPH core approximation (Eq. A1). The boundary description has also been identified as a weak point of the SPH framework. Prescribing Dirichlet, Neumann or Robin boundary conditions is not as straightforward as in continuum approaches. Moreover, preventing particle penetration through a boundary is still a challenging task (Liu and Liu, 2010), and the SPH consistency is usually at its worst at the boundary because the support domain is truncated. In the present study, a proper physical representation of the boundary was not adopted; the boundary treatment was chosen for its numerical simplicity and should be revisited in future work. Other major issues with SPH are the zero-energy modes and the tensile instability mentioned previously. Zero-energy modes are also found in FDM and FEM and correspond to modes for which the computed strain energy is erroneously zero (Swegle et al., 1995). The tensile instability results in particle clumping or nonphysical fractures in the material. In the present work, we adopted a kernel different from the usual Gaussian spline to avoid these instabilities, but other methods such as independent stress points (Dyka and Ingel, 1995; Chalk et al., 2020), an artificial short-range repulsive force (Monaghan, 2000), particle repositioning (Sun et al., 2018) or an adaptive kernel (Lahiri et al., 2020) can be used if more stabilization is needed. For example, at smaller scales, SPH simulations of ice in uniaxial compression were improved by a simplified finite-difference interpolation scheme (Zhang et al., 2017). More specifically for sea-ice models, the pressure closure of Kreyscher et al. (2000) is not well suited for long simulations. Indeed, particles in the viscous state can still move while having low internal ice pressure because of the replacement pressure scheme. Consequently, particles could pass through each other, resulting in erroneous locations of the properties they carry. Finally, using SPH for the sea-ice module of grid-based continuum global climate models (GCMs) complicates the coupling with the ocean and atmosphere components, since particle quantities need to be converted onto a grid and vice versa.
In its current state, the model reproduces behaviour very similar to other FDM continuum models and does not constitute a large improvement. Nevertheless, we believe that SPH makes it possible to describe sea ice as a continuum at large scales, using what is already known from continuum models, and to explore new avenues at small scales, where the continuum approximation is questionable. Indeed, SPH has interesting properties that could be exploited. For example, SPH can be used with little change for problems involving several fluids, whether liquid, gas or dust (Monaghan, 2012). This feature could be exploited to create a general approach for all components of a GCM (atmosphere, ocean and sea ice). The method developed here is also a good option for nowcasting sea-ice prediction, because only the ice dynamics need to be considered in nowcasting applications and the model carries ice properties well in space. SPH can fracture and transition from the continuum to fragments seamlessly since it is not restricted to a grid, which also has the advantage of enabling ice edge shapes independent of the grid. The ability of SPH to move particles around has the interesting property of concentrating them in converging motion, effectively increasing the spatial resolution of the model in regions of high stress activity, and of dispersing them when the flow is divergent, which decreases the resolution in low-ice-concentration areas. This property should result in higher accuracy than in typical continuum models. The elastic behaviour assumed for sea ice in certain rheologies can be associated with the weak compressibility inherent in the classical formulation of SPH. Finally, the SPH discretization of the continuum into particles enables the implementation of several new features: angular momentum could be added to individual floes (or packs of floes) to take into account rotation along LKFs; a direct measure of the concentration could be computed from the number of particles within a support domain (taking advantage of the already-computed number of neighbours and helping to ensure the desired number of neighbours in converging flow); a subscale parametrization of floe–floe contact forces could be implemented (this short-range repulsive force could also help with the tensile instability); and a varying floe size distribution could be incorporated by varying the mass carried by a particle for a given particle density.
For future work, and before exploring new features enabled by the SPH numerical framework, a more physical treatment of the boundary conditions should be investigated to properly simulate the grounding of sea ice near the coast and enable no-slip conditions. Subsequently, the model could be tested against other benchmark problems in idealized domains to further understand and compare the effects of the SPH method (Flato, 1993; Hunke, 2001; Hibler et al., 2006; Danilov et al., 2015; Mehlmann et al., 2021). Also, in order to use the model for pan-Arctic simulations, the Coriolis and sea surface tilt forces, along with the treatment of the thermodynamic source and sink terms, should be implemented in the SPH framework (see preliminary work by Staroszczyk, 2018). In addition, the parallelization of the code should be improved in order to bring the computational time down to a value comparable to that of an FDM model. Finally, while a significant amount of work remains before SPH can be used in large-scale climate simulations, the method shows promise and deserves further investigation and development.
Appendix A: Smoothed particle hydrodynamics basics
The SPH method is at the interface between the finite-element and discrete-element methods. In this framework, any function f(r) at a point r is approximated from neighbouring values in the parameter space f(r′) using an integral interpolant (see Fig. A1):

$f(\boldsymbol{r})=\int_{\mathcal{V}} f(\boldsymbol{r}^{\prime})\,W(|\boldsymbol{r}-\boldsymbol{r}^{\prime}|,l)\,\mathrm{d}\boldsymbol{r}^{\prime},\qquad(\mathrm{A1})$

where W(|r − r′|, l) is the interpolating kernel and 𝒱 is the entire space volume. In 2D, the space volume is an area 𝒜, and the kernel has units of per square meter (m^−2). This integral interpolant approximation is based on the singular integral mathematical framework of Natanson (1961) and imposes the following restrictions on the kernel:

$\int_{\mathcal{A}} W(|\boldsymbol{r}-\boldsymbol{r}^{\prime}|,l)\,\mathrm{d}\boldsymbol{r}^{\prime}=1,\qquad(\mathrm{A2})$

$\lim_{l\to 0} W(|\boldsymbol{r}-\boldsymbol{r}^{\prime}|,l)=\delta(\boldsymbol{r}-\boldsymbol{r}^{\prime}),\qquad(\mathrm{A3})$
where l is the smoothing length of the kernel and δ is the Dirac delta function. Using the particle approximation, Eq. (A1) can be written as a weighted summation over all neighbouring points within the area 𝒜:

$f(\boldsymbol{r}_{\mathrm{p}})\approx\sum_{q=1}^{N} f(\boldsymbol{r}_{\mathrm{q}})\,W(|\boldsymbol{r}_{\mathrm{p}}-\boldsymbol{r}_{\mathrm{q}}|,l_{\mathrm{p}})\,\Delta\mathcal{A}_{\mathrm{q}}\approx\sum_{q=1}^{N} f(\boldsymbol{r}_{\mathrm{q}})\,W(|\boldsymbol{r}_{\mathrm{p}}-\boldsymbol{r}_{\mathrm{q}}|,l_{\mathrm{p}})\,\frac{m_{\mathrm{q}}}{\rho_{\mathrm{q}}},\qquad(\mathrm{A4})$

where N is the number of points in space, referred to as neighbour particles, Δ𝒜[q] (= m/ρ) is the area associated with the particle q, m represents the mass (kg) and ρ is the 2D density (kg m^−2).
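As an illustration of Eq. (A4), the following sketch interpolates a known function at a particle location from a uniform particle distribution, assuming a Wendland C2 kernel with support radius l[p] (the kernel choice of our model is discussed in the main text; the normalization 7/(π l²) is the standard 2D Wendland C2 form and is an assumption here):

```python
import numpy as np

def wendland_c2(r, l):
    """Standard 2D Wendland C2 kernel with compact support radius l (units m^-2)."""
    q = r / l
    return np.where(q < 1.0, 7.0 / (np.pi * l**2) * (1.0 - q)**4 * (1.0 + 4.0 * q), 0.0)

# Uniform particle distribution on [0, 1] x [0, 1] (spacing dx), equal masses.
dx = 0.02
x, y = np.meshgrid(np.arange(0, 1, dx) + dx / 2, np.arange(0, 1, dx) + dx / 2)
pos = np.column_stack([x.ravel(), y.ravel()])
rho = 1.0                      # uniform 2D density (kg m^-2)
m = rho * dx**2                # particle mass so that Delta A_q = m / rho = dx^2
l_p = 3.0 * dx                 # smoothing length (3x spacing, illustrative)

f = pos[:, 0] + 2.0 * pos[:, 1]           # test field f(r) = x + 2y
p = np.argmin(np.sum((pos - 0.5)**2, 1))  # pick a particle near the domain centre
r_pq = np.linalg.norm(pos - pos[p], axis=1)
f_sph = np.sum(f * wendland_c2(r_pq, l_p) * m / rho)  # Eq. (A4)
print(f"exact {f[p]:.4f} vs SPH {f_sph:.4f}")         # close in the interior
```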
From the above approximations, we reformulate the differential operators relevant to our study in their discrete SPH forms. We write the divergence of a vector field (V), the divergence of a tensor (T) and the gradient of a vector field (V) (Monaghan, 2005) as (see Appendix B for the complete derivations)

$(\nabla\cdot\boldsymbol{V})_{\mathrm{p}}=\frac{1}{\rho_{\mathrm{p}}}\sum_{q=1}^{N} m_{\mathrm{q}}\left(\boldsymbol{V}(\boldsymbol{r}_{\mathrm{q}})-\boldsymbol{V}(\boldsymbol{r}_{\mathrm{p}})\right)\cdot\nabla_{\mathrm{p}}W_{\mathrm{pq}},\qquad(\mathrm{A5})$

$(\nabla\cdot\boldsymbol{T})_{\mathrm{p}}=\rho_{\mathrm{p}}\sum_{q=1}^{N} m_{\mathrm{q}}\left(\frac{\boldsymbol{T}(\boldsymbol{r}_{\mathrm{q}})}{\rho_{\mathrm{q}}^{2}}+\frac{\boldsymbol{T}(\boldsymbol{r}_{\mathrm{p}})}{\rho_{\mathrm{p}}^{2}}\right)\cdot\nabla_{\mathrm{p}}W_{\mathrm{pq}},\qquad(\mathrm{A6})$

$(\nabla\boldsymbol{V})_{\mathrm{p}}=\sum_{q=1}^{N}\frac{m_{\mathrm{q}}}{\rho_{\mathrm{q}}}\left(\boldsymbol{V}(\boldsymbol{r}_{\mathrm{q}})-\boldsymbol{V}(\boldsymbol{r}_{\mathrm{p}})\right)\otimes\nabla_{\mathrm{p}}W_{\mathrm{pq}}.\qquad(\mathrm{A7})$

In Eq. (A7), ⊗ denotes the outer product. ∇[p]W[pq] is the gradient of the kernel at the coordinate r[p] − r[q] in the reference frame of particle p and is written as

$\nabla_{\mathrm{p}}W_{\mathrm{pq}}=\frac{\boldsymbol{r}_{\mathrm{p}}-\boldsymbol{r}_{\mathrm{q}}}{|\boldsymbol{r}_{\mathrm{p}}-\boldsymbol{r}_{\mathrm{q}}|}\,\frac{\partial W(|\boldsymbol{r}_{\mathrm{p}}-\boldsymbol{r}_{\mathrm{q}}|,l_{\mathrm{p}})}{\partial|\boldsymbol{r}_{\mathrm{p}}-\boldsymbol{r}_{\mathrm{q}}|}.\qquad(\mathrm{A8})$
Note that W[pq] is a scalar function, and consequently ∇[p]W[pq] is a vector; the inner product in Eq. (A5) is a scalar, the inner product in Eq. (A6) is a 2D vector, and the outer product in Eq. (A7) is a 2D tensor of rank 2. In addition to Eqs. (A2)–(A3), the smoothing kernel must have the following set of properties to avoid nonphysical behaviour and costly computation (Liu and Liu, 2003):

compact support: $W(|\boldsymbol{r}_{\mathrm{p}}-\boldsymbol{r}_{\mathrm{q}}|,l_{\mathrm{p}})=0$ for $|\boldsymbol{r}_{\mathrm{p}}-\boldsymbol{r}_{\mathrm{q}}|>l_{\mathrm{p}}$,  (A9)

positive definite: $W(|\boldsymbol{r}_{\mathrm{p}}-\boldsymbol{r}_{\mathrm{q}}|,l_{\mathrm{p}})\ge 0$,  (A10)

monotonically decreasing: $\partial W(|\boldsymbol{r}_{\mathrm{p}}-\boldsymbol{r}_{\mathrm{q}}|,l_{\mathrm{p}})/\partial(|\boldsymbol{r}_{\mathrm{p}}-\boldsymbol{r}_{\mathrm{q}}|)\le 0$,  (A11)

symmetric: $W(|\boldsymbol{r}_{\mathrm{p}}-\boldsymbol{r}_{\mathrm{q}}|,l_{\mathrm{p}})=W(-|\boldsymbol{r}_{\mathrm{p}}-\boldsymbol{r}_{\mathrm{q}}|,l_{\mathrm{p}})$,  (A12)

differentiable: $\partial^{n}W(|\boldsymbol{r}_{\mathrm{p}}-\boldsymbol{r}_{\mathrm{q}}|,l_{\mathrm{p}})/\partial(|\boldsymbol{r}_{\mathrm{p}}-\boldsymbol{r}_{\mathrm{q}}|)^{n}$ exists.  (A13)
Here, “differentiable” means that the kernel derivatives exist up to the highest order present in the equations. Finally, to ensure the consistency of the discretization of partial differential equations (PDEs; as defined in Belytschko et al., 1998) by the SPH approximations to the nth order, all kernel moments of order 1 to n need to vanish. In practice, the consistency conditions are satisfied when the number of neighbouring particles is sufficiently large and evenly distributed in the domain of influence (Fraga Filho, 2019). Note that, at the boundaries, the domain of influence of the particle is truncated, making it impossible to satisfy the kernel moment equations. This phenomenon is referred to as particle inconsistency and leads to poorer approximations of physical properties. No clear solution to this problem has been proposed in the literature yet.
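The particle inconsistency at boundaries can be illustrated by evaluating the discrete zeroth kernel moment, Σ[q] W[pq] Δ𝒜[q], which should equal 1 by Eq. (A2). A minimal sketch, reusing the Wendland C2 kernel and uniform particle layout assumed above:

```python
import numpy as np

def wendland_c2(r, l):
    # Standard 2D Wendland C2 kernel (assumed form, as in the sketch above).
    q = r / l
    return np.where(q < 1.0, 7.0 / (np.pi * l**2) * (1.0 - q)**4 * (1.0 + 4.0 * q), 0.0)

dx = 0.02
x, y = np.meshgrid(np.arange(0, 1, dx) + dx / 2, np.arange(0, 1, dx) + dx / 2)
pos = np.column_stack([x.ravel(), y.ravel()])
l_p = 3.0 * dx

def zeroth_moment(p):
    """Discrete version of Eq. (A2): sum_q W_pq * dx^2; ~1 when the support is full."""
    r_pq = np.linalg.norm(pos - pos[p], axis=1)
    return np.sum(wendland_c2(r_pq, l_p)) * dx**2

interior = np.argmin(np.sum((pos - 0.5)**2, 1))  # full support domain
corner = np.argmin(np.sum(pos**2, 1))            # support truncated by the boundary
print(f"interior: {zeroth_moment(interior):.3f}, corner: {zeroth_moment(corner):.3f}")
# The truncated support at the corner yields a moment well below 1
# (the particle inconsistency discussed above).
```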
Appendix B: Vector operators in SPH

Vector operators take different forms in the SPH framework because they operate only on the smoothing kernel W, and they need to ensure symmetric interactions between particles. The following subsections give the derivations of the operators relevant to our study.
B1 Divergence of a vector

First, the divergence of a vector needs to be rewritten in a form that can be symmetrized. To do so, we use the identity for the divergence of a scalar function times a vector and choose the scalar function to be the density, as follows:

$\nabla\cdot\boldsymbol{V}=\frac{1}{\rho}\left(\nabla\cdot(\rho\boldsymbol{V})-\boldsymbol{V}\cdot\nabla\rho\right).\qquad(\mathrm{B1})$
Now applying the integral interpolant approximation (A1) to the divergence term (∇⋅(ρV)) and to the density (ρ) gives

$\nabla\cdot(\rho\boldsymbol{V})=\int_{\mathcal{V}}\nabla^{\prime}\cdot(\rho^{\prime}\boldsymbol{V}^{\prime})\,W\,\mathrm{d}\boldsymbol{r}^{\prime}=\int_{\mathcal{V}}\nabla^{\prime}\cdot(\rho^{\prime}\boldsymbol{V}^{\prime}W)\,\mathrm{d}\boldsymbol{r}^{\prime}-\int_{\mathcal{V}}\rho^{\prime}\boldsymbol{V}^{\prime}\cdot\nabla^{\prime}W\,\mathrm{d}\boldsymbol{r}^{\prime},\qquad(\mathrm{B2})$

$\rho=\int_{\mathcal{V}}\rho^{\prime}\,W\,\mathrm{d}\boldsymbol{r}^{\prime}.\qquad(\mathrm{B3})$

In the above equations, the primed quantities represent the surrounding values. Note that the kernel is the only function that depends on both primed and non-primed positions, as defined in Eq. (A1). Using the divergence theorem, the first term in Eq. (B2) can be cancelled:

$\int_{\mathcal{V}}\nabla^{\prime}\cdot(\rho^{\prime}\boldsymbol{V}^{\prime}W)\,\mathrm{d}\boldsymbol{r}^{\prime}=\int_{\mathcal{S}}(\rho^{\prime}\boldsymbol{V}^{\prime}W)\cdot\mathrm{d}\boldsymbol{s}^{\prime}=0,\qquad(\mathrm{B4})$
since the integration surface 𝒮 encompassing the volume 𝒱 is arbitrary and the kernel W has the compact support property (Eq. A9). Applying the particle approximation (A4) to Eqs. (B2)–(B3), we obtain

$(\nabla\cdot(\rho\boldsymbol{V}))_{\mathrm{p}}=-\sum_{\mathrm{q}} m_{\mathrm{q}}\boldsymbol{V}_{\mathrm{q}}\cdot\nabla_{\mathrm{q}}W_{\mathrm{pq}}=\sum_{\mathrm{q}} m_{\mathrm{q}}\boldsymbol{V}_{\mathrm{q}}\cdot\nabla_{\mathrm{p}}W_{\mathrm{pq}},\qquad(\mathrm{B5})$

$\rho_{\mathrm{p}}=\sum_{\mathrm{q}} m_{\mathrm{q}}W_{\mathrm{pq}},\qquad(\mathrm{B6})$

where we used the identity $\nabla_{\mathrm{p}}=-\nabla_{\mathrm{q}}$, and p and q represent the current particle and its neighbour. Finally, substituting Eqs. (B5)–(B6) into Eq. (B1) gives the desired form of the operator:

$(\nabla\cdot\boldsymbol{V})_{\mathrm{p}}=\frac{1}{\rho_{\mathrm{p}}}\left(\sum_{\mathrm{q}} m_{\mathrm{q}}\boldsymbol{V}_{\mathrm{q}}\cdot\nabla_{\mathrm{p}}W_{\mathrm{pq}}-\boldsymbol{V}_{\mathrm{p}}\cdot\nabla_{\mathrm{p}}\sum_{\mathrm{q}} m_{\mathrm{q}}W_{\mathrm{pq}}\right)\qquad(\mathrm{B7})$

$=\frac{1}{\rho_{\mathrm{p}}}\sum_{\mathrm{q}} m_{\mathrm{q}}\left(\boldsymbol{V}_{\mathrm{q}}-\boldsymbol{V}_{\mathrm{p}}\right)\cdot\nabla_{\mathrm{p}}W_{\mathrm{pq}}.\qquad(\mathrm{B8})$
B2 Divergence of a 2D tensor field

Note that in the following derivation, the Einstein summation convention is used to simplify the calculation and the tensor representation. We start with the divergence of a 2D tensor divided by the density:

$\frac{\partial}{\partial x_{i}}\left(\frac{T_{ij}}{\rho}\right)=\frac{1}{\rho}\frac{\partial T_{ij}}{\partial x_{i}}-\frac{T_{ij}}{\rho^{2}}\frac{\partial\rho}{\partial x_{i}}.\qquad(\mathrm{B9})$

Reorganizing the terms gives

$\frac{\partial T_{ij}}{\partial x_{i}}=\rho\left[\frac{\partial}{\partial x_{i}}\left(\frac{T_{ij}}{\rho}\right)+\frac{T_{ij}}{\rho^{2}}\frac{\partial\rho}{\partial x_{i}}\right].\qquad(\mathrm{B10})$
Now applying the interpolant approximation (A1) to the first term in the bracket leads to

$\frac{\partial}{\partial x_{i}}\left(\frac{T_{ij}}{\rho}\right)=\int_{\mathcal{V}}\frac{\partial}{\partial x_{i}^{\prime}}\left(\frac{T_{ij}^{\prime}}{\rho^{\prime}}\right)W\,\mathrm{d}\boldsymbol{r}^{\prime}\qquad(\mathrm{B11})$

$=\int_{\mathcal{V}}\frac{\partial}{\partial x_{i}^{\prime}}\left(\frac{T_{ij}^{\prime}}{\rho^{\prime}}W\right)\mathrm{d}\boldsymbol{r}^{\prime}-\int_{\mathcal{V}}\left(\frac{T_{ij}^{\prime}}{\rho^{\prime}}\right)\frac{\partial W}{\partial x_{i}^{\prime}}\,\mathrm{d}\boldsymbol{r}^{\prime}.\qquad(\mathrm{B12})$

As in the divergence-of-a-vector derivation (Sect. B1), the first integral above vanishes by the divergence theorem, and applying the particle approximation gives

$\left(\frac{\partial}{\partial x_{i}}\left(\frac{T_{ij}}{\rho}\right)\right)_{\mathrm{p}}=-\sum_{\mathrm{q}}\left(m_{\mathrm{q}}\frac{(T_{ij})_{\mathrm{q}}}{\rho_{\mathrm{q}}^{2}}\right)\frac{\partial W_{\mathrm{pq}}}{\partial(x_{i})_{\mathrm{q}}}=\sum_{\mathrm{q}}\left(m_{\mathrm{q}}\frac{(T_{ij})_{\mathrm{q}}}{\rho_{\mathrm{q}}^{2}}\right)\frac{\partial W_{\mathrm{pq}}}{\partial(x_{i})_{\mathrm{p}}}.\qquad(\mathrm{B13})$
Substituting this into Eq. (B10) and using the equality in Eq. (B6), we get

$\left(\frac{\partial T_{ij}}{\partial x_{i}}\right)_{\mathrm{p}}=\rho_{\mathrm{p}}\left[\sum_{\mathrm{q}}\left(m_{\mathrm{q}}\frac{(T_{ij})_{\mathrm{q}}}{\rho_{\mathrm{q}}^{2}}\right)\frac{\partial W_{\mathrm{pq}}}{\partial(x_{i})_{\mathrm{p}}}+\frac{(T_{ij})_{\mathrm{p}}}{\rho_{\mathrm{p}}^{2}}\frac{\partial}{\partial(x_{i})_{\mathrm{p}}}\left(\sum_{\mathrm{q}} m_{\mathrm{q}}W_{\mathrm{pq}}\right)\right]\qquad(\mathrm{B14})$

$=\rho_{\mathrm{p}}\left[\sum_{\mathrm{q}} m_{\mathrm{q}}\left(\frac{(T_{ij})_{\mathrm{q}}}{\rho_{\mathrm{q}}^{2}}+\frac{(T_{ij})_{\mathrm{p}}}{\rho_{\mathrm{p}}^{2}}\right)\frac{\partial W_{\mathrm{pq}}}{\partial(x_{i})_{\mathrm{p}}}\right]\qquad(\mathrm{B15})$

$=\rho_{\mathrm{p}}\sum_{\mathrm{q}} m_{\mathrm{q}}\left(\frac{\boldsymbol{T}_{\mathrm{q}}}{\rho_{\mathrm{q}}^{2}}+\frac{\boldsymbol{T}_{\mathrm{p}}}{\rho_{\mathrm{p}}^{2}}\right)\cdot\nabla_{\mathrm{p}}W_{\mathrm{pq}},\qquad(\mathrm{B16})$

which is the form presented in Eq. (A6).
B3 Gradient of a vector field

To demonstrate Eq. (A7), we first write

$\nabla(a\boldsymbol{V})=a\nabla\boldsymbol{V}+\boldsymbol{V}\otimes\nabla a.\qquad(\mathrm{B17})$

Choosing a = 1 and recalling that the zeroth-order moment of the kernel also equals 1, we can substitute the latter in the last term of expression (B17) and obtain

$\nabla(\boldsymbol{V})=\nabla\boldsymbol{V}+\boldsymbol{V}\otimes\nabla M_{0}\qquad(\mathrm{B18})$

$=\nabla\boldsymbol{V}+\boldsymbol{V}\otimes\nabla\int_{\mathcal{V}}W(\boldsymbol{r}-\boldsymbol{r}^{\prime},l_{\mathrm{p}})\,\mathrm{d}\boldsymbol{r}^{\prime}.\qquad(\mathrm{B19})$

Finally, using the particle approximation (A4), we get

$(\nabla\boldsymbol{V})_{\mathrm{p}}=\frac{\partial}{\partial(x_{i})_{\mathrm{p}}}\sum_{\mathrm{q}}\frac{m_{\mathrm{q}}}{\rho_{\mathrm{q}}}(V_{j})_{\mathrm{q}}W_{\mathrm{pq}}-(V_{j})_{\mathrm{p}}\frac{\partial}{\partial(x_{i})_{\mathrm{p}}}\sum_{\mathrm{q}}\frac{m_{\mathrm{q}}}{\rho_{\mathrm{q}}}W_{\mathrm{pq}}\qquad(\mathrm{B20})$

$=\sum_{\mathrm{q}}\frac{m_{\mathrm{q}}}{\rho_{\mathrm{q}}}\left((V_{j})_{\mathrm{q}}-(V_{j})_{\mathrm{p}}\right)\frac{\partial}{\partial(x_{i})_{\mathrm{p}}}W_{\mathrm{pq}}\qquad(\mathrm{B21})$

$=\sum_{\mathrm{q}}\frac{m_{\mathrm{q}}}{\rho_{\mathrm{q}}}\left(\boldsymbol{V}_{\mathrm{q}}-\boldsymbol{V}_{\mathrm{p}}\right)\otimes\nabla_{\mathrm{p}}W_{\mathrm{pq}},\qquad(\mathrm{B22})$

which is Eq. (A7), where the Einstein summation convention was once again used to simplify the derivation.
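As a numerical sanity check of Eq. (B8), the following sketch applies the discrete divergence operator to a linear velocity field on a uniform particle distribution, again assuming the 2D Wendland C2 kernel of the sketches above (its radial derivative is ∂W/∂r = −140 q(1−q)³/(π l³) for q = r/l < 1):

```python
import numpy as np

def grad_w(dr, l):
    """Kernel gradient, Eq. (A8), for the assumed 2D Wendland C2 kernel.
    dr: array of r_p - r_q vectors; returns one gradient vector per neighbour."""
    r = np.linalg.norm(dr, axis=1)
    q = r / l
    dwdr = np.where((q > 0) & (q < 1), -140.0 * q * (1.0 - q)**3 / (np.pi * l**3), 0.0)
    unit = np.divide(dr, r[:, None], out=np.zeros_like(dr), where=r[:, None] > 0)
    return unit * dwdr[:, None]

dx = 0.02
x, y = np.meshgrid(np.arange(0, 1, dx) + dx / 2, np.arange(0, 1, dx) + dx / 2)
pos = np.column_stack([x.ravel(), y.ravel()])
rho, m, l_p = 1.0, 1.0 * dx**2, 3.0 * dx

vel = np.column_stack([0.3 * pos[:, 0], -0.1 * pos[:, 1]])  # div(V) = 0.3 - 0.1 = 0.2
p = np.argmin(np.sum((pos - 0.5)**2, 1))                    # interior particle
gw = grad_w(pos[p] - pos, l_p)                              # nabla_p W_pq for all q
div_sph = np.sum(m * np.sum((vel - vel[p]) * gw, axis=1)) / rho  # Eq. (B8)
print(f"SPH divergence = {div_sph:.4f} (exact 0.2)")
```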
Output data from the SPH sea-ice model simulations, along with a version of the model used and the analysis programs, are available at https://doi.org/10.5281/zenodo.6950156 (Marquis et al., 2022).
OM coded the model, ran all the simulations, analyzed results and led the writing of the manuscript. BT participated in weekly discussions during the course of the work and edited the manuscript. JFL
and MI participated in monthly discussions during the course of the work and edited the manuscript.
The contact author has declared that none of the authors has any competing interests.
Publisher’s note: Copernicus Publications remains neutral with regard to jurisdictional claims made in the text, published maps, institutional affiliations, or any other geographical representation
in this paper. While Copernicus Publications makes every effort to include appropriate place names, the final responsibility lies with the authors.
Oreste Marquis is grateful for the support from McGill University, Québec-Océan and Arctrain Canada.
This project was partially funded by grants and contributions from the Natural Sciences and Engineering Research Council Discovery Program (ONR-N00014-11-1-0977) and the National Research Council (A-0038111) awarded to Bruno Tremblay.
This paper was edited by Jari Haapala and reviewed by three anonymous referees.
Adcroft, A., Anderson, W., Balaji, V., Blanton, C., Bushuk, M., Dufour, C. O., Dunne, J. P., Griffies, S. M., Hallberg, R., Harrison, M. J., Held, I. M., Jansen, M. F., John, J. G., Krasting, J. P., Langenhorst, A. R., Legg, S., Liang, Z., McHugh, C., Radhakrishnan, A., Reichl, B. G., Rosati, T., Samuels, B. L., Shao, A., Stouffer, R., Winton, M., Wittenberg, A. T., Xiang, B., Zadeh, N., and Zhang, R.: The GFDL Global Ocean and Sea Ice Model OM4.0: Model Description and Simulation Features, J. Adv. Model. Earth Sy., 11, 3167–3211, https://doi.org/10.1029/2019ms001726, 2019.
Beatty, C. and Holland, D.: Modeling landfast sea ice by adding tensile strength, J. Phys. Oceanogr., 40, 185–198, https://doi.org/10.1175/2009JPO4105.1, 2010.
Belytschko, T., Krongauz, Y., Dolbow, J., and Gerlach, C.: On the completeness of meshfree particle methods, Int. J. Numer. Meth. Eng., 43, 785–819, https://doi.org/10.1002/(sici)1097-0207(19981115)43:5<785::aid-nme420>3.0.co;2-9, 1998.
Bouchat, A. and Tremblay, B.: Using sea-ice deformation fields to constrain the mechanical strength parameters of geophysical sea ice, J. Geophys. Res.-Oceans, 122, 5802–5825, https://doi.org/10.1002/2017jc013020, 2017.
Bouchat, A., Hutter, N., Chanut, J., Dupont, F., Dukhovskoy, D., Garric, G., Lee, Y. J., Lemieux, J.-F., Lique, C., Losch, M., Maslowski, W., Myers, P. G., Ólason, E., Rampal, P., Rasmussen, T., Talandier, C., Tremblay, B., and Wang, Q.: Sea Ice Rheology Experiment (SIREx): 1. Scaling and Statistical Properties of Sea-Ice Deformation Fields, J. Geophys. Res.-Oceans, 127, e2021JC017667, https://doi.org/10.1029/2021jc017667, 2022.
Cavelan, A., Cabezón, R. M., Korndorfer, J. H. M., and Ciorba, F. M.: Finding Neighbors in a Forest: A b-tree for Smoothed Particle Hydrodynamics Simulations, arXiv [preprint], https://doi.org/10.48550/arXiv.1910.02639, 2019.
Chalk, C., Pastor, M., Peakall, J., Borman, D., Sleigh, P., Murphy, W., and Fuentes, R.: Stress-Particle Smoothed Particle Hydrodynamics: An application to the failure and post-failure behaviour of slopes, Comput. Meth. Appl. M., 366, 113034, https://doi.org/10.1016/j.cma.2020.113034, 2020.
Chen, J. and Beraun, J.: A generalized smoothed particle hydrodynamics method for nonlinear dynamic problems, Comput. Meth. Appl. M., 190, 225–239, https://doi.org/10.1016/s0045-7825(99)00422-3, 2000.
Coon, M., Kwok, R., Levy, G., Pruis, M., Schreyer, H., and Sulsky, D.: Arctic Ice Dynamics Joint Experiment (AIDJEX) assumptions revisited and found inadequate, J. Geophys. Res., 112, C11S90, https://doi.org/10.1029/2005jc003393, 2007.
Damsgaard, A., Adcroft, A., and Sergienko, O.: Application of Discrete Element Methods to Approximate Sea Ice Dynamics, J. Adv. Model. Earth Sy., 10, 2228–2244, https://doi.org/10.1029/2018ms001299, 2018.
Danilov, S., Wang, Q., Timmermann, R., Iakovlev, N., Sidorenko, D., Kimmritz, M., Jung, T., and Schröter, J.: Finite-Element Sea Ice Model (FESIM), version 2, Geosci. Model Dev., 8, 1747–1761, https://doi.org/10.5194/gmd-8-1747-2015, 2015.
Dansereau, V., Weiss, J., Saramito, P., and Lattes, P.: A Maxwell elasto-brittle rheology for sea ice modelling, The Cryosphere, 10, 1339–1359, https://doi.org/10.5194/tc-10-1339-2016, 2016.
Dansereau, V., Weiss, J., Saramito, P., Lattes, P., and Coche, E.: Ice bridges and ridges in the Maxwell-EB sea ice rheology, The Cryosphere, 11, 2033–2058, https://doi.org/10.5194/tc-11-2033-2017, 2017.
Dehnen, W. and Aly, H.: Improving convergence in smoothed particle hydrodynamics simulations without pairing instability, Mon. Not. R. Astron. Soc., 425, 1068–1082, https://doi.org/10.1111/j.1365-2966.2012.21439.x, 2012.
Dumont, D., Gratton, Y., and Arbetter, T. E.: Modeling the Dynamics of the North Water Polynya Ice Bridge, J. Phys. Oceanogr., 39, 1448–1461, https://doi.org/10.1175/2008jpo3965.1, 2009.
Dyka, C. and Ingel, R.: An approach for tension instability in smoothed particle hydrodynamics (SPH), Comput. Struct., 57, 573–580, https://doi.org/10.1016/0045-7949(95)00059-p, 1995.
Flato, G. M.: A particle-in-cell sea-ice model, Atmos. Ocean, 31, 339–358, https://doi.org/10.1080/07055900.1993.9649475, 1993.
Fleissner, F., Gaugele, T., and Eberhard, P.: Applications of the discrete element method in mechanical engineering, Multibody Syst. Dyn., 18, 81–94, https://doi.org/10.1007/s11044-007-9066-2, 2007.
Fraga Filho, C. A.: Smoothed Particle Hydrodynamics: Fundamentals and Basic Applications in Continuum Mechanics, Springer, ISBN 978-3-030-00772-0, https://doi.org/10.1007/978-3-030-00773-7, 2019.
Gingold, R. A. and Monaghan, J. J.: Smoothed particle hydrodynamics: theory and application to non-spherical stars, Mon. Not. R. Astron. Soc., 181, 375–389, https://doi.org/10.1093/mnras/181.3.375, 1977.
Girard, L., Bouillon, S., Weiss, J., Amitrano, D., Fichefet, T., and Legat, V.: A new modeling framework for sea-ice mechanics based on elasto-brittle rheology, Ann. Glaciol., 52, 123–132, https://doi.org/10.3189/172756411795931499, 2011.
Gray, J. and Morland, L.: A Two-Dimensional Model for the Dynamics of Sea Ice, Philos. T. Roy. Soc. A, 347, 219–290, https://doi.org/10.1098/rsta.1994.0045, 1994.
Gray, J. M. N. T.: Loss of Hyperbolicity and Ill-posedness of the Viscous–Plastic Sea Ice Rheology in Uniaxial Divergent Flow, J. Phys. Oceanogr., 29, 2920–2929, https://doi.org/10.1175/1520-0485(1999)029<2920:lohaip>2.0.co;2, 1999.
Gutfraind, R. and Savage, S. B.: Smoothed Particle Hydrodynamics for the Simulation of Broken-Ice Fields: Mohr–Coulomb-Type Rheology and Frictional Boundary Conditions, J. Comput. Phys., 134, 203–215, https://doi.org/10.1006/jcph.1997.5681, 1997.
Herman, A.: Discrete-Element bonded-particle Sea Ice model DESIgn, version 1.3a – model description and implementation, Geosci. Model Dev., 9, 1219–1241, https://doi.org/10.5194/gmd-9-1219-2016, 2016.
Hibler, W., Hutchings, J., and Ip, C.: Sea-ice arching and multiple flow states of Arctic pack ice, Ann. Glaciol., 44, 339–344, https://doi.org/10.3189/172756406781811448, 2006.
Hibler, W. D.: A Dynamic Thermodynamic Sea Ice Model, J. Phys. Oceanogr., 9, 815–846, https://doi.org/10.1175/1520-0485(1979)009<0815:adtsim>2.0.co;2, 1979.
Hopkins, M. A. and Thorndike, A. S.: Floe formation in Arctic sea ice, J. Geophys. Res., 111, C11S23, https://doi.org/10.1029/2005jc003352, 2006.
Hosseini, K., Omidvar, P., Kheirkhahan, M., and Farzin, S.: Smoothed particle hydrodynamics for the interaction of Newtonian and non-Newtonian fluids using the μ(I) model, Powder Technol., 351, 325–337, https://doi.org/10.1016/j.powtec.2019.02.045, 2019.
Hunke, E. C.: Viscous–Plastic Sea Ice Dynamics with the EVP Model: Linearization Issues, J. Comput. Phys., 170, 18–38, https://doi.org/10.1006/jcph.2001.6710, 2001.
Hunke, E. C. and Dukowicz, J. K.: An Elastic–Viscous–Plastic Model for Sea Ice Dynamics, J. Phys. Oceanogr., 27, 1849–1867, https://doi.org/10.1175/1520-0485(1997)027<1849:aevpmf>2.0.co;2, 1997.
Hutchings, J. K., Jasak, H., and Laxon, S. W.: A strength implicit correction scheme for the viscous-plastic sea ice model, Ocean Model., 7, 111–133, https://doi.org/10.1016/S1463-5003(03)00040-4, 2004.
Hutter, N., Bouchat, A., Dupont, F., Dukhovskoy, D., Koldunov, N., Lee, Y. J., Lemieux, J.-F., Lique, C., Losch, M., Maslowski, W., Myers, P. G., Ólason, E., Rampal, P., Rasmussen, T., Talandier, C., Tremblay, B., and Wang, Q.: Sea Ice Rheology Experiment (SIREx): 2. Evaluating Linear Kinematic Features in High-Resolution Sea Ice Simulations, J. Geophys. Res.-Oceans, 127, e2021JC017666, https://doi.org/10.1029/2021jc017666, 2022.
Ji, S., Shen, H., Wang, Z., Shen, H., and Yue, Q.: A viscoelastic-plastic constitutive model with Mohr-Coulomb yielding criterion for sea ice dynamics, Acta Oceanol. Sin., 24, 54–65, 2005.
Johnson, G. R. and Beissel, S. R.: Normalized smoothing functions for SPH impact computations, Int. J. Numer. Meth. Eng., 39, 2725–2741, https://doi.org/10.1002/(SICI)1097-0207(19960830)39:16<2725::AID-NME973>3.0.CO;2-9, 1996.
Kreyscher, M., Harder, M., Lemke, P., and Flato, G. M.: Results of the Sea Ice Model Intercomparison Project: Evaluation of sea ice rheology schemes for use in climate simulations, J. Geophys. Res.-Oceans, 105, 11299–11320, https://doi.org/10.1029/1999jc000016, 2000.
Lahiri, S. K., Bhattacharya, K., Shaw, A., and Ramachandra, L. S.: A stable SPH with adaptive B-spline kernel, arXiv [preprint], https://doi.org/10.48550/arXiv.2001.03416, 2020.
Lemieux, J.-F. and Tremblay, B.: Numerical convergence of viscous-plastic sea ice models, J. Geophys. Res., 114, C05009, https://doi.org/10.1029/2008jc005017, 2009.
Li, B., Li, H., Liu, Y., Wang, A., and Ji, S.: A modified discrete element model for sea ice dynamics, Acta Oceanol. Sin., 33, 56–63, https://doi.org/10.1007/s13131-014-0428-3, 2014.
Lilja, V.-P., Polojärvi, A., Tuhkuri, J., and Paavilainen, J.: Finite-discrete element modelling of sea ice sheet fracture, Int. J. Solids Struct., 217–218, 228–258, https://doi.org/10.1016/j.ijsolstr.2020.11.028, 2021.
Lipscomb, W. H., Hunke, E. C., Maslowski, W., and Jakacki, J.: Ridging, strength, and stability in high-resolution sea ice models, J. Geophys. Res., 112, C03S91, https://doi.org/10.1029/2005jc003355, 2007.
Liu, G. and Liu, M.: Smoothed Particle Hydrodynamics: A Meshfree Particle Method, World Scientific, ISBN 9789812384560, 2003.
Liu, M. B. and Liu, G. R.: Smoothed Particle Hydrodynamics (SPH): an Overview and Recent Developments, Arch. Comput. Method. E., 17, 25–76, https://doi.org/10.1007/s11831-010-9040-7, 2010.
Liu, M. B., Liu, G. R., and Lam, K. Y.: A one-dimensional meshfree particle formulation for simulating shock waves, Shock Waves, 13, 201–211, https://doi.org/10.1007/s00193-003-0207-0, 2003.
Losch, M., Menemenlis, D., Campin, J.-M., Heimbach, P., and Hill, C.: On the formulation of sea-ice models. Part 1: Effects of different solver implementations and parameterizations, Ocean Model., 33, 129–144, https://doi.org/10.1016/j.ocemod.2009.12.008, 2010.
Lucy, L. B.: A numerical approach to the testing of the fission hypothesis, Astron. J., 82, 1013, https://doi.org/10.1086/112164, 1977.
Marquis, O.: McGill-sea-ice/SIMP: Sea Ice Modelling Particles (v1.0.0), Zenodo [code], https://doi.org/10.5281/zenodo.10714497, 2024.
Marquis, O., Tremblay, B., Lemieux, J.-F., and Islam, M.: Smoothed Particle Hydrodynamics Implementation of the Standard Viscous-Plastic Sea-Ice Model and Validation in Simple Idealized Experiments, Zenodo [data set], https://doi.org/10.5281/zenodo.6950156, 2022.
McPhee, M. G.: The Effect of the Oceanic Boundary Layer on the Mean Drift of Pack Ice: Application of a Simple Model, J. Phys. Oceanogr., 9, 388–400, https://doi.org/10.1175/1520-0485(1979)009<0388:teotob>2.0.co;2, 1979.
Mehlmann, C., Danilov, S., Losch, M., Lemieux, J. F., Hutter, N., Richter, T., Blain, P., Hunke, E. C., and Korn, P.: Simulating Linear Kinematic Features in Viscous-Plastic Sea Ice Models on Quadrilateral and Triangular Grids With Different Variable Staggering, J. Adv. Model. Earth Sy., 13, e2021MS002523, https://doi.org/10.1029/2021ms002523, 2021.
Monaghan, J.: SPH without a Tensile Instability, J. Comput. Phys., 159, 290–311, https://doi.org/10.1006/jcph.2000.6439, 2000.
Monaghan, J.: Smoothed Particle Hydrodynamics and Its Diverse Applications, Annu. Rev. Fluid Mech., 44, 323–346, https://doi.org/10.1146/annurev-fluid-120710-101220, 2012.
Monaghan, J. and Kajtar, J.: SPH particle boundary forces for arbitrary boundaries, Comput. Phys. Commun., 180, 1811–1820, https://doi.org/10.1016/j.cpc.2009.05.008, 2009.
Monaghan, J. J.: Smoothed particle hydrodynamics, Rep. Prog. Phys., 68, 1703–1759, https://doi.org/10.1088/0034-4885/68/8/r01, 2005.
Morland, L. W. and Staroszczyk, R.: A material coordinate treatment of the sea–ice dynamics equations, P. Roy. Soc. Lond. A, 454, 2819–2857, https://doi.org/10.1098/rspa.1998.0283, 1998.
Morris, J. P., Fox, P. J., and Zhu, Y.: Modeling Low Reynolds Number Incompressible Flows Using SPH, J. Comput. Phys., 136, 214–226, https://doi.org/10.1006/jcph.1997.5776, 1997.
Natanson, I. P.: Theory of Functions of a Real Variable, vol. 2, Frederick Ungar Publishing Co., New York, ISBN 9780486806433, 1961.
Overland, J. E., McNutt, S. L., Salo, S., Groves, J., and Li, S.: Arctic sea ice as a granular plastic, J. Geophys. Res.-Oceans, 103, 21845–21867, https://doi.org/10.1029/98jc01263, 1998.
Peiró, J. and Sherwin, S.: Finite difference, finite element and finite volume methods for partial differential equations, in: Handbook of Materials Modeling, Springer, 2415–2446, https://doi.org/10.1007/978-1-4020-3286-8_127, 2005.
Plante, M., Tremblay, B., Losch, M., and Lemieux, J.-F.: Landfast sea ice material properties derived from ice bridge simulations using the Maxwell elasto-brittle rheology, The Cryosphere, 14, 2137–2157, https://doi.org/10.5194/tc-14-2137-2020, 2020.
Rabatel, M., Labbé, S., and Weiss, J.: Dynamics of an assembly of rigid ice floes, J. Geophys. Res.-Oceans, 120, 5887–5909, https://doi.org/10.1002/2015jc010909, 2015.
Rampal, P., Bouillon, S., Ólason, E., and Morlighem, M.: neXtSIM: a new Lagrangian sea ice model, The Cryosphere, 10, 1055–1073, https://doi.org/10.5194/tc-10-1055-2016, 2016.
Ranta, J., Polojärvi, A., and Tuhkuri, J.: Limit mechanisms for ice loads on inclined structures: Buckling, Cold Reg. Sci. Technol., 147, 34–44, https://doi.org/10.1016/j.coldregions.2017.12.009, 2018.
Rhoades, C. E.: A fast algorithm for calculating particle interactions in smooth particle hydrodynamic simulations, Comput. Phys. Commun., 70, 478–482, https://doi.org/10.1016/0010-4655(92)90109-c, 1992.
Ringeisen, D., Losch, M., Tremblay, L. B., and Hutter, N.: Simulating intersection angles between conjugate faults in sea ice with different viscous–plastic rheologies, The Cryosphere, 13, 1167–1186, https://doi.org/10.5194/tc-13-1167-2019, 2019.
Salehizadeh, A. M. and Shafiei, A. R.: Modeling of granular column collapses with μ(I) rheology using smoothed particle hydrodynamic method, Granul. Matter, 21, 32, https://doi.org/10.1007/s10035-019-0886-6, 2019.
Schreyer, H. L., Sulsky, D. L., Munday, L. B., Coon, M. D., and Kwok, R.: Elastic-decohesive constitutive model for sea ice, J. Geophys. Res., 111, C11S26, https://doi.org/10.1029/2005jc003334, 2006.
Schulson, E. M.: Compressive shear faults within arctic sea ice: Fracture on scales large and small, J. Geophys. Res., 109, C07016, https://doi.org/10.1029/2003jc002108, 2004.
Sheikh, B., Qiu, T., and Ahmadipur, A.: Comparison of SPH boundary approaches in simulating frictional soil–structure interaction, Acta Geotech., 16, 2389–2408, https://doi.org/10.1007/s11440-020-01063-y, 2020.
Shen, H. T., Shen, H., and Tsai, S.-M.: Dynamic transport of river ice, J. Hydraul. Res., 28, 659–671, https://doi.org/10.1080/00221689009499017, 1990.
Staroszczyk, R.: SPH Modelling of Sea-ice Pack Dynamics, Archives of Hydro-Engineering and Environmental Mechanics, 64, 115–137, https://doi.org/10.1515/heem-2017-0008, 2017.
Staroszczyk, R.: Simulation of Sea-ice Thermodynamics by a Smoothed Particle Hydrodynamics Method, Archives of Hydro-Engineering and Environmental Mechanics, 65, 277–299, https://doi.org/10.1515/heem-2018-0017, 2018.
Sulsky, D., Schreyer, H., Peterson, K., Kwok, R., and Coon, M.: Using the material-point method to model sea ice dynamics, J. Geophys. Res., 112, C02S90, https://doi.org/10.1029/2005jc003329, 2007.
Sun, P., Colagrossi, A., Marrone, S., Antuono, M., and Zhang, A.: Multi-resolution Delta-plus-SPH with tensile instability control: Towards high Reynolds number flows, Comput. Phys. Commun., 224, 63–80, https://doi.org/10.1016/j.cpc.2017.11.016, 2018.
Sutherland, P. and Dumont, D.: Marginal Ice Zone Thickness and Extent due to Wave Radiation Stress, J. Phys. Oceanogr., 48, 1885–1901, https://doi.org/10.1175/jpo-d-17-0167.1, 2018.
Swegle, J., Hicks, D., and Attaway, S.: Smoothed Particle Hydrodynamics Stability Analysis, J. Comput. Phys., 116, 123–134, https://doi.org/10.1006/jcph.1995.1010, 1995.
Tsamados, M., Feltham, D. L., and Wilchinsky, A. V.: Impact of a new anisotropic rheology on simulations of Arctic sea ice, J. Geophys. Res.-Oceans, 118, 91–107, https://doi.org/10.1029/2012jc007990, 2013.
Wang, Z., Shen, H. T., and Wu, H.: A Lagrangian sea ice model with discrete parcel method, in: Proceedings of the 14th International Symposium on Ice, Potsdam, Germany, 1, 313–320, 1998.
Weiss, J., Schulson, E. M., and Stern, H. L.: Sea ice rheology from in-situ, satellite and laboratory observations: Fracture and friction, Earth Planet. Sc. Lett., 255, 1–8, https://doi.org/10.1016/j.epsl.2006.11.033, 2007.
Wendland, H.: Piecewise polynomial, positive definite and compactly supported radial functions of minimal degree, Adv. Comput. Math., 4, 389–396, https://doi.org/10.1007/bf02123482, 1995.
West, B., O'Connor, D., Parno, M., Krackow, M., and Polashenski, C.: Bonded Discrete Element Simulations of Sea Ice With Non-Local Failure: Applications to Nares Strait, J. Adv. Model. Earth Sy., 14, e2021MS002614, https://doi.org/10.1029/2021ms002614, 2022.
Williams, J. and Tremblay, L. B.: The dependence of energy dissipation on spatial resolution in a viscous-plastic sea-ice model, Ocean Model., 130, 40–47, https://doi.org/10.1016/j.ocemod.2018.08.001, 2018.
Williams, J., Tremblay, L. B., and Lemieux, J.-F.: The effects of plastic waves on the numerical convergence of the viscous–plastic and elastic-viscous–plastic sea-ice models, J. Comput. Phys., 340, 519–533, https://doi.org/10.1016/j.jcp.2017.03.048, 2017.
Xia, X. and Liang, Q.: A GPU-accelerated smoothed particle hydrodynamics (SPH) model for the shallow water equations, Environ. Model. Softw., 75, 28–43, https://doi.org/10.1016/j.envsoft.2015.10.002,
Yang, E., Bui, H. H., Sterck, H. D., Nguyen, G. D., and Bouazza, A.: A scalable parallel computing SPH framework for predictions of geophysical granular flows, Comput. Geotech., 121, 103474, https://
doi.org/10.1016/j.compgeo.2020.103474, 2020.a, b
Zhang, N., Zheng, X., and Ma, Q.: Updated Smoothed Particle Hydrodynamics for Simulating Bending and Compression Failure Progress of Ice, Water, 9, 882, https://doi.org/10.3390/w9110882, 2017.a | {"url":"https://tc.copernicus.org/articles/18/1013/2024/","timestamp":"2024-11-05T00:41:57Z","content_type":"text/html","content_length":"574319","record_id":"<urn:uuid:2b3f555a-a4ce-443e-b722-7deebe2c8485>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00572.warc.gz"} |
7,741 hectometers per square second to kilometers per square second
7,741 Hectometers per square second = 774.1 Kilometers per square second
This conversion of 7,741 hectometers per square second to kilometers per square second has been calculated by multiplying 7,741 hectometers per square second by 0.1 and the result is 774.1 kilometers
per square second. | {"url":"https://unitconverter.io/hectometers-per-square-second/kilometers-per-square-second/7741","timestamp":"2024-11-14T13:25:43Z","content_type":"text/html","content_length":"27292","record_id":"<urn:uuid:ecf202c4-de20-4c1a-8272-89ff2522cef9>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00314.warc.gz"} |
Estimating Pi Using Monte Carlo Simulations
Danial Khosravi
Pi (3.141593) is one of the few magical numbers in mathematics that we often trust, accept and use in our calculations. However, you might be curious to know where it comes from. Pi can be obtained analytically, which gives the familiar value 3.141593 (to six decimal places), but here we're going to estimate the value of pi numerically by running Monte Carlo simulations.
As you might remember from primary or elementary school, the formula to obtain the area of a circle is:
$A = \pi \times r^2$
where $r$ is the radius of the circle. So if we have a circle with $r=1$, then its area is just $\pi$ !
Now imagine we have a circle inscribed in a square with sides equal to 2. If we look at one corner of it, we see that $\frac{1}{4}$ of the area of the circle lies inside a square with sides equal to 1, so the area of that quarter of the circle is $\frac{\pi}{4}$.
So if we denote the area of the quarter of the circle as $Q$, then:
$\pi = 4 \times \frac{Q}{r^2}$
Now, using Monte Carlo simulations, we generate a large number, let's say a million, of $x$ and $y$ coordinates that are uniformly distributed, $Unif(0, 1)$, and using the Pythagorean formula we find their distances from the centre of the circle
$r' = \sqrt{x^2 + y^2}$
If $r'$ is smaller than the radius of the circle, $r=1$, then the point is inside the circle; otherwise it is outside. If we run enough simulations, the proportion of points that fall inside the circle to the total number of trials essentially gives us the area of that slice of the circle, $Q$.
For a circle with $r=1$, if we multiply that area by 4, using the $\pi$ formula above, we get our estimate for $\pi$.
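A quick note on accuracy: each sampled point lands inside the quarter circle with probability $p = \pi/4 \approx 0.785$, so the estimator $\hat{\pi} = 4 \times \frac{\text{hits}}{n}$ has standard deviation
$\sigma_{\hat{\pi}} = 4\sqrt{\frac{p(1-p)}{n}} \approx \frac{1.64}{\sqrt{n}}$
which is about $0.0016$ for the one million trials used below.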
Plot of 500,000 simulations
Plot of 1,000,000 simulations
library(ggplot2) # this import was missing from the post but is needed for the plot

trials <- 1000000
r <- 1

# Sample points uniformly in the unit square (one quadrant of the circle)
x <- runif(trials, 0, r)
y <- runif(trials, 0, r)

# Distance of each point from the centre of the circle
distance_from_center <- sqrt(x^2 + y^2)

# TRUE when a point falls inside the quarter circle
inbounds <- distance_from_center < r

ggplot(data.frame(x, y, inbounds), aes(x, y, color = inbounds)) +
  theme_bw() +
  guides(color = "none") +
  geom_point(size = 0.002) +
  ggtitle(paste(trials, 'Trials'))

# Estimate of pi from the fraction of points inside, and its relative error
pi_estimate <- 4 * sum(inbounds) / trials
error = 1 - pi_estimate/pi | {"url":"https://danialk.github.io/blog/2016/11/28/estimating-pi-using-monte-carlo-simulations/","timestamp":"2024-11-04T11:16:43Z","content_type":"text/html","content_length":"102626","record_id":"<urn:uuid:472827a2-a180-406b-9a3c-92324818bd56>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00899.warc.gz"} |
Using data from two datasets for an equation.
I have multiple different data sets, and I am trying to use two at a time for calculations. For instance:
I have a dataset with approximately 90,000 rows, each row has columns: policy , age, gender, loyalty (years with company), smoker, zip code, accidents, premium.
I have another dataset with 6 rows. each row has columns: accidents, Charge.
What I am trying to do is, for each row, take the accident count from the first dataset, match it to the same number in the accidents column of the second dataset, and then use the corresponding value in the charge column as a multiplier.
I have spent a few hours trying different things and haven't really come up with any solutions.
EDIT: I have uploaded a visual I quickly made to make it easier to understand what I am trying to accomplish
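To sketch the lookup logic the question describes (an illustrative Python/pandas version with made-up numbers, not SAS code; in SAS the same join can be written with PROC SQL or a sorted MERGE BY accidents):

import pandas as pd

# Hypothetical stand-ins for the two datasets described above
policies = pd.DataFrame({"policy": [101, 102, 103],
                         "accidents": [0, 2, 5],
                         "premium": [800.0, 950.0, 1200.0]})
charges = pd.DataFrame({"accidents": [0, 1, 2, 3, 4, 5],
                        "charge": [1.00, 1.10, 1.25, 1.50, 1.90, 2.50]})

# Match each policy's accident count against the 6-row charge table,
# then apply the matched charge as a multiplier on the premium
merged = policies.merge(charges, on="accidents", how="left")
merged["adjusted_premium"] = merged["premium"] * merged["charge"]
print(merged)

The key idea is a keyed lookup: accidents is the join key, and charge is the value pulled back to every matching row of the large table.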
12-01-2015 11:41 PM | {"url":"https://communities.sas.com/t5/SAS-Procedures/Using-data-from-two-datasets-for-an-equation/td-p/237314","timestamp":"2024-11-03T03:50:21Z","content_type":"text/html","content_length":"247416","record_id":"<urn:uuid:5ed0d397-2967-4cbf-a1a6-23c69e67006e>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00716.warc.gz"} |
Potential energy in Gauss' gun
Homework Statement
For my extended essay as part of the IB, I am investigating the effect of changing the distance and the number of stationary ball bearings in a Gaussian gun.
I was hoping to look at the energy transfer during each stage of magnets and therefore calculate the efficiency. However, I am struggling to understand the equations for determining the potential energy.
I understand that magnetic potential energy U can be found using U = -mB, where m is the magnetic moment and B the magnetic field, but I'm not sure if this would be correct for this application as it relates to the orientation of the dipole in relation to the field.
I can't see how the ball bearing would have more potential energy in one orientation than the other as it is a soft magnet and therefore surely the regions of polarity would change? Could anyone help
me with this? Thanks!
Homework Equations
The Attempt at a Solution
(Sorry it's all kind of in one part)
daisy3110 said:
Homework Statement
For my extended essay as part of the IB, I am investigating the effect of changing the distance and the number of stationary ball bearings in a Gaussian gun.
I was hoping to look at the energy transfer during each stage of magnets and therefore calculate the efficiency. However, I am struggling to understand the equations for determining the potential energy.
I understand that magnetic potential energy U can be found using U = -mB, where m is the magnetic moment and B the magnetic field, but I'm not sure if this would be correct for this application as it relates to the orientation of the dipole in relation to the field.
I can't see how the ball bearing would have more potential energy in one orientation than the other as it is a soft magnet and therefore surely the regions of polarity would change? Could anyone
help me with this? Thanks!
Homework Equations
The Attempt at a Solution
(Sorry it's all kind of in one part)
Could you post some links to the technical articles you have been reading about this? And by Gauss'/Gaussian Gun, you mean Coil Gun, right?
What do you mean about "orientation" of the steel ball bearing? And what do you mean about a dipole? Do you mean induced dipole from the gradient of the magnetic field leading into each coil stage?
berkeman said:
Could you post some links to the technical articles you have been reading about this? And by Gauss'/Gaussian Gun, you mean Coil Gun, right?
What do you mean about "orientation" of the steel ball bearing? And what do you mean about a dipole? Do you mean induced dipole from the gradient of the magnetic field leading into each coil
There's not that much I can find about it online but
is quite useful.
Yes, I do mean coil gun but using short bar magnets rather than coils.
I think what I meant by the orientation was that the definition of magnetic potential energy from the equation U=-mB relates to the alignment of a dipole in the presence of a magnetic field and the
energy required to rotate it -
https://en.wikipedia.org/wiki/Magnetic_energy http://hyperphysics.phy-astr.gsu.edu/hbase/magnetic/magpot.html
berkeman said:
What do you mean about "orientation" of the steel ball bearing? And what do you mean about a dipole? Do you mean induced dipole from the gradient of the magnetic field leading into each coil
- yes I think that is what I mean
I think what I want is an equation that would give the potential energy a ball bearing would have as a result of being in the magnetic field in the same was as E = -GmM/r does for gravity.
Hope this makes some sort of sense! I'm only in year 12 in the UK so my knowledge of this area is really quite limited.
daisy3110 said:
Yes, I do mean coil gun but using short bar magnets rather than coils.
I didn't know that bar magnets could be used with a coil gun. But I'm certainly no coil gun expert. You would need to constrain the path of the short bar magnet so that it can't rotate as it travels
down the length of the coil gun, it would seem. Some sort of a plastic barrel maybe?
daisy3110 said:
I think what I want is an equation that would give the potential energy a ball bearing would have as a result of being in the magnetic field in the same was as E = -GmM/r does for gravity.
I believe you can look at how magnetic solenoid actuators work (where energizing the coil pulls the metal shaft into the body of the coil. It is the gradient of the magnetic field that generates the
force (at least when the ball bearing is not magnetized), AFAIK.
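To put the gradient-force point in formula terms (a standard result for a small, linearly magnetizable bead such as a soft steel ball; here $\chi$ is the susceptibility, $V$ the ball's volume and $\mu_0$ the vacuum permeability): the induced moment grows with the local field, so in the small-$\chi$ limit

$U \approx -\frac{\chi V}{2\mu_0} B^2, \qquad \mathbf{F} = -\nabla U = \frac{\chi V}{2\mu_0} \nabla\left(B^2\right)$

The energy depends only on the field magnitude, not on how the ball is turned, which is why the orientation in $U = -\mathbf{m}\cdot\mathbf{B}$ drops out for a soft magnet: the induced moment $\mathbf{m}$ itself aligns with $\mathbf{B}$ and grows with it. For soft iron the prefactor saturates at high fields, but the force still points toward regions of stronger field.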
berkeman said:
I didn't know that bar magnets could be used with a coil gun. But I'm certainly no coil gun expert. You would need to constrain the path of the short bar magnet so that it can't rotate as it
travels down the length of the coil gun, it would seem. Some sort of a plastic barrel maybe?
I believe you can look at how magnetic solenoid actuators work (where energizing the coil pulls the metal shaft into the body of the coil. It is the gradient of the magnetic field that generates
the force (at least when the ball bearing is not magnetized), AFAIK.
Thanks, I'll have a look at that. This is what I mean about how the gun works (sorry wasn't very clear)
FAQ: Potential energy in Gauss' gun
What is potential energy in Gauss' gun?
Potential energy in Gauss' gun refers to the stored magnetic energy present in the system of magnets and ball bearings. This potential energy is converted into kinetic energy when the first ball is released toward a magnet, causing the projectile at the far end of the chain to be launched.
How does a Gauss' gun work?
A Gauss' gun works by converting magnetic potential energy into kinetic energy. A steel ball accelerates as it is attracted toward a stage magnet; on impact, its momentum is transferred through the magnet and the balls stacked behind it, ejecting the ball at the far end faster than the incoming one. With several stages, the process repeats itself and the projectile gains speed at each stage.
What factors affect the potential energy in Gauss' gun?
The potential energy in Gauss' gun is affected by the strength of the magnets, the distance between the incoming ball and the magnet, and the mass of the projectile. Additionally, the number of
stages in the gun and the number of stationary balls behind each magnet also play a role in determining the potential energy.
Can potential energy in Gauss' gun be increased?
Yes, potential energy in Gauss' gun can be increased by using stronger magnets, adding more stages, and reducing the initial separation between each ball and its magnet. However, there is a limit to how much potential energy can be achieved due to factors such as the weight of the projectiles and the strength of the magnets used.
What are the practical applications of Gauss' gun?
Gauss' gun has various applications in fields such as physics, engineering, and military technology. It can be used to demonstrate principles of electromagnetism, as well as to launch small
projectiles at high speeds. In the military, it has been proposed as a possible alternative for traditional firearms due to its potential for reduced recoil and increased accuracy. | {"url":"https://www.physicsforums.com/threads/potential-energy-in-gauss-gun.903127/","timestamp":"2024-11-09T17:36:09Z","content_type":"text/html","content_length":"105937","record_id":"<urn:uuid:7b124dfc-9e9f-4668-aeab-5290e3c34a46>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00147.warc.gz"} |
Ball Mill Centrifugal Force | {"url":"https://www.mobilesafetylab.eu/29105_ball_mill_centrifugalball_mill_centrifugal_force.html","timestamp":"2024-11-06T15:26:08Z","content_type":"text/html","content_length":"73594","record_id":"<urn:uuid:55aabf48-98b7-4e2e-9a51-19ecc7d1dad1>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00166.warc.gz"}
Python Function: Test Probability with Boosted Chances
def test_probability_with_boost():
Simulates a test of probability with boosted chances for certain outcomes.
The function simulates a test where there are two possible outcomes: "Green" and "Red".
The chances of getting each outcome are boosted according to the specified multipliers.
The boosted chances are then used to randomly select an outcome.
str: The selected outcome ("Green" or "Red").
- ValueError: If any of the multipliers is less than or equal to zero.
# Define the multipliers for the chances of each outcome
green_multiplier = 2
red_multiplier = 8
# Verify that the multipliers are valid
if green_multiplier <= 0 or red_multiplier <= 0:
raise ValueError("Multipliers should be greater than zero.")
# Calculate the total weight for the random selection
total_weight = green_multiplier + red_multiplier
# Generate a random number between 0 and the total weight
random_number = random.uniform(0, total_weight)
# Determine the selected outcome based on the random number
if random_number < green_multiplier:
selected_outcome = "Green"
selected_outcome = "Red"
# Return the selected outcome
return selected_outcome
# Example usage of the test_probability_with_boost function:
# Perform the test and get the selected outcome
selected_outcome = test_probability_with_boost()
# Print the selected outcome
print(f"The selected outcome is: {selected_outcome}") | {"url":"https://codepal.ai/code-generator/query/09EK4sMA/python-function-test-probability-boost","timestamp":"2024-11-08T01:35:58Z","content_type":"text/html","content_length":"109798","record_id":"<urn:uuid:4666ece8-98ce-4fb3-8b40-c1f897f79c36>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00189.warc.gz"} |
8.7 Is it possible to include other classes as instance or static members? | Legion of Learners
A. Introduction
If you observe the triangle.java in 8.6 carefully, the repetition of the x and y coordinates of each vertex almost makes you ask: is it possible to include a Vertex or Point class as a member of the Triangle class?
private int Ay;
private double ABlength;
private int Bx;
private int By;
private double BClength;
private int Cx;
private int Cy;
private double CAlength;
The answer is YES. Not only as instance members, but also as static members as necessary. This is called class inclusion, as shown below:
public class Point {
double x;
double y;
public Point(){ //empty constructor
public Point(double i, double k){ //full constructor
this.x = i; //use i & k because the formal parameters are general and hold no importance to code
this.y = k;
public double getX(){ //getter or accessor
return x;
public void setX(double i){ //setter or mutator
x = i;
public double getY(){ //getter
return y;
public void setY(double k){ //setter
y = k;
public double distance (Point aPoint){
return Math.sqrt(Math.pow(this.x-aPoint.x,2)+Math.pow(this.y-aPoint.y,2));
public String toStr(){ // it is NOT called by System.out.println() by default
return "(" + x + ", " + y +")";
public String toString(){ // it is called by System.out.println() by default
return "(" + x + ", " + y +")";
public boolean equals(Point p){ // to compare two Point objects
return p.x==x&&p.y==y;
class TriangleByVertex {
Point a;
Point b;
Point c;
public double perimeter() {
return a.distance(b)+b.distance(c)+c.distance(a);
public double area() {
double s = 0.5 * perimeter();
return Math.sqrt(s*(s-side(b,c))*(s-side(a, c))*(s-side(a, b)));
public double side(Point x, Point y){
return x.distance(y);
public double angle(Point x){
double cos;
// The branch bodies below were reconstructed with the law of cosines: the cosine of
// the angle at a vertex is (sum of squares of the two adjacent sides minus the square
// of the opposite side) divided by (2 * product of the adjacent sides).
if(x.equals(a)) // use equals() method of Point class to check if two points are the same
cos = (Math.pow(side(a, b), 2) + Math.pow(side(a, c), 2) - Math.pow(side(b, c), 2)) / (2 * side(a, b) * side(a, c));
else if(x.equals(b))
cos = (Math.pow(side(a, b), 2) + Math.pow(side(b, c), 2) - Math.pow(side(a, c), 2)) / (2 * side(a, b) * side(b, c));
else if(x.equals(c))
cos = (Math.pow(side(a, c), 2) + Math.pow(side(b, c), 2) - Math.pow(side(a, b), 2)) / (2 * side(a, c) * side(b, c));
else {
System.out.println("This vertex does not belong to the triangle!");
return -1;
}
return Math.acos(cos); // the angle in radians
}
} | {"url":"https://www.lol-101.com/classrooms/ap-computer-science-a/8-7-is-it-possible-to-include-other-classes-as-instance-or-static-members","timestamp":"2024-11-15T02:52:20Z","content_type":"text/html","content_length":"1050478","record_id":"<urn:uuid:a35fd814-00e3-4c06-b411-71152561f374>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00029.warc.gz"}
CDF of EIRP & Max Hold
Antenna arrays are preferable in certain applications because of their beam steering ability, but analyzing a design for tens or hundreds of beam patterns is unwieldy. One way to condense this data
into a single metric is to measure the cumulative distribution function (CDF) of effective isotropic radiated power (EIRP) for numerous beam patterns.
EIRP is a function of direction and is defined as the gain of a transmitting antenna in a direction multiplied by the power delivered to the antenna from the transmitter [1].
EIRP is equivalent to the power that must be delivered to an isotropic antenna in order to produce the same signal level in the given direction. For example, if an antenna accepts 2 mW (≈3 dBm) from
the transmitter and the antenna's gain is 5 dB in a given direction, then the EIRP in that direction is 8 dBm. This means that the signal in that direction is equal to that of an isotropic antenna
fed with a power of 8 dBm.
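Written as an equation in decibel form (multiplying powers corresponds to adding decibels):

$\mathrm{EIRP}\ [\mathrm{dBm}] = P_{\mathrm{in}}\ [\mathrm{dBm}] + G\ [\mathrm{dBi}]$

so $3\ \mathrm{dBm} + 5\ \mathrm{dB} = 8\ \mathrm{dBm}$, matching the example.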
A simple patch antenna demonstrates the concept of EIRP and the meaning of the CDF of EIRP calculation. The patch is tuned to 28 GHz and mounted on a 15 mm by 15 mm dielectric substrate of 0.254 mm
thickness that is backed by a ground plane. The single patch produces a pattern in the region above the ground plane with a peak gain slightly over 7 dBi pointed in the direction above the patch.
Further adjustments to shape the pattern or point it in different directions are impossible due to the antenna consisting of a single patch. If the power available to the patch is adjusted to 23
dBmW, the expected peak EIRP is the sum of the antenna gain (7 dBi) and the available power (23 dBmW), or around 30 dBmW.
This EIRP level is illustrated by comparing the actual antenna pattern to an ideal spherical pattern of an isotropic radiator with a gain at the EIRP level of 30 dBmW. The actual patch antenna
pattern in the direction of the peak gain reaches the 30 dBmW level, but the other directions, particularly those under the ground plane, reach significantly lower power levels compared to the green
sphere of the isotropic radiator.
The CDF of EIRP calculation determines the peak EIRP level present at the maximum gain location of the antenna pattern, as well as the possible coverage of the antenna for a given available power.
The CDF of EIRP plot of the single patch, P[1], shows that the peak EIRP, where the y-axis equals 1, is approximately 30 dBmW, as expected. At the available power level of 23 dBmW, the plot crosses a fractional area of 0.72576. This places approximately 72.6% of the gain pattern below the available power level, meaning those directions have negative gain. In other words, approximately 100 minus 72.6, or 27.4% of the pattern has positive gain.
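To make the calculation concrete, here is a small sketch (illustrative Python with assumed array names; this is not Remcom's implementation): rank every far-zone sample by its EIRP and accumulate solid angle, weighting each (theta, phi) sample by sin(theta). A max-hold pattern over several beams is simply the element-wise maximum.

import numpy as np

def cdf_of_eirp(gain_dbi, theta_deg, p_avail_dbm=23.0):
    """Return (sorted EIRP in dBm, fraction of the sphere at or below it)."""
    eirp = gain_dbi + p_avail_dbm                 # EIRP(theta, phi) in dBm
    weight = np.sin(np.radians(theta_deg))        # solid-angle weight per sample
    order = np.argsort(eirp, axis=None)
    eirp_sorted = eirp.ravel()[order]
    cdf = np.cumsum(weight.ravel()[order]) / weight.sum()
    return eirp_sorted, cdf

# Max hold over steered beams: element-wise maximum of the gain patterns
# max_hold = np.maximum.reduce([beam_0, beam_1, beam_2])

Reading coverage off the curve then works as in the text: find where the CDF crosses the available power level and subtract the fraction from 1.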
This analysis may be possible with a simple radiation pattern, but determining the antenna's coverage for more complex devices is less straight-forward. For example, a simple array that adds a second
patch element a half-wavelength offset from the first element of the original antenna produces a gain pattern similar to the single patch, but with a peak gain of nearly 12 dBi when both elements are
fed in phase.
When comparing the plots of the single patch, P[1], and the CDF of EIRP for the 2x1 array with in-phase feeding, P[2], the max gain for the two antenna patterns is determined by looking at the
cumulative probability of 1. P[1] and P[2] have a max gain of approximately 30 dBi and 35 dBi, respectively. P[2] provides higher EIRP values covering 1 minus 0.675, or 32.5% of the angles as shown
in the upper-right portion of the graph, but crosses P[1] at 21.5 dBmW, providing lower EIRP for the remaining 67.5% of angles. This demonstrates that a 2x1 array is more directional over a limited
set of angles than a single patch, as expected.
Users can steer the beam by adjusting the phase between the elements, such as adding a phase shift of plus or minus 90 degrees to one element. The additional beams provide a higher gain in more
directions than the single in-phase pattern.
The 3-D max hold result shows the radiation pattern for the three beams: +90 degrees, in-phase, and -90 degrees.
The plots of P[1], P[2], and the CDF of the max hold of the three beams, P[3], show that P[3] shares the max gain with P[2] and that the 2x1 array with beam steering provides better coverage over all
angles when compared to the single patch result of P[1]. An available power of 23 dBmW indicates that 1 minus 0.542, or 45.8%, of the angles show a positive gain.
As more elements are added to the array, 2-D beam steering allows for more coverage. In simulations performed for a 3x3 array with 9 total elements, the beam becomes increasingly focused and
generates a peak gain of 21.6 dBi when all ports are fed in-phase. The CDF of the 3x3 array with all ports fed in-phase appears in the P[4] plot.
In order to analyze the 2-D beam steering capability, array optimization determined the sets of port phases that pointed the beam in 24 directions, including theta equals 0, 15, and 30 degrees and
phi equals 0 to 360 degrees with 30 degree increments. The 3-D max hold result was generated from the 24 patterns and its corresponding CDF appears in the P[5] plot.
P[4] shows a peak EIRP of approximately 43 dBmW and a near-horizontal response down to 30 dBmW. This result is characteristic of a highly directional pattern with a narrow beam.
The peak EIRP of P[5] is equivalent to that of P[4]. This is expected because the 3x3 array has a peak gain when the ports are fed in-phase. The in-phase pattern contributes to the max hold result
over the boresight angles. As the max hold moves away from boresight, the other 23 patterns contribute to the directions where their beams are pointing. As a result, P[5] shows the contribution of 24
separate patterns that provide high gain over a much greater range of angles. With an available power of 23 dBmW, 1 minus 0.236, or 76.4% of the angles show positive gain.
A single array antenna covers a limited range of angles and is not capable of full 3-D coverage. Multple arrays are required in order to increase coverage.
A mobile phone case provides a more practical example when used as a platform for holding three separate arrays of patch antennas. The grey phone case has two 1x8 element arrays on each side and one
3x3 array on the back. When all elements are fed in-phase, the 1x8 arrays produce a fan shaped pattern that is narrow in the vertical direction but broad in the horizontal direction. The 3x3 array
produces a focused beam that can be steered in two directions over the back plane of the phone. Each of these three arrays produces a pattern that covers a different sector of the radiation sphere.
Using the 1x8 arrays, five beams were generated that point in 0, ±15, and ±30 degrees off boresight. The 3-D max hold radiation pattern for one array is determined using its five patterns and appears
as the CDF in the P[6] plot. The 3x3 array supports beamforming in two dimensions and 24 beams were analyzed as explained previously. The CDF of the 3-D max hold radiation pattern appears in the P[7]
plot. P[5] and P[7] differ only in that the latter array is impacted by being mounted on a device.
Each of the three mobile phone array patterns is capable of scanning over large regions, but combined they cover the entire sphere surrounding the phone more efficiently. A final max hold result
combines the 34 beams—five from the left 1x8 array, five from the right 1x8 array, and 24 from the 3x3 array—to produce a 3-D radiation pattern showing full coverage around the phone. The CDF of this
max hold appears in the P[8] plot.
With the available power adjusted to 23 dBmW, the CDF of EIRP plots show slightly higher coverage for the 3x3 array than for the 1x8 arrays. Data for only one 1x8 array is plotted since the second
array produces a nearly identical CDF result of the max hold of the five beams. However, because the two 1x8 arrays are on opposite sides of the phone case, a much larger set of angles is covered
with higher gain when they are used together and combined with the 3x3 array. This provides positive gain for 1 minus 0.062, or 93.8% of the angles.
Angular Sampling
In order to compute max hold, it is necessary to generate a three-dimensional gain pattern for each active source. For example, a far zone sensor computes fields from theta equals 0 to 180 degrees
with 2 degree increments and phi equals 0 to 360 degrees with 2 degree increments.
Performing the calculation necessary to generate the CDF of max hold plot is time-consuming with a large number of angular points that must be considered. Although an individual pattern can be
complex with many nulls, the max hold computed from the individual patterns is often fairly simple. Generally, a highly sampled antenna pattern is unnecessary in order to obtain accurate results, but
users should check their patterns' sensitivity.
In the 3x3 planar patch array's max hold result based on 24 patterns, P[5], two additional simulations show the impact of the theta and phi increment values. The 1 and 10 degree increments generated
a total of 37.32M and 0.37M far zone data points, respectively. The plot with 10 degree sampling is virtually indistinguishable from that with 1 degree sampling and is computed in much less time. | {"url":"https://support.remcom.com/xfdtd/users-guide/antenna-design/cdf-of-eirp-max-hold.html","timestamp":"2024-11-02T08:33:59Z","content_type":"text/html","content_length":"18829","record_id":"<urn:uuid:8c9db687-9cd2-46fd-9f42-7eb1f4c7d723>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00056.warc.gz"} |
Seductive Sport App
The fifth linear filtering algorithm Draw-Distinctive exploits the fact that some football sequences have a distinctive sport matrix, implying the uniqueness of the draw sequence. Reduction-Rec-Small calls Filter, which is a union of the constant and linear time filtering algorithms, and Reduction-Rec-Large, which is the subsequent quadratic filtering algorithm. The next natural implementation of Lemma 38, Balanced-Quad, requires quadratic time. The algorithm based on this lemma, however, is not implemented. The basis of Internal-Draws is the following lemma. The basis of the uniform allocation of the draws is the following assertion. According to the next assertion we get a linear filtering algorithm using the obligatory sport matrix. In Section 4 we proposed and analyzed filtering of regular sequences with constant, linear and quadratic time algorithms. A further table contains the results of the quadratic filtering algorithms. Tables 4 and 5 present the concrete filtering results of the linear time filtering algorithms.
If 3, then the good sequence has to start with 0. There is only one sequence, (0, 3), requiring two losses for the omitted team.
There are three possibilities: the first team of the good sequence got 3, 1 or 0 points against the omitted one. If the draw sequence obtained this way is not graphic, then the investigated sequence is not good. Draw-Sorted-Unique exploits the fact that uniqueness of the sport matrix is not necessary for a unique sorted draw sequence. At first we omit 0 from the first sequence and state that the remaining sequence (3) can be derived from (0) only if the team having 0 points in the shorter sequence wins against the omitted player. So the omitted player has to have 0 points. Since the omitted element is exactly 0, (0, 3, 6) is a good sequence. Since it has exactly 0 points, (0, 4, 4) is also a good sequence. Omitting 0 and comparing (4, 4) with the good sequences, we get that (1, 1) is the only possible ancestor, requiring 0 points for the deleted team.
The network can also be used by a team to detect under-performing players, fix weak spots, and detect potential issues between teammates who are not passing the ball as often as their position dictates, as well as to detect weaknesses in rivals. This preliminary visual analysis can be made more quantitative by computing global network invariants, which characterize a team as a whole, or local invariants, which give insight about individual players. It can be used, for example, to determine areas of the pitch that are favored or neglected, whether the team tends to use or abuse short-distance or long-distance passes, and whether a player is not intervening enough in a game. An ideal team-labelling algorithm would be unsupervised, generalizing to new games without needing any labelled data, and would require minimal frames (burn-in time) from the beginning of the game to determine accurate labels for each player on the team.
Reduction is based on the filtering algorithms Reduction0 and Reduction1. For 14 teams we excluded more than half of the regular sequences with the constant time algorithms. | {"url":"https://lea-net.com/seductive-sport-app/","timestamp":"2024-11-03T10:46:42Z","content_type":"application/xhtml+xml","content_length":"41998","record_id":"<urn:uuid:d9938147-eb03-45f5-9f1a-fbd3e183aca9>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00880.warc.gz"}
A Case for Building a Mathematics Vocabulary Sense
The purpose of this article is to discuss the philosophy behind helping our learners build a mathematics vocabulary sense using multi-modal approaches, why it is an important part of learning math
and how you can help your learner build a deeper understanding of mathematics.
What is a Mathematics Vocabulary Sense?
Why It Matters?
We develop a deep understanding of the mathematics term. (i.e. We examine the word parts of the term, its types and counter-types, our misunderstandings, and real-life applications.)
We create connections to the mathematics term so that our understanding is usable and lasting. (e.g. By building from the learner’s base of knowledge — through discussions, drawings, and games —
learners iteratively construct their understanding of the term.)
We surface and address misconceptions in our understanding of the mathematics term. (e.g. Learners and educators deliberately surface, label and repair misunderstandings. )
How We Build It?
For learners to fully understand a word and be able to apply it, they need multiple varied exposure and the word needs to be introduced in context.
Multi-modal sources include Marzano cards, word clouds, vocabulary chips, word puzzles, Quizlet and discourse. Membean is probably best multi-modal vocabulary solution out there, but no matter how
much I nudge them, they just do not cover mathematics terms in any meaningful way. But it is a wonderful resource for teachers to see a good way of structuring the teaching of new mathematics terms.
Marzano Vocabulary Cards
Learners use words, pictures and examples that are meaningful to them, in their own terms, from their own experiences. I am currently on a big Marvel Comics kick and using comic book templates will
be engaging for some of our learners. I provide a sticker set to help our learners embellish their cards.
A generic Marzano template is used below.
My Fractions Word Cloud
What I like about word clouds (e.g. wordclouds.com) is that the word cloud can be linked to additional content in support of each mathematical term. For elementary school, my go-to dictionary
resource is MathisFun.com and so I linked each of the vocabulary terms to information form their website. The colors can also be used to grey out areas not covered or emphasis areas of struggle or
completion. It might make for a nice exit ticket. I create the initial word list by uploading my existing textbook to the word cloud web site and it does the work of surfacing key repeated terms.
Mind Maps
Mind maps help learners build connections and see the big picture, dive deep and foster creativity.
Word Parts
Learners should examine the word parts of the term, looking for the origin of the term, appreciating the sub-components of the term, and connecting to its synonyms and opposites.
Vocabulary Chips
Flipping a chip to build vocabulary fits very well with engaging learners to connect to mathematics terms and is a research-based activity (Flip-a-Chip to Build Vocabulary).
Greatest Common Factor vs. Least Common Multiple
How We Name Polyhedrons?
How You Can Help Your Learner?
Encourage caregivers to discuss mathematics terms at home — in the car, at the dining room table, wherever the opportunity arises.
Encourage learners to build a Marzano card when a mathematics term just does not stick or when there is a misconception that keeps resurfacing. Use Quizlets and games to keep K-12 mathematics terms fresh.
If we cannot retrieve these terms, have we really ever learned them?
Thank You! | {"url":"https://njlovesmath.com/my-fractions-word-cloud/","timestamp":"2024-11-01T19:19:06Z","content_type":"text/html","content_length":"65351","record_id":"<urn:uuid:88864cad-c7da-4ade-bab5-5bc913c64a8f>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00503.warc.gz"} |
Arithmetic Series (Sum)
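For reference, every exercise below rests on the standard sum formulas for an arithmetic series with first term $a_1$, common difference $d$ and $n$-th term $a_n$:

$S_n = \frac{n}{2}(a_1 + a_n) = \frac{n}{2}\left(2a_1 + (n-1)d\right)$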
Arithmetic Series – Find the Sum Given First and Last Terms #1
Arithmetic Series – Find the Sum Given Two Terms #1
Arithmetic Series – Find the Sequence Given Two Sums #1
Arithmetic Series – Find the Term Position Given Sequence and Sum #1
Arithmetic Series – Find the Term Position Given Sequence and Sum #2 | {"url":"https://vividmath.com/ap-calculus-bc/sequences-and-series/arithmetic-series-sum/","timestamp":"2024-11-12T12:43:26Z","content_type":"text/html","content_length":"66465","record_id":"<urn:uuid:21b0395f-57c5-42ef-80eb-08028e84f613>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00303.warc.gz"} |
Chennai Mathematical Institute
4:00 pm to 4:25 pm, Seminar Hall
Division in Arithmetic circuits
Nikhil Balaji
Chennai Mathematical Institute.
Division is a basic arithmetic operation. Many efficient algorithms for division (Euclid's algorithm, long division) have been known since antiquity. However, these algorithms are inherently sequential. What is the parallel complexity of division?
There has been a long line of research aimed at pinning down the exact complexity of integer division, starting with Beame-Cook-Hoover (P-uniform NC^1), Reif and Tate (P-uniform TC^0), Chiu-Davidow-Litow (L-uniform NC^1), and culminating in the work of Hesse-Allender-Barrington (DLOGTIME-uniform TC^0). We revisit this question looking to optimize the algorithm in terms of *majority depth* (initially studied by Maciel and Therien). We present improved uniform TC^0 circuits for division, matrix powering, and related problems, where the improvement is in terms of majority-depth.
Given an arithmetic circuit representing a number, can you find the i-th bit of the number? This problem (BitSLP) is closely related to the complexity of integer division. It is known that any improvement to the parallel complexity of division in terms of majority depth yields an improved algorithm for BitSLP. Coupled with our improved division algorithm, this yields an improvement in the
complexity of BitSLP.
Joint work with Eric Allender and Samir Datta. | {"url":"https://www.cmi.ac.in/activities/show-abstract.php?absyear=2014&absref=83&abstype=sem","timestamp":"2024-11-07T17:23:07Z","content_type":"text/html","content_length":"7815","record_id":"<urn:uuid:7cece9c2-00e8-42b1-a8d4-14de358dc95c>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00889.warc.gz"} |
Multiples and Products of Functions of Bounded Variation
Multiples and Products of Functions of Bounded Variation
Recall from the Functions of Bounded Variation page that $f$ is of bounded variation on the interval $[a, b]$ if there exists a positive real number $M > 0$ such that for all partitions $P = \{ a =
x_0, x_1, ..., x_n = b \} \in \mathscr{P} [a, b]$ we have that:
\quad V_f (P) = \sum_{k=1}^{n} \mid f(x_k) - f(x_{k-1}) \mid \leq M
We will now look at some nice theorems which tell us that if $f$ and $g$ are both of bounded variation on the interval $[a, b]$ then so is scalar multiple $kf$ for $k \in \mathbb{R}$ and the product
Theorem 1: If $f$ is of bounded variation on the interval $[a, b]$ and $t \in \mathbb{R}$ then $tf$ is of bounded variation on $[a, b]$.
• Proof: Let $f$ be of bounded variation on the interval $[a, b]$. Then there exists a positive real number $M_0 > 0$ such that for all partitions $P = \{ a = x_0, x_1, ..., x_n = b \} \in \mathscr
{P} [a, b]$ we have that:
\quad V_f (P) = \sum_{k=1}^{n} \mid f(x_k) - f(x_{k-1}) \mid < M_0
• Let $h = tf$ for any $t \in \mathbb{R}$ and consider the variation of $h$ associated with $P \in \mathscr{P}[a, b]$:
\quad V_h (P) = \sum_{k=1}^{n} \mid h(x_k) - h(x_{k-1}) \mid = \sum_{k=1}^{n} \mid (tf)(x_k) - (tf)(x_{k-1}) \mid = \sum_{k=1}^{n} \mid t f(x_k) - tf(x_{k-1}) \mid
• We note that $\mid t f(x_k) - tf(x_{k-1})\mid = \mid t \mid \mid f(x_k) - f(x_{k-1}) \mid$ and so:
\quad V_h(P) = \sum_{k=1}^{n} \mid t \mid \mid f(x_k) - f(x_{k-1}) \mid = \mid t \mid \sum_{k=1}^{n}\mid f(x_k) - f(x_{k-1}) \mid \leq \mid t \mid M_0
• Let $M = \mid t \mid M_0 > 0$. Then for every partition $P \in \mathscr{P}[a, b]$ we have $V_h(P) = V_{tf}(P) \leq M$, so $h = tf$ is of bounded variation on $[a, b]$. $\blacksquare$
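A quick illustration of Theorem 1: take $f(x) = x$ on $[0, 1]$. For every partition $P$ we have $V_f(P) = \sum_{k=1}^{n} \mid x_k - x_{k-1} \mid = 1$, so $f$ is of bounded variation with $M_0 = 1$. The theorem then gives that $3f$ is of bounded variation with bound $\mid 3 \mid \cdot 1 = 3$, which matches the direct computation $V_{3f}(P) = 3$ for every partition.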
Theorem 2: If $f$ and $g$ are of bounded variation on the interval $[a, b]$ then $fg$ is of bounded variation on $[a, b]$.
• Proof: Let $f$ and $g$ be of bounded variation on the interval $[a, b]$. Then there exists positive real numbers $M_1, M_2 > 0$ such that for all partitions $P = \{ a = x_0, x_1, ..., x_n = b \}
\in \mathscr{P} [a, b]$ we have that:
\quad V_f (P) = \sum_{k=1}^{n} \mid f(x_k) - f(x_{k-1}) \mid < M_1 \quad \mathrm{and} \quad V_g (P) = \sum_{k=1}^{n} \mid g(x_k) - g(x_{k-1}) \mid < M_2
• Let $h = fg$ and consider the variation of $h$ associated with $P \in \mathscr{P}[a,b]$:
\quad V_h (P) = \sum_{k=1}^{n} \mid h(x_k) - h(x_{k-1}) \mid = \sum_{k=1}^{n} \mid (fg)(x_k) - (fg)(x_{k-1}) \mid = \sum_{k=1}^{n} \mid f(x_k)g(x_k) - f(x_{k-1})g(x_{k-1}) \mid
\quad \mid f(x_k)g(x_k) - f(x_{k-1})g(x_{k-1}) \mid = \mid f(x_k)g(x_k) - f(x_k)g(x_{k-1}) + f(x_k)g(x_{k-1}) - f(x_{k-1})g(x_{k-1}) \mid \\ \quad \mid f(x_k)g(x_k) - f(x_{k-1})g(x_{k-1}) \mid \leq \
mid f(x_k)g(x_k) - f(x_k)g(x_{k-1}) \mid + \mid f(x_k)g(x_{k-1}) - f(x_{k-1})g(x_{k-1}) \mid \\ \quad \mid f(x_k)g(x_k) - f(x_{k-1})g(x_{k-1}) \mid \leq \mid f(x_k) \mid \mid g(x_k) - g(x_{k-1}) \mid
+ \mid g(x_{k-1}) \mid \mid f(x_k) - f(x_{k-1}) \mid
• Since $f$ and $g$ are of bounded variation on the interval $[a, b]$ we have that $f$ and $g$ are bounded on $[a, b]$ and so there exists positive real numbers $A, B > 0$ such that for all $x \in
[a, b]$ we have that $\mid f(x) \mid \leq A$ and $\mid g(x) \mid \leq B$. Since $x_k, x_{k-1} \in [a, b]$ for all $k \in \{ 1, 2, ..., n \}$ we have that $\mid f(x_k) \mid \leq A$ and $\mid g(x_
{k-1}) \mid \leq B$ and so:
\quad \mid f(x_k)g(x_k) - f(x_{k-1})g(x_{k-1}) \mid \leq A \mid g(x_k) - g(x_{k-1}) \mid + B \mid f(x_k) - f(x_{k-1}) \mid
\quad V_h (P) \leq \sum_{k=1}^{n} \left [ A \mid g(x_k) - g(x_{k-1}) \mid + B \mid f(x_k) - f(x_{k-1}) \mid \right ] \\ \quad V_h (P) \leq A \sum_{k=1}^{n} \mid g(x_k) - g(x_{k-1}) \mid + B \sum_{k=1}^{n} \mid f(x_k) - f(x_{k-1}) \mid \\ \quad V_h (P) \leq AM_2 + BM_1
• Let $M = AM_2 + BM_1 > 0$. Then for every partition $P \in \mathscr{P}[a, b]$ we have $V_h(P) = V_{fg}(P) \leq M$, so $h = fg$ is of bounded variation on $[a, b]$. $\blacksquare$ | {"url":"http://mathonline.wikidot.com/multiples-and-products-of-functions-of-bounded-variation","timestamp":"2024-11-13T22:38:18Z","content_type":"application/xhtml+xml","content_length":"20662","record_id":"<urn:uuid:a2068138-a835-4c93-bfb5-086febeefacb>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00597.warc.gz"}
how to find the radius of a circle from an image
Accepted Answer
Commented: Image Analyst on 1 Mar 2023
I want to fit a circle just like the black line drawn in the figure and to find its radius. Below is the code I am writing, but I am not getting anything significant. Kindly help me with how to approach this kind of problem.
clear all
% read the image
img = imread('gra.png');
% convert the image to grayscale
img_gray = rgb2gray(img);
% perform edge detection using the Canny algorithm
edge_img = edge(img_gray, 'sobel');
% perform hough transform to detect circles
[centers, radii] = imfindcircles(edge_img, [10 50000],'ObjectPolarity','bright','Sensitivity', 0.95);
% find the largest circle
[max_r, max_i] = max(radii);
max_center = centers(max_i, :);
% plot the results
hold on;
viscircles(max_center, max_r, 'EdgeColor', 'b');
OK, not sure why you don't want to use John's program but I will. Here is my code:
clc; % Clear the command window.
close all; % Close all figures (except those of imtool.)
clear; % Erase all existing variables. Or clearvars if you want.
workspace; % Make sure the workspace panel is showing.
fontSize = 15; % This assignment was missing from the scraped post.
rgbImage = imread('gra.png');
subplot(1, 2, 1);
imshow(rgbImage);
title('Original Image', 'FontSize', fontSize);
grayImage = rgb2gray(rgbImage);
% Binarize to get a mask of the circle markers. The thresholding line was
% lost in the scrape; any reasonable segmentation of the markers works here.
mask = grayImage < 128;
subplot(1, 2, 2);
imshow(mask);
title('Mask Image', 'FontSize', fontSize);
% Get coordinates of all white pixels in the mask.
[y, x] = find(mask);
% Find minimum bounding circle's center and radius.
[center, radius] = minboundcircle(x, y);
% Plot that circle over the image.
subplot(1, 2, 1);
hold on;
viscircles(center, radius, 'EdgeColor', 'b', 'LineWidth', 2);
and here is the result:
4 Comments
Evidently if you did download the minboundcircle code from
you didn't extract it to a folder and add it to the path. Please do so.
More Answers (2)
I think you can reduce/simplify the problem statement to finding the enclosing circle around a collection of points.
To solve this problem, you can try out multiple algorithms depending on your requirements:
1. The simplest could be to find the 2D bounding box of the points. The radius of the enclosing circle would then be half the diagonal of the bounding box, with the centre at the box centre.
2. The above solution, however, would not guarantee you the tightest/minimum enclosing circle. You can find the minimum enclosing circle using Welzl's Algorithm. You can read more about the Smallest
Circle problem.
You can check out implementations at File exchange: Exact minimum bounding spheres and circles - File Exchange - MATLAB Central (mathworks.com)
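If you only need an enclosing circle rather than the minimum one, a few lines suffice (a sketch in Python, not one of the MATLAB answers above): centre the circle at the centroid and take the radius to the farthest point. This always encloses every point, but is generally larger than Welzl's minimum circle.

import numpy as np

def enclosing_circle(points):
    """Non-minimal enclosing circle: centroid centre, radius to farthest point."""
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)
    radius = np.linalg.norm(pts - center, axis=1).max()
    return center, radius

c, r = enclosing_circle([(0, 0), (4, 0), (2, 3)])
print(c, r)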
Please attach 'gra.png' so we can run your code. So I see small red circles and a large black circle. Do you have the coordinates of the centers of the red circles? If you want the min bounding
circle you can use
Otherwise if you want to fit just some of the more "outer" red circles then you need to somehow define the radius that defines ones you want to include in the fit and those inner ones that you want
to exclude from the fit.
6 Comments
If you want to minimize the number of atoms outside the circle then you want the min bounding circle so that the number outside will be zero. For this you can use John's function in the link I gave
you. Will you try it? Just pass your max_center array into his function. Or if you want no part outside, then pass in the coordinates of your binary image you get with find() into his function. | {"url":"https://se.mathworks.com/matlabcentral/answers/1919600-how-to-find-the-radius-of-a-circle-from-an-image","timestamp":"2024-11-03T18:22:21Z","content_type":"text/html","content_length":"204752","record_id":"<urn:uuid:a6b0bd5b-ad1d-48af-9574-88bd4396dec1>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00061.warc.gz"} |
Julia has been around since 2012 and, after more than six years of development, its 1.0 version has finally been released. This is a major milestone and one that has inspired me to write a new
blogpost (after several months of silence). This time we are going to see how to do parallel programming in Julia using the Message Passing Interface (MPI) paradigm, through the open source library
Open MPI. We will do this by solving a real physical problem: heat diffusion across a two-dimensional domain. | {"url":"http://www.claudiobellei.com/","timestamp":"2024-11-12T07:25:51Z","content_type":"text/html","content_length":"33489","record_id":"<urn:uuid:5a0581de-2fc0-4bb1-8896-db82114d9655>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00720.warc.gz"} |
#quantitative-methods-basic-concepts #statistics
The arithmetic mean is what is commonly called the average.
Original toplevel document
Subject 4. Measures of Center Tendency
The sample mean is the average for a sample. It is a statistic and is used to estimate the population mean, where n = the number of observations in the sample.
Arithmetic Mean
The arithmetic mean is what is commonly called the average. The population mean and sample mean are both examples of the arithmetic mean. If the data set encompasses an entire population, the arithmetic mean is called a population mean. If the data set includes a sample of values taken from a population, the arithmetic mean is called a sample mean. This is the most widely used measure of central tendency. When the word "mean" is used without a modifier, it can be assumed to refer to the arithmetic mean. The mean is the sum of all scores divided by the number of scores. It is used to measure the prospective (expected future) performance (return) of an investment over a number of periods. All interval and ratio data sets (e.g., incomes, ages, rates of return) have an arithmetic mean. All data values are considered and included in the arithmetic mean computation. A data set has only one arithmetic mean. This indicates that the mean is unique. The arithmetic mean is the only measure of central tendency where the sum of the deviations of each value from the mean is always zero. Deviation from the arithmetic mean is the distance between the mean and an observation in the data set. The arithmetic mean has the following disadvantages: The mean can be affected by extremes, that is, unusually large or small values. The mean cannot be determined for an open-ended data set (i.e., n is unknown).
Geometric Mean
The geometric mean has three important properties: It exists only if all the observations are greater than or equal to zero. | {"url":"https://buboflash.eu/bubo5/show-dao2?d=1332039257356","timestamp":"2024-11-12T06:50:08Z","content_type":"text/html","content_length":"19992","record_id":"<urn:uuid:411f2ca0-5acf-4801-892d-f8fbc6ce11b3>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00158.warc.gz"}
fan calculator
Calculators are indispensable tools for various mathematical computations. Whether you’re a student, professional, or just someone who needs to crunch numbers, having a reliable calculator at your
disposal is essential. In this article, we’ll discuss how to create a simple yet efficient calculator.
How to Use
Using the calculator is straightforward. Input the values you want to calculate into the designated fields, select the desired operation, and click the “Calculate” button. The result will be
displayed instantly.
The formula used for calculations depends on the operation selected:
• Addition: result = num1 + num2
• Subtraction: result = num1 − num2
• Multiplication: result = num1 × num2
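The same selection logic, sketched as a small Python function for illustration (the article's own widget is HTML and JavaScript, per the conclusion; the function and operation names here are invented):

def calculate(num1, num2, operation):
    """Apply one of the calculator's operations to two numbers."""
    if operation == "add":
        return num1 + num2
    if operation == "subtract":
        return num1 - num2
    if operation == "multiply":
        return num1 * num2
    raise ValueError(f"Unsupported operation: {operation}")

print(calculate(5, 3, "add"))  # prints 8, matching the worked example below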
Example Solve
Let’s say we want to add two numbers, 5 and 3. Inputting these values and selecting the addition operation should yield a result of 8.
Q: Can I perform multiple operations in one calculation?
A: No, the calculator is designed to perform only one operation at a time.
Q: Can I input decimal numbers?
A: Yes, the calculator supports decimal numbers for more precise calculations.
Q: Is there a limit to the size of numbers I can input?
A: The calculator can handle a wide range of numbers, but excessively large or small numbers might result in unexpected behavior.
Creating a basic calculator using HTML and JavaScript is a simple yet effective way to perform calculations quickly. By following the provided guidelines, you can customize the calculator to suit
your specific needs, making it a valuable tool for various mathematical tasks. | {"url":"https://calculatordoc.com/fan-calculator/","timestamp":"2024-11-13T21:59:32Z","content_type":"text/html","content_length":"91413","record_id":"<urn:uuid:49be8e57-2bc3-49ea-971e-d1b22905369b>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00789.warc.gz"} |
Is it possible to evaluate multiple predecessors and match to a cell value in another column?
I need to correlate a predecessor row to a work order number in a different column within the same sheet. There can be multiple predecessors. Is there a way to have a formula evaluate multiple
predecessors and show a corresponding work order number?
Here is a screenshot; I'm trying to figure out if a formula can be used to populate the work order numbers (cell in yellow) based on the values in the predecessor column when there are multiple predecessors.
For each predecessor value, I want to look up the work order number on the corresponding row.
Best Answer
• I've done something similar, and depending on how simple your predecessors are, the solution could be simple, or make your head spin. For the super simple version (2 predecessors MAX) try this:
Add "Row #" column to make sure your lookups match your Predecessor numbers. This formula will match if you shift rows around and such:
=MATCH([Work Order]@row, [Work Order]:[Work Order], 0)
You must break out the Predecessors into individual columns. I can't seem to change them to values referencing the Predecessors column directly:
Pred#1: =IFERROR(LEFT(Predecessors@row, FIND(",", Predecessors@row) - 1), Predecessors@row)
Pred#2: =IF(FIND(",", Predecessors@row) > 0, RIGHT(Predecessors@row, LEN(Predecessors@row) - FIND(",", Predecessors@row)), "")
Then you can use those values to bring in the corresponding work orders with an INDEX(MATCH()) combination.
=IFERROR(INDEX([Work Order]:[Work Order], MATCH(VALUE([Pred#1]@row), [Row #]:[Row #], 0)), "") + IF(ISBLANK([Pred#2]@row), "", ", ") + IFERROR(INDEX([Work Order]:[Work Order], MATCH(VALUE([Pred#2]@row), [Row #]:[Row #], 0)), "")
As you can see in my sample above however, when you get 3+ predecessors this solution doesn't work. It is possible, but the formulas will start getting much more complicated involving the MID()
function and lots of FIND() commas. It would get even trickier still if you have any lag durations in your predecessors (such as 2FS-2d... etc). So if you can keep it simple, the above solution
should work for you, above and beyond that is going to take a lot more engineering time to work out well.
I hope this at least gets you in the right direction!
Jason Tarpinian - Sevan Technology
Smartsheet Aligned Partner
• Thanks Jason!
This is very helpful and is a significant step in the right direction. I'm not sure we can keep it to only 2 predecessors, so it looks like I have some formula fun ahead of me. Of course the
current requirement is to allow for lag values as well but that may be a nice to have instead of a must have.
I really appreciate the help and hopefully this will help others as well, I've seen other posts expressing similar challenges.
Have a great day!
• Just an update on this: @Jason Tarpinian's solution works great. We needed to expand it to handle up to 10 predecessors. To do that, I had to add a comma as a delimiter between values and at the end of the final value, then used the following formulas to extract the individual values (we used a column called Constraints for the entry of the multiple predecessor values):
Constraint 1: =IFERROR(LEFT(Constraints@row, FIND(",", Constraints@row) - 1), Constraints@row)
Constraint 2: =IFERROR(MID(Constraints@row, (FIND([Constraint1]@row, Constraints@row) + LEN([Constraint1]@row) + 1), (FIND(",", Constraints@row, (FIND(", ", Constraints@row, (FIND([Constraint1]@row, Constraints@row))) + 1))) - FIND(", ", Constraints@row, (FIND([Constraint1]@row, Constraints@row))) - 1), "")
Constraint 3: =IFERROR(MID(Constraints@row, IF(LEN([Constraint2]@row) <> 0, IF(LEN([Constraint2]@row) <> 0, FIND([Constraint2]@row, Constraints@row) + LEN([Constraint2]@row) + 1)), (FIND(",", Constraints@row, IF(LEN([Constraint2]@row) <> 0, (FIND([Constraint2]@row, Constraints@row) + LEN([Constraint2]@row) + 1))) - (IF(LEN([Constraint2]@row) <> 0, FIND([Constraint2]@row, Constraints@row) + LEN([Constraint2]@row) + 1)))), "")
Constraint 4: =IFERROR(MID(Constraints@row, IF(LEN([Constraint3]@row) <> 0, IF(LEN([Constraint3]@row) <> 0, FIND([Constraint3]@row, Constraints@row) + LEN([Constraint3]@row) + 1)), (FIND(",", Constraints@row, IF(LEN([Constraint3]@row) <> 0, (FIND([Constraint3]@row, Constraints@row) + LEN([Constraint3]@row) + 1))) - (IF(LEN([Constraint3]@row) <> 0, FIND([Constraint3]@row, Constraints@row) + LEN([Constraint3]@row) + 1)))), "")
For additional constraints, the formula for constraints 3 & 4 can be copied and modified. Definitely not a pretty solution, but it is working very effectively.
Hope this is helpful to others, have a great day!
| {"url":"https://community.smartsheet.com/discussion/95972/is-it-possible-to-evaluate-multiple-predecessors-and-match-to-a-cell-value-in-another-column","timestamp":"2024-11-06T22:09:53Z","content_type":"text/html","content_length":"413651","record_id":"<urn:uuid:c0b2f3da-784c-4f3c-bfae-f23d43b2fbae>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00453.warc.gz"}
Dissipation in a Finite Temperature Atomic Josephson Junction
integrated_thermal_denxz_T_0p58Tc.dat (4 MB)
integrated_densityxz_T0.dat (1 MB)
imbalance_vs_time_T0_z0_0p117.dat (512.7 kB)
imbalance_vs_time_T0_z0_0p048.dat (205.08 kB)
imbalance_vs_time_T0_z0_0p08.dat (512.59 kB)
imbalance_vs_time_Jos_T160nK.dat (968.07 kB)
imbalance_vs_time_Jos_T100nK.dat (205.08 kB)
imbalance_vs_time_Jos_T40nK.dat (1.05 MB)
cond_thermal_tot_number_T160nK.dat (608.55 kB)
cond_thermal_tot_number_T100nK.dat (128.95 kB)
cond_thermal_tot_number_T40nK.dat (677.7 kB)
imbalance_vs_time_T100nK_diss.dat (1.44 MB)
imbalance_vs_time_T40nK_diss.dat (1.05 MB)
conz1_t0000100000_T0.dat (76 kB)
conz1_t0000099000_T0.dat (76 kB)
conz1_t0000098000_T0.dat (76 kB)
conz1_t0000097000_T0.dat (76 kB)
conz1_t0000096000_T0.dat (76 kB)
conz1_t0000095000_T0.dat (76 kB)
conz1_t0000094000_T0.dat (76 kB)
The description of the files containing the data for the paper "Dissipation in a finite-temperature atomic Josephson junction" can be found here. In this paper, we numerically demonstrate and characterize the emergence of distinct dynamical regimes of a finite-temperature bosonic superfluid in an elongated Josephson junction generated by a thin Gaussian barrier, over the entire temperature range where a well-formed condensate can be clearly identified.
The files 'imbalance_vs_time_T0_z0_...dat' are the data for fig1(d) and fig10(a)-(ii). Their first column is the time in arbitrary units, which must be multiplied by 1/(2*pi*nu_x) to obtain the time in seconds. The third column is the number of condensate particles in the left well, N_L, from which the condensate population imbalance z_BEC(t)=1-2*N_L/N_BEC can be obtained.
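As a sketch (ours, not part of the deposited files) of how such a file might be read, assuming a plain-text columnar layout; the trap frequency nu_x and the total condensate number N_BEC are placeholders to be replaced with the actual values:

import numpy as np

nu_x = 15.0    # Hz, placeholder trap frequency
N_BEC = 1.0e5  # placeholder total condensate number (see the cond_thermal files)

data = np.loadtxt("imbalance_vs_time_T0_z0_0p117.dat")
t_seconds = data[:, 0] / (2 * np.pi * nu_x)  # first column: arbitrary units -> s
N_L = data[:, 2]                             # third column: left-well condensate number
z_BEC = 1 - 2 * N_L / N_BEC                  # as defined in the description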
The file 'cond_frac..' contains the data for fig1(c), where the first column is the temperature in nK, the second column is the condensate fraction, and the third is the temperature scaled to the critical value.
The files 'integrated_densityxz_T...dat' are binary files; the first column is the x-axis in units of the harmonic oscillator length along the x-axis, l_x, and the fourth column is the equilibrium density along the x-axis in arbitrary units. These are the data for fig1(a)-(b).
The files 'cond_thermal_tot_number_..dat' contain the time in arbitrary units as the first column (to be multiplied by 1.e-4/omega_x to convert it to seconds) and the numbers of condensate, thermal, and total particles as the second, third, and fourth columns, respectively. The files 'imbalance_vs_time_Jos_...dat' contain the time in arbitrary units (which must be multiplied by 1/(2*pi*nu_x) to obtain the time in seconds); the third column is the number of condensate particles in the left well, N_L, from which the condensate population imbalance z_BEC(t)=1-2*N_L/N_BEC can be obtained, and the fourth column is the number of thermal particles in the left well. These data are for fig2 and fig5.
The files 'imbalance_vs_time_diss_...dat' contain data used for fig6 and fig9, and their description is the same as that of the files 'imbalance_vs_time_Jos_...dat'.
The files 'conz1_...' have as their first column the x-axis in units of the harmonic oscillator length along the x-axis; the fourth column is the density along the x-axis in arbitrary units, and the number in the file name, when multiplied by 1.e-4/omega_x, gives the time in seconds. These data are used for creating the carpet plots in fig10.
The files 'imbalance_vs_time_..._fig11' have the same description as the files 'imbalance_vs_time_Jos_...dat'. The files 'cond_therm..fig11' have the same description as the files 'cond_thermal_tot_number_..dat'. These data are used for fig11.
The files 'Nbec_Nth_Ntot_T_..dat' have the same description as the files 'cond_frac...dat', and the files 'imbalance_vs_time_T...dat' have the same description as the files 'imbalance_vs_time_Jos_..dat'. From these files, the condensate, thermal, and total population imbalances are obtained at different temperatures; fitting them gives the results of Fig3, Fig4, Fig7 and Fig8.
the thermal cloud density along x-axis. The files 'imbalance_vs_time_..._fixed_Ntot..dat' have the same description as the files 'imbalance_vs_time..dat'. These data are used for building fig11 and | {"url":"https://data.ncl.ac.uk/articles/dataset/Dissipation_in_a_Finite_Temperature_Atomic_Josephson_Junction/20438823/1","timestamp":"2024-11-05T11:10:09Z","content_type":"text/html","content_length":"274298","record_id":"<urn:uuid:1337e624-7a12-46b2-a24b-2a4d165a1790>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00257.warc.gz"} |
numpy.random.multinomial(n, pvals, size=None)
Draw samples from a multinomial distribution.
The multinomial distribution is a multivariate generalisation of the binomial distribution. Take an experiment with one of p possible outcomes. An example of such an experiment is throwing a
dice, where the outcome can be 1 through 6. Each sample drawn from the distribution represents n such experiments. Its values, X_i = [X_0, X_1, ..., X_p], represent the number of times the
outcome was i.
Parameters:

n : int
    Number of experiments.
pvals : sequence of floats, length p
    Probabilities of each of the p different outcomes. These should sum to 1 (however, the last element is always assumed to account for the remaining probability, as long as sum(pvals[:-1]) <= 1).
size : int or tuple of ints, optional
    Output shape. If the given shape is, e.g., (m, n, k), then m * n * k samples are drawn. Default is None, in which case a single value is returned.

Returns:

out : ndarray
    The drawn samples, of shape size, if that was provided. If not, the shape is (N,).
    In other words, each entry out[i,j,...,:] is an N-dimensional value drawn from the distribution.
Throw a dice 20 times:
>>> np.random.multinomial(20, [1/6.]*6, size=1)
array([[4, 1, 7, 5, 2, 1]])
It landed 4 times on 1, once on 2, etc.
Now, throw the dice 20 times, and 20 times again:
>>> np.random.multinomial(20, [1/6.]*6, size=2)
array([[3, 4, 3, 3, 4, 3],
[2, 4, 3, 4, 0, 7]])
For the first run, we threw 3 times 1, 4 times 2, etc. For the second, we threw 2 times 1, 4 times 2, etc.
A loaded die is more likely to land on number 6:
>>> np.random.multinomial(100, [1/7.]*5 + [2/7.])
array([11, 16, 14, 17, 16, 26])
The probability inputs should be normalized. As an implementation detail, the value of the last entry is ignored and assumed to take up any leftover probability mass, but this should not be
relied on. A biased coin which has twice as much weight on one side as on the other should be sampled like so:
>>> np.random.multinomial(100, [1.0 / 3, 2.0 / 3]) # RIGHT
array([38, 62])
not like:
>>> np.random.multinomial(100, [1.0, 2.0]) # WRONG
array([100, 0]) | {"url":"https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.random.multinomial.html","timestamp":"2024-11-11T20:55:19Z","content_type":"text/html","content_length":"13101","record_id":"<urn:uuid:72c8d1e5-99c4-4269-a14d-cb0ad9317b76>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00888.warc.gz"} |
As a big proponent of Tonsky’s Performance first philosophy, I believe every software engineer should pay attention to performance from the start. If used in the right places, bitwise operators can
increase the speed of your programs by several orders of magnitude — without compromising on readability.
However, unlike some performance low-hanging fruit found in complex frameworks, code that works at the bit level should be introduced early on in the project. Serialization protocols are mostly set
in stone from day 1, as updating them on a production environment normally requires downtime to perform the necessary migrations.
It’s good to practice fundamentals from time to time. You never know when these will come in handy. Sure, if you’re a frontend UI engineer, the & (different from &&), ^ or ~ operators may be outright
new to you. This is not to say that they are useless or “too primitive”: the Apollo Guidance Computer was entirely built from NOR gates.
Armstrong training in the lunar module simulator at Kennedy Space Center (Source)
To explore some use cases of these operators, we’re going to design a program to parse Minecraft’s Chunk Data packet. Videogames require various data structures to optimize the amount of bytes sent
over the wire to reduce latency. In particular, this packet contains the game’s block data serialized as a compacted array of values. This data structure must be designed with performance in mind
since a Minecraft server can send up to 120 packets per player per second (Alstad, T. et al. (2015). Minecraft Computer Game Performance Analysis and Network Traffic Emulation by a Custom Bot. Science and Information Conference (SAI), 227-236).
A compacted array holds a fixed number, length, of integers of bitsPerValue bits each. It has the following API:
public class CompactedArray {

    public CompactedArray(final int length, final int bitsPerValue) {
        // ...
    }

    public int get(final int index) {
        // ...
    }

    public void set(final int index, final int value) {
        // ...
    }
}
The main difference between CompactedArray and a regular int[] is that it supports words of any size less than or equal to 32. Indeed, if bitsPerValue were 8, 16, 32 or 64 the primitive array
counterpart would outperform the CompactedArray implementation. Method invocation and object memory overhead come to mind. In fact, Minecraft used to store block IDs in a byte array, but they quickly
ran out of IDs for new blocks and overhauled the game’s storage and protocol format.
It seems natural that CompactedArray should be backed by a primitive array given their similarities: they both hold a fixed number of values of a certain bit length. But what type should we employ?
In the worst case scenario (bitsPerValue is 32), retrieving a CompactedArray value from a byte array requires 4 loads. Assuming the values are in the processor’s L1 cache, this should only take 4
cycles (Intel Optimization Reference Manual). However, we need to take into account the JVM bounds checking. Minimizing memory accesses yields better performance. A short array might still need 2
loads if bitsPerValue is greater than 16. See the pattern? As the backing array element size increases, the number of required load instructions decreases. This means a long array is the best
public class CompactedArray {

    private final long[] data;
    private final int bitsPerValue;

    public CompactedArray(final int length, final int bitsPerValue) {
        this.bitsPerValue = bitsPerValue;
        // TODO Initialize data array
    }

    // ...
}
A new question arises: how big should the data array be? For brevity, let $l$ and $b$ be length and bitsPerValue respectively. The problem is equivalent to finding the least multiple of 64 which can
store $lb$ bits. That is,
$\text{data array size} = \left\lceil \frac{lb}{64} \right\rceil,$
where $\left\lceil \cdot \right\rceil$ denotes the ceiling function. For example, if we wanted to store 32 values of 5 bits each, the data array length would be $\left\lceil \frac{32 \times 5}{64} \
right\rceil = 3$.
public class CompactedArray {

    public CompactedArray(final int length, final int bitsPerValue) {
        // Using Math.ceil is suboptimal, see the appendix
        this.data = new long[(int) Math.ceil(length * bitsPerValue / 64D)];
        // ...
    }

    // ...
}
A CompactedArray can now be constructed, but it’s pretty useless in its current form. Let’s first work on value retrieval. The choice of a long array as the backing structure means a value can be
found within a single long; or overlapping between 2 longs, where the least significant bits (LSBs) can be found in the first long, and the remaining ones at the beginning of the second. More
formally, given a value starting at position $0 \leq s \lt 64$ of a long, the number of bits of that value stored in the long is $1 \leq \min\{b, 64 - s\} \leq b$, meaning $\max\{b - (64 - s), 0\}$
bits are stored in the next long. As an exercise, find how many bytes a $b$-bit value can overlap.
Given a CompactedArray element at index $i$, we know its first bit is stored in position $bi$, that lies within the $\left\lfloor \frac{bi}{64} \right\rfloor$-th long. The notation is making this
seem worse than it is. In code,
public int get(final int index) {
    int bitOffset = index * bitsPerValue;
    // Integer division truncates the result
    int offset = bitOffset / 64;
    // ...
}
The right-shift operator
It’s time to introduce the first bitwise operator: the right shift (>>). Imagine we want to compute $a / b$. If $b$ can be expressed as a power of 2, i.e. $b = 2^x$ for some $x$, right shifting $a$
by $x$, a >> x, is equivalent to dividing $a$ by $b$.
In Algebra class you probably learnt that dividing exponentials with the same base is equivalent to subtracting their exponents. Notice that every bit of an int represents a power of 2. For example,
letting bitOffset be 197, offset is
$\begin{aligned} \left\lfloor \frac{197}{64} \right\rfloor = \left\lfloor \frac{2^7 + 2^6 + 2^2 + 2^0}{2^6} \right\rfloor &= \left\lfloor 2^1 + 2^0 + 2^{-4} + 2^{-6} \right\rfloor \\ &= 2^1 + 2^0 = 3. \end{aligned}$
As 64 is equal to $2^6$, bitOffset / 64 is the same as bitOffset >> 6. We’re not interested in the fractional part (the negative powers of 2), so it is discarded.
The bitwise AND operator
Having computed the data array index where the LSBs of the value can be found, it’s time to figure out which bits we need to extract from it. The starting bit position within the long is $s = bi \
bmod 64$. In code,
public int get(final int index) {
    int bitOffset = index * bitsPerValue;
    int offset = bitOffset >> 6;
    int startBit = bitOffset % 64;
    // ...
}
This works, but a modulo operation involves a division, and divisions are much more expensive than bitwise operations (see "A fast alternative to the modulo reduction").
We introduce the AND (&) operator. Given two binary numbers $a, b$, we want to obtain what 1-bits they have in common. For each power of 2, this operator compares the bits from each argument: if both
are 1, the resultant bit is 1. Otherwise, it is 0. For instance, $1001 \And 0101 = 0001$.
Representing bitOffset in binary, i.e. as $\sum_{j=0}^{31} b_j 2^j$ where $b_j$ is the $j$-th bit of bitOffset, one can factorize and represent this value as $2^6(b_{31} 2^{31-6} + \dots + b_6 2^0) +
b_5 2^5 + \dots + b_0 2^0$. Reducing this value modulo $64 = 2^6$ results in the non-factorized part $b_5 2^5 + \dots + b_0 2^0$. This is the same as extracting the 6 first bits using bitOffset &
0b111111, which is much faster. This technique is known as bit masking.
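As a quick sanity check (our own snippet, not from the post):

int bitOffset = 197;                              // 0b11000101
assert (bitOffset & 0b111111) == bitOffset % 64;  // both yield 0b101 = 5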
The problem is now divided in two: if the value fits in the long (i.e. $b \leq 64 - (bi \bmod 64)$) we need to extract $b$ bits by applying a bit mask. However, if the value overlaps between two longs, we need to join its least and most significant bits (MSBs).
The first case can be expressed as a “power-of-2 filter”. The value is equal to the sum of all bits that lie between $s$ and $s + b$. That is,
$\text{value} = \sum_{j=s}^{\mathclap{s + b - 1}} b_j 2^{j - s}.$
The inequality $s \leq j \lt s + b$ is equivalent to $0 \leq j - s \lt b$. Applying this shift by $s$ in code is expressed as data[offset] >>> startBit (the extra > tells Java to also shift the sign
bit; JLS, Section 15.19), moving bits in positions $[s, s + b)$ to $[0, b)$. However, this right shift is insufficient, as it also shifts all bits in positions greater than or equal to $s + b$. To
ignore these, we apply a bitmask that only leaves the first $b$ bits:
public class CompactedArray {

    // ...
    private final long bitmask;

    public CompactedArray(final int length, final int bitsPerValue) {
        // ...
        // Left shifting 0b1 by bitsPerValue results in 2^bitsPerValue.
        // If bitsPerValue were 5, 2^5 - 1 = 0b100000 - 0b1 = 0b011111,
        // the bitmask we're interested in.
        this.bitmask = (1L << bitsPerValue) - 1;
    }

    public int get(final int index) {
        int bitOffset = index * bitsPerValue;
        int offset = bitOffset >> 6;
        int startBit = bitOffset & 0b111111;
        long lsbs = (data[offset] >>> startBit) & bitmask;
        // ...
    }
}
The left-shift operator
What about elements contained within two longs? Since the LSBs of the value are stored in the upper-most bits $s, \dots, 63$ of the first long, Java fills the MSBs of data[offset] >>> startBit with
zeros after shifting. The first long contains $64 - s$ bits of the final value, meaning the next long contains the remaining $b - (64 - s)$ bits. We need to somehow join the LSBs from the first long
(that we already have) with the MSBs stored in the LSBs of the second long, data[offset + 1].
Directly adding these bits would yield an incorrect result since the MSBs are misplaced: when bitsPerValue is 18, the value at index 3 is stored in bits 54 through 63 of the first long and bits 0
through 7 of the second long. If we shift the latter $64 - 54 = 10$ positions to the left, both values fit like puzzle pieces.
More rigorously, the positions we have to shift the MSBs by is exactly the number of bits we have from the first long, $64 - s$.
In contrast to its sibling, left shifting $a$ by $b$, a << b, is equivalent to multiplying $a$ by $2^b$. To sum up, given the bits $b_0, \dots, b_{b - (64 - s) - 1}$ from the second long,
$\text{value} = \text{LSBs} + \sum_{j=0}^{b - (64 - s) - 1} 2^{j + 64 - s} b_j,$
which in code is
int shiftBy = 64 - startBit;
// Apply the bitmask to zero out the upper bits we're not interested in
long msbs = (data[offset + 1] << shiftBy) & bitmask;
The bitwise OR operator
Given the binary numbers $a, b$; the OR (|) operator tells us what bits of $a$ or $b$ are 1. For each position, the operator compares the bits from each argument: if either or both of them are 1, the
resultant bit is 1. Otherwise, it is 0. For example, $1100 \mid 1010 = 1110$.
Since the LSBs and MSBs of value don’t overlap, all the bits to the left of the LSBs are zero, and all the bits to the right of the MSBs are zero, $\text{MSBs} \mid \text{LSBs}$ yields the value
we’re looking for.
public int get(final int index) {
    // ...
    return (int) (msbs | lsbs);
}
Given an index, all that is left is to check which case we are in: does the value fit in a single long, or does it overlap? This is easy: we only need to check whether $s + b$, the offset one past the last bit, is greater than 64. An alternative is comparing the longword offsets of the initial and last bits:
int lastBitOffset = (bitOffset + bitsPerValue - 1) >> 6;
if (offset != lastBitOffset) {
    // The starting bit and ending bit are in different longs
}
On my machine, this is 1.7x slower than the greater-than comparison (JMH benchmark). Generally, integer equality and inequality take the same number of CPU cycles to run (this SO answer explains it in more detail). The second method introduces a right shift, which takes an extra clock cycle and confuses the branch predictor. I challenge you to create a (faster) branchless version; I've tried and failed.
Putting it all together,
public int get(final int index) {
    int bitOffset = index * bitsPerValue;
    int offset = bitOffset >> 6;
    int startBit = bitOffset & 0b111111;
    // Get least significant bits
    long value = (data[offset] >>> startBit) & bitmask;
    if (startBit + bitsPerValue > 64) {
        int shiftBy = 64 - startBit;
        // OR shifted most significant bits
        value |= (data[offset + 1] << shiftBy) & bitmask;
    }
    return (int) value;
}
The set method involves the same offsets. Computing the updated data[offset] and data[offset + 1] values is a tad more tricky though. Given the previous data, this method shall replace the relevant
bits with the new integer value of bitsPerValue bits. Its LSBs will go in positions $s$ to $\min \{ s + b - 1, 63 \}$ of the updated data[offset].
We already know how to align these bits. Namely, we apply the bitmask (to ensure the given value has exactly bitsPerValue bits; never trust the caller!) using the AND operator and left shift the LSBs by startBit positions. The resultant value is of the form $0 \dots 0 b_{b - 1} \dots b_0 0 \dots 0$. However, a new problem arises: how do we update these bits while leaving the others untouched?
The bitwise complement operator
By zeroing the bits of data[offset] corresponding to the given index, we get two non-overlapping longs we can OR to get the new value. For this, we construct a bitmask such that, when applied to the
old value (by using the AND operator), only the bits we don’t want to update remain. If we left shift bitmask by startBit positions we get a long of the form $0 \dots 0 \underbrace{1 \dots 1}_{b \
text{ times}} 0 \dots 0$. This is the opposite of what we want.
The bitwise complement (~) operator inverts the bits of the operand. This is a unary operator: it only takes one operand. It makes every 0 bit a 1 bit and every 1 bit a 0 bit, e.g. ${\backsim 1011} = 0100$.
We apply the inverted mask and OR the result with the shifted LSBs to get the new value.
public void set(final int index, final int value) {
    int bitOffset = index * bitsPerValue;
    int offset = bitOffset >> 6;
    int startBit = bitOffset & 0b111111;
    long maskedValue = value & bitmask; // Ensure value has bitsPerValue bits
    data[offset] =
        // The previous long bits with b zero bits in positions s to min(s + b - 1, 63)
        (data[offset] & ~(bitmask << startBit))
        // The new value shifted to replace the b zeros
        | (maskedValue << startBit);
}
Similarly, the mask for updating the second long must clear exactly the $b - (64 - s)$ low bits that hold this value's MSBs, i.e., it is of the form $1 \dots 1 \underbrace{0 \dots 0}_{(b - (64 - s)) \text{ times}}.$ We construct a bitmask and invert it as in the previous example; conveniently, bitmask >> shiftBy has exactly that many ones. Note that clearing the first $64 - s$ bits instead (e.g., by right shifting and then left shifting by $64 - s$) would be wrong: whenever $64 - s > b - (64 - s)$ it would also wipe low bits belonging to the neighboring value.
We have assembled the complete set method:
public void set(final int index, final int value) {
    int bitOffset = index * bitsPerValue;
    int offset = bitOffset >> 6;
    int startBit = bitOffset & 0b111111;
    long maskedValue = value & bitmask;
    data[offset] = (data[offset] & ~(bitmask << startBit)) | (maskedValue << startBit);
    if (startBit + bitsPerValue > 64) {
        int shiftBy = 64 - startBit;
        // Clear only the bitsPerValue - shiftBy low bits that hold this value's MSBs
        data[offset + 1] = (data[offset + 1] & ~(bitmask >> shiftBy)) | (maskedValue >> shiftBy);
    }
}
For better readability, one should replace 6 and 0b111111 by named constants. The documented class (with bounds checks and additional methods) and associated tests are available in this gist.
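As a quick round-trip sketch (ours, not from the post or the gist):

// 32 values of 5 bits each fit in ceil(32 * 5 / 64) = 3 longs; the value at
// index 12 starts at bit 60 of data[0], so it overlaps into data[1].
CompactedArray packed = new CompactedArray(32, 5);
packed.set(12, 19);           // 19 = 0b10011, stored across two longs
assert packed.get(12) == 19;
packed.set(12, 42);           // 42 = 0b101010 is masked to 5 bits: 0b01010 = 10
assert packed.get(12) == 10;  // the bitmask silently truncates oversized values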
Appendix: Fast ceiling division by a power of 2
Let $a, k$ be non-negative integers, and let $b = 2^k$. We shall compute $\left\lceil \frac{a}{b} \right\rceil$. Note that this is different from $\left\lfloor \frac{a}{b} \right\rfloor + 1$, which
can be implemented using integer division. For $a = b$, the former yields 1 while the latter gives 2. In fact, the result is off by 1 iff $b$ divides $a$. We construct a bitmask to obtain the
remainder of the division:
int b = 1 << k; // 0b100...00
int mask = b - 1; // 0b011...11
int remainder = a & mask;
If $b$ divides $a$, the remainder is 0 and the expected result is $a / b$. Otherwise, the result is $\left\lfloor \frac{a}{b} \right\rfloor + 1$. Recall that dividing by $b$ is equivalent to right
shifting by $k$ to express this in code as
int result = (a >>> k) + (remainder > 0 ? 1 : 0);
The second term won’t compile to a branch instruction on most modern JVMs and architectures. | {"url":"https://hgsg.me/posts/thinking-in-bits/","timestamp":"2024-11-03T09:34:09Z","content_type":"text/html","content_length":"141269","record_id":"<urn:uuid:101453a9-d35f-4a27-bc4b-31d36e3193ec>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00794.warc.gz"} |
Iterative Algorithms for Determining Optimal Solution Set of Interval Linear Fractional Programming Problem
Determining the optimal solution (OS) set of interval linear fractional programming (ILFP) models is generally an NP-hard problem. The few methods proposed in this field have only been able to obtain the optimal value of the objective function. Thus, there is a need for an appropriate method to determine the OS set of the ILFP model. In this paper, we introduce three algorithms to obtain the OS of ILFP. In the first and second algorithms, using the definition of strong and weak feasible solutions, the objective function of ILFP is transformed into a linear objective function on the largest feasible region (LFR), and we obtain the OS of ILFP. These two algorithms introduce only one point as the feasible OS. Since ILFP is an interval model, we seek an algorithm where, for the first time, a solution set is obtained as the OS set by solving two sub-models. Hence, we transform the ILFP model into two sub-models, a pessimistic one on the smallest feasible region (SFR) and an optimistic one on the LFR. We add constraints to the optimistic model to ensure that the OS set is feasible. Then, we introduce the pessimistic and modified optimistic model (PMOM) algorithm. In this algorithm, each of the two models is solved separately. The OSs obtained from these two models give the OS set, and this OS set is feasible. Note that the union of the feasible OSs obtained from the proposed algorithms will be a more complete feasible OS set.
| {"url":"http://journal.pmf.ni.ac.rs/filomat/index.php/filomat/article/view/12896","timestamp":"2024-11-02T14:38:53Z","content_type":"application/xhtml+xml","content_length":"17809","record_id":"<urn:uuid:e51dfc2f-e4e3-457e-9f22-7d256ec170cf>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00204.warc.gz"}
Open and Closed Set Differences in Metric Spaces
Suppose that $(M, d)$ is a metric space and that $A, B \subseteq M$. Suppose that we know that $A$ is an open subset and $B$ is a closed subset. What can we say about the differences $A \setminus B$
and $B \setminus A$? Are they necessarily open? Are they necessarily closed? The theorems below will tell us that $A \setminus B$ is always open and $B \setminus A$ is always closed. In the theorems
below, we use the important fact that $A \setminus B = A \cap (M \setminus B)$ and $B \setminus A = B \cap (M \setminus A)$:
Theorem 1: Let $(M, d)$ be a metric space and let $A, B \subseteq M$. If $A$ is an open subset and $B$ is a closed subset then $A \setminus B$ is an open subset.
• Proof: Let $A, B \subseteq M$ and let $A$ be an open subset and let $B$ be a closed subset. Then $B^c = M \setminus B$ is an open subset and
\quad A \setminus B = A \cap (M \setminus B)
• Then $A \setminus B$ is the intersection of two open sets. A finite intersection of open sets is open, so $A \setminus B$ is open. $\blacksquare$
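For a concrete example (ours, not part of the original page), consider $\mathbb{R}$ with the usual metric, $A = (0, 2)$ (open), and $B = [1, 3]$ (closed). Then $A \setminus B = (0, 1)$ is open, and, anticipating Theorem 2 below, $B \setminus A = [2, 3]$ is closed.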
Theorem 2: Let $(M, d)$ be a metric space and let $A, B \subseteq M$. If $A$ is an open subset and $B$ is a closed subset then $B \setminus A$ is a closed subset.
• Proof: Let $A, B \subseteq M$ and let $A$ be an open subset and let $B$ be a closed subset. Then $A^c = M \setminus A$ is a closed subset and:
\quad B \setminus A = B \cap (M \setminus A)
• Then $B \setminus A$ is the intersection of two closed sets. Any intersection of closed sets is closed (in particular, a finite intersection of closed sets is closed), so $B \setminus A$ is closed. $\blacksquare$ | {"url":"http://mathonline.wikidot.com/open-and-closed-set-differences-in-metric-spaces","timestamp":"2024-11-09T06:11:14Z","content_type":"application/xhtml+xml","content_length":"16145","record_id":"<urn:uuid:da1285c6-08ab-4e82-bab6-5be63903bf8b>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00853.warc.gz"}
Degradation of Anti-Nutritional Factors in Maize Gluten Feed by Fermentation with Bacillus subtilis: A Focused Study on Optimizing Fermentation Conditions
School of Food and Pharmacy, Zhejiang Ocean University, Zhoushan 316022, China
College of Food Science and Nutritional Engineering, China Agricultural University, Beijing 100083, China
Author to whom correspondence should be addressed.
Submission received: 19 September 2024 / Revised: 11 October 2024 / Accepted: 27 October 2024 / Published: 31 October 2024
Maize gluten feed is rich in micronutrients and serves as a good source of protein and dietary fiber, but also contains anti-nutritional factors. In this study, fermentation conditions for the
degradation of phytic acid and water-unextractable arabinoxylans in maize gluten feed using Bacillus subtilis were optimized. Key variables influencing the fermentation process were identified from
seven potential parameters using the Plackett–Burman design. Three statistically significant factors, i.e., fermentation time, inoculum dose, and material-to-liquid ratio were further optimized
through a central composite design and the efficiency of fermentation conditions was predicted. The accuracy of the predicted model was validated by subsequent experimentation. The optimum
fermentation conditions were determined to be a fermentation time of 84.5 h, inoculum dose of 17.1%, and material-to-liquid ratio of 1:3.4. Under these conditions, 48% of phytic acid and 32%
water-unextractable arabinoxylans were degraded. Following fermentation, the activities of protease, xylanase, phytase, and cellulase in maize gluten feed were significantly increased (p < 0.001),
contributing to the breakdown of phytic acid and water-unextractable arabinoxylans, which improved the protein dispersibility index, in vitro protein digestibility, and mineral bioavailability. These
findings suggest that fermenting maize gluten feed with Bacillus subtilis is a practical and effective approach to reducing anti-nutrients and enhancing its nutritional quality.
1. Introduction
Maize (Zea mays), a monocotyledon of the Gramineae family, is one of the major cereal grains cultivated all over the world. Maize gluten feed (MGF), a by-product of the wet-milling process used for starch (or ethanol) production, is primarily composed of germ meal, bran, and dried steep liquor []. MGF contains 20–28% protein, and protein from maize germ offers a favorable balance of the essential amino acids and high biological value []. In addition to its protein content, MGF is rich in dietary fiber, which has entered the limelight as a potential high-dietary-fiber food ingredient. It also contains significant levels of carotenoids, polyphenols, and other bioactive compounds []. However, the utilization of MGF in food products is limited due to the existence of anti-nutrients and mycotoxins and its adverse effects on textures and flavors. Our present study primarily focuses on addressing the anti-nutritional substances in MGF.
Phytic acid (PA) and arabinoxylans are two of the primary anti-nutritional substances in MGF. Normally, PA predominantly exists in cereal grains in the form of phytate, where it forms covalent bonds with mineral cations, such as calcium, iron, and zinc. This binding reduces the bioavailability of these essential minerals and decreases the digestibility of protein and starch in the digestive tract []. Approximately 80% of the PA in maize is concentrated in the germ [], which is transferred into MGF during germ processing. Previous studies have shown that MGF contains 11.4 ± 0.2 mg/g of phosphorus, significantly higher than the 2.6 ± 0.2 mg/g found in maize, with most of the phosphorus present as phytate []. In addition to PA, MGF is also rich in non-starch polysaccharides, particularly arabinoxylans []. Studies on wheat by-products have demonstrated that arabinoxylans can reduce nutrient digestibility []. Arabinoxylans consist of a linear backbone of β-(1–4)-linked D-xylopyranosyl units with α-L-arabinofuranose side chains []. The majority of arabinoxylans in MGF are water-unextractable arabinoxylans (WU-AXs), which are more resistant to digestion []. The anti-nutritional effects of WU-AXs are attributed to their binding within cell walls by covalent or non-covalent interactions with other nutrients, making them inaccessible to the digestive system [].
Anti-nutritional substances in cereals and their co-products can be decreased through various processing techniques, including physical, chemical, and biological techniques []. Among these, fermentation, one of the oldest and most efficient techniques, is widely used in food production to improve nutritional quality by leveraging microbial activity. Bacillus subtilis, a Gram-positive aerobic bacterium, is categorized as Generally Recognized as Safe (GRAS) by the FDA []. It is commonly used in diverse Asian traditional fermented foods owing to its ability to produce considerable amounts of dissimilar enzymes such as proteases, phytase, and cellulase []. Previous studies have shown that exogenous xylanases can break down long WU-AX backbones into smaller fragments, enhancing nutrient absorption and generating arabinoxylooligosaccharides with prebiotic effects []. Studies have also reported that Bacillus species could reduce the level of anti-nutritional factors and enhance the nutritional value of soybean products []. While there has been considerable research into the use of exogenous enzymes to reduce anti-nutritional factors, fewer studies have explored the full potential of microbial fermentation, particularly using Bacillus subtilis, for this purpose in MGF. Our study is novel in that it focuses on optimizing the fermentation process to simultaneously target the degradation of both PA and WU-AX in MGF, offering a more integrated and sustainable approach compared to chemical or enzyme-only treatments.
Aiming at decreasing anti-nutrients, the fermentation of MGF with Bacillus subtilis was carried out to establish optimal fermentation parameters for the simultaneous degradation of PA and WU-AX. In this study, Plackett–Burman design (PBD) and central composite design (CCD) were employed to optimize the fermentation conditions. Plackett–Burman design efficiently screens and identifies key factors, while central composite design, a response-surface methodology (RSM), is one of the most effective tools for process optimization. The combination of Plackett–Burman design and central composite design has been reported in many process optimizations []. This work proposes a relatively simple processing technology to decrease anti-nutrients in maize processing co-products, with potential applications in the food manufacturing industry.
2. Materials and Methods
2.1. Sample Preparation
Maize gluten feed (MGF) was obtained from Cargill Biochemical Co., Ltd. (Songyuan, China), containing 27.13% protein (N × 6.25), 11% moisture, 9% ash, and 2% fat. The plant material was milled using
a universal high-speed crusher (150 T, Yongkang Boou Hardware Products Co., Ltd., Jinhua, China) and passed through sieves with mesh sizes of 18, 40, 80, 140, and 200. As a result, MGF samples with
mean particle sizes of 585, 189, 123, 30, and 14 μm, respectively, were obtained.
Bacillus subtilis (CICC 24602) was obtained from the China Center of Industrial Culture Collection (CICC, Beijing, China). Bacillus subtilis (CICC 24602), isolated from Baijiu Daqu, has been shown to
secrete various enzymes, including saccharifying enzyme, protease, amylase, cellulase, and phytase. The freeze-dried powder was cultured on a medium (peptone 5.0 g, beef extract 3.0 g, NaCl 5.0 g,
agar 15.0 g, distilled water 1.0 L, pH 7.0) for activation and propagation, following the method provided by the CICC. A seed culture was prepared by inoculating a single loop of Bacillus subtilis
into 50 mL of sterile medium (peptone 5.0 g, beef extract 3.0 g, and NaCl 5.0 g, distilled water 1.0 L, pH 7.0) in a 250 mL flask and incubating at 37 °C for 8 h with shaking at 150 rpm (shaker
incubator THZ-98c, Shanghai Bluepard Experimental Instrument Co., Ltd., Shanghai, China). The resulting seed culture, containing approximately 10^8 cfu/mL, was used for MGF fermentation.
2.2. Fermentation of Maize Gluten Feed
Weighed MGF samples were placed into 380 mL glass containers (7 × 7 × 8 cm^3, with a circular opening diameter of 6 cm), covered with breathable sealing film to allow air exchange, and then
autoclaved at 121 °C for 20 min (Panasonic MLS-3751L-PC, Kadoma, Japan). After cooling to room temperature, the samples were inoculated with the seed culture, supplemented with a specific volume of
sterile water, and thoroughly mixed under sterile conditions in an ultra-clean workbench. The containers were then incubated in a biochemical incubator (LHS-HC-Ι, Shanghai Bluepard Experimental
Instrument Co., Ltd., Shanghai, China). After a designated fermentation period, the fermented MGF was vacuum freeze-dried, followed by milling and sieving through an 80-mesh screen.
2.3. Determination of Phytic Acid
The PA content was determined according to the method described by Buddrick et al. []. Briefly, PA was extracted with 0.2 mol/L hydrochloric acid and precipitated with a ferric chloride solution of known iron concentration. The decrease in iron in the supernatant is taken as a measure of phytic acid content. Absorbance was recorded at 519 nm, and the method was calibrated using reference solutions prepared by diluting a stock solution with 0.2 mol/L HCl, yielding PA concentrations ranging from 0.13 to 1.3 mg/mL.
2.4. Determination of Water-Unextractable Arabinoxylan
The determination of arabinoxylans was carried out following the methods described by Douglas [] and Rouau and Surget [], with some modification. The concentration of WU-AX was calculated by subtracting the water-extractable arabinoxylan (WE-AX) content from the concentration of total AX. Briefly, WE-AX was extracted by dispersing the sample in distilled water (10%) and shaking at 4 °C (cold extraction). For total AX extraction, the sample was treated with 1 mol/L sulfuric acid (10%) and boiled for 2.5 h. After cooling to room temperature, the sample was neutralized with 2 mol/L sodium carbonate. The arabinoxylans content in both extracts was quantified using the phloroglucinol colorimetric assay, following the method described by Hernández-Espinosa et al. [], with some modification based on the original protocol proposed by Rouau and Surget [].
2.5. Optimization Experimental Design
Fermentation conditions were optimized using PA and WU-AX content as indicators. The optimization process involved three steps: (1) a single-factor test to establish the appropriate range for each
factor, (2) the Plackett–Burman design to identify key factors influencing the fermentation process, and (3) further optimization of key variables using a central composite design.
2.5.1. Single-Factor Test
Seven factors were considered in the single-factor experiments, including fermentation time, temperature, initial pH, inoculum dose, particle size, substrate filling rate, and material-to-liquid
ratio. In each test, one factor was varied while the other six factors were kept constant at their baseline levels. The baseline conditions for the seven factors were as follows: fermentation time of
72 h, temperature of 37 °C, initial pH of 6.5, inoculum concentration of 10%, particle size of 123 μm, substrate filling rate of 3.43%, and a material-to-liquid ratio of 1:2. Six different
fermentation durations were set at 24, 48, 72, 96, 120, and 144 h. Six fermentation temperatures were tested at 25, 28, 31, 34, 37, and 40 °C. Seven initial pH levels were adjusted to 5.0, 5.5, 6.0,
6.5, 7.0, 7.5, and 8.0. Maize germ meal was milled for different durations using a high-speed grinder and then passed through sieves with mesh sizes of 18, 40, 80, 140, and 200, resulting in average
particle sizes of 585, 189, 123, 30, and 14 μm, respectively. Six different substrate filling rates were set at 4, 7, 10, 13, 16, and 19 g, corresponding to 1.05%, 1.84%, 2.63%, 3.42%, 4.21%, and
5.00% of the total volume of the fermentation vessel, respectively. Seven material-to-water ratios were tested: 1:1, 1:1.5, 1:2, 1:2.5, 1:3, 1:3.5, and 1:4. Additionally, six inoculation levels were
used: 5%, 10%, 15%, 20%, 25%, and 30%. Each treatment in the single-factor experiments was repeated three times, and the average value was reported as the experimental result.
2.5.2. Plackett–Burman Design
Plackett–Burman design was employed to assess the relative significance of the seven factors on the content of PA (Y1) and WU-AX (Y2), thereby identifying the key independent variables for further optimization. Based on the results of the single-factor tests (Supplementary Table S1), the seven factors were evaluated at two levels: low (−1) and high (+1). The design involved 12 experimental runs, each with a different combination of factor levels, along with a 13th run under baseline conditions. All experiments were performed in triplicate, and the mean PA and WU-AX contents in the fermented samples were recorded as the dependent variables (responses). A first-order polynomial model was applied to fit the Plackett–Burman design, assuming no interactions between variables, as shown in Formula (1).
$Y = \beta_0 + \sum_{i=1}^{7} \beta_i X_i$
where Y represents the predicted response, β0 denotes the intercept, βi corresponds to the linear regression coefficients, and Xi refers to the coded independent variables.
Factors with confidence levels exceeding 95% (p ≤ 0.05) were considered to have a statistically significant effect on the degradation of PA and WU-AX and were selected for further optimization.
2.5.3. Central Composite Design
In the central composite design, three factors selected from the Plackett–Burman design, namely fermentation time (h), inoculum dose (%), and the material-to-liquid ratio, denoted as X1, X2, and X3 respectively, were subjected to further factorial optimization. Each variable was tested at five coded levels (−α, −1, 0, +1, +α, with α = 2), while the remaining factors from the Plackett–Burman design were held at their optimal levels. A total of 19 experimental runs were designed, including 5 replicates at the central point, with all runs performed in triplicate. The degradation of anti-nutritional substances was analyzed using a second-order polynomial equation, and the data were fitted through a multiple regression procedure. The mathematical relationship between the response variables Y1 (PA content) and Y2 (WU-AX content) and the significant independent variables X1, X2, and X3 was expressed by the following quadratic polynomial equation (Formula (2)):
$Y = \beta_0 + \sum_{i=1}^{3} \beta_i X_i + \sum_{i=1}^{3} \beta_{ii} X_i^2 + \sum_{i=1}^{2} \sum_{j=i+1}^{3} \beta_{ij} X_i X_j$
where Y is the response, i.e., the content of PA (Y1) or the content of WU-AX (Y2); β0 is the constant coefficient, βi represents the linear coefficients, βii represents the quadratic coefficients, βij represents the interaction coefficients, and Xi and Xj are the coded values of the independent variables.
The fitted polynomial equation was expressed as a surface in order to visualize the relationship between the response and experimental levels of each factor and to deduce the optimum conditions. The
analysis of the experimental design and calculation of predicted data were carried out using Design Expert software Version 8.0.6.1 (Stat-Ease, Inc., Minneapolis, MN, USA) to estimate the response of
the independent variables. Subsequently, three additional validation experiments were conducted to verify the validity of the statistical experimental strategies.
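As a rough sketch (ours; the authors used Design Expert, and the response values below are dummies), the quadratic model of Formula (2) can be fitted by ordinary least squares on the 19-run design of this section (a 2^3 factorial, six axial points at α = 2, and five center replicates):

import itertools
import numpy as np

# 2^3 factorial points + 6 axial points at alpha = 2 + 5 center replicates = 19 runs
factorial = np.array(list(itertools.product((-1.0, 1.0), repeat=3)))
axial = np.vstack([2.0 * np.eye(3), -2.0 * np.eye(3)])
center = np.zeros((5, 3))
X = np.vstack([factorial, axial, center])  # coded levels of (X1, X2, X3)

rng = np.random.default_rng(0)
y = 9.0 - 0.4 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * X[:, 0] ** 2 + rng.normal(0.0, 0.1, len(X))  # dummy PA response

def design_matrix(X):
    x1, x2, x3 = X.T
    # intercept, linear, quadratic, and two-way interaction terms of Formula (2)
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x1, x2 * x2, x3 * x3,
                            x1 * x2, x1 * x3, x2 * x3])

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
print(beta)  # [b0, b1, b2, b3, b11, b22, b33, b12, b13, b23]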
2.6. Enzymatic Activity Analysis
2.6.1. Phytase Activity Assay
Phytase activity was determined based on the ammonium vanadate-molybdate method []. Phytase catalyzes the hydrolysis of phytate, generating orthophosphate and inositol derivatives. The released orthophosphate reacts with ammonium vanadate-molybdate under acidic conditions to form a yellow phosphomolybdic acid complex, which is quantified colorimetrically at 415 nm. One unit of phytase activity (U) was defined as the amount of enzyme required to release 1 μmol of inorganic phosphorus per minute from a 5 mmol/L sodium phytate substrate under the specified assay conditions.
2.6.2. Xylanase Activity Assay
Xylanase activity was quantified using the 3,5-dinitrosalicylic acid (DNS) assay for reducing sugars, following the method outlined by Dhaver et al. []. Xylanase catalyzes the degradation of xylan into reducing oligosaccharides and monosaccharides, which subsequently react with DNS under boiling conditions. The resulting colorimetric reaction produces an absorption peak at 540 nm. One unit of xylanase activity (U) was defined as the amount of enzyme required to release 1 μmol of reducing sugars per minute from a 5 mg/mL xylan solution under the specified assay conditions.
2.6.3. Cellulase Activity Assay
Cellulase activity was measured by quantifying the reducing sugars released during hydrolysis, using the DNS method []. Cellulase hydrolyzes filter paper strips (1 × 6 cm), producing reducing sugars such as cellobiose and glucose. These sugars react with DNS under alkaline conditions, forming a reddish-brown compound. One unit of cellulase activity (U) was defined as the amount of enzyme required to release 1 μmol of glucose per minute from the filter paper under the specified assay conditions.
2.6.4. Protease Activity Assay
Protease activity was determined using the Folin–Ciocalteu's phenol reagent method, following the method described by Wang et al. []. Briefly, an appropriately diluted enzyme sample was added to a casein solution, and the reaction was terminated by the addition of 10% trichloroacetic acid. After centrifugation, the absorbance of the supernatant was measured at 680 nm to quantify the amount of tyrosine released. A standard curve was generated using tyrosine (5–50 μg/mL). One unit of protease activity (U) was defined as the amount of enzyme required to release 1 μg of tyrosine per minute from casein under the specified assay conditions.
2.7. Protein Nutritional Analysis
The protein dispersibility index (PDI) was determined following the method described by Zhang et al. []. In this procedure, 0.5 g of the sample was dissolved in 20 mL of deionized water and stirred at 500 r/min for 1 h. The mixture was then centrifuged at 10,000× g for 20 min, after which the total protein content in the supernatant and the original sample was quantified using the Kjeldahl method. PDI was calculated using the following formula (Formula (3)):
$PDI = \frac{\text{protein content in supernatant}}{\text{total protein content in sample}} \times 100$
The degree of hydrolysis (DH) is defined as the percentage of free amino groups cleaved from the protein and was calculated as the ratio of the free amino nitrogen of the hydrolysate to the total nitrogen []. DH was determined using the ninhydrin colorimetric method, as described by Pearce, Karahalios, and Friedman []. A standard curve was generated using glycine as the amino acid standard, with absorbance readings at 570 nm. DH was calculated using the following formula (Formula (4)):
$DH = \frac{\text{free amino nitrogen of sample}}{\text{total nitrogen in sample}} \times 100$
In vitro protein digestion (IVPD) was performed following Kamble et al. [] with modifications. Briefly, a 1.0 g sample was incubated with 10 mL of pepsin solution (20 mg/mL, pH 2.0) at 37 °C with shaking at 190 r/min for 3 h. Subsequently, 2.0 mL of 0.5 mol/L NaOH and 30 mL of trypsin solution (5 mg/mL, pH 8.0) were added, and the mixture was shaken at 37 °C for 2 h. After centrifugation, 10 mL of 10% trichloroacetic acid was added to the supernatant, followed by a 1-h incubation and centrifugation. The protein content was determined using the Kjeldahl method. The in vitro digestion rate of the protein was calculated using the following formula (Formula (5)):
$IVPD = \frac{\text{nitrogen in sample} - \text{nitrogen in residue}}{\text{nitrogen in sample}} \times 100$
2.8. In Vitro Minerals Digestion
The in vitro mineral digestion rate was evaluated following the method of Kumar et al. [], with modifications. A 5.0 g sample was mixed with 30 mL distilled water and shaken at 150 rpm for 2 h at room temperature. After adding 2 mL of α-amylase solution (6.25 g/L), the mixture was incubated at 37 °C for 30 min. The pH was then adjusted to 4.0 with 1 mol/L HCl, followed by the addition of 8 mL pepsin (0.125 g/L), and further incubation at 37 °C for 1 h. The pH was adjusted to 6.0 with 1 mol/L NaHCO3, followed by the addition of 10 mL pancreatin solution (20 g/L), and incubated at 37 °C for 30 min. The mixture was centrifuged at 10,000 r/min for 10 min at 4 °C, and the supernatant was filtered through a 0.45 μm filter. The mineral contents (Fe, Mn, Cu, Zn) in the filtrate and samples were analyzed by ICP-OES. Mineral bioavailability was calculated using the following formula (Formula (6)):
$\text{In vitro digestion of minerals' bioavailability} = \frac{\text{minerals in digested supernatant}}{\text{minerals in sample}} \times 100$
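Formulas (3) through (6) all share the same shape, a recovered quantity over a total quantity expressed as a percentage; a tiny helper (ours, with dummy numbers purely for illustration) makes that explicit:

def percent(part: float, total: float) -> float:
    return part / total * 100.0

pdi = percent(2.1, 5.0)         # Formula (3): protein in supernatant / total protein
ivpd = percent(4.2 - 0.9, 4.2)  # Formula (5): (sample N - residue N) / sample N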
2.9. Statistical Analysis
The data were expressed as the mean value ± standard deviation (SD), all experiments were carried out at least in triplicate. Statistically significant differences were determined using Duncan’s
multiple range test, performed with SPSS software (version 20.0), at a significance level of p ≤ 0.05. A one-sample t-test was conducted to compare predicted and experimental responses under optimal
conditions. Experimental design, regression analysis, and surface plot generation were carried out using Design Expert software (version 8.0.6, Stat-Ease, Inc., Minneapolis, MN, USA) and Minitab 19
(Minitab, LLC, State College, PA, USA). A fitted model based on the experimental data was developed, and the statistical significance of the model terms was assessed through regression analysis and
analysis of variance (ANOVA).
3. Results and Discussion
3.1. Effects of Independent Factors on PA and WU-AX
The individual effects of seven variables on the PA and WU-AX contents are shown in Figure 1 and Supplementary Figure S1
. These factors can be categorized into two behavioral patterns. The first pattern shows a sharp initial decrease in anti-nutrient levels, followed by stabilization (
Figure 1
a and
Figure S1
). Fermentation time and inoculum dose exhibit this behavior. As fermentation time increased and inoculum dose was elevated, the PA and WU-AX contents initially dropped significantly, from 12 to 9 mg
/g and 109 to 90 mg/g, respectively, before reaching stable levels of approximately 8.3 mg/g and 88.6 mg/g. The second pattern, observed in the effects of fermentation temperature, initial pH,
substrate particle size, substrate filling rate, and the material-to-liquid ratio, follows a broad “U” shape. Anti-nutrient content changed slowly throughout the process, decreasing at lower levels
of the independent variables, remaining relatively stable at medium levels, and increasing gradually at higher levels (
Figure 1
b and
Figure S1
). Further analysis suggests that factors such as temperature, time, initial pH, inoculum dose, substrate filling rate, particle size, and the material-to-liquid ratio play key roles in influencing
the growth and metabolic activity of
Bacillus subtilis
, thereby affecting the fermentation progression and anti-nutrient degradation.
As shown in
Figure 1
a, the contents of PA and WU-AX decrease rapidly with increasing fermentation time. However, after 72 h of fermentation, the rate of reduction for both PA and WU-AX slows down significantly.
Therefore, from a cost-saving perspective, 72 h was chosen as the central point for the subsequent optimization process. At the beginning of fermentation, ample nutrients were available in the MGF to meet the rapid growth needs of the strain. As fermentation time was extended, the nutrients in the substrate were progressively consumed, the strain gradually aged, and enzyme production declined. At the same time, as fermentation proceeds, the substrate surface becomes sticky, the gaps between MGF particles narrow, and the effective diffusion coefficients of both oxygen and carbon dioxide are reduced, which is unfavorable to enzyme production by Bacillus subtilis. Bacillus subtilis exhibits social cell behavior, with its growth highly dependent on cell density [
]. An appropriate inoculum dose is positively correlated with the growth of
Bacillus subtilis
and production of metabolic components. Too little inoculum is detrimental to the growth of the microorganisms and lengthens the microbial lag phase, while too much inoculum brings excessive metabolic by-products and accelerates the senescence of the microorganisms [
]. Since the reduction in anti-nutrients was the primary objective of this study, 15% was selected for further study. Although
Bacillus subtilis
possesses heat resistance, high temperatures are not conducive to enzyme production. Additionally, temperature plays a key role in energy consumption during fermentation. A fermentation temperature
of 28–34 °C was found to be most effective for reducing PA and WU-AX levels. The pH of the substrate influences both the cell membrane of the microorganism and the intracellular enzymes. Our findings
suggest that the optimal pH for the degradation of PA and WU-AX is approximately 6.5. Additionally, pH levels were measured at the final stages of fermentation, showing that under varying
fermentation conditions, the final pH was consistently around 8.3, with no significant differences between conditions. Our findings align with previous studies [
], demonstrating that
Bacillus subtilis
fermentation leads to an increase in the pH of the fermentation substrate. Being an aerobic bacterium,
Bacillus subtilis
requires sufficient oxygen for growth [
]. The particle size of the substrate (MGF) affects the growth of
Bacillus subtilis
; larger particles may inhibit nutrient utilization, while smaller particles may hinder oxygen penetration. Therefore, a particle size range of 30–189 μm was selected for further study. Similar
analyses were conducted to determine the optimal substrate filling rate and material-to-liquid ratio.
To achieve the greatest reduction in PA and WU-AX, the following factors and levels were selected for further fermentation experiments with Bacillus subtilis: fermentation time (48–96 h), temperature
(28–34 °C), initial pH (6.0–7.0), inoculum dose (10–20%), particle size (30–189 μm), substrate filling rate (1.84–3.42% in 380 mL glass), and material-to-liquid ratio (1:2.5–1:3.5).
3.2. Screening of Significant Factors Using Plackett–Burman Design
The Plackett–Burman design was employed to assess the significance of independent variables on the contents of PA and WU-AX during the fermentation process and to identify the most critical factors
for further optimization. The experimental design and corresponding responses (Y[1], content of PA; and Y[2], content of WU-AX) are presented in Table 1, with the ANOVA results shown in Table 2. Generally, variables with a p-value less than 0.05 are considered significant parameters at the 95% confidence interval []. The findings revealed that the factors exhibited similar significance for both responses, with X[1] (fermentation time), X[4] (inoculum dose), and X[7] (material-to-liquid ratio) being statistically significant (p ≤ 0.05). In contrast, X[2] (fermentation temperature), X[3] (initial pH), X[5] (particle size), and X[6] (substrate filling rate) were determined to be non-significant.
The first-order model equations for PA and WU-AX content developed from the Plackett–Burman design are as follows:
$Y_1 = 8.8583 - 0.3950X_1 - 0.0567X_2 + 0.0283X_3 - 0.2133X_4 + 0.0350X_5 - 0.0867X_6 - 0.2633X_7$
$Y_2 = 91.8450 - 2.4283X_1 - 0.0733X_2 + 0.2683X_3 - 1.5533X_4 + 0.3383X_5 - 0.3233X_6 - 1.4833X_7$
where Y[1] represents the phytic acid (PA) content, Y[2] denotes the water-unextractable arabinoxylan (WU-AX) content, and X[1], X[2], X[3], X[4], X[5], X[6], and X[7] correspond to the coded variables of fermentation time, fermentation temperature, initial pH, inoculum dose, particle size, substrate filling rate, and the material-to-liquid ratio, respectively.
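As an illustrative sketch (our own code, not the authors' software), either first-order model can be evaluated directly at any coded factor setting; here the Y[1] coefficients from the equation above are applied to one hypothetical setting:

```octave
## Intercept followed by the coefficients of X1..X7 (PA model above).
b = [8.8583, -0.3950, -0.0567, 0.0283, -0.2133, 0.0350, -0.0867, -0.2633];
x = [1, 1, 0, 0, 1, 0, 0, 1];  # hypothetical run: X1, X4, X7 at +1, others at center
Y1_pred = b * x';              # predicted PA content, about 7.99 mg/g
```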
Based on the ANOVA, the factors influencing response Y[1] (PA content) in decreasing order of significance were as follows: X[1] (fermentation time) > X[7] (material-to-liquid ratio) > X[4] (inoculum dose) > X[6] (substrate filling rate) > X[2] (fermentation temperature) > X[5] (particle size) > X[3] (initial pH). For response Y[2] (WU-AX content), the ranking was X[1] > X[4] > X[7] > X[5] > X[6] > X[3] > X[2]. A Pareto chart can present the effect of factors on responses and check their statistical significance; thus, it was employed here to identify the significant factors [
]. The relative magnitude of each parameter's effect on the PA and WU-AX contents was evaluated by comparing the t-values of the effects. The resulting Pareto chart, plotting the t-value of each effect against the corresponding parameter, is shown in
Figure 2
. A parameter with a t-value higher than the t-value limit line indicated that it had a confidence level greater than 95% and could be considered as significant [
]. In addition, a Bonferroni limit line (5.74) and t-value limit line (2.77) were applied to determine the extremely significant (the t-value was above the Bonferroni limit line), significant
(t-value was between the Bonferroni limit line and the t-value limit line), and insignificant (below the t-value limit line) coefficients of different factors [
]. The t-values of the fermentation time, material-to-liquid ratio, and inoculum dose on both responses were above the t-value limit line, indicating that these three factors were significant. Consequently, the fermentation time (X[1]), inoculum dose (X[4]), and material-to-liquid ratio (X[7]) were selected for the further optimization of fermentation conditions. Both fermentation time and inoculum dose have been identified as critical parameters in other fermentation processes as well.
In light of the Plackett–Burman design results, and considering fermentation efficiency and cost considerations, the non-significant variables—fermentation temperature, initial pH, particle size, and
substrate filling rate—were fixed at 31 °C, pH 6.5, 189 μm, and 2.63%, respectively.
3.3. Statistical Analysis of Central Composite Design
The factors and levels of variables in the response-surface central composite design, together with the experimental responses of PA content (Y[1]) and WU-AX content (Y[2]), are presented in Table 3. Multiple regression analysis was carried out on the experimental data, and the second-order polynomial stepwise equations were obtained, as shown in Equations (9) and (10).
$Y_1 = 8.41 - 0.40X_1 - 0.24X_4 - 0.25X_7 + 0.19X_1X_4 - 0.001X_1X_7 - 0.042X_4X_7 + 0.27X_1^2 + 0.30X_4^2 + 0.17X_7^2$
$Y_2 = 85.56 - 3.65X_1 - 2.80X_4 - 1.57X_7 - 0.20X_1X_4 + 1.76X_1X_7 - 0.53X_4X_7 + 3.11X_1^2 + 2.38X_4^2 + 0.92X_7^2$
where Y[1] represents the phytic acid (PA) content, Y[2] denotes the water-unextractable arabinoxylan (WU-AX) content, and X[1], X[4], and X[7] are the coded variables of fermentation time, inoculum dose, and the material-to-liquid ratio, respectively.
These equations demonstrate the quantitative impact of the factors (X[1], X[4], and X[7]) and their interactions on the response variables. ANOVA was performed to assess the significance of the central composite design model and validate the accuracy of the fitting curve []. The model coefficients were evaluated using F-values and p-values, with a higher F-value and smaller p-value (p ≤ 0.05) indicating greater model significance [
]. The ANOVA results, along with goodness-of-fit and model adequacy, are presented in
Table 4
. The model coefficients were validated based on F-values and p-values. As shown, the F-values and corresponding low p-values for both the PA and WU-AX responses confirmed the high significance of the models. The lack of fit for both response models was insignificant (p > 0.05), with values of 0.2272 and 0.6234 for PA and WU-AX, respectively. This suggests that no outliers were present in the data and that higher-order terms were unnecessary, confirming the appropriateness of the selected models. The high coefficients of determination (R^2), 0.9469 for PA and 0.9712 for WU-AX, indicate that the factor terms explain 94.69% and 97.12% of the variance in the models for PA and WU-AX, respectively, implying the models are reliable. Furthermore, the R^2 values were close to their respective adjusted R^2 values, further demonstrating the high explanatory power of the regression models used in this study [
]. The coefficient of variation (CV) values, which reflect the degree of variability in the mean response, were 2.45% for PA and 1.67% for WU-AX, indicating low variability between the predicted and
experimental responses. Therefore, the mathematical models established in this study have been proven reliable and can be utilized for subsequent prediction and optimization steps.
The response-surface three-dimensional graphs were generated based on the second-order polynomial equation to analyze the interaction and quadratic effects of the variables. The changes in model
parameters were examined by varying two factors while holding the remaining factors constant at their central levels. The three-dimensional representations of the interaction and quadratic effects on
PA and WU-AX contents are shown in Figure 3 and Figure 4
, respectively. These response-surface plots all tended to flatten out towards the fermentation time side, and the PA and WU-AX contents decreased significantly and stabilized at the later stages of
fermentation, which was consistent with the results of the one-way experiments (
Figure 1
a). The coefficients of the linear terms (X[1], X[4], and X[7]) and the quadratic terms (X[1]^2, X[4]^2, and X[7]^2) were statistically significant for both PA and WU-AX responses (p ≤ 0.05). The two-factor interaction term X[1] × X[4] had a significant effect (p ≤ 0.05) on PA, while the interaction term X[1] × X[7] showed a significant influence (p ≤ 0.05) on WU-AX. Notably, X[1] had the most significant effect, followed by X[7] and X[4] for the PA response. In contrast, for the WU-AX response, the order of variable influence was X[1] > X[4] > X[7] (Table 4). These significant effects of X[1], X[4], and X[7] are consistent with the results of the significance analysis in the Plackett–Burman design. The values of the interaction terms further indicated strong interactions between the independent variables, particularly X[1] × X[4] for the PA response and X[1] × X[7] for the WU-AX response.
3.4. Optimum Conditions and Authenticity of Predictive Model
Further analysis of the model optimization revealed that the theoretical optimal fermentation parameters for MGF were 84.48 h, a 17.09% inoculum dose, and a material-to-liquid ratio of 1:3.35. To
enhance operational feasibility, the optimal fermentation conditions were adjusted as follows: raw materials were ground to pass through an 80-mesh sieve, with a material-to-liquid ratio of 1:3.4, an
initial pH of 6.5, and a fermentation vessel filling rate of 2.6%. An inoculum dose of 17.1% was used, with fermentation conducted for 84.5 h at 31 °C. Under these optimal conditions, the predicted
PA and WU-AX contents were 8.16 mg/g and 83.55 mg/g, respectively. To verify the accuracy of the model, triplicate validation experiments were performed under the optimal conditions, and the
experimental results were compared to the predicted values. The observed PA and WU-AX contents were 8.2 ± 0.78 mg/g and 83.1 ± 1.09 mg/g, respectively, which were in close agreement with the
predicted values. This strong correlation confirmed the model's adequacy in predicting the optimization outcomes. Compared with the PA and WU-AX contents of untreated MGF (16.0 mg/g and 122.1 mg/g, respectively), fermentation under the optimized conditions reduced these levels by 48% and 32%, respectively.
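As a hedged sketch of this optimization step (illustrative Octave code, not the authors' software), the fitted quadratic for Y[1] in Equation (9) can be minimized over the coded variables with a generic simplex search; decoding with the design steps from Table 3 (24 h, 5%, and 0.5 ratio units around the centers 72 h, 15%, and 1:3) gives a minimum for Y[1] alone in the neighborhood of the reported joint optimum:

```octave
## Coded quadratic model for PA content (Equation (9)); x = [x1, x4, x7].
Y1 = @(x) 8.41 - 0.40*x(1) - 0.24*x(2) - 0.25*x(3) ...
          + 0.19*x(1)*x(2) - 0.001*x(1)*x(3) - 0.042*x(2)*x(3) ...
          + 0.27*x(1)^2 + 0.30*x(2)^2 + 0.17*x(3)^2;

xopt = fminsearch (Y1, [0, 0, 0]);  # start the search at the design center
time_h   = 72 + 24 * xopt(1);      # decode to natural units using Table 3 steps
dose_pct = 15 + 5 * xopt(2);
ratio    = 3 + 0.5 * xopt(3);      # i.e., a material-to-liquid ratio of 1:ratio
```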
3.5. Changes in Enzymatic Activity Before and After Fermentation
The product obtained from MGF through the optimized fermentation process is referred to as FMGF (fermented maize gluten feed, FMGF). The enzymatic activities of phytase, xylanase, cellulase, and
protease were compared between MGF and FMGF. As shown in
Figure 5
, the activities of all four enzymes—phytase, xylanase, cellulase, and protease—significantly increased after fermentation with Bacillus subtilis (p < 0.001). Under optimal fermentation conditions, FMGF exhibited a phytase activity of 8.2 ± 0.24 U/g, xylanase activity of 126.3 ± 6.24 U/g, cellulase activity of 19.3 ± 0.87 U/g, and protease activity of 1182 ± 87 U/g.
PA, also known as myo-inositol hexaphosphate, is considered an anti-nutrient because it can form complexes with minerals, starch, and proteins, thereby limiting their bioavailability [
]. Phytase, secreted by
Bacillus subtilis
during fermentation, degrades myo-inositol hexaphosphate into inositol and free phosphates. AX predominantly exists in the form of WU-AX, which diminishes nutrient absorption through sequestration
within cell walls via covalent or non-covalent interactions [
]. Xylanase, a complex enzyme system capable of hydrolyzing both the main and side chains of xylan, breaks down WU-AX into smaller fragments, disrupting the fiber structure and releasing nutrients [
]. Some researchers have suggested that xylanase and phytase work synergistically, with xylanase degrading non-starch polysaccharides to facilitate phytase-mediated PA degradation, thereby releasing
additional nutrients [
]. Cellulase, a multi-component enzyme composed of endoglucanase, cellobiohydrolase, and β-glucosidase, is responsible for converting cellulose into soluble saccharides, providing energy for
Bacillus subtilis
growth [
]. Neutral protease, an extracellular protease produced by
Bacillus subtilis
, has an optimal pH range between 6.0 and 7.5 and hydrolyzes proteins into small peptides and amino acids. These proteases are produced after the exponential growth phase and are believed to play a
role in spore formation, cell wall turnover, and enzyme clearance [
]. A previous study has demonstrated that
Bacillus subtilis
possesses the capacity to secrete a diverse array of enzymes during fermentation processing, including protease, phytase, and cellulase [
]. The combined action of cellulase, xylanase, phytase, and protease disrupts the surface integrity of MGF cell walls, leading to the disruption of the fibrous network structure of cellulose, thereby
facilitating the release of bound nutritional components. Furthermore, the combination of microbial fermentation with enzyme activity results in a multi-enzyme approach, producing not only phytase
but also xylanase, cellulase, and protease, which together act synergistically to break down the complex structures of anti-nutritional factors. This multi-faceted enzymatic action makes fermentation with Bacillus subtilis a more holistic and effective treatment method than the single-enzyme supplementation strategies commonly used in the industry.
3.6. Comparison of the Nutritional Values of MGF and FMGF
As shown in
Table 5
, the fermentation of MGF resulted in an increase in crude protein from 27.1 ± 0.13% to 28.6 ± 0.08%, PDI from 37.9 ± 1.02% to 46.7 ± 0.58%, DH from 2.3 ± 0.11% to 3.3 ± 0.12%, and IVPD from 44.3 ±
1.07% to 58.5 ± 0.78%, respectively. The increase in protein was primarily attributed to a relative loss of dry matter due to microbial hydrolysis and the metabolism of carbohydrates and lipids as an
energy source [
]. Our results demonstrated that
Bacillus subtilis
produced proteases, which hydrolyzed proteins into soluble peptides or amino acids, resulting in significant increases in PDI, DH, and IVPD (p < 0.001). PDI reflects the proportion of protein dispersed in water under controlled extraction conditions and serves as an indicator of protein solubility. Fermentation significantly improved the
water solubility of MGF proteins. DH represents the percentage of free amino nitrogen relative to the total nitrogen content; the proteolytic activity during fermentation exposed more amino groups,
resulting in an increased DH. The DH calculation method used in this study follows Zhang et al. [
], differing from many other methods in the literature [
], as it does not subtract the original free amino nitrogen in MGF, allowing for a more accurate comparison between FMGF and raw MGF.
Compared to MGF, FMGF exhibited a significant 31.92% increase in IVPD. The IVPD assessment method employed in this study is a classical two-step pepsin–trypsin digestion model, which closely
correlates with the in vivo protein digestibility results obtained from rat feeding trials [
]. The increase in IVPD of FMGF can be attributed to three primary factors: the proteases produced during fermentation partially break down proteins into smaller peptides or amino acids; the
structural alterations in proteins following fermentation increase their susceptibility to protease activity, making them easier to hydrolyze; and the degradation of anti-nutritional factors like PA
and WU-AX further enhances protein digestibility.
Additionally, the bioavailability of essential minerals (Fe, Mn, Cu, and Zn) in MGF increased following fermentation, with Fe increasing from 22.8 ± 1.51% to 33.7 ± 2.15%, Mn from 38.3 ± 1.5% to 53.2
± 1.38%, Cu from 46.0 ± 1.04% to 57.5 ± 1.06%, and Zn from 12.6 ± 1.23% to 16.7 ± 0.95%. These findings suggest that fermentation with Bacillus subtilis significantly enhances the bioavailability of
essential minerals in MGF, thereby improving its overall nutritional value. PA forms covalent bonds with minerals, rendering them resistant to digestion in the mammalian gastrointestinal system and
impairing mineral absorption. Additionally, WU-AX binds minerals through both covalent and non-covalent interactions. Our results indicate that Bacillus subtilis fermentation effectively degrades PA
and WU-AX in MGF, releasing bound minerals and thus increasing their bioavailability.
4. Conclusions
Fermentation with Bacillus subtilis presents a promising technology for the degradation of the anti-nutrients phytic acid and water-unextractable arabinoxylans in maize gluten feed. Among the seven
parameters studied, the fermentation time, inoculum dose, and material-to-liquid ratio were identified as the most influential factors. Notably, the interactions between fermentation time and
inoculum dose, as well as between fermentation time and the material-to-liquid ratio, significantly impacted the degradation of anti-nutritional factors. The optimal fermentation conditions were
determined as follows: raw materials ground to pass through an 80-mesh sieve, a material-to-liquid ratio of 1:3.4, an initial pH of 6.5, and a fermentation vessel filling rate of 2.6%. An inoculum
dose of 17.1% was applied, followed by fermentation for 84.5 h at 31 °C. Under these conditions, the post-fermentation contents of phytic acid and water-unextractable arabinoxylans were 8.2 ± 0.78 mg
/g and 83.2 ± 1.09 mg/g, respectively—representing reductions of 48% and 32% compared to the raw materials. The fermented maize gluten feed exhibited enhanced protease, xylanase, phytase, and
cellulase activity, which facilitated the breakdown of phytic acid and water-unextractable arabinoxylans. Additionally, fermented maize gluten feed showed significant increases in the protein
dispersibility index, in vitro protein digestibility, and mineral bioavailability compared with unfermented maize gluten feed. While the current experimental results are promising, they have been
obtained on a laboratory scale, and further validation is necessary for industrial applications. We have also identified potential avenues for future research, including investigating the effects of
fermentation on the functional and structural properties of maize gluten feed (MGF), comparing the efficacy of fermentation treatment with that of commercial enzyme treatments, and assessing the
impact on animal growth performance. In conclusion, this fermentation process offers an efficient method for reducing anti-nutritional factors in maize gluten feed, with substantial potential for
producing nutritionally improved food ingredients.
Supplementary Materials
The following supporting information can be downloaded online: Figure S1: Effects of fermentation conditions on PA and WU-AX contents. The univariate tests were of the fermentation temperature (a), inoculum dose (b), particle size (c), substrate filling rate
(d), and material-to-liquid ratio (e). PA, phytic acid; WU-AX, water-unextractable arabinoxylans; Table S1: Factors and levels of Plackett–Burman experimental design.
Author Contributions
Conceptualization, X.S. and J.L.; methodology, X.S.; software, L.M.; validation, Y.X. and L.M.; writing—original draft preparation, X.S.; writing—review and editing, X.S.; supervision, J.L.; project
administration, J.L.; funding acquisition, J.L. and X.S. All authors have read and agreed to the published version of the manuscript.
Funding
This project was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation; 328017493/GRK 2366; International Research Training Group “Adaption of maize-based food-feed-energy
systems to limited phosphate resources”). This work was also supported by the Special Fund for Introduced Talent to Initiate Scientific Research of Zhejiang Ocean University, China (JX6311130923).
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Data are contained within the article.
Conflicts of Interest
The authors declare no conflicts of interest.
1. Zhang, R.; Ma, S.; Li, L.; Zhang, M.; Tian, S.; Wang, D.; Liu, K.; Liu, H.; Zhu, W.; Wang, X. Comprehensive Utilization of Corn Starch Processing By-Products: A Review. Grain Oil Sci. Technol.
2021, 4, 89–107. [Google Scholar] [CrossRef]
2. Guo, Y.; Wang, K.; Wu, B.; Wu, P.; Duan, Y.; Ma, H. Production of ACE Inhibitory Peptides from Corn Germ Meal by an Enzymatic Membrane Reactor with a Novel Gradient Diafiltration Feeding
Working-Mode and in vivo Evaluation of Antihypertensive Effect. J. Funct. Foods 2020, 64, 103584. [Google Scholar] [CrossRef]
3. Rocha-Villarreal, V.; Hoffmann, J.F.; Vanier, N.L.; Serna-Saldivar, S.O.; García-Lara, S. Hydrothermal Treatment of Maize: Changes in Physical, Chemical, and Functional Properties. Food Chem.
2018, 263, 225–231. [Google Scholar] [CrossRef] [PubMed]
4. Lyu, Z.; Li, Y.; Liu, H.; Li, E.; Li, P.; Zhang, S.; Wang, F.; Lai, C. Net Energy Content of Rice Bran, Defatted Rice Bran, Corn Gluten Feed, and Corn Germ Meal Fed to Growing Pigs Using Indirect
Calorimetry. J. Anim. Sci. 2018, 96, 1877–1888. [Google Scholar] [CrossRef] [PubMed]
5. Ortiz de Erive, M.; Wang, T.; He, F.; Chen, G. Development of High-Fiber Wheat Bread Using Microfluidized Corn Bran. Food Chem. 2020, 310, 125921. [Google Scholar] [CrossRef]
6. Bloot, A.P.M.; Kalschne, D.L.; Amaral, J.A.S.; Baraldi, I.J.; Canan, C. A Review of Phytic Acid Sources, Obtention, and Applications. Food Rev. Int. 2023, 39, 73–92. [Google Scholar] [CrossRef]
7. Shi, C.; Zhang, Y.; Lu, Z.; Wang, Y. Solid-State Fermentation of Corn-Soybean Meal Mixed Feed with Bacillus subtilis and Enterococcus faecium for Degrading Antinutritional Factors and Enhancing
Nutritional Value. J. Anim. Sci. Biotechnol. 2017, 8, 50. [Google Scholar] [CrossRef]
8. Sun, X.; Ma, L.; Lux, P.E.; Wang, X.; Stuetz, W.; Frank, J.; Liang, J. The Distribution of Phosphorus, Carotenoids and Tocochromanols in Grains of Four Chinese Maize (Zea mays L.) Varieties. Food
Chem. 2022, 367, 130725. [Google Scholar] [CrossRef]
9. Noureddini, H.; Malik, M.; Byun, J.; Ankeny, A.J. Distribution of Phosphorus Compounds in Corn Processing. Bioresour. Technol. 2009, 100, 731–736. [Google Scholar] [CrossRef]
10. Sun, H.; Cozannet, P.; Ma, R.; Zhang, L.; Huang, Y.K.; Preynat, A.; Sun, L.-h. Effect of Concentration of Arabinoxylans and a Carbohydrase Mixture on Energy, Amino Acids and Nutrients Total Tract
and Ileal Digestibility in Wheat and Wheat by-Product-Based Diet for Pigs. Anim. Feed Sci. Technol. 2020, 262, 114380. [Google Scholar] [CrossRef]
11. Huang, M.; Bai, J.; Buccato, D.G.; Zhang, J.; He, Y.; Zhu, Y.; Yang, Z.; Xiao, X.; Daglia, M. Cereal-Derived Water-Unextractable Arabinoxylans: Structure Feature, Effects on Baking Products and
Human Health. Foods 2024, 13, 2369. [Google Scholar] [CrossRef] [PubMed]
12. Wang, J.; Bai, J.; Fan, M.; Li, T.; Li, Y.; Qian, H.; Wang, L.; Zhang, H.; Qi, X.; Rao, Z. Cereal-Derived Arabinoxylans: Structural Features and Structure–Activity Correlations. Trends Food Sci.
Technol. 2020, 96, 157–165. [Google Scholar] [CrossRef]
13. Rosicka-Kaczmarek, J.; Komisarczyk, A.; Nebesny, E.; Makowski, B. The Influence of Arabinoxylans on the Quality of Grain Industry Products. Eur. Food Res. Technol. 2016, 242, 295–303. [Google
Scholar] [CrossRef]
14. Bautil, A.; Verspreet, J.; Buyse, J.; Goos, P.; Bedford, M.R.; Courtin, C.M. Age-Related Arabinoxylan Hydrolysis and Fermentation in the Gastrointestinal Tract of Broilers Fed Wheat-Based Diets.
Poult. Sci. 2019, 98, 4606–4621. [Google Scholar] [CrossRef] [PubMed]
15. Endalew, H.W.; Atlabachew, M.; Karavoltsos, S.; Sakellari, A.; Aslam, M.F.; Allen, L.; Griffiths, H.; Zoumpoulakis, P.; Kanellou, A.; Yehuala, T.F.; et al. Effect of Fermentation on Nutrient
Composition, Antinutrients, and Mineral Bioaccessibility of Finger Millet Based Injera: A Traditional Ethiopian Food. Food Res. Int. 2024, 190, 114635. [Google Scholar] [CrossRef] [PubMed]
16. Watanakij, N.; Visessanguan, W.; Petchkongkaew, A. Aflatoxin B[1]-Degrading Activity from Bacillus subtilis BCC 42005 Isolated from Fermented Cereal Products. Food Addit. Contam. Part A 2020, 37,
1579–1589. [Google Scholar] [CrossRef]
17. Iqbal, S.; Begum, F.; Rabaan, A.A.; Aljeldah, M.; Al Shammari, B.R.; Alawfi, A.; Alshengeti, A.; Sulaiman, T.; Khan, A. Classification and Multifaceted Potential of Secondary Metabolites Produced
by Bacillus subtilis Group: A Comprehensive Review. Molecules 2023, 28, 927. [Google Scholar] [CrossRef]
18. Suprayogi, W.P.S.; Ratriyanto, A.; Akhirini, N.; Hadi, R.F.; Setyono, W.; Irawan, A. Changes in Nutritional and Antinutritional Aspects of Soybean Meals by Mechanical and Solid-State Fermentation
Treatments with Bacillus subtilis and Aspergillus oryzae. Bioresour. Technol. Rep. 2022, 17, 100925. [Google Scholar] [CrossRef]
19. Bruno Siewe, F.; Kudre, T.G.; Narayan, B. Optimisation of Ultrasound-Assisted Enzymatic Extraction Conditions of Umami Compounds from Fish by-Products Using the Combination of Fractional
Factorial Design and Central Composite Design. Food Chem. 2021, 334, 127498. [Google Scholar] [CrossRef]
20. Buddrick, O.; Jones, O.A.H.; Cornell, H.J.; Small, D.M. The Influence of Fermentation Processes and Cereal Grains in Wholegrain Bread on Reducing Phytate Content. J. Cereal Sci. 2014, 59, 3–8. [
Google Scholar] [CrossRef]
21. Douglas, S.G. A Rapid Method for the Determination of Pentosans in Wheat Flour. Food Chem. 1981, 7, 139–145. [Google Scholar] [CrossRef]
22. Rouau, X.; Surget, A. A Rapid Semi-Automated Method for the Determination of Total and Water-Extractable Pentosans in Wheat Flours. Carbohydr. Polym. 1994, 24, 123–132. [Google Scholar] [CrossRef]
23. Hernández-Espinosa, N.; Posadas-Romano, G.; Dreisigacker, S.; Crossa, J.; Crespo, L.; Ibba, M.I. Efficient Arabinoxylan Assay for Wheat: Exploring Variability and Molecular Marker Associations in
Wholemeal and Refined Flour. J. Cereal Sci. 2024, 117, 103897. [Google Scholar] [CrossRef] [PubMed]
24. Akpoilih, B.U.; Adeshina, I.; Chukwudi, C.F.; Abdel-Tawwab, M. Evaluating the Inclusion of Phytase Sources to Phosphorus-Free Diets for GIFT Tilapia (Oreochromis niloticus): Growth Performance,
Intestinal Morphometry, Immune-Antioxidant Responses, and Phosphorus Utilization. Anim. Feed Sci. Technol. 2023, 303, 115678. [Google Scholar] [CrossRef]
25. Dhaver, P.; Pletschke, B.; Sithole, B.; Govinden, R. Optimization, Purification, and Characterization of Xylanase Production by a Newly Isolated Trichoderma Harzianum Strain by a Two-Step
Statistical Experimental Design Strategy. Sci. Rep. 2022, 12, 17791. [Google Scholar] [CrossRef]
26. Al Talebi, Z.A.; Al-Kawaz, H.S.; Mahdi, R.K.; Al-Hassnawi, A.T.; Alta’ee, A.H.; Hadwan, A.M.; Khudhair, D.A.; Hadwan, M.H. An Optimized Protocol for Estimating Cellulase Activity in Biological
Samples. Anal. Biochem. 2022, 655, 114860. [Google Scholar] [CrossRef]
27. Wang, Y.; Xu, K.; Lu, F.; Wang, Y.; Ouyang, N.; Ma, H. Application of Ultrasound Technology in the Field of Solid-State Fermentation: Increasing Peptide Yield through Ultrasound-Treated Bacterial
Strain. J. Sci. Food Agric. 2021, 101, 5348–5358. [Google Scholar] [CrossRef]
28. Zhang, Y.; Ishikawa, M.; Koshio, S.; Yokoyama, S.; Dossou, S.; Wang, W.; Zhang, X.; Shadrack, R.S.; Mzengereza, K.; Zhu, K.; et al. Optimization of Soybean Meal Fermentation for Aqua-Feed with
Bacillus Subtilis Natto Using the Response Surface Methodology. Fermentation 2021, 7, 306. [Google Scholar] [CrossRef]
29. Pearce, K.N.; Karahalios, D.; Friedman, M. Ninhydrin Assay For Proteolysis in Ripening Cheese. J. Food Sci. 1988, 53, 432–435. [Google Scholar] [CrossRef]
30. Kamble, D.B.; Singh, R.; Rani, S.; Kaur, B.P.; Upadhyay, A.; Kumar, N. Optimization and Characterization of Antioxidant Potential, in vitro Protein Digestion and Structural Attributes of
Microwave Processed Multigrain Pasta. J. Food Process. Preserv. 2019, 43, e14125. [Google Scholar] [CrossRef]
31. Kumar, A.; Lal, M.K.; Kar, S.S.; Nayak, L.; Ngangkham, U.; Samantaray, S.; Sharma, S.G. Bioavailability of Iron and Zinc as Affected by Phytic Acid Content in Rice Grain. J. Food Biochem. 2017,
41, e12413. [Google Scholar] [CrossRef]
32. Zhang, L.; Yang, Y.; Sun, J.; Shen, Y.; Wei, D.; Zhu, J.; Chu, J. Microbial Production of 2,3-Butanediol by a Mutagenized Strain of Serratia Marcescens H30. Bioresour. Technol. 2010, 101,
1961–1967. [Google Scholar] [CrossRef] [PubMed]
33. Terlabie, N.N.; Sakyi-Dawson, E.; Amoa-Awua, W.K. The Comparative Ability of Four Isolates of Bacillus subtilis to Ferment Soybeans into Dawadawa. Int. J. Food Microbiol. 2006, 106, 145–152. [
Google Scholar] [CrossRef] [PubMed]
34. Dayana Priyadharshini, S.; Bakthavatsalam, A.K. Optimization of Phenol Degradation by the Microalga Chlorella Pyrenoidosa Using Plackett-Burman Design and Response Surface Methodology. Bioresour.
Technol. 2016, 207, 150–156. [Google Scholar] [CrossRef]
35. Chen, F.; Zhang, Q.; Fei, S.; Gu, H.; Yang, L. Optimization of Ultrasonic Circulating Extraction of Samara Oil from Acer Saccharum Using Combination of Plackett–Burman Design and Box–Behnken
Design. Ultrason. Sonochem. 2017, 35, 161–175. [Google Scholar] [CrossRef]
36. Xi, J.; Xiang, B.; Deng, Y. Comparison of Batch and Circulating Processes for Polyphenols Extraction from Pomelo Peels by Liquid-Phase Pulsed Discharge. Food Chem. 2021, 340, 127918. [Google
Scholar] [CrossRef]
37. Chen, W.; Xu, D. Phytic Acid and Its Interactions in Food Components, Health Benefits, and Applications: A Comprehensive Review. Trends Food Sci. Technol. 2023, 141, 104201. [Google Scholar] [CrossRef]
38. Dahiya, S.; Kumar, A.; Singh, B. Enhanced Endoxylanase Production by Myceliophthora thermophila Using Rice Straw and Its Synergism with Phytase in Improving Nutrition. Process Biochem. 2020, 94,
235–242. [Google Scholar] [CrossRef]
39. Tse, T.; Schendel, R.R. Cereal Grain Arabinoxylans: Processing Effects and Structural Changes during Food and Beverage Fermentations. Fermentation 2023, 9, 914. [Google Scholar] [CrossRef]
40. Liu, Y.; Li, H.; Liu, W.; Ren, K.; Li, X.; Zhang, Z.; Huang, R.; Han, S.; Hou, J.; Pan, C. Bioturbation Analysis of Microbial Communities and Flavor Metabolism in a High-Yielding Cellulase
Bacillus subtilis Biofortified Daqu. Food Chem X 2024, 22, 101382. [Google Scholar] [CrossRef]
41. Reynaud, Y.; Lopez, M.; Riaublanc, A.; Souchon, I.; Dupont, D. Hydrolysis of Plant Proteins at the Molecular and Supra-Molecular Scales during in vitro Digestion. Food Res. Int. 2020, 134,
109204. [Google Scholar] [CrossRef] [PubMed]
42. Zhu, X.; Wang, L.; Zhang, Z.; Ding, L.; Hang, S. Combination of Fiber-Degrading Enzymatic Hydrolysis and Lactobacilli Fermentation Enhances Utilization of Fiber and Protein in Rapeseed Meal as
Revealed in Simulated Pig Digestion and Fermentation in vitro. Anim. Feed Sci. Technol. 2021, 278, 115001. [Google Scholar] [CrossRef]
Figure 1. Effects of fermentation time (a) and initial pH (b) on the PA and WU-AX contents. PA, phytic acid; WU-AX, water-unextractable arabinoxylan.
Figure 2. Pareto chart illustrating the effects of seven variables on the responses of Y[1] (a) and Y[2] (b). Variables with t-values exceeding the critical value of 2.77 are considered statistically
significant. X[1], fermentation time, hour; X[2], fermentation temperature, °C; X[3], initial pH; X[4], inoculum dose, %; X[5], particle size, μm; X[6], substrate filling rate, %; X[7],
material-to-liquid ratio; Y[1], the content of phytic acid (PA), mg/g; Y[2], the content of water-unextractable arabinoxylan (WU-AX), mg/g.
Figure 3. Response-surface plots illustrating the effects on PA content and the interactions between (a) fermentation time and inoculum dose, (c) fermentation time and the material-to-liquid ratio,
and (e) inoculum dose and the material-to-liquid ratio. Corresponding 2D contour plots depict the interactions between (b) fermentation time and inoculum dose, (d) fermentation time and the
material-to-liquid ratio, and (f) inoculum dose and the material-to-liquid ratio.
Figure 4. Response-surface plots illustrating the effects on WU-AX content and the interactions between (a) fermentation time and inoculum dose, (c) fermentation time and the material-to-liquid
ratio, and (e) inoculum dose and the material-to-liquid ratio. Corresponding 2D contour plots depict the interactions between (b) fermentation time and inoculum dose, (d) fermentation time and the
material-to-liquid ratio, and (f) inoculum dose and the material-to-liquid ratio.
Figure 5. Effects of fermentation on the activities of phytase (a), xylanase (b), cellulase (c), and protease (d). MGF, maize gluten feed; FMGF, fermented maize gluten feed; *** means significance level
of p < 0.001.
Table 1. Plackett–Burman design with experimental responses for the PA and WU-AX contents in MGF under different fermentation conditions.
Factors Responses
Run X[1] X[2] X[3] X[4] X[5] X[6] X[7] Y[1] Y[2]
1 96 (+) 28 (−) 7 (+) 20 (+) 189 (−) 3.42 (+) 1:2.5 (−) 8.6 ± 0.54 88.9 ± 0.94
2 96 (+) 28 (−) 6 (−) 10 (−) 30 (+) 3.42 (+) 1:3.5 (+) 8.3 ± 0.26 90.0 ± 1.88
3 48 (−) 34 (+) 6 (−) 10 (−) 189 (−) 3.42 (+) 1:3.5 (+) 9.1 ± 0.20 92.3 ± 1.01
4 48 (−) 28 (−) 6 (−) 10 (−) 189 (−) 1.84 (−) 1:2.5 (−) 9.9 ± 0.36 97.9 ± 1.40
5 72 (0) 31 (0) 6.5 (0) 15 (0) 123 (0) 2.63 (0) 1:3 (0) 8.3 ± 0.30 87.8 ± 1.24
6 48 (−) 34 (+) 7 (+) 10 (−) 30 (+) 1.84 (−) 1:2.5 (−) 10.0 ± 0.38 98.3 ± 0.84
7 96 (+) 34 (+) 6 (−) 20 (+) 30 (+) 1.84 (−) 1:3.5 (+) 8.2 ± 0.33 86.9 ± 0.96
8 48 (−) 28 (−) 7 (+) 20 (+) 30 (+) 1.84 (−) 1:3.5 (+) 8.7 ± 0.31 91.8 ± 0.91
9 96 (+) 28 (−) 7 (+) 10 (−) 189 (−) 1.84 (−) 1:3.5 (+) 8.4 ± 0.07 89.3 ± 0.57
10 96 (+) 34 (+) 6 (−) 20 (+) 189 (−) 1.84 (−) 1:2.5 (−) 8.5 ± 0.31 88.8 ± 1.08
11 96 (+) 34 (+) 7 (+) 10 (−) 30 (+) 3.42 (+) 1:2.5 (−) 8.8 ± 0.74 92.5 ± 0.59
12 48 (−) 34 (+) 7 (+) 20 (+) 189 (−) 3.42 (+) 1:3.5 (+) 8.9 ± 0.40 91.8 ± 0.43
13 48 (−) 28 (−) 6 (−) 20 (+) 30 (+) 3.42 (+) 1:2.5 (−) 9.0 ± 0.43 93.6 ± 0.92
Note: X[1], fermentation time, hour; X[2], fermentation temperature, °C; X[3], initial pH; X[4], inoculum dose, %; X[5], particle size, μm; X[6], substrate filling rate, %; X[7], material-to-liquid
ratio; Y[1], the content of phytic acid (PA), mg/g; Y[2], the content of water-unextractable arabinoxylan (WU-AX), mg/g. Values of Y[1] and Y[2] are given as means ± standard deviation (n = 3).
Table 2. ANOVA of Plackett–Burman design for the PA and WU-AX contents in MGF under different fermentation conditions.
PA (Phytic Acid)
Source Sum of Squares Degree of Freedom Mean Square F-Value p-Value Significance
Model 3.4036 7 0.4862 10.4564 0.0194 *
X[1] 1.8723 1 1.8723 40.2645 0.0032 **
X[2] 0.0385 1 0.0385 0.8287 0.4142
X[3] 0.0096 1 0.0096 0.2072 0.6726
X[4] 0.5461 1 0.5461 11.7448 0.0266 *
X[5] 0.0147 1 0.0147 0.3161 0.6040
X[6] 0.0901 1 0.0901 1.9384 0.2363
X[7] 0.8321 1 0.8321 17.8953 0.0134 *
WU-AX (Water-Unextractable Arabinoxylan)
Source Sum of Squares Degree of Freedom Mean Square F-Value p-Value Significance
Model 129.6758 7 18.5251 17.4466 0.0075 **
X[1] 70.7616 1 70.7616 66.6420 0.0012 **
X[2] 0.0645 1 0.0645 0.0608 0.8174
X[3] 0.8640 1 0.8640 0.8137 0.4180
X[4] 28.9541 1 28.9541 27.2685 0.0064 **
X[5] 1.3736 1 1.3736 1.2937 0.3189
X[6] 1.2545 1 1.2545 1.1815 0.3382
X[7] 26.4033 1 26.4033 24.8662 0.0076 **
Note: X[1], fermentation time, hour; X[2], fermentation temperature, °C; X[3], initial pH; X[4], inoculum dose, %; X[5], particle size, μm; X[6], substrate filling rate, %; X[7], material-to-liquid
ratio; statistical significance: * p < 0.05, ** p < 0.01.
Table 3. Central composite design with experimental responses for the PA and WU-AX contents in MGF under different fermentation conditions.
Factors Responses
Run X[1] X[4] X[7] Y[1] Y[2]
5 48 (−1) 10 (−1) 1:3.5 (1) 9.7 ± 0.27 94.8 ± 0.70
1 48 (−1) 10 (−1) 1:2.5 (−1) 10.2 ± 0.46 102.2 ± 1.59
12 72 (0) 25 (2) 1:3 (0) 8.9 ± 0.32 89.5 ± 1.25
13 72 (0) 15 (0) 1:2 (−2) 9.4 ± 0.13 90.9 ± 1.13
4 96 (1) 20 (1) 1:2.5 (−1) 9.3 ± 0.17 87.1 ± 0.32
16 (C) 72 (0) 15 (0) 1:3 (0) 8.3 ± 0.30 83.5 ± 1.46
10 120 (2) 15 (0) 1:3 (0) 8.4 ± 0.30 89.7 ± 0.80
8 96 (1) 20 (1) 1:3.5 (1) 8.6 ± 0.25 84.6 ± 0.65
3 48 (−1) 20 (1) 1:2.5 (−1) 9.2 ± 0.70 97.0 ± 0.73
2 96 (1) 10 (−1) 1:2.5 (−1) 9.2 ± 0.07 91.7 ± 0.97
7 48 (−1) 20 (1) 1:3.5 (1) 9.0 ± 0.36 88.9 ± 0.99
6 96 (1) 10 (−1) 1:3.5 (1) 8.7 ± 0.31 92.8 ± 1.05
19 (C) 72 (0) 15 (0) 1:3 (0) 8.3 ± 0.29 87.9 ± 0.56
17 (C) 72 (0) 15 (0) 1:3 (0) 8.2 ± 0.46 84.5 ± 0.72
14 72 (0) 15 (0) 1:4 (2) 8.6 ± 0.25 86.8 ± 0.91
9 24 (−2) 15 (0) 1:3 (0) 10.4 ± 0.44 105.5 ± 0.78
11 72 (0) 5 (−2) 1:3 (0) 10.1 ± 0.19 99.9 ± 1.08
18 (C) 72 (0) 15 (0) 1:3 (0) 8.4 ± 0.28 85.8 ± 1.24
15 (C) 72 (0) 15 (0) 1:3 (0) 8.6 ± 0.61 85.3 ± 0.51
Note: X[1], fermentation time, h; X[4], inoculum dose, %; X[7], material-to-liquid ratio; Y[1], phytic acid (PA) content, mg/g; Y[2], water-unextractable arabinoxylan (WU-AX) content, mg/g. Values of
Y[1] and Y[2] are given as means ± standard deviation (n = 3).
Table 4. ANOVA of central composite design for the PA and WU-AX contents in MGF under different fermentation conditions.
Source PA WU-AX
Sum of Squares F-Value p-Value Sum of Squares F-Value p-Value
Model 7.8627 17.8433 0.0001 696.2109 33.6951 <0.0001
X[1] 2.5440 51.9596 <0.0001 213.1600 92.4500 <0.0001
X[4] 0.9312 19.0195 0.0018 125.3280 54.3563 <0.0001
X[7] 0.9702 19.8160 0.0016 39.4384 17.1049 0.0025
X[1] × X[4] 0.2813 5.7443 0.0401 0.3200 0.1388 0.7181
X[1] × X[7] 0.0008 0.0163 0.9011 24.7808 10.7477 0.0096
X[4] × X[7] 0.0145 0.2951 0.6001 2.2261 0.9655 0.3515
X[1]^2 1.7536 35.8157 0.0002 229.7474 99.6442 <0.0001
X[4]^2 2.0730 42.3401 0.0001 134.6781 58.4115 <0.0001
X[7]^2 0.7016 14.3295 0.0043 20.0899 8.7132 0.0162
Residual 0.4407 20.7511
Lack of Fit 0.3247 2.2411 0.2272 10.0901 0.7572 0.6234
C.V.% 2.45 1.67
Pure Error 0.1159 10.6610
Cor Total 8.3034 719.9620
R^2 0.9469 0.9712
R^2-adjusted 0.8939 0.9424
Note: X[1], X[4], and X[7] represent the linear effects of fermentation time (h), inoculum dose (%), and the material-to-liquid ratio, respectively. X[1]^2, X[4]^2, and X[7]^2 denote the quadratic
effects, while X[1] × X[4], X[1] × X[7], and X[4] × X[7] represent the interaction effects.
Table 5. Comparison of the nutritional values of MGF and FMGF.
Items MGF FMGF p-Value Change (%)
Protein (%) 27.1 ± 0.13 28.63 ± 0.08 <0.001 5.57
PDI (%) 37.9 ± 1.02 46.67 ± 0.58 <0.001 23.17
DH (%) 2.3 ± 0.11 3.26 ± 0.12 <0.001 43.61
IVPD (%) 44.3 ± 1.07 58.48 ± 0.78 <0.001 31.92
Minerals’ bioavailability
Fe (%) 22.8 ± 1.51 33.68 ± 2.15 <0.001 47.72
Mn (%) 38.3 ± 1.53 53.23 ± 1.38 <0.001 39.05
Cu (%) 46.0 ± 1.04 57.54 ± 1.06 <0.001 25.20
Zn (%) 12.6 ± 1.23 16.67 ± 0.95 <0.001 31.88
Note: Cu, copper; DH, degree of hydrolysis; Fe, iron; FMGF, fermented maize gluten feed; IVPD, in vitro protein digestion; MGF, maize gluten feed; Mn, manganese; PDI, protein dispersibility index;
Zn, zinc.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s).
MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:
Share and Cite
MDPI and ACS Style
Sun, X.; Ma, L.; Xuan, Y.; Liang, J. Degradation of Anti-Nutritional Factors in Maize Gluten Feed by Fermentation with Bacillus subtilis: A Focused Study on Optimizing Fermentation Conditions.
Fermentation 2024, 10, 555. https://doi.org/10.3390/fermentation10110555
AMA Style
Sun X, Ma L, Xuan Y, Liang J. Degradation of Anti-Nutritional Factors in Maize Gluten Feed by Fermentation with Bacillus subtilis: A Focused Study on Optimizing Fermentation Conditions. Fermentation.
2024; 10(11):555. https://doi.org/10.3390/fermentation10110555
Chicago/Turabian Style
Sun, Xiaohong, Lei Ma, Yaoquan Xuan, and Jianfen Liang. 2024. "Degradation of Anti-Nutritional Factors in Maize Gluten Feed by Fermentation with Bacillus subtilis: A Focused Study on Optimizing
Fermentation Conditions" Fermentation 10, no. 11: 555. https://doi.org/10.3390/fermentation10110555
Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
Article Metrics | {"url":"https://www.mdpi.com/2311-5637/10/11/555","timestamp":"2024-11-07T06:50:56Z","content_type":"text/html","content_length":"520732","record_id":"<urn:uuid:b246c62e-cd0e-4a59-a7e5-474f0776cd59>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00375.warc.gz"} |
Other Knight's Tour Algorithms
Backtracking Algorithm
Backtracking is a general algorithm for finding all (or some) solutions to some computational problems that incrementally builds candidates to the solutions and abandons a candidate as soon as it
determines that the candidate cannot possibly be completed to a valid solution.
For the Knight's Tour:
1. Start from an initial position.
2. Recursively try all possible moves from the current position.
3. If a move leads to a solution, return true. Otherwise, backtrack and try other moves.
4. If no move leads to a solution, return false.
While this method guarantees finding a solution if one exists, it can be slow for large board sizes.
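The four steps above map directly onto a recursive helper. The sketch below is a minimal Octave illustration (our own function names, not code from this site); cells of the returned board record the visit order, with 0 marking an unvisited square:

```octave
function board = knights_tour (n)
  board = zeros (n);
  board(1, 1) = 1;                        # step 1: start from an initial position
  [ok, board] = try_moves (board, 1, 1, 2, n);
  if (! ok)
    board = [];                           # no tour exists from this start
  endif
endfunction

function [ok, board] = try_moves (board, r, c, step, n)
  moves = [2 1; 1 2; -1 2; -2 1; -2 -1; -1 -2; 1 -2; 2 -1];
  if (step > n * n)                       # every square visited: tour found
    ok = true;
    return;
  endif
  for k = 1:8                             # step 2: try all moves from here
    nr = r + moves(k, 1);
    nc = c + moves(k, 2);
    if (nr >= 1 && nr <= n && nc >= 1 && nc <= n && board(nr, nc) == 0)
      board(nr, nc) = step;
      [ok, board] = try_moves (board, nr, nc, step + 1, n);
      if (ok)
        return;                           # step 3: this move led to a solution
      endif
      board(nr, nc) = 0;                  # step 4: backtrack and try other moves
    endif
  endfor
  ok = false;
endfunction
```

For example, knights_tour (5) typically returns a 5×5 tour almost instantly, whereas the same plain search on much larger boards illustrates the slowdown noted above.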
Divide and Conquer
This approach breaks down the problem into smaller, manageable parts:
1. Divide the board into smaller sections.
2. Solve the Knight's Tour for each section independently.
3. Connect the solutions of the smaller sections to form a complete tour.
This method is particularly effective for larger board sizes.
Neural Network Approach
Artificial Neural Networks can be trained to solve the Knight's Tour problem:
• The network is trained on many examples of valid Knight's Tours.
• It learns to predict the next move based on the current board state.
• This approach can generate solutions quickly once trained, but requires significant computational resources for training.
Genetic Algorithms
Genetic Algorithms mimic the process of natural selection to find solutions:
1. Generate a population of random tours.
2. Evaluate the fitness of each tour (how close it is to a complete, valid tour).
3. Select the fittest tours and "breed" them to create a new generation.
4. Introduce random mutations to maintain diversity.
5. Repeat until a valid tour is found or a maximum number of generations is reached.
This method can find novel solutions but may require many iterations.
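To make step 2 of this loop concrete, one common encoding represents a candidate tour as a fixed-length sequence of move indices (1–8). The hedged Octave sketch below (illustrative names of our own) scores such a genome by how many distinct legal squares it reaches before its first invalid move:

```octave
function f = tour_fitness (genome, n)
  ## GENOME is a row vector of move indices 1-8 for an n-by-n board.
  moves = [2 1; 1 2; -1 2; -2 1; -2 -1; -1 -2; 1 -2; 2 -1];
  visited = false (n);
  r = 1;  c = 1;
  visited(1, 1) = true;
  f = 1;                          # squares reached so far; n*n means a full tour
  for g = genome
    r += moves(g, 1);
    c += moves(g, 2);
    if (r < 1 || r > n || c < 1 || c > n || visited(r, c))
      return;                     # stop scoring at the first illegal or repeated square
    endif
    visited(r, c) = true;
    f += 1;
  endfor
endfunction
```

Selection then favors genomes whose fitness is closest to n*n, while crossover and mutation (step 4) operate directly on the index sequence.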
Mathematical Constructions
For certain board sizes, mathematical constructions can generate Knight's Tours:
• These methods use properties of number theory and graph theory.
• They can quickly generate tours for specific board sizes but are not generally applicable to all sizes.
Each of these algorithms has its strengths and weaknesses, and the choice of algorithm often depends on the specific requirements of the problem, such as board size, computational resources
available, and whether finding any solution quickly or finding all possible solutions is the goal. | {"url":"https://knightstourchallenge.com/knights-tour-algorithms","timestamp":"2024-11-02T08:27:31Z","content_type":"text/html","content_length":"11813","record_id":"<urn:uuid:36db6f30-978a-4220-94da-d875afa520a3>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00055.warc.gz"} |
Math Tower of Hanoi
Play Online Game
The Tower of Hanoi is a puzzle game invented in the 19th century by the French mathematician Édouard Lucas. He was inspired by a legend about priests in a Hindu temple. The priests had three rods, and on one of them rested 64 golden disks, each disk slightly smaller than the one beneath it. According to the legend, they had to move the tower from one rod to another following two conditions. First, they could only move one disk at a time. Second, they were not allowed to put a larger disk on a smaller one. When the work was completed, the temple would crumble into dust, and the world would vanish. We calculated that 2^64 − 1 = 18 446 744 073 709 551 615 steps are needed to solve the puzzle with 64 disks. If 1 second is spent on each step, moving the tower would take about 585 billion years. The Earth is about four and a half billion years old.
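That step count comes from the standard recursive solution, sketched below in Octave for illustration (this is not the game's own code); moving n disks always takes 2^n − 1 moves:

```octave
## List the moves needed to carry n disks from rod SRC to rod DST via SPARE.
function moves = hanoi (n, src, dst, spare)
  if (n == 0)
    moves = cell (1, 0);                         # no moves for zero disks
  else
    moves = [hanoi(n - 1, src, spare, dst), ...  # clear the n-1 smaller disks
             {[src, " -> ", dst]}, ...           # move the largest disk
             hanoi(n - 1, spare, dst, src)];     # restack the smaller disks on top
  endif
endfunction
```

For example, numel (hanoi (4, "A", "C", "B")) is 15, i.e. 2^4 − 1; with 64 disks the count grows to the 2^64 − 1 figure quoted above.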
Based on this puzzle, we created a game called Math Tower of Hanoi. The game consists of 18 levels, from simple to complex. The smallest disk is numbered 1, the next larger disk is numbered 2, and so on up to the largest disk, numbered 8. The main goal of the game is to move the disks so that the sum of the numbers on the disks on each rod matches the target. The fewer moves you make during the game, the more points you will get for the puzzle. You are allowed to move only one disk at a time, and you cannot put a larger disk on a smaller one. You will get other math tasks between the levels. Play and improve your mental arithmetic skills.
Ready to try? | {"url":"https://playcoolmath.com/en/math-games/math-hanoi-tower","timestamp":"2024-11-09T04:35:34Z","content_type":"text/html","content_length":"24698","record_id":"<urn:uuid:d0db4302-ded6-4a05-948b-da26857c6252>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00194.warc.gz"} |
Kelly McCown
Congratulations to this month's winners! They each won a $5 TPT gift card to use for their classroom.
Did you know I give away math freebies every month?
Did you know I give away TPT gift cards too?
Click HERE=> http://bit.ly/FREEMathPuzzles
Happy Teaching!
Do you need practice and review of Kindergarten Math skills?
Do you want to change up your lessons with interactive math materials?
These interactive notebook activities are intended to help students understand how to decompose numbers less than or equal to 10 into pairs in more than one way, write numbers from 0 to 20,
understand the relationships between numbers and quantities, count “how many?”, identify whether the number of objects is greater than, less than, or equal to, compare two numbers between 1 and 10,
add & subtract with objects, solve addition and subtraction word problems, fluently add and subtract within 5, and compose and decompose numbers from 11 to 19, count to 100 by ones and by tens, count
forward beginning from a given number, correctly name shapes, analyze & compare shapes, identify shapes as two-dimensional (lying in a plane, "flat") or three-dimensional ("solid"), Describe
measurable attributes of objects, such as length or weight, describe several measurable attributes of a single object, and directly compare two objects with a measurable attribute in common, to see
which object has "more of"/"less of" the attribute, and describe the difference.
Cutting out the manipulatives. High Quality Clipart Images.
Gluing into their notebooks with precision and taking ownership of their work.
Reviewing key skills and demonstrating mastery with review.
✔Common Core State Standards Covered:
CCSS.Math.Content.K.OA.A.1 Represent addition and subtraction with objects, fingers, mental images, drawings, sounds (e.g., claps), acting out situations, verbal explanations, expressions, or equations.
CCSS.Math.Content.K.OA.A.2 Solve addition and subtraction word problems, and add and subtract within 10, e.g., by using objects or drawings to represent the problem.
CCSS.Math.Content.K.OA.A.3 Decompose numbers less than or equal to 10 into pairs in more than one way, e.g., by using objects or drawings, and record each decomposition by a drawing or equation (e.g., 5 = 2 + 3 and 5 = 4 + 1).
CCSS.Math.Content.K.OA.A.4 For any number from 1 to 9, find the number that makes 10 when added to the given number, e.g., by using objects or drawings, and record the answer with a drawing or equation.
CCSS.Math.Content.K.OA.A.5 Fluently add and subtract within 5.
CCSS.Math.Content.K.CC.A.1 Count to 100 by ones and by tens.
CCSS.Math.Content.K.CC.A.2 Count forward beginning from a given number within the known sequence (instead of having to begin at 1).
CCSS.Math.Content.K.CC.A.3 Write numbers from 0 to 20. Represent a number of objects with a written numeral 0-20 (with 0 representing a count of no objects).
CCSS.Math.Content.K.CC.B.4 Understand the relationship between numbers and quantities; connect counting to cardinality.
CCSS.Math.Content.K.CC.B.5 Count to answer "how many?" questions about as many as 20 things arranged in a line, a rectangular array, or a circle, or as many as 10 things in a scattered configuration; given a number from 1-20, count out that many objects.
CCSS.Math.Content.K.CC.C.6 Identify whether the number of objects in one group is greater than, less than, or equal to the number of objects in another group, e.g., by using matching and counting strategies.
CCSS.Math.Content.K.CC.C.7 Compare two numbers between 1 and 10 presented as written numerals.
CCSS.Math.Content.K.NBT.A.1 Compose and decompose numbers from 11 to 19 into ten ones and some further ones, e.g., by using objects or drawings, and record each composition or decomposition by a drawing or equation (such as 18 = 10 + 8); understand that these numbers are composed of ten ones and one, two, three, four, five, six, seven, eight, or nine ones.
CCSS.Math.Content.K.G.A.2 Correctly name shapes regardless of their orientations or overall size.
CCSS.Math.Content.K.G.A.3 Identify shapes as two-dimensional (lying in a plane, "flat") or three-dimensional ("solid").
CCSS.Math.Content.K.G.B.4 Analyze and compare two- and three-dimensional shapes, in different sizes and orientations, using informal language to describe their similarities, differences, parts (e.g., number of sides and vertices/"corners") and other attributes (e.g., having sides of equal length).
CCSS.Math.Content.K.MD.A.1 Describe measurable attributes of objects, such as length or weight. Describe several measurable attributes of a single object.
CCSS.Math.Content.K.MD.A.2 Directly compare two objects with a measurable attribute in common, to see which object has "more of"/"less of" the attribute, and describe the difference. For example, directly compare the heights of two children and describe one child as taller/shorter.
This April Interactive Math Notebook FEATURES:
✔50 Pages of April Math Activities in Color
✔50 Pages of April Math Activities in Black & White
✔Skills Reviewed: Represent, Count, and Write Numbers 6 to 9
✔Skills Reviewed: Represent, Count, and Write Numbers to 10
✔Skills Reviewed: Addition
✔Skills Reviewed: Subtraction
✔Skills Reviewed: Represent, Count, and Write Numbers 11 to 19
✔Skills Reviewed: Represent, Count, and Write Numbers 20 and more
✔Skills Reviewed: Two Dimensional Shapes
✔Skills Reviewed: Three Dimensional Shapes
✔Skills Reviewed: Measurement
✔Skills Reviewed: Making Graphs
✔Fun April activities centered on reviewing Common Core State Standards
✔Packed with common core math problems for review and practice
✔Lots of coloring fun. A MUST: Using a set of crayons or markers.
Reviewing math skills with different methods deepens students' conceptual knowledge. Other teachers have used these materials for RTI and review to help students develop mastery. I hope your students
enjoy learning and engaging with these math skills too!
Happy Teaching!
Do you want interactive math activities for Easter?
Do you want to review important math skills with your students?
These interactive notebook activities are intended to help students understand how to decompose numbers less than or equal to 10 into pairs in more than one way, write numbers from 0 to 20,
understand the relationships between numbers and quantities, count “how many?”, compare two numbers, add & subtract with objects, correctly name shapes, and analyze & compare shapes.
How to use it:
1. A fun review of grade level Math CCSS skills
2. Substitute packet for days when you are sick or not at school
3. Morning Work, Classwork, RTI, and more
This Easter Interactive Math Notebook FREEBIE FEATURES:
✔5 Pages of Easter Math Activities in Color
✔Skills Reviewed: Represent, Count, and Write Numbers to 10
✔Skills Reviewed: Addition
✔Skills Reviewed: Subtraction
✔Skills Reviewed: Two Dimensional Shapes
✔Fun Easter activities centered on reviewing Common Core State Standards
✔Packed with common core math problems for review and practice
✔Lots of coloring fun. A MUST: Using a set of crayons or markers.
Happy Easter!
Do you want to review math skills for Spring?
Do your students need an Easter Egg challenge?
This Easter & Spring Elementary Math Activities NO PREP packet will keep your third, fourth, and fifth graders engaged! This packet is just plain fun. Not only is it PACKED with grade-level
common core math problems, it also gives students fun coloring, puzzles, and problem solving. Use this packet for bellwork, classwork, extra credit, fast finishers, or homework.
Review of Addition & Subtraction
Number Puzzles
*6 different Engaging Math Activities
*FUN activities and puzzles centered on reviewing math curriculum.
*Packed with 3rd, 4th, 5th grade common core math problems for review and practice.
*HIGH QUALITY CLIPART is included
*Topics covered: Addition & Subtraction, Operations with Money, Multiplying by Decimals, Operations with Division, Number Puzzle with Addition, Subtraction & Multiplication
Activities Included:
-Easter Egg Math Adding {with answer key}
-Easter Egg Math Subtracting {with answer key}
-Exchanging Eggs {with answer key}
-Sorting Jellybeans {with answer key}
-Ready, Set, Color! {with answer key}
-Easter Number Puzzle {with answer key}
Spring is a great season for renewal and refreshing math skills. Students enjoy the Easter season for the candy and engagement with egg activities. Have fun with your students and make positive math
memories this Spring.
Happy Easter!
Are you planning for April?
Do you want to engage your students in a fun Spring math review?
This April Math NO PREP packet will keep your fourth graders engaged! This packet is just plain fun. Not only is it PACKED with fourth-grade common core math problems, it also gives students fun
coloring, puzzles, and problem solving. Use this packet for bellwork, classwork, extra credit, fast finishers, or homework!
Topics Covered:
-Multiplying by 2 digits
-Dividing by 1 digit numbers
-Factors & Multiples
-Comparing Fractions
-Adding & Subtracting Mixed Numbers
-Multiplying Fractions by Whole Numbers
-Relating fractions and decimals
-Line Plots
-Angle Measurements
-Area of Rectangles and Squares
Engaging Review
Math Skill Applications
Spring Themed Packet
Giving your students the opportunity for math fluency and enrichment in number sense helps all their other math skills flourish. Increasing time to practice math fluency and different math challenges
helps students think critically about the math process. Other teachers have commented, "Cute, easy to use and not just your everyday ordinary practice pages come with this set! Thank you!"
Happy Teaching! | {"url":"https://www.kellymccown.com/2018/03/","timestamp":"2024-11-06T12:05:51Z","content_type":"application/xhtml+xml","content_length":"262722","record_id":"<urn:uuid:428d2246-af1e-4708-8417-193ce2ca6307>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00505.warc.gz"} |
Two-dimensional Geometric Shapes (GNU Octave)
Draw a rectangular patch defined by pos and curv.
The variable pos(1:2) defines the lower left-hand corner of the patch and pos(3:4) defines its width and height. By default, the value of pos is [0, 0, 1, 1].
The variable curv defines the curvature of the sides of the rectangle and may be a scalar or two-element vector with values between 0 and 1. A value of 0 represents no curvature of the side,
whereas a value of 1 means that the side is entirely curved into the arc of a circle. If curv is a two-element vector, then the first element is the curvature along the x-axis of the patch and
the second along y-axis.
If curv is a scalar, it represents the curvature of the shorter of the two sides of the rectangle and the curvature of the other side is defined by
min (pos(3:4)) / max (pos(3:4)) * curv
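To make the scalar rule concrete, here is a small illustration in Python (a hypothetical helper written for this page, not part of Octave):

def scaled_curv(width, height, curv):
    # A scalar curv applies to the shorter side; the longer side's
    # curvature is reduced by the aspect ratio, per the rule above.
    return min(width, height) / max(width, height) * curv

print(scaled_curv(3.0, 1.0, 0.6))   # 0.2: the longer side is less curved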
Additional property/value pairs are passed to the underlying patch command.
If the first argument hax is an axes handle, then plot into this axes, rather than the current axes returned by gca.
The optional return value h is a graphics handle to the created rectangle object. | {"url":"https://docs.octave.org/v4.2.2/Two_002ddimensional-Geometric-Shapes.html","timestamp":"2024-11-11T13:04:46Z","content_type":"text/html","content_length":"6611","record_id":"<urn:uuid:ab9db222-401c-458d-a02b-b43c6e00b1ba>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00559.warc.gz"} |
Solve it did you? Speak Yoda how to
Earlier today I set you the following puzzle about the peculiar grammar of Yoda, Star Wars’ pointy-eared Jedi master.
Yoda inverts pairs of phrases before speaking. If Yoda says “Believe you I don’t”, we know what he means is “I don’t believe you.”
Here’s a way to mark up a Yoda sentence to recover its original meaning.
<[believe you] [I don’t]>
In this notation the “[ ]” means preserve the relative order of the phrases inside the brackets (of which there must be exactly two) and the “< >“ means invert the order. So in this case the
annotation means “you” comes after “believe” and “don’t” comes after “I”, but “I don’t” comes before “believe you”.
Puzzle Part I
For each of these following annotated Yoda sentences, write down the original.
1) < go [ you must ] >
2) < [ strong [ with [ the force ] ] ] < [this [ one is ] ] < think I > > >
3) < [ < < home [ milk < coming before > ] > [ < to forget > < up pick > ] > tonight ] < don’t please > >
1) You must go
2) I think this one is strong with the force
3) please don’t forget to pick up milk before coming home tonight
Puzzle Part II
Mark up the following Yoda sentences in such a way that they each recover the original meaning, which is ‘use the Force Luke’. It might be the case that there are multiple solutions (in which case
say so) or there may be no solutions.
1) use Force Luke the
2) Luke the Force use
3) Luke Force the use
4) the Luke use Force
5) the Luke Force use
1) [ Use < [ force Luke ] the > ]
2) < Luke < [ the force ] Use > > MORE POSSIBLE
3) < Luke < force < the Use > > > MORE POSSIBLE
4) NOT POSSIBLE
5) < [ the < Luke force > ] Use >
Puzzle Part III
There are 24 ways of ordering four objects. That’s because there are four choices for the first position, three choices left for the second position, two left for the third position, and one left for
the fourth, making 4 x 3 x 2 x 1 = 24 possible choices.
If Yoda was able to rearrange the words ‘use the Force Luke’ in any way he wanted, therefore, there are 24 ways he could do this. If he is only allowed to use the rules of this puzzle, that is using
the [ ] and the < > brackets, how many ways are there that he can rearrange ‘use the Force Luke’?
You could find this out by listing all 24 possibilities and getting stuck on two of them. Or you could notice the pattern from the previous part of the puzzle. If we number the words of “use the
force Luke” as 1, 2, 3 and 4, then 2413 fails, but the following are possible:
Add the notation:
• [ 1 < [ 3 4 ] 2 > ]
• < 4 < [ 2 3 ] 1 > >
• < 4 < 3 < 2 1 > > >
• < [ 2 < 4 3 > ] 1 >
In each case the two numbers that you must apply the bracket rule to first are adjacent numbers, meaning they are next to each other in the normal order of numbers. (In the first they are 3 and 4, in the second 2 and 3, in the third 1 and 2, and in the fourth 3 and 4). Since the numbers are adjacent, the application of the rule will keep them adjacent, and they will stay adjacent on all subsequent applications of
rules. Eventually, the rules produce a sequence of three numbers that are all adjacent and then finally four, which will get you a solution.
In other words, when you get two adjacent numbers a solution is possible.
However, look at 2413. It contains no sequence of two adjacent numbers, and this is why it is impossible to turn it into a sequence of four adjacent numbers. There are only two permutations of four
digits which contain no adjacent numbers:
2413 and 3142.
• the Luke use Force
• Force use Luke the
are the only phrases that cannot pass Yoda’s mouth.
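If you would rather check this exhaustively than argue it, the following Python sketch (ours, not part of the original column) represents each bracketed block by the interval of original positions it recovers, and tries every merge order:

from itertools import permutations

def solvable(perm):
    # Can a Yoda phrase (a permutation of 1..n) be bracketed with
    # [ ] / < > so that it recovers the identity order 1..n?
    # Each block is the interval (lo, hi) of consecutive original
    # positions its recovered text spans.
    def search(blocks):
        if len(blocks) == 1:
            return True
        for i in range(len(blocks) - 1):
            (alo, ahi), (blo, bhi) = blocks[i], blocks[i + 1]
            if ahi + 1 == blo:   # [ ] keeps the order: a then b
                if search(blocks[:i] + [(alo, bhi)] + blocks[i + 2:]):
                    return True
            if bhi + 1 == alo:   # < > inverts the order: b then a
                if search(blocks[:i] + [(blo, ahi)] + blocks[i + 2:]):
                    return True
        return False
    return search([(w, w) for w in perm])

bad = [p for p in permutations(range(1, 5)) if not solvable(p)]
print(bad)   # [(2, 4, 1, 3), (3, 1, 4, 2)]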
I hope you enjoyed today’s puzzle. I’ll be back in two weeks.
I set a puzzle here every two weeks on a Monday. I’m always on the look-out for great puzzles. If you would like to suggest one, email me.
Thanks again to Jonathan May, who wrote the puzzle, and to the North American Computational Linguistics Olympiad, where the puzzle originally appeared. | {"url":"https://www.newsgroove.co.uk/solve-it-did-you-speak-yoda-how-to/","timestamp":"2024-11-11T07:34:30Z","content_type":"text/html","content_length":"110473","record_id":"<urn:uuid:1523747d-8d74-4edf-9b87-825c24120b3f>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00079.warc.gz"} |
How would you calculate the energy of a photon with a known frequency? | Socratic
How would you calculate the energy of a photon with a known frequency?
For example, if the frequency is "8x10^13Hz"
1 Answer
The energy of the photon can be calculated using $E = h\nu$
In the formula $E = h\nu$:
$E$ is the energy in Joules
$h$ is Planck's Constant and
$\nu$ is the frequency of the photon (light)
So therefore the energy of a photon with a frequency of $8 \times 10^{13}$ Hz will be:
$E = h\nu$
$E = 6.626 \times 10^{-34}\ \mathrm{m^2\,kg/s} \times 8 \times 10^{13}\ \mathrm{Hz}$
$E = 5.3 \times 10^{-20}$ Joules
Therefore the energy of the photon is $5.3 \times 10^{-20}$ Joules
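As a quick numerical check (a Python one-liner, SI units assumed):

h = 6.626e-34    # Planck's constant in J*s (equivalently m^2*kg/s)
nu = 8e13        # frequency in Hz
print(h * nu)    # 5.3008e-20 J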
Hope I helped :)
| {"url":"https://socratic.org/questions/how-would-you-calculate-the-energy-of-a-photon-with-a-known-frequency#190178","timestamp":"2024-11-10T12:31:35Z","content_type":"text/html","content_length":"33637","record_id":"<urn:uuid:af6f5a46-badf-4f5e-b7c8-eec8ca62ccb1>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00143.warc.gz"}
The Office Pool 1
My office buddies have access to point spreads too. They tend to look at the spread (or at least look at each team's respective record, which is just as accurate) and then pick a couple upsets each
week. Over the past five years, the spread identifies winners correctly 66.2% of the time. So, normally, their upsets would be correct on average 33.8% of the time (100%-66.2%), but they won't be
picking upsets in lopsided match-ups. (Although we can't always assume rationality, we will assume sanity.) So in their 34 chosen upsets (2 per week), my buddies will be right 45% of the time on
average. I would have a 10% accuracy advantage in those games.
After doing the math, each of my buddies would average a 63.3% accuracy rate (66.2% * 232 games + 45% * 34 games). And I'd average 66.2% accuracy. Man, I can't wait to collect my winnings!
But wait. Because of luck, some would be slightly more accurate, and some would be less accurate. In fact, the only thing that really matters is how well each of them do on the 34 games they deviate
from picking the published favorite. In the other 232 games, we'd have identical picks. Of the 34 games in question, each game that one of my buddies gets right, I must have been been wrong. One of
my 10 friends needs to be correct greater than 50% of the time in his 34 games to beat me.
The mathematical bottom line is, "How often is someone correct at least in 18 out of 34 trials with a 0.45 probability of being correct in any given trial?" The binomial distribution gives us the
answer--it's 22.4% of the time. That's pretty good, right? I have a 77.6% chance of beating any one of my opponents. The only problem is that there are 10 of them.
The chance I would beat all 10 of my buddies is the conjunctive probability of beating one of them. It's 77.6% * 77.6%... and so on, for however many opponents I have. In this case, it's:
0.776 ^10 = 0.079
In other words, my chances of winning the office pool are just 7.9%--significantly less than a fair chance of 1 in 11. That's why just picking the favorites is a bad strategy. I'd actually be better
off choosing the less accurate strategy of my buddies. At least then I'd have fair chance at 1 in 11.
I realize that it is counter-intuitive that a strategy that is less accurate overall is better than a more accurate strategy. But in a contest against several opponents, the more risky strategy--with
a greater deviation of outcomes--may be best.
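For the curious, the two numbers above can be reproduced in a few lines of Python (the last line leans on the independence assumption that the note and comments below question):

from math import comb

# P(one opponent beats me): he needs at least 18 of his 34 upset games
# right, with a 0.45 chance per game.
p_beat = sum(comb(34, k) * 0.45**k * 0.55**(34 - k) for k in range(18, 35))
print(f"{p_beat:.3f}")              # ~0.224
print(f"{(1 - p_beat)**10:.3f}")    # ~0.079 if the 10 opponents were independent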
Note: Phil Birmbaum points out that the odds of the opponents are not independent of one another, and therefore the simple compound probability I calculated here is far too low. If one opponent
happens to beat you, then the other opponents may be more likely to beat you as well, and vice versa. In the end, picking all favorites may be the better play. See his comments for an explanation.
3 Responses to “The Office Pool 1”
Phil Birnbaum says:
I don't think this is quite right ... you can't multiply the 10 probabilities together like that, because they're not independent. Suppose you beat the first opponent. Then the chance
that you did very well on your favorites is higher, and so the chance you also beat the second, third, fourth ... opponent is more than 77.6%.
(Look at it this way: suppose you run the same calculation for the first opponent, who picks underdogs. His chance of beating you is 22.4%. His chance of beating the others is 50%.
Multiply nine 50%s and one 22.4%, and you obviously get a lot less than 7.9%. So that can't be right.)
I think the best strategy is to pick all favorites. The disadvantage is not that it reduces your odds of winning, but that it increases the chance of a tie and a split pot. You might be
better off taking all favorites except one underdog, and hoping nobody matches you.
The best strategy, assuming all your opponents also use it, is probably to take all favorites with probability X, and one underdog with probability (1-X). It's not that hard to calculate
X, but I don't feel like it. :)
Brian Burke says:
As long as the opponents aren't colluding to pick the same upsets, and there are enough upsets to pick, wouldn't it be independent? I guess not; your calculation based on the 'first
opponent' proves that.
I suppose if there are only a handful of reasonable upsets to pick, that would make it non-independent. (I actually just went and calculated--there are 58.4 games per year with spreads <4
pts.) I guess that's not enough.
Would it really matter how well the favorites do, as long as the opponents are basing their picks on the "favorites except a couple upsets" method? In other words, the record of the
favorites becomes a floating zero-point, a baseline from which the other contestants' records deviate. Each contestant has a 23.4% chance of beating that baseline, whatever it is. (But
not independently as you pointed out.)
I suppose if the favorites had an unusually strong year (say 70%) then we'd have to slightly reduce the 45% upset accuracy rate I assumed (and that was just an assumption). But then
again, there is equal chance of an unusually weak year for favorites. For example, in 2006 the Vegas favorites won only 59% of the time.
Phil Birnbaum says:
Yeah, the records of the underdog pickers (assuming they're all different games) are independent, but the *probability of beating the favorite picks* is not independent.
If the first underdog-picker beats the favorite-picker, it's likely the favorite-picker picked badly, and so more likely the second underdog-picker will beat him too, and the third, and
so on.
I think the problem is that the effect of the favorites' record is not symmetrical. The baseline is 55%. If the favorites win only 50%, every picker has an equal chance (since there's no
such thing as a favorite any more), and the favorite-picker has a 1/11 (9.1%) chance of winning the pool.
But if the favorites win 60%, then it's going to be very hard for a bunch of 40% underdogs to beat the favorites in 34 games. I won't do the math, but perhaps the probability of the
favorite-picker winning will now be 40%.
Extrapolating linearly (which we shouldn't do), you can average the 50% and 60% numbers -- 9.1% and 40% (or whatever the correct number is) -- to estimate the chance of the
favorite-picker winning at 55% per game. This naive estimate comes out to about 25%, which I bet is pretty close. Probably easy to do a simulation, maybe I'll try it. | {"url":"https://www.advancedfootballanalytics.com/2008/03/office-pool-1.html","timestamp":"2024-11-03T07:49:38Z","content_type":"application/xhtml+xml","content_length":"101648","record_id":"<urn:uuid:e0e888c4-9642-4dee-b693-9d2ef66ee791>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00051.warc.gz"} |
Tarjan's Algorithm for finding Strongly Connected Components in Directed Graph
• After going through this chapter, definitely look at the chapters on Cycles:
Often times problems that can be solved by using the concept of Strongly Connected Components can also be solved by leveraging the concept of Cycles. For any nodes u and v, the nodes u and v belong to the same cycle if and only if u and v are strongly connected.
NOTE: Throughout the chapter I will be using the terms node and vertex interchangeably.
Please read the chapter Articulation Points before going through this chapter, because all the fundamentals of Tarjan's Algorithm are discussed there. The algorithm for Finding Strongly Connected Components in a Directed Graph is built on top of the concepts introduced in the Articulation Points chapter.
What are Strongly Connected Components in a Directed Graph?
Two vertices v and w are strongly connected if they are mutually reachable from each other: there is a directed path from v to w and a directed path from w to v. A directed graph is strongly connected if all of its vertices are strongly connected to each other. All the 4 graphs in the diagram below are strongly connected directed graphs.
If a particular component in a directed graph is strongly connected, then we call that component a Strongly Connected Component or SCC. In an SCC all nodes are reachable from all other nodes.
If you think deeply you would observe a few important things about strongly connected components or SCCs:
1. Strongly Connected Components are basically cycles.
2. What is unique about SCC is that THERE IS NO WAY TO FIND A WAY THAT GOES OUT OF SCC AND THEN COMES BACK IN.
Look at all the diagrams in this chapter and get this statement clear to you. You should be able to convince yourself and this statement holds true for all SCCs. Because if you understand this
simple concept it would be enough for you to figure out the algorithm to find SCCs yourself.
3. Edges that leave an SCC never come back into it: any outbound edge from a node in an SCC is either a cross edge or an edge leading into another SCC.
Look at the directed graphs and their Strongly Connected Components in the below two diagrams:
Take any Strongly Connected Component with more than one vertex in it from the above two diagrams. Just like in Finding Articulation Points, here too if you construct a DFS spanning tree, then, since in any SCC THERE IS NO WAY TO FIND A WAY THAT GOES OUT OF THE SCC AND THEN COMES BACK IN, the earliestDiscoveredVertexReachable for all the vertices in an SCC will be the same as the discoveryTime of the root of the SCC in the DFS Spanning Tree, because SCCs are cyclic and the DFS root of the SCC is the earliest discovered node that is reachable from any node in the SCC. Take any SCC from the above two diagrams and this would hold true. Take a pause and please get this clear to you, because if you have understood this, congrats! You have already gotten the core of the algorithm.
So basically what we will do is: we will construct the DFS Spanning Tree in the same way as we were doing for finding Articulation Points, and whenever we are done doing DFS on all of the adjacent vertices of a node (say, v), we check: what is the earliestDiscoveredVertexReachable we got for the node v?
From above discussion we can summarize:
If we are at node v and if for node v we get
earliestDiscoveredVertexReachable[v] = discoveryTime[v],
then the vertex v is root of an SCC.
Cross Edges:
If you have observed, any node in an SCC can have an outbound edge going out of the SCC, but in all cases that outbound edge is either (1) a cross-edge, or (2) an edge leading to the next SCC. But the next SCC won't be part of the current SCC since, according to our above discussion, if a component is an SCC there is no way to come back to it even if there is a way to get out.
Please note that if an SCC has a cross edge to other SCCs, we should discount those edges, because a cross edge always points to a vertex which is already visited. This means the other SCC that the cross-edge is pointing to has already been processed. We will see in our code how we identify cross edges.
Draw two SCCs and connect them by one directed edge and you would be able to figure it out.
Quick Note:
• Back edges point from a node to one of its ancestors in the DFS tree.
• Forward edges point from a node to one of its descendants.
• Cross edges point from a node to a previously visited node that is neither an ancestor nor a descendant.
How would you keep track of all the vertices in a Strongly Connected Component in our code?
To track the subtree rooted at a specific node (say, vertex v), we can use a stack and keep pushing nodes while visiting. When we encounter the node v again while backtracking, pop all nodes from the stack till v itself is popped. This makes sure we pop only the nodes belonging to the SCC rooted at vertex v. Why? Because if v is the root of an SCC, the only other nodes visited between discovering v and backtracking to v are the nodes of v's SCC plus nodes of successor SCCs, and those successor SCCs are popped off the stack before we backtrack to v. This will become clearer when we look at the code.
To make sure we don't consider cross edges: when we reach a node which is already visited, we should process it only if it is present in the stack, else ignore the node.
Relationship with Topological Sort:
While there is nothing special about the order of the nodes within each strongly connected component, one useful property of the algorithm is that no strongly connected component will be identified
before any of its successors. Therefore, the order in which the strongly connected components are identified constitutes a reverse topological sort of the DAG formed by the strongly connected components.
This observation becomes quite intuitive if you recall Topological Sort using DFS: there too we get the nodes in the opposite order of the topological order after the Depth First Search is done. The same thing happens here, since Tarjan's Algorithm is built on DFS. The Course Scheduling problem demonstrates how we can leverage this observation to solve various problems.
(A diagram explaining the above fact is omitted here.)
Donald Knuth cited Tarjan's algorithm as one of his favorite implementations in the book "The Stanford GraphBase". He also wrote: "The data structures that he devised for this problem fit together in an amazingly beautiful way, so that the quantities you need to look at while exploring a directed graph are always magically at your fingertips. And his algorithm also does topological sorting as a byproduct."
algorithm Tarjan :
input: graph G = (V, E)
output: set of strongly connected components (sets of vertices)
index := 0
S := empty stack
for each v in V do
if v.index is undefined then
strongconnect(v)
end if
end for
function strongconnect(v)
// Set the depth index for v to the smallest unused index
v.index := index
v.lowlink := index
index := index + 1
v.onStack := true
// Consider successors of v
for each (v, w) in E do
if w.index is undefined then
// Successor w has not yet been visited; recurse on it
strongconnect(w)
v.lowlink := min(v.lowlink, w.lowlink)
else if w.onStack then
// Successor w is in stack S and hence in the current SCC
// If w is not on stack, then (v, w) is an edge pointing to an SCC already found and must be ignored
// Note: The next line may look odd - but is correct.
// It says w.index not w.lowlink; that is deliberate and from the original paper
v.lowlink := min(v.lowlink, w.index)
end if
end for
// If v is a root node, pop the stack and generate an SCC
if v.lowlink = v.index then
start a new strongly connected component
repeat
w := S.pop()
w.onStack := false
add w to current strongly connected component
while w ≠ v
output the current strongly connected component
end if
end function
Working Solution:
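A minimal Python sketch of the algorithm (an independent implementation following the pseudocode above; for very deep graphs an iterative version avoids Python's recursion limit):

def tarjan_scc(graph):
    # graph: dict mapping each vertex to a list of its out-neighbours.
    # Returns the SCCs as lists of vertices, in reverse topological
    # order of the condensation DAG (see the discussion above).
    counter = [0]
    discovery = {}     # discoveryTime of each vertex
    low = {}           # earliestDiscoveredVertexReachable
    stack, on_stack = [], set()
    sccs = []

    def strongconnect(v):
        discovery[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in discovery:          # tree edge: recurse
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:             # edge within the current SCC
                low[v] = min(low[v], discovery[w])
            # else: cross edge into a finished SCC -- ignored
        if low[v] == discovery[v]:          # v is the root of an SCC
            component = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                component.append(w)
                if w == v:
                    break
            sccs.append(component)

    for v in list(graph):
        if v not in discovery:
            strongconnect(v)
    return sccs

# Two SCCs {1,2,3} and {4,5} joined by the edge 3 -> 4:
print(tarjan_scc({1: [2], 2: [3], 3: [1, 4], 4: [5], 5: [4]}))
# [[5, 4], [3, 2, 1]] -- the successor SCC comes out first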
Time Complexity:
We are basically doing a DFS to compute the SCCs. Time Complexity = Time Complexity of DFS = O(V + E), where V = total number of vertices in the given directed graph, E = total number of edges in the given directed graph. How we got the time complexity of DFS as O(V + E) is discussed in the chapter on DFS.
Space Complexity:
In the worst case we would have the whole graph as one big SCC, and the SCC stack would need to hold all the vertices. Space Complexity = O(V), where V = total number of vertices in the given directed graph.
| {"url":"https://thealgorist.com/Algo/GraphTheory/Tarjan/SCC","timestamp":"2024-11-13T21:18:19Z","content_type":"text/html","content_length":"57246","record_id":"<urn:uuid:533976ad-b6cb-4d4e-9e8b-cd09e5e997a9>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00678.warc.gz"}
Square Yards to Hectares Conversion (sq yd to ha)
Square Yards to Hectares Converter
Enter the area in square yards below to convert it to hectares.
Do you want to convert hectares to square yards?
How to Convert Square Yards to Hectares
To convert a measurement in square yards to a measurement in hectares, multiply the area by the following conversion ratio: 8.3613E-5 hectares/square yard.
Since one square yard is equal to 8.3613E-5 hectares, you can use this simple formula to convert:
hectares = square yards × 8.3613E-5
The area in hectares is equal to the area in square yards multiplied by 8.3613E-5.
For example,
here's how to convert 50,000 square yards to hectares using the formula above.
hectares = (50,000 sq yd × 8.3613E-5) = 4.180637 ha
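If you prefer to derive the factor rather than memorize it, here is a short Python sketch (the helper name is ours, chosen for illustration):

HA_PER_SQ_YD = 0.9144 ** 2 / 10000   # 1 yd = 0.9144 m exactly; 1 ha = 10,000 m^2

def square_yards_to_hectares(area_sq_yd):
    return area_sq_yd * HA_PER_SQ_YD

print(square_yards_to_hectares(50000))   # 4.1806368 -> matches 4.180637 ha above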
Square yards and hectares are both units used to measure area. Keep reading to learn more about each unit of measure.
What Is a Square Yard?
One square yard is equivalent to the area of a square with sides that are each 1 yard in length. One square yard is roughly equal to 9 square feet or 0.836127 square meters.
The square yard is a US customary and imperial unit of area. A square yard is sometimes also referred to as a square yd. Square yards can be abbreviated as sq yd, and are also sometimes abbreviated
as yd². For example, 1 square yard can be written as 1 sq yd or 1 yd².
You can use a square yards calculator to calculate the area of a space if you know its dimensions.
Learn more about square yards.
What Is a Hectare?
One hectare is equal to 10,000 square meters,^[1] or the area of a square with 100 meter sides.
The hectare is an SI accepted unit for area for use with the metric system. In the metric system, "hecto" is the prefix for 10^2. Hectares can be abbreviated as ha; for example, 1 hectare can be
written as 1 ha.
Learn more about hectares.
Square Yard to Hectare Conversion Table
Table showing various square yard measurements converted to hectares
Square Yards Hectares
1 sq yd 0.000083613 ha
2 sq yd 0.000167 ha
3 sq yd 0.000251 ha
4 sq yd 0.000334 ha
5 sq yd 0.000418 ha
6 sq yd 0.000502 ha
7 sq yd 0.000585 ha
8 sq yd 0.000669 ha
9 sq yd 0.000753 ha
10 sq yd 0.000836 ha
100 sq yd 0.008361 ha
1,000 sq yd 0.083613 ha
10,000 sq yd 0.836127 ha
100,000 sq yd 8.3613 ha
1. Cambridge Dictionary, hectare, https://dictionary.cambridge.org/us/dictionary/english/hectare
More Square Yard & Hectare Conversions | {"url":"https://www.inchcalculator.com/convert/square-yard-to-hectare/","timestamp":"2024-11-14T17:36:43Z","content_type":"text/html","content_length":"67038","record_id":"<urn:uuid:2280f7e0-436d-4784-ada8-86a689f371b5>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00111.warc.gz"} |
ground state Latest Research Papers | ScienceGate
In this paper, we give a further discussion of short-distance teleportation. We propose bidirectional, rotation and cyclic rotation teleportation schemes for short-distance participants,
respectively. In our bidirectional transmission scheme, the quantum channel is still an EPR pair and an auxiliary qubit in the ground state |0⟩, and two participants can transmit an
unknown single-qubit state to each other. In the rotation and cyclic rotation schemes, bidirectional transmission is performed between two adjacent participants in turn. The unknown state qubits of
the participants collapse into the ground state after one bidirectional transmission, and can be used as auxiliary qubits in subsequent bidirectional transmission. After a complete state rotation,
each participant has held the unknown state of the other participants, and the last one owned by the participant is still the original unknown state. Although the schemes we proposed are applicable
to a small range of transmission, they have certain advantages in saving quantum resources. | {"url":"https://www.sciencegate.app/keyword/21773","timestamp":"2024-11-04T03:48:32Z","content_type":"text/html","content_length":"112556","record_id":"<urn:uuid:635b6eb8-dd07-4df6-80a1-75609f3a4635>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00186.warc.gz"} |
Sum of first and last digit in C++ | Dremendo
Sum of first and last digit in C++
cin - Question 5
In this question, we will see how to input a two digit number in C++ programming using the cin statement and find the sum of its first and last digit. To know more about cin statement click on the
cin statement lesson.
Q5) Write a program in C++ to input a two digit number and find the sum of its first and last digit.
#include <iostream>
#include <conio.h>
using namespace std;
int main()
{
    int n,fd,ld,sum;
    cout<<"Enter a two digit number\n";
    cin>>n;
    fd=n/10;   // first digit of the two digit number
    ld=n%10;   // last digit of the two digit number
    sum=fd+ld;
    cout<<"First digit="<<fd<<endl;
    cout<<"Last digit="<<ld<<endl;
    cout<<"Sum of first and last digit="<<sum;
    getch();   // wait for a key press (conio.h)
    return 0;
}
Enter a two digit number
56
First digit=5
Last digit=6
Sum of first and last digit=11 | {"url":"https://www.dremendo.com/cpp-programming-tutorial/cpp-cin-questions/q5-sum-of-first-and-last-digit-in-cpp","timestamp":"2024-11-05T00:08:43Z","content_type":"text/html","content_length":"34427","record_id":"<urn:uuid:5db4e1b8-5c26-42f5-9fdc-ed5b2eed4d69>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00319.warc.gz"} |
Is it the Death of Data Science and the Rise of Quantum Computing? | A Quantum Roadmap
IBM Quantum Computer
What is Quantum Computing?
Quantum computing is a type of computation that harnesses the collective properties of quantum states, such as superposition, interference, and entanglement, to perform calculations.
This technology can solve real-world problems with high efficiency. The devices which perform quantum computations are known as Quantum Computers. They can create vast multidimensional spaces in
which to represent these immense problems. Classical supercomputers cannot do this.
Algorithms that employ quantum wave interference are used to find solutions in this space and translate them into forms we can use and understand. One promising quantum algorithm that uses these
techniques is called Grover's search. Suppose you need to find one item from a list of N items. On a classical computer, you'd have to check N/2 items on average, and in the worst case, you would
need to check all N.
Using Grover's search on a quantum computer, you would find the item after checking roughly √N of them. This efficiency represents a remarkable increase in processing efficiency and time saved.
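For intuition, here is a small Python comparison (a sketch written for illustration; a textbook Grover implementation needs about (π/4)·√N oracle queries, consistent with the √N scaling above):

import math

for n in (10**3, 10**6, 10**9):
    classical = n / 2                    # average items checked by a linear scan
    grover = math.pi / 4 * math.sqrt(n)  # approximate number of Grover iterations
    print(f"N={n:>10}: classical ~{classical:>12.0f}, Grover ~{grover:>8.0f}")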
Why is it related to Data Science?
The Data Science domain is an ocean. By combining Machine Learning with Quantum Computing in algorithm and application development, this will become a leading technology in the near future. We have many data scientists now but far fewer quantum developers. The situation is similar to a decade ago, when there was a boom in the Java developer role, but demand decreased after a few years. With the ongoing progress in quantum computing, we may see a sudden need for quantum developers in the coming years.
IBM's Quantum Hardware Roadmap.
IBM has been a pioneering giant in the quantum industry. It has been developing qubit (quantum bit) processors for its quantum computers.
A qubit is the quantum mechanical counterpart of the classical bit. In classical computing, information is encoded in bits, where each bit can have the value zero or one. In quantum computing, the data is encoded in qubits. The current quantum processor consists of 127 qubits.
The next step in quantum development is to embed qubit processors in the developer ecosystem. Kernel developers will contribute to advanced operating systems and runtimes; the next set of developers, algorithm developers, will build pre-built packages, modules, functions, etc.; and the last group, model developers, will build applications with integrated quantum technology.
There is a massive chance for developers to contribute to the field of quantum computing. There are many free resources available to kickstart your journey in this field. Word of advice: those planning to do an MS in Data Science can also consider Quantum Computing as an option, since there is a lot of demand for this futuristic technology.
Please do follow to keep updated on different blogs in Quantum Computing. | {"url":"https://likhiwrites.medium.com/is-it-dead-of-data-science-and-rise-of-quantum-computing-a-quantum-roadmap-852a336f50b1?source=author_recirc-----aca09a4f7077----0---------------------7c885867_9e6f_4f75_a6a1_f541655a5542-------","timestamp":"2024-11-14T07:01:14Z","content_type":"text/html","content_length":"100784","record_id":"<urn:uuid:fa4f5f04-e834-4a37-803e-cb67506faf68>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00654.warc.gz"} |
Summarizing multiple multi-select columns
Good morning,
I have 5 project managers using the same template of a sheet to track their current ongoing projects in one spot. One of the columns is a multi-select where they must choose all the suitable
categories that the project are covering.
Is there any way in my report to summarize how many of each category option have been selected across all of the project manager's selections? For instance, Option 1 has been selected 26 times across
the 5 Project manager sheets.
Anybody have any ideas? The closest thing I've managed to find is using a COUNTIF equation and trying to use "find" for the wording, but even that has not brought back accurate results.
=COUNTIFS({Category Column of PM}, FIND("Option 1") > 0)
• @AMCP For multi-select columns, the HAS function is your best bet. HAS searches for the text as a distinct value within a multi-select cell.
=COUNTIFS({Category Column of PM}, HAS(@cell, "Option 1"))
Then to include the remaining 4 sheets, just add additional iterations of the same formula searching the other sheets' category columns:
=COUNTIFS({Category Column of PM}, HAS(@cell, "Option 1")) + COUNTIFS({Category Column of PM Sheet2}, HAS(@cell, "Option 1")) + COUNTIFS({Category Column of PM Sheet3}, HAS(@cell, "Option 1"))
... and so on.
Jeff Reisman
Link: Smartsheet Functions Help Pages Link: Smartsheet Formula Error Messages
If my answer helped solve your issue, please mark it as accepted so that other users can find it later. Thanks!
• Hi Jeff,
Thanks for the reply, however in your equation provided "=COUNTIFS({Category Column of PM}, HAS(@cell, "Option 1"))" I'm not sure what the @cell I have bolded makes reference to in the equation,
leaving me with errors when I try to use it. Do I need to plug a different value in there?
• @AMCP What error are you getting?
The way "@cell" works is to tell the function to check each cell in the range individually for the criteria. So for the {Category Column of PM}, check each cell to see if it HAS a value of
"Option 1" in it.
Here it is working in my test sheet:
Jeff Reisman
Link: Smartsheet Functions Help Pages Link: Smartsheet Formula Error Messages
If my answer helped solve your issue, please mark it as accepted so that other users can find it later. Thanks!
| {"url":"https://community.smartsheet.com/discussion/101161/summarizing-multiple-multi-select-columns","timestamp":"2024-11-02T21:15:23Z","content_type":"text/html","content_length":"440467","record_id":"<urn:uuid:d302c73b-34a1-4087-ad6a-6e6b3a5a982b>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00183.warc.gz"}
seminars - Gaussian quantum technologies: teleportation and beyond
Title : Lectures on continuous variable quantum information with Gaussian states
Abstract: The study of Gaussian states has arisen to a privileged position in continuous variable quantum information in recent years. This is due to vehemently pursued experimental realisations and
a magnificently elegant mathematical framework. In these lectures, we introduce the basic concepts of quantum information with Gaussian states and their contemporary applications. After introducing
the subject material and outlining the essential toolbox of continuous variable systems, we define the basic notions needed to understand Gaussian states and Gaussian operations. In particular,
emphasis is placed on the mathematical structure combining notions of algebra and symplectic geometry that are fundamental to a complete understanding of Gaussian informatics. Furthermore, we discuss
the quantification of different forms of quantum correlations and informational measures for Gaussian states, paying special attention to recently developed measures. The lectures are concluded by
exploring applications to quantum technologies including the seminal example of continuous variable teleportation, as well as succinctly expressing the main Gaussian state limitations and outlining
some open questions for quantum information processing with continuous variable systems.
10/24 (Wed) 16:00-17:00, Bldg. 129, Room 301: Lecture I.
Quantum phase space methods and the symplectic group
10/25 (Thu) 17:00-18:00, Bldg. 129, Room 301: Lecture II.
Gaussian states: informational properties and correlations
10/26 (Fri) 10:00-11:00, Bldg. 129, Room 301: Lecture III.
Gaussian channels: description and classification
10/27 (Sat) 10:00-11:00, Bldg. 129, Room 104: Lecture IV.
Gaussian quantum technologies: teleportation and beyond | {"url":"http://www.math.snu.ac.kr/board/index.php?mid=seminars&page=83&sort_index=room&order_type=desc&l=en&document_srl=788137","timestamp":"2024-11-04T20:39:02Z","content_type":"text/html","content_length":"60077","record_id":"<urn:uuid:195d487f-3f0e-48f7-b81b-a347598dad9a>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00074.warc.gz"}
The Incomplete Treatment of Functions in HSC Mathematics
Related Content Outcome:
• MA-F1 Working with Functions
□ F1.2: Introduction to Functions
What Does The Syllabus Say?
The first port of call for all students and teachers when learning or teaching HSC Mathematics is of course the Syllabus. Here is the excerpt from the Syllabus (Page 31) about how a function is defined:
• define and use a function and a relation as mappings between sets, and as a rule or a formula that defines one variable quantity in terms of another
□ define a relation as any set of ordered pairs \( (x,y) \) of real numbers
□ understand the formal definition of a function as a set of ordered pairs \( (x,y) \) of real numbers such that no two ordered pairs have the same first component (or \(x\)-component)
• use function notation, domain and range, independent and dependent variables
The suggestion from the Syllabus to formally define a function as a set of ordered pairs of real numbers is neither formal nor is it the correct definition of a function:
• Restricting the definition of a function to ordered pairs of only real numbers as opposed to elements from any set of objects – is incorrect.
• The correct part of the definition is the notion of ordered pairs that links the independent variable \( x \) with the dependent variable \( y \), and the first component only occurs once for all
ordered pairs. This is fine.
• The so-called formal definition lacks any terminology such as domain, codomain and range. In fact, the Syllabus dot-points lack any mention of the term “co-domain” which is a worry if the course
aims to prepare students for university, and these fundamental concepts in the definition of a function is used from Day 1 of any MATH1001 course.
In the next dot-point, the Syllabus mentions the use of function notation, domain and range, independent and dependent variables – again, this is missing any mention of the word “co-domain”.
Function Notation in the HSC Syllabus is usually taught like this:
\[ f(x) = \sqrt{x-4} \]
Here, the Domain and Range of the function are implicitly defined by the maximal subset of the real numbers for which this function is well-defined (i.e. behaves correctly). Unfortunately, this is not
the usual practice once out of high school and implicit definitions are not always a good idea in a discipline of knowledge as rigorous and precise as Mathematics.
What Does The Glossary Say?
When one looks up the Syllabus Glossary on the definition of a function, it’s as though two completely different teams of people wrote the Syllabus and the Glossary separately. Here is how the
Glossary defines a function:
A function \(f\) is a rule that associates each element \(x\) in a set \(S\) with a unique element \( f(x) \) from a set \(T\).
The set \(S\) is called the domain of \(f\) and the set \(T\) is called the co-domain of \(f\). The subset of \(T\) consisting of those elements of \(T\) which occur as values of the function is
called the range of \(f\). The functions most commonly encountered in elementary mathematics are real functions of a real variable, for which both the domain and co-domain are subsets of the real
If we write \(y=f(x)\), then we say that \(x\) is the independent variable and \(y\) is the dependent variable.
Mathematics Advanced Stage 6 Syllabus 2017 – Page 70
What the?
There’s technical terms in here such as domain, range, co-domain. There’s also the use of universal quantification of “each element \(x\)”. There’s mention of elements in sets, and uniqueness. The
mathematical language used here makes the definition clear, with no room for ambiguity or contradiction.
The Syllabus’ formal definition pales in comparison to the one found in the glossary! Both are in the same document published by NESA!
The glossary definition also mentions that most functions encountered in elementary mathematics are real functions of a real variable – as opposed to all of them as incorrectly implied in the
Syllabus dot-point.
When I teach my students the definition of a function, this is the one I would use. However, they should note that the weaker and incorrect definition with no mention of co-domain in the Syllabus
would imply that the understanding of the co-domain will not be assessed in the HSC.
I recall my first year of university back in 2010, having completed HSC Extension II in the year before: I was at a loss when the lecturer started mentioning the co-domain of a function as opposed to
just range – judging from the murmurs and hands going up, I was not the only one confused in that lecture hall until I unlearned the HSC ‘definition’, and relearned the complete definition.
The Complete Notation
Let’s unpack the Glossary definition a little bit further. For any students reading this article, the notation introduced in this section will be useful in their preparation for any Mathematics units
of study in university.
A function \(f\) is a rule that associates each element \(x \) in a set \( S \) with a unique element \( f(x) \) from a set \( T \).
The set \(S\) is called the domain of \(f\) and the set \(T\) is called the co-domain of \(f\)…
Mathematics Advanced Stage 6 Syllabus (2017) – Page 70
To denote that there is a function \(f\) that has domain \(S\) and co-domain \(T\), we write this:
\[ f: S \rightarrow T \]
To denote that \(x\) is associated with a unique element \( f(x) \), we write this:
\[ x \mapsto f(x) \]
The \(\mapsto\) symbol is read as “maps to”.
So where does Range fit into all of this? The Glossary defines Range as this:
The subset of \(T\) consisting of those elements of \(T\) which occur as values of the function is called the range of \(f\).
In Set notation, this would be \( \{ f(x) | x \in S \} \). We can learn how these terms are used in the following example.
Consider the following function:
f: \{1,2,3\} \rightarrow \mathbb{R}\\
x \mapsto x^2
The domain is the set \(\{1,2,3\}\), the co-domain is the set of real numbers \(\mathbb{R}\), and the range is the set \(\{1,4,9\}\).
Sometimes, the rule can’t be defined with an algebraic rule such as the following function.
f: \{a,b,c,d\} \rightarrow \{1,2,3,4,5\}\\
a \mapsto 1\\
b \mapsto 3\\
c \mapsto 2\\
d \mapsto 5
In this example, the domain is the set \(\{a,b,c,d\}\), the co-domain is the set \(\{1,2,3,4,5\}\), and the range is the set \(\{1,2,3,5\}\).
Why Though?
Why is the more complete definition more useful? What advantage does it have over the Syllabus one?
As previously mentioned, the more complete definition prepares students for university much better. When they learn more technical definitions and applications in their first year that rely on a
clear understanding of these concepts, such as Surjections, Injections, Bijections – and in later years: Homomorphisms, Homeomorphisms, Isomorphisms, Automorphisms, etc (no, they are not swear words)
– students who have already learned the correct definitions will not be hung on the basic concepts that should have been covered at the high school level.
The application of functions is not limited to only real variables, but to wherever there is a relationship between any two types of objects. For example, the notion of something being countable is
tied to bijective functions with a domain of the set of Integers \(\mathbb{Z}\). This goes on to defining what “countable” and “uncountable” infinities are – or you may have heard the terms
“discrete” and “continuous” in your Statistics classes.
In computer programming, functions are used all the time where the input (domain) and the type of output (co-domain) is defined by the programmer. Rarely would one be able to work out the range, or
need to care, of such functions.
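For instance, a Python type annotation (an illustration of the programming point above, not Syllabus material) separates the three notions cleanly:

def f(x: int) -> float:
    # Domain: the declared input type, int.
    # Co-domain: the declared output type, float.
    # Range: the values actually produced (non-negative squares here),
    # a subset of the co-domain that the signature does not capture.
    return float(x * x)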
If there is one pure mathematical reason to learn the correct definition it is this: rigour, proof and correctness is the foundation of logical thinking and all of knowledge. To model this correctly
to students, a fundamental concept such as Functions should be taught correctly! | {"url":"https://www.ringomok.com/2020/10/13/the-incomplete-treatment-of-functions-in-hsc-mathematics/","timestamp":"2024-11-09T00:38:08Z","content_type":"text/html","content_length":"46058","record_id":"<urn:uuid:50b72a54-9388-448b-9787-88a79e7b16bd>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00437.warc.gz"} |
basf2 release-08-01-10 documentation
Tracking for Special Classes of Tracks
22.4. Tracking for Special Classes of Tracks#
22.4.1. V0 Finding#
V0s are neutral particles we reconstruct from their decay into two charged tracks, such as \(K_S\to\pi^+\pi^-\) or \(\Lambda\to p\pi^-\). Due to their relatively long lifetime, they mostly decay
outside of the beam pipe. By default, the track parameters extrapolated to the point of closest approach (POCA) to IP (0, 0, 0) are stored in the mDST. This extrapolation also includes the correction
for potential material effects and energy losses. Because the daughters of V0s are not produced at the POCA, they need some special treatment. The V0Finder module takes care of this.
Also photon conversions (\(\gamma\to e^+e^-\) inside material) need the same kind of special treatment for the same reasons, therefore the V0Finder takes care of these as well.
The V0Finder takes care specifically of V0s with a decay vertex outside of the beam pipe (i.e. transverse distance from origin above 1 cm). V0s that decay inside the beam pipe can be reconstructed at
analysis level using the standard reconstruction procedure (i.e. with reconstructDecay plus vertex fit), and are therefore ignored by the V0Finder.
The V0Finder is run during reconstruction (i.e. raw data processing for mDST production) so that it can access all the information it needs (hits attached to tracks, magnetic field map, detector
geometry), and that are not available when running analyses. It performs the following steps.
• Combination: consider each pair made of one positive and one negative track.
• Preselection: compute the invariant mass of the two tracks.
□ This requires knowledge of the angle between the momenta
\[m^2 = (E_++E_-)^2 - (\vec p_+ + \vec p_-)^2 = (E_++E_-)^2 - p_+^2 - p_-^2 - 2p_+p_-\cos\alpha\]
which is unavailable because the vertex position is not yet known; therefore a range of possible invariant masses is computed (with the minimum obtained assuming \(\cos\alpha=1\) and the maximum assuming \(\cos\alpha=-1\)), and this range is required to overlap with an invariant mass window (see massRangeKshort and massRangeLambda module parameters); a small numerical sketch of this window computation follows this list
□ This cut is not applied to photon conversions.
□ Since release-08, an additional cut is applied to reject pairs of tracks that definitely intersect within the beam pipe, and thus must not be handled by the V0Finder. This is meant to speed
up the module by skipping some pairs without performing the vertex fitting (see below). The tracks are approximated with straight lines (that pass by the POCA to the IP and have the same
direction as the momentum at the POCA to the IP), and the point of closest approach between the two lines is found analytically. If the distance of such point from the IP is smaller than
precutRho (0.5 cm by default), the pair is rejected. Also, pairs of almost-parallel tracks (\(\cos\alpha\) > precutCosAlpha, 0.9 by default) are always accepted, as the straight-line
approximation for the vertex is less reliable in this case.
• Vertex fitting: the vertex position is obtained from a so-called vertex fit. The V0Finder employs the RAVE fitter from the GenFit package and exploits the knowledge about first hit position,
energy losses of the tracks inside detector material, magnetic field non-uniformities; if the fit does not converge (\(\chi^2/NDF > \) module parameter), or the vertex is found to be inside the
beam pipe, the candidate is rejected.
• Inner hits removal: if the tracks were produced at the fitted vertex, they can not have left any hit in the helix segments that come before the vertex; if any such hit is attached, it must be
wrong and can bias the track fit, therefore they are removed, then the tracks are refitted, and the vertex is fitted again; the check for inner hits is also repeated.
□ If the fit with the refitted tracks fails, the previous result is kept.
□ This step can be skipped with the v0FitterMode module parameter
• Selection: now that the vertex is fitted, the invariant mass can be computed and a cut applied to it (see invMassRangeKshort, invMassRangeLambda and invMassRangePhoton module parameters)
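Here is the small numerical sketch of the pre-selection window promised above (plain Python for illustration, not basf2 code; momenta and masses in GeV):

import math

def inv_mass_range(p_plus, m_plus, p_minus, m_minus):
    # m^2 = (E+ + E-)^2 - p+^2 - p-^2 - 2 p+ p- cos(alpha);
    # cos(alpha) = +1 gives the minimum mass, -1 the maximum.
    e_tot = math.hypot(p_plus, m_plus) + math.hypot(p_minus, m_minus)
    def m_at(cos_a):
        return math.sqrt(e_tot**2 - p_plus**2 - p_minus**2
                         - 2.0 * p_plus * p_minus * cos_a)
    return m_at(+1.0), m_at(-1.0)

# Two 0.25 GeV/c tracks with the charged-pion mass (~0.1396 GeV):
print(inv_mass_range(0.25, 0.1396, 0.25, 0.1396))   # (~0.279, ~0.573)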
Candidates that pass the selection are stored to V0 objects, which contain the reference (array index) to two Tracks and two TrackFitResults with the parameters of the helices at the decay vertex
The TrackFitResults associated to the Tracks normally store the helix parameters at the perigee (point of closest approach to the IP), but these might be different from the ones at the decay vertex
position due to energy losses, magnetic field non-uniformities and material effects. In order to reconstruct the V0 vertex correctly, the parameters at the decay vertex must be used (this is
particularly important when computing the angle between the tracks).
During analysis, V0 lists are loaded using functions such as stdKshorts and stdLambdas. What these functions do is
• Take candidates from V0 objects
□ Make candidates using the TrackFitResults associated to the V0 object for the daughters
□ Fit their vertices (using TreeFit or KFit)
□ Apply a cut on the invariant mass
• Reconstruct V0s that decayed inside the beam pipe
□ Use reconstructDecay to make candidates with a loose invariant mass cut (if the V0 decayed inside the beam pipe, the error we make on the invariant mass because we don’t know the vertex
position yet is small)
□ Fit their vertices (using TreeFit or KFit)
□ Apply a cut on the invariant mass
• Merge the two candidates lists, keeping only the candidate from a V0 object in case of duplicates
This is a simple V0 finder for X = Ks, Lambda and converted fotons which matches all positive tracks with all negative tracks, fitting a vertex for each pair. Depending on the outcome of each
fit, a corresponding Belle2::V0 is stored or not.
A loose cut on the invariant mass (massRangeX) is applied before the fit (not considering material effects), then a vertex fit is performed and only pairs passing a chi^2 (vertexChi2CutOutside)
and a second cut on the invariant mass (invMassRangeX) are stored as Belle2::V0.
No V0s with vertex inside the beam pipe are saved as they can be recovered at analysis level.
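For orientation, a steering-file fragment might enable the module roughly like this (a sketch only: a real path needs input data and the tracking modules first, and the values shown simply restate defaults from the parameter list below):

import basf2

main = basf2.Path()
# ... input and track reconstruction modules would be added here ...
main.add_module(
    "V0Finder",
    massRangeKshort=(0.45, 0.512611),  # loose pre-selection window in GeV
    v0FitterMode=1,                    # remove hits inside the vertex, refit
)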
CopiedRecoTracks (str, default=’CopiedRecoTracks’)
RecoTrack StoreArray name (used for track refitting)
RecoTracks (str, default=’’)
RecoTrack StoreArray name (input)
TrackFitResults (str, default=’’)
Belle2::TrackFitResult StoreArray name (in- and output). Note that the V0s use pointers indices into these arrays, so all hell may break loose, if you change this.
Tracks (str, default=’’)
Belle2::Track StoreArray name (input). Note that the V0s use pointers indices into these arrays, so all hell may break loose, if you change this.
V0ValidationVertices (str, default=’’)
V0ValidationVertex StoreArray name (optional output)
V0s (str, default=’’)
V0 StoreArray name (output).
Validation (bool, default=False)
Create output for validation.
beamPipeRadius (float, default=1.0)
Radius at which we switch between the two classes of cuts. The default is a little inside the beam pipe to allow some tolerance.
invMassRangeKshort (tuple(float, float), default=(0.425, 0.575))
mass range in GeV for reconstructed Kshort after removing material effects and inner hits
invMassRangeLambda (tuple(float, float), default=(1.09, 1.14))
mass range in GeV for reconstructed Lambda after removing material effects and inner hits
invMassRangePhoton (tuple(float, float), default=(0.0, 0.1))
mass range in GeV for reconstructed Photon after removing material effects and inner hits
massRangeKshort (tuple(float, float), default=(0.45, 0.512611))
mass range in GeV for reconstructed Kshort used for pre-selection of candidates (to be chosen loosely as used momenta ignore material effects)
massRangeLambda (tuple(float, float), default=(1.085683, 1.145683))
mass range in GeV for reconstructed Lambda used for pre-selection of candidates (to be chosen loosely as used momenta ignore material effects)
precutCosAlpha (float, default=0.9)
preselection cut on the cosine of opening angle between two tracks. Those above this cut are always accepted.
precutRho (float, default=0.5)
preselection cut on the transverse radius of the point-of-closest-approach of two tracks. Set value to 0 to accept all.
useNewV0Fitter (bool, default=False)
If true, use the new V0 fitter; otherwise use the old one.
v0FitterMode (int, default=1)
designate which fitAndStore function is called in V0Fitter.
0: store V0 at the first vertex fit, regardless of inner hits; 1: remove hits inside the V0 vertex position; 2: mode 1 + don't use SVD hits if there is only one available SVD hit
vertexChi2CutOutside (float, default=10000.0)
Maximum chi^2 for the vertex fit (NDF = 1) | {"url":"https://software.belle2.org/release-08-01-10/sphinx/tracking/doc/specials.html","timestamp":"2024-11-14T14:59:09Z","content_type":"text/html","content_length":"71490","record_id":"<urn:uuid:93b791cd-6bcc-4868-b08f-bf0a613a1ef9>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00722.warc.gz"} |
The Weakest Link in Many Cryptosystems – Part 1 of 2
6 min read
The Weakest Link in Many Cryptosystems – Part 1 of 2
It is well-known and appreciated by most users - even if often ignored(!) - that if you choose a weak password, you are exposing yourself to various risks. Whether your password is used for
encryption of confidential data or just for access control doesn't really matter, so let's assume for a minute that it is actually used to encrypt your data - or perhaps to encrypt a key that is used
to encrypt your data. The situation you are in is that you are using a random bit generator of poor quality (in this case yourself!) to generate your key, in the sense that the output - your password
- can easily be guessed or predicted.
One aspect to keep in mind here is the size of the space in which your password theoretically might be chosen, which is determined from the length (in bits or number of digits or whatever). A
PIN-code for a debit- or credit card is 4 digits, which yields 10^4 = 10,000 possibilities. The number is limited to 4, as you need to be able to memorise it, and 4 digits is considered the limit for
most humans. This very small number is then compensated by the fact that you will only have three attempts to key in the right PIN at an ATM or POS-terminal. In contrast, for encryption schemes, the
key must be so long that it is not feasible to run through all the combinations in a methodical manner within a reasonable timescale, also known as exhaustive search.
Most of us constantly use cryptosystems, whether transparent to us such as in an underlying SSL protocol, or by conscious choice, such as use of S/MIME or PGP or whatever. In most scenarios, you do
not have to choose the key to be used, in fact most of the time you cannot even influence the generation of it. So how do you know if it has been properly generated, and what exactly does that
entail? Well you don't, that's the problem.
The main point of this article is that it is not enough to have a strong algorithm with a sufficiently long key. There is at least one more important aspect - apart from secure storage and protection
of course. The missing link is known as randomness, or entropy.
Mathematically speaking, we have a means for measuring randomness. Not in one particular key, or one bit string, but in a variable such as a distribution of keys. This measure is known as the
entropy, or Shannon entropy, which was introduced in a famous paper by Claude Shannon in 1948 [S]. Interestingly enough, this theory has a lot in common with the theory of entropy in thermodynamics -
but that is another story. Suffice it here to mention that the higher the entropy of a system being processed on a processor, the higher the heat radiation from the processor.
Going back to Shannon entropy, it measures how random a variable is, i.e. what can be predicted about the expected values. If the variable is to flip a coin, there is 1 bit of information in the
outcome, and if you spin the coin 3 times, there are 3 bits of information in the outcome. In contrast, if the variable is not evenly distributed, then even if there still are 8 options, the entropy
may be less than 3 bits. One aspect of Shannon's theory is that this implies you need less than 3 bits on average to describe the value of the variable. For instance if A occurs with probability 93%,
and B, C, D, E, F, G and H each occur with probability 1%, you can use a variable-length code (known as Huffman coding), and you will discover that the average number of bits you need to describe
an event is 1.21 rather than 3. So the smaller the entropy, the less the
randomness of the variable. In the extreme, an entropy of 0 means that the outcome is always the same. For more on information entropy, we refer to the first book on the subject, [SW], which has been
reprinted several times, or [MOV].
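As a quick numerical check, here is a minimal sketch of mine (not from the original article) that computes the entropy of this skewed distribution and the 1.21-bit average quoted above. The codeword lengths are an assumption that matches the 1.21 figure: 1 bit for A and 4 bits for each rare symbol, since 0.93·1 + 0.07·4 = 1.21.

import math

# Distribution from the text: A = 93%, B..H = 1% each
probs = {"A": 0.93, **{c: 0.01 for c in "BCDEFGH"}}

# Shannon entropy in bits: H = -sum(p * log2(p))
entropy = -sum(p * math.log2(p) for p in probs.values())
print(round(entropy, 3))  # ~0.56 bits, far below 3

# Average codeword length for a prefix code giving A one bit
# and each rare symbol four bits (assumed from the 1.21 figure)
lengths = {"A": 1, **{c: 4 for c in "BCDEFGH"}}
print(sum(probs[s] * lengths[s] for s in probs))  # 1.21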
Seeds and Pseudo Randomness
Given a key, there is normally no way you can decide if it is chosen sufficiently random or not. As an example, consider all binary keys of length 8. Most people would feel that 10101010 does not
look random at all, whereas keys like 01110100, 10111100, 01011100, 10000111 and 01001101 perhaps do? Yet they are all connected, and in a very simple manner. Indeed, just as 376 is interpreted as
3x10^2+7x10^1+6x1, the binary number 10101010 is the number 1x2^7+1x2^5+1x2^3+1x2 = 128+32+8+2 = 170. Now take the (prime) 257 (= 100000001 in binary) and calculate 170^2 mod 257 by which we mean the
remainder you get when you divide 170^2 with 257, which is 116 as 170^2 = 28900 and 28900 = 112x257 + 116. Now write 116 as a binary number, which is done in the following way: Subtract the largest
possible power of 2, which is 64 = 2^6, and continue in this fashion: 116 = 64 + 32 + 16 + 4, or as a binary number of length 8: 01110100. Continue with 170^3 mod 257 = 116x170 mod 257 = 188, or in
binary form, 10111100. Moving on to 170^4, we get 92 or, 01011100, then 170^5 which yields 220 or 11011100, etc.
You will recognize the sequence thus generated as the sequence we listed above. It may look random, yet it is not random at all as we have just shown. Nevertheless if we scale this approach and start
with a random source, we have a good way of mass producing what is known as pseudorandom numbers. More on this later.
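The sequence above is easy to reproduce; here is a minimal sketch (mine, not the article's):

# Reproduce the 170^k mod 257 sequence from the text
p, g = 257, 170            # 170 is 10101010 in binary
x = g
for _ in range(6):
    x = x * g % p          # next power of 170 modulo 257
    print(format(x, "08b"), x)
# 01110100 116
# 10111100 188
# 01011100 92
# 11011100 220
# 10000111 135
# 01001101 77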
Most people have a good feeling for achieving randomness: flip a coin, or spin a racket, and let another person choose, preferably as soon as you have started flipping or spinning, but before the
result becomes predictable. This works well in principle, but is by far too tedious for cryptographic keys. Triple DES requires 112 bits, and AES typically even more.
However, if we can get hold of one truly random bit sequence of sufficient length - called a seed - we can use the idea sketched above to make as many - not random, but pseudo-random - bits as we
need. By pseudo-random we simply mean that statistically, they behave as if they really were truly random. This is not entirely true actually, we need one thing more: an algorithm, or method, that
will generate bits and has the property that if you can predict the next bit with a probability larger than ½, you can solve a hard mathematical problem that mathematicians have not been able to solve
so far, such as factoring a composite number into prime numbers. That is precisely what we demonstrated above using the prime 257. If instead of 257 we use a product of two (large) primes, we
actually know (meaning that we can give a mathematical proof) that given a product of two primes, say n, if we can predict the lowest bit of m^2 mod n for various m, then we can factor n. This is
known as the Blum-Blum-Shub pseudorandom bit generator. For more we refer to [MOV].
Most professional cryptographic libraries have proper algorithms for generating pseudo-random numbers, so the Achilles heel is the initial random seed. This cannot be provided by the library, as it
is de facto a secret key in most solutions. And this is the culprit in most scenarios, where the solution is realised with weak keys. Indeed, the software engineer who integrates the crypto library
into some larger solution often lacks the skills to find a sufficiently strong source for the initial seed. This is the source of the problem. Suppose - just for the sake of the argument - that he takes
the time and the date. Even if this is down to one hundredth of a second, the total space of choice is 24x3600x100 or about 8.6 million, which is only about 23 bits of information and thus is prone to
an exhaustive search attack.
The Birthday Paradox
In a school class of unrelated pupils (that is, no twins!) and assuming birthdays are evenly distributed, how many pupils do you need in the class in order for the probability that at
least 2 have a birthday in common to be at least ½?
Well, surprisingly, it turns out to be 23 only!
This is not that hard to calculate: Assume there are n children in the class, and let's calculate the probability that none have their birthday on the same day. The total number of combinations is
365^n. The number of combinations where none have their birthday on the same day is calculated as follows: If there is only one, it is obviously 365. If there are several pupils, the second will only
have 364 options, the third 363 options, and so on. So the probability of no common birthday is 365x364x363x....x(365-n+1)/365^n. Obviously, as n increases, this number gets smaller and smaller, and
the first time it moves below ½ turns out to be already when n = 23.
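A short sketch (my own) confirming the n = 23 threshold:

def p_no_collision(n, days=365):
    # probability that n people all have distinct birthdays
    p = 1.0
    for k in range(n):
        p *= (days - k) / days
    return p

n = 1
while p_no_collision(n) > 0.5:
    n += 1
print(n)  # 23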
This has the following implication for cryptographic functions and in particular attacks on these: Consider a function f that maps bit sequences to bit sequences of a fixed length n. So the maximal
number of images under this function is 2^n. What we are after is the situation where two different sequences, x and y, are mapped to the same image, i.e. f(x) = f(y), known as a collision. The
function f could be a hash function, or it could be - a key generation programme!
The calculation given above can be generalised, and it turns out that for large image spaces (e.g. n large), and a random function f, we can expect collisions with probability ½ if we calculate about
1.25∙2^(n/2) (and in the example above this comes out at 1.25∙√365 ≈ 23.88) values. This means that in the scenario above, if people would - sometimes and perhaps too often even - take the caution and use a
random timestamp down to 1/100th of a second, we would amongst all keys generated only need to choose about 1.25∙√8,600,000 ≈ 3665 keys to find a collision with probability ½. A collision here would
mean that people were using the same secret key because of a lousy key generation procedure.
[MOV] Menezes, Alfred J., van Oorschot, Paul C. & Vanstone, Scott A., Handbook of Applied Cryptography, CRC Press, 1997.
[S] Shannon, Claude E. (July/October 1948). "A Mathematical Theory of Communication". Bell System Technical Journal 27 (3): 379-423.
[SW] Shannon, Claude E. & Weaver, Warren. The Mathematical Theory of Communication. Univ of Illinois Press, 1949. | {"url":"https://www.cryptomathic.com/blog/the-weakest-link-in-many-cryptosystems-part-1-of-2","timestamp":"2024-11-06T05:12:04Z","content_type":"text/html","content_length":"371349","record_id":"<urn:uuid:3b6f991a-b2ac-48fc-8e39-2cebe22a9e6c>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00481.warc.gz"} |
Measures of Plan Distance
This vignette introduces some of the most common measures of the distance between plans. These offer a quick look at how similar or different a pair of plans is. See the
vignette “Using redistmetrics” for the bare-bones of the package.
We first load the redistmetrics package and data from New Hampshire. For any function, the shp argument can be swapped out for your data, pop for your population, and the plans argument can be
swapped out for your redistricting plans (be it a single plan, a matrix of plans, or a redist_plans object).
Note that, when computing distance between plans, you always want to provide more than one plan. For that reason, we will also load nh_m, a matrix of plans for New Hampshire.
We subset it to its first four columns (the first four plans).
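The vignette's code chunks were not preserved in this copy. The R sketches below are reconstructions: the data sets nh and nh_m are described in the text above, but the dist_* function names and argument names follow the package's naming convention and should be treated as assumptions. A plausible setup chunk:

library(redistmetrics)

data(nh)    # New Hampshire shapefile, with a population column `pop`
data(nh_m)  # matrix of redistricting plans for New Hampshire

nh_m <- nh_m[, 1:4]  # keep the first four plans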
Variation of Information
The recommended distance between plans is the variation of information. This considers the distance between plans by looking at the joint distributions of people across plans.
The Variation of Information can be computed with:
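A sketch of the call (assumed signature, as noted above):

dist_info(nh_m, shp = nh, total_pop = pop)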
Hamming Distance
The Hamming distance is a simpler metric which just considers how many units are assigned to different districts between pairs of plans.
The Hamming distance can be computed with:
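A sketch of the call (assumed signature):

dist_ham(nh_m)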
Manhattan Distance
The Manhattan distance measures how many “blocks” you would need to move to get between plans. This is most useful in MCMC contexts, rather than general contexts.
The Manhattan distance can be computed with:
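A sketch of the call (assumed signature):

dist_man(nh_m)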
Euclidean Distance
The Euclidean distance measures the square root of the summed squared distances you would need to move to get between plans. This is most useful in MCMC contexts, rather than general contexts.
The Euclidean distance can be computed with dist_euc(nh_m), following the same pattern as the assumed calls above. | {"url":"https://cran.r-project.org/web/packages/redistmetrics/vignettes/distances.html","timestamp":"2024-11-08T12:27:44Z","content_type":"text/html","content_length":"15120","record_id":"<urn:uuid:42d07cb8-5317-4e1c-ad1f-7f1339c8e049>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00572.warc.gz"}
Algorithm Question By Google
We can determine how "out of order" an array A is by counting the number of inversions it has.
Two elements A[i] and A[j] form an inversion if A[i] > A[j] but i < j.
That is, a smaller element appears after a larger element.
Given an array, count the number of inversions it has. Do this faster than O(N^2) time.
You may assume each element in the array is distinct.
For example, a sorted list has zero inversions.
The array [2, 4, 1, 3, 5] has three inversions: (2, 1), (4, 1), and (4, 3).
The array [5, 4, 3, 2, 1] has ten inversions: every distinct pair forms an inversion.
Top comments (3)
Areahints •
Hi @theghostyced . so I decided to try and solve this on my own and I want to share my insights
my first solution was to ~~write some logic~~ google the question and I came across a solution that didn't satisfy the complexity
def getInvCount(arr, n):
    inv_count = 0
    for i in range(n):
        for j in range(i + 1, n):
            if (arr[i] > arr[j]):
                inv_count += 1
    return inv_count
I then tried to rewrite this using list comprehension, you can see my effort here
def getInvCount(arr, n):
    return sum([1 if (arr[i] > arr[j]) else 0 for i in range(n) for j in range(i+1, n)])
but this also revealed the answers which you can use to solve this question.
CED •
Thanks @areahints but I think the time complexity still remains the same irrespective of the list comprehension.
Areahints •
yes, I mentioned that my list comprehension didn't satisfy the complexity.
consider this solution;
def getInvCount(arr, n):
    return mergeGetInvCount(arr)[1]

def mergeGetInvCount(arr):
    if len(arr) <= 1:
        return arr, 0
    middle = int(len(arr) / 2)
    left, a = mergeGetInvCount(arr[:middle])
    right, b = mergeGetInvCount(arr[middle:])
    result, c = mergeGetSplitInvCount(left, right)
    return result, (a + b + c)
def mergeGetSplitInvCount(left, right):
    result = []
    count = 0
    i, j = 0, 0
    left_len = len(left)
    while i < left_len and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])   # in order: take from the left half
            i += 1
        else:
            result.append(right[j])  # right[j] jumps past the remaining left items,
            count += left_len - i    # so it forms (left_len - i) split inversions
            j += 1
    result += left[i:]
    result += right[j:]
    return result, count
# Driver Code
arr = [1, 20, 6, 4, 5]
n = len(arr)
print("Number of inversions are", getInvCount(arr,n))
Number of inversions are 5
that's an O(n log n) implementation that I mentioned has been revealed in the link. did you read through it?
| {"url":"https://dev.to/theghostyced/algorithms-2d5g","timestamp":"2024-11-12T06:14:21Z","content_type":"text/html","content_length":"96855","record_id":"<urn:uuid:6a61fe33-4466-44ba-ac22-da4e13f05c5c>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00484.warc.gz"}
How to convert revolutions per second to revolutions per minute?
Question #214 Submitted by Answiki on 11/15/2020 at 11:27:35 AM UTC
How to convert revolutions per second to revolutions per minute?
Answer Submitted by Answiki on 11/15/2020 at 11:22:51 AM UTC
The formula to convert revolutions per second (rps) to revolutions per minute (rpm) is:
1 rps = 60 rpm
For example, 5 rps = 5 × 60 = 300 rpm.
| {"url":"https://en.ans.wiki/214/how-to-convert-revolutions-per-second-to-revolutions-per-minute/","timestamp":"2024-11-06T08:40:36Z","content_type":"text/html","content_length":"70515","record_id":"<urn:uuid:197c2887-899b-47e8-a5e6-f190ba9cfcba>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00789.warc.gz"}
Hexaware Ltd Placement Paper
Directions for questions 1-10: Expand the following terms
1. ODBC
Ans. Open Database Connectivity.
2. HTML
Ans. Hyper Text Markup Language
3. RISC
Ans. Reduced Instruction Set Computing
4. ASCII
Ans. American Standard Code For Information Interchange
5. ANSI
Ans. American National Standards Institute.
6. XML
Ans. Extended Markup Language
7. FLOPS
Ans. Floating Point Operations Per Second
8. SQL
Ans. Structured Query Language
9. QBE
Ans. Query By Example
10. ALE
Ans. Address Latch Enable
11. What is lagging in DBMS ?
Ans. Reduced Redundancy.
Directions 12 to 20: For the following questions find the odd man out
12. Unix
Ans. CMOS
13. Oracle
Ans. LISP
14. Laser
Ans. Mouse
15. Dir
Ans. Csh
16. Bit
Ans. Digit
17. Hard Disk
• Floppy Drive
• CD ROM
• Cache
Ans. Cache
18. SQL
Ans. Oracle
19. C++
Ans. PASCAL
20. Projection Operation
• Selection Operation
• Intersection
• Set Difference Operation
Ans. Intersection
21. Which of the following is a universal gate ?
• (a) OR
• (b) AND
• (c) XOR
• (d) NOR
Ans. NOR
22. The default back end of the VB is
• (a) Oracle
• (b) Sybase
• (c) Informics
Ans. Sybase
23. What is meant by Superconductivity?
Ans. No resistance
24. Viscosity
Ans. Friction
25. What is the Lock Based Protocol used for?
Ans. Concurrency Control in DBMS
Directions for questions 25 to 32: Convert the decimal numbers on the left to the required form
25. 9’s complement of 28
Ans. 71
26. Binary of 58
Ans. 111010
27. Octal of 359
Ans. 547
28. Hexadecimal of 650
Ans. 28A
29. BCD of 18
Ans.0001 1000
30. BCD of 34.8
Ans.0011 0100.1000
31. Excess-3 code of 6
Ans. 1001
32. Excess-3 code of 9
Ans. 1100
33. If Ax + By = 1F (base 16) and Cx + Dy = 25 (base 10), find the values of x and y.
34. Semaphore is used for
(a) synchronization
(b) dead-lock avoidance
(c) box
(d) none
Ans. a
35. For addressing 1 MB memory, the number of address lines required,
(d) 24
Ans. b
36. Which of the following remains in memory temporarily
(a) Resident portion of COMMAND.COM
(b) Transient portion of COMMAND.COM
(c) API
(d) Disk BIOS
Ans. b
37. Pick the odd man out
• (a) IO.SYS
• (b) MSDOS.SYS
• (c) ROM-BIOS
• (d) COMMAND.COM
Ans. c
38. OS/2 is a
• (a) Single User OS
• (b) Multi User OS
• (c) Multi Tasking OS
• (d) None of these
Ans. c
39. Bootstrap loader program is a program belonging to
• (a) ROM startup software
• (b) ROM extension software
• (c) ROM BIOS software
• (d) ROM Basic software
Ans. a
40. The entry of starting cluster of a file is present in
• (a) Boot Parameters
• (b) Directory
• (c) FAT
• (d) Partition Table and master boot program
Ans. c
Directions for questions 1-6: Find the correct meaning of the following phrases
1. A man of letters
2. A man of straw
3. To be in the air
4. To bite the dust
5. Man of few words
6. Penny wise pound foolish
7. Find the remainder when 333666777888999 is divided by 3 or 9 or 11?
Ans. 0 for both 3 and 9 (the digit sum is 99); the remainder is 5 when divided by 11.
8. Which is the biggest perfect square amongst the following: 15129, 12348, 23716, 20736?
Ans. 23716 (= 154²)
9. The greatest area of the following
• (a) The radius of circle is 4
• (b) The square of diagonal is 4
• (c) The square of side is 4
10. The area of the maximum size of the circle described from the 10 square inch square?
11. In the series 0, 3, 8, 15, __ what is the next number?
Ans. 24 (the terms are n² − 1)
12. X < 0, Y <> 0 then what is the possibility that the result is always positive?
Ans. xy
13. 3 red and 4 blue balls are in a basket. A member of PPTeam is drawing balls from the basket. What is the probability of getting the 3 red balls simultaneously?
Ans. 1/35 (drawing 3 balls at once: C(3,3)/C(7,3) = 1/35)
14. Let ax2 + bx + c = 0
If the sum of the equal roots is equal to the product of the same roots.Then which of the following hold true
• (a) a + b = 0
• (b) a = 0
• (c) c = 0
• (d) a + c = 0
15. Gold's density is 19 times that of water and copper's is 9 times. In what ratio should you mix gold and
copper to get an alloy 15 times denser than water?
Ans. 3 : 2
16. Find the value of (1.99)²
Ans. 3.9601
17. There is a room of 6′ x 8′. A 1′ tile is fixed along the 4 walls in one row. How many 1′ tiles are required to finish the work?
Ans. 24
18. 2 persons can finish a job in 8 days. First person alone can finish the work in 24 days. How many days does the second person take to finish the job?
Ans. 12 days
19. A 4″ cube is painted in all its faces and then it is cut down into 1″ blocks. How many 1″ blocks are there even without a single face being painted?
Ans. 8
20. A cylinder is inserted in a sphere d/h = 2/3. Find the surface area of the cylinder ?
21. In a car wheel, two spokes cover 15 degrees. Then for the entire wheel, how many spokes are there?
Ans. 24.
22. What is the angle of degree suspended when two hands of clock showing the time 2.30.
Ans. 105 degrees
23. The age difference between two brothers is 3 years. After 6 years the ratio between the age is 9:8 What are their ages?
Ans. 21 and 18
24. A person’s salary is getting reduced by 20%. What percentage should be added to get back his original salary?
Ans. 25%
25. Two persons start at the same point and walk in opposite directions at 5 km/hr and 5.5 km/hr respectively. What is the distance between them after 2 and a half hours?
Ans. 26.25 km
26. A person starts walking at a speed of 5 km/hr through half the distance; the rest of the distance he covers at a speed of 4 km/hr. The total time of travel is 9 hours. What is the maximum distance he can cover?
Ans. 40km.
27. Initially two cups of the same volume are present, with milk filled up to 3/5th and 4/5th of their volumes. Water is then filled. Then the two mixtures are mixed. Find the ratio of water to milk in the mixture.
Ans. 3 : 7
28. 16 grams of radioactive material decays into 8 grams in 10 years. How long will it take to decay to 1 gram ?
Ans. 40 yrs (four half-lives of 10 years each: 16 → 8 → 4 → 2 → 1).
29. In a rectangle the length is increased by half of the original length. By what proportion should the width be reduced so that the area will be the same?
Ans. 33⅓% (reduce the width by one-third)
30. Find the nth term of the series 1, -3, 5, -7, ...
Ans. (-1)^(n+1) * (2n-1)
31. If a new square is formed with the diagonal of a square as an edge, what is the ratio between the areas?
Ans. 2
32. The perimeter of a rhombus is 52 units. One of its diagonals is 24 units. What is the length of its second diagonal?
Ans. 10
33. A cubical rectangular bar has the dimensions with the ratio 5 : 4 : 3. Its volume is 7500. What is the surface area of the bar?
Ans. 2350
34. In a class of 34 students in total, 16 have a brother, 15 have a sister, and 9 students have neither brothers nor sisters. Find the number of students having both brothers and sisters.
Ans. 6
35. A batsman scored 18 runs in his 18th innings and that makes his average 18. Find his average upto the 17th innings?
Ans. 18 (since (17a + 18)/18 = 18 gives a = 18, his average up to the 17th innings was already 18)
36. 6 women can do 75 units of work in 8 days by working 5 hrs/day. In how many days can 4 women do 30 units of work by working 8 hrs/day?
Ans. 3 days
37. A person's salary is decreased by steps of 20%, 15% and 10%. What will be the percentage decrease if the salary is decreased in a single shot?
Ans. 38.8% (0.8 × 0.85 × 0.9 = 0.612)
38. The ratio of the length : breadth : height of a cuboid is 5 : 4 : 3, and the volume is 7500. What will be its surface area?
Ans. 2350 (the sides are 25, 20 and 15)
39. If the circumference of a circle is 100 units, then what will be the length of the arc described by an angle of 20 degrees?
Ans. 100 × 20/360 = 50/9 ≈ 5.56 units
40. 3 persons started a business with a capital of Rs.3000. B invests Rs.600 less than A, and C invests Rs.300 less than B. Then what is the share amount of B in a profit of Rs.886?
Directions for 41-50: Which of the following is the correct spelling for the word
41. supercede and supersede
42. recommend and reccomend
43. superintendent and superintendant
44. separate and seperate
45. succeed and suceed
46. coolly and cooly
47. despair and dispair
48. ridiculous and rediculous
49. indespensible and indepensable
50. tranquility or tranquillity
Directions: For the given sample program give the output of the program(30 marks)
#include <stdio.h>

void change(int *b, int n) {
    int i;
    for (i = 0; i < n; i++)
        *(b + i) = *(b + i) + 5;   /* add 5 to each of the n elements */
}

int main(void) {
    int a[] = { 2, 4, 6, 8, 10 };
    int i;
    change(a, 5);                  /* call implied by the question */
    for (i = 0; i <= 4; i++)
        printf("\n %d", a[i]);     /* output: 7 9 11 13 15 */
    return 0;
}
| {"url":"https://placement.freshershome.com/hexaware/hexaware-ltd_26.html","timestamp":"2024-11-12T00:20:36Z","content_type":"text/html","content_length":"88552","record_id":"<urn:uuid:97695c08-d6b0-4b3a-81f0-c0ee61d6cac3>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00212.warc.gz"}
Explain please Area of circle
Area of circle and stuff I am sooo confused about. Not really in any of the courses I've taken, but I've seen it and I'm confused.
Hi @The-Darkin-Blade!
Nice to hear from you again!
The area of circle is a veeery big topic for a small forum post. There are lots of articles about the formula of a circle's area and other things related to this topic! Hundreds! Even thousands!
Your confusion is normal
To learn more about it, you can start from here, or find more advanced information here.
You can also imagine that a circle is a regular polygon with lots and lots of vertices (actually, infinitely many vertices). To find the area of a regular polygon, you cut it into isosceles
triangles (see picture) and then sum up their areas. So the area of a polygon is equal to \(n\times(\frac{1}{2}ah)=\frac{h}{2}\times(na),\) where \(n\) - number of vertices, \(a\) - side length
and \(h\) - distance from the center of polygon to its side. The term \(na\) is the perimeter of the polygon. As the polygon becomes more and more like a circle, this value approaches the value
of the circle's circumference, which is \(2\pi r,\) and value of \(h\) approaches the circle radius. So, substituting \(2\pi r\) instead of \(na,\) we get: \(\text{area of circle}=\frac{r}{2}\
times(2\pi r)=\pi r^2.\)
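To watch this convergence numerically, here is a small sketch of mine (not from the thread; it slices the polygon by circumradius rather than by apothem, but the limit is the same):

import math

def polygon_area(n, r=1.0):
    # n isosceles triangles with apex angle 2*pi/n:
    # area = n * (1/2) * r^2 * sin(2*pi/n)
    return n * 0.5 * r * r * math.sin(2 * math.pi / n)

for n in (6, 12, 96, 100000):
    print(n, polygon_area(n))
print(math.pi)  # the limit: pi * r^2 with r = 1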
Studying this in more detail and discovering lots of new and very interesting things will be easier as you get older and study higher-level topics like functions, derivatives, integrals, and many
other things. | {"url":"https://forum.poshenloh.com/topic/212/explain-please-area-of-circle","timestamp":"2024-11-06T07:08:30Z","content_type":"text/html","content_length":"56007","record_id":"<urn:uuid:c80c0fea-b6bf-41b2-a4a2-5fa2d41c989b>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00062.warc.gz"} |
Boneh-Franklin Identity-Based Cryptosystem
Privacy is a significant worry for everyone in today's world. Messages you transmit are encoded into ciphertext before being transferred across the communication channel.
When the other person receives it, it is decrypted to reveal the message you wrote. A cryptosystem is a system that performs this process. In this blog, we will be discussing the Boneh-Franklin
identity-based cryptosystem. We will also be discussing pairing-based cryptography. Let's get started.
A cryptosystem is also known as a cipher system. A cryptosystem can be implemented using human methods, machine methods, or software. It consists of the following:
• Encryption method,
• Decryption algorithm
• A well-defined triad of text spaces: plaintexts, ciphertexts, and key texts.
The encryption algorithm converts plaintext to ciphertext for any given key text.
The decryption method transfers the ciphertext to the plaintext for the appropriate key text. | {"url":"https://www.naukri.com/code360/library/boneh-franklin-identity-based-cryptosystem","timestamp":"2024-11-09T16:11:05Z","content_type":"text/html","content_length":"392215","record_id":"<urn:uuid:c884e8c8-3400-4102-9482-648fd16746a4>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00520.warc.gz"} |
LoRa: Symbol Generation
LoRa is a chirp spread spectrum modulation based wireless networking standard.
Chirp stands for 'Compressed High Intensity Radar Pulse'. It is a signal whose frequency either increases or decreases with time. It is very common in sonar and radar. It is also used in spread spectrum modulation.
Chirp Spread Spectrum:
Chirp Spread Spectrum was developed for radar applications. Chirp signals have constant amplitude and pass the whole bandwidth in a linear or non-linear way from one end to another end in a certain
time. Chirp spread spectrum uses the complete bandwidth to transmit signals. If the frequency changes from lowest to highest, it is called an up-chirp, and if the frequency changes from highest to
lowest, it is called a down-chirp. Following is an example of a linear up-chirp:
- CSS (chirp spread spectrum) techniques help to transmit signals over very large distances.
- The bandwidth-time product is always greater than one (B*T > 1).
- Chirp spread spectrum is resistant to Doppler shift.
- It is used for low power and low data rates.
LoRa Chirp Spread Spectrum Modulation: LoRa uses three different bandwidths: 125kHz, 250kHz and 500kHz (125kHz is used here). LoRa symbols are modulated over an up-chirp of 125kHz bandwidth, and
different, almost orthogonal spreading factors are used based on data rate requirements and channel conditions. LoRa uses spreading factors SF7 to SF12. Here is a spectrogram of different spreading
factors: (For Matlab code for the above plot, please refer to the post Matlab Code to Generate LoRa Symbols.)
LoRa physical layer includes 8 preamble symbols, 2 synchronisation symbols, physical payload and optional CRC.
- SF8 takes exactly twice the time of SF7, and SF9 takes exactly twice the time of SF8.
- Symbol rate (Rs), bandwidth (BW) and spreading factor (SF) are related by: Rs = BW / (2^SF).
- Higher spreading factor -> longer over-the-air time.
- Lower spreading factor -> higher data rate.
Here is a spectrogram of the LoRa physical layer:
First 8 up-chirp symbols are preamble symbols used to detect LoRa chirps, next 2 down-chirp symbols are synchronisation symbols used for timing synchronisation followed by the 5 modulated symbols
(payload). The jump in the frequency represents the modulated symbol.
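The author's Matlab code lives in a separate post; as a rough stand-in, here is a short Python/NumPy sketch of the same idea (my own, not the author's): a LoRa symbol is the base up-chirp cyclically shifted in proportion to the symbol value, which produces the frequency jump seen in the spectrogram.

import numpy as np

def lora_symbol(sf, bw, fs, value=0):
    # One LoRa symbol: base up-chirp of bandwidth bw, cyclically
    # shifted by `value` (0 .. 2**sf - 1)
    n = int(fs * 2**sf / bw)           # samples per symbol (Ts = 2^SF / BW)
    t = np.arange(n) / fs
    k = bw**2 / 2**sf                  # chirp rate in Hz/s (sweep bw over Ts)
    base = np.exp(1j * 2 * np.pi * (-bw / 2 * t + 0.5 * k * t**2))
    shift = int(value * n / 2**sf)     # modulation = cyclic time shift
    return np.roll(base, -shift)

# e.g. SF7, 125 kHz bandwidth, sampled at 1 MHz, symbol value 42
sym = lora_symbol(sf=7, bw=125e3, fs=1e6, value=42)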
If you are a research student and want to sell your work on my Blog here, please reach me on sakshama.ghosliya@gmail.com
20 comments:
1. How many bits are there in a symbol?
1. Number of bits in a symbol depends on Spreading Factor used. This is how you encode your bits on a chirp. In short 'Number of bits in a symbol' = 'SF used'.
2. Hi
If SF8 takes twice the time of SF7, but I also have twice the amount of bits per symbol (SF7=128 bits, SF8=256 bits) is the bitrate then not equal independent of the SF?
3. Hello Friend,
SF7 means you can transmit 7 bits (0 to 127). SF8 means 8 bits can be transmitted.
I hope you have gone through this: http://www.sghoslya.com/p/lora_9.html
4. Hello Friend,
This is what i understand. But what I don't understand is the following:
If one symbol (chirp) takes Ts time: Ts = (2^SF)/BW
For SF7: Ts = (2^7)/125k = 1.024 milliseconds
For SF8: Ts = (2^8)/125k = 2.048 milliseconds
So one symbol takes twice the time with SF8 than with SF7.
With SF8 I can send one symbol that will contain 256 bits of data.
In the same symbol period of SF8, I can send two symbols with SF7, but only with 128 bits of data per symbol, what totals to 256 bits.
So in this explanation the data rate will be independent of the spread factor. I know that this not true, because of the following: Rb=SF*(1/((2^SF)/BW))
Can you explain what I'm doing wrong here?
Anyway thank for this amazing LoRa tutorial, I learned a lot from it.
5. Hello Friend,
Read the text clearly. SF8 means 8 bits per symbol not 256 bits per symbol. And 8 bits means 0-255 decimal number, 00000000 to 11111111 binary number. Hope you understood this time.
2. Hi,
can you post matlab code of last spectrogram, entitled as "LoRa symbols [8 preamble, 2 Sync, 5 Symbols]". I want to see how and why LoRa makes frequency shifts after preamble and sync word.
1. Hello Umber,
You can check codes under "LoRa Decoding" page (http://sakshamaghoslya.blogspot.in/p/matlab-codes-for-lora-decoding.html) of this blog. If you will change the number of symbols and the
symbols, you will get this graph.
2. Code not available in the above mentioned link
3. what is the relation between Spreading factor,Bandwidth and CodingRate in LoRa?
1. Bit rate = (spreading factor) * (4/(4 + code rate)) / (2^spreading factor / bandwidth)
4. Dear Sakshama Ghoslya,
I want to send 07,56,45, A3 hexa data with CSS (00000111,01010110,01000101,10100011). Spreading factor (SF) is 8. So how many chirp symbols do I use? four? The information I want to be sure is
shudder. The useful serial I will send is the encoded data, I use it According to the spreading factor (eg SF = 8) should I consider 8 bit symbols?
It's the information I want to make sure. I want to send a serial encoded binary data via CSS modulation. I'll use the Spreading factor 8. In this case, I must generate symbols with 8 bits using
the serial encoded data. These symbols are now Chirp symbols. Is this the right way of thinking?
1. hello,
please if you have found the answer to your question I am so interested to know that. please contact me, here my email : badra.ese@gmail.com
5. Hi, can i use the second and third pictures for my Thesis project? of course i will mention your site as source. Let me know, great work anyway!
1. Hi Leonardo, Yes you can use. Best of luck for your thesis.
6. Hi!
I was wondering if I could use your sources here in my thesis paper?
1. Hi, Many people have used it as a reference.
7. Hi, please may I use your graphics within my thesis?
1. Yeah sure
8. Hi great site help me a lot to understand lora, may I ask about coding rate in lora and its effect? | {"url":"http://www.sghoslya.com/p/lora-is-chirp-spread-spectrum.html","timestamp":"2024-11-08T07:34:19Z","content_type":"text/html","content_length":"93073","record_id":"<urn:uuid:c32de272-7e79-4b9b-a4d1-1a15e6cd96d3>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00576.warc.gz"} |
What Determines The Magnetic Field Strength At Point 1: Exploring The Phenomenon
Welcome to Warren Institute, where we delve into the fascinating world of Mathematics education. Today, we explore a fundamental concept in Physics: magnetic field strength at point 1. Understanding
this concept is crucial for grasping the behavior of magnetic fields and their impact on surrounding objects. In this article, we will unravel the intricacies of calculating magnetic field strength
at a specific point, shedding light on the underlying principles that govern this phenomenon. Join us on this journey as we navigate through the realm of magnetic fields and uncover the mysteries of
their strength at point 1. Let's embark on this enlightening exploration together.
The concept of magnetic field strength
The concept of magnetic field strength refers to the force experienced by a unit north pole placed at a specific point in a magnetic field. It is a fundamental concept in physics and plays a crucial
role in understanding the behavior of magnets and electromagnetism.
Calculating magnetic field strength
To calculate the magnetic field strength at a point, one needs to consider the magnitude and direction of the magnetic field at that point. This can be done using mathematical equations derived from
the laws of electromagnetism.
Factors affecting magnetic field strength
Several factors can affect the magnetic field strength at a specific point, including the distance from the magnet, the orientation of the magnet, and the material in which the magnet is placed.
Understanding these factors is essential for accurately predicting magnetic field strength.
Applications of magnetic field strength
Understanding magnetic field strength is crucial in various real-world applications, such as designing MRI machines, electric motors, and magnetic levitation systems. The ability to manipulate
magnetic fields effectively relies on a solid understanding of magnetic field strength.
Units of magnetic field strength
Magnetic field strength is typically measured in units of tesla (T) or gauss (G). These units quantify the strength of the magnetic field at a specific point and are essential for making precise
calculations and comparisons in the field of electromagnetism.
Experimental determination of magnetic field strength
In educational settings, students often conduct experiments to determine the magnetic field strength at various points around a magnet. These hands-on activities help reinforce theoretical concepts
and provide practical experience in measuring and analyzing magnetic fields.
frequently asked questions
What factors determine the magnetic field strength at a specific point in space?
The factors that determine the magnetic field strength at a specific point in space include the magnitude of the current, the distance from the current-carrying wire, and the permeability of the medium.
How does the distance from a magnet affect the magnetic field strength at a particular point?
The magnetic field strength at a particular point decreases as the distance from a magnet increases.
Can you explain how to calculate the magnetic field strength at a given location using mathematical formulas?
For the field of a long straight current-carrying wire, you can use the formula: B = (μ0 * I) / (2πr), where B is the magnetic field strength, μ0 is the permeability of free space, I is the
current flowing through the wire, and r is the distance from the wire.
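As a quick numeric illustration (my own sketch, not part of the original article):

import math

MU0 = 4e-7 * math.pi  # permeability of free space, in T*m/A

def wire_field(current_a, distance_m):
    # B = mu0 * I / (2 * pi * r) for a long straight wire
    return MU0 * current_a / (2 * math.pi * distance_m)

print(wire_field(10, 0.05))  # 10 A at 5 cm -> 4e-05 T (0.4 gauss)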
In what units is magnetic field strength typically measured, and how is it represented in equations?
Magnetic field strength is typically measured in tesla (T) or gauss (G), and it is represented in equations using the symbol B.
Are there any real-world applications where understanding magnetic field strength at a specific point is crucial?
Yes, understanding magnetic field strength at a specific point is crucial in applications such as designing MRI machines in the field of Mathematics education.
In conclusion, calculating the magnetic field strength at point 1 requires a solid understanding of Magnetism and Electromagnetism principles. By applying the Biot-Savart Law and considering the
contributions from all current-carrying segments, we can determine the magnitude and direction of the magnetic field at this specific location. This process not only enhances our problem-solving
skills but also deepens our comprehension of how magnetic fields interact in various scenarios.
| {"url":"https://warreninstitute.org/what-is-the-magnetic-field-strength-at-point-1/","timestamp":"2024-11-06T01:32:05Z","content_type":"text/html","content_length":"104583","record_id":"<urn:uuid:7b4fdc46-98b3-4ab4-b800-dc50f086402a>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00796.warc.gz"}
5.9 Higher Derivatives - Avidemia
We have learned the meaning of the first derivative of a function. Now we want to know how the second, the third, and the n-th derivatives of a function are defined and how we can calculate them.
Read about the importance of the second derivative in physics
If $s(t)$ is the position of an object moving on a straight line, then the derivative of s is the velocity of the object $v(t)=s'(t)$. The derivative of the velocity is the acceleration of the object
$a(t)$. So $a(t)=v'(t)$ or $a(t)=(s')'(t)$, which is often written simply as $a(t)=s^{\prime\prime}(t)$. We say the acceleration is the second derivative of the position. In physics, acceleration
plays an important role as it appears in Newton’s second law $F=ma$.
In general, if we take the derivative of $y=f(x)$, we obtain a new function $f’$ (also denoted by $y’$ or $dy/dx$). We can take the derivative of $f’$ and obtain another function called the second
derivative of $f$ (or $y$). The second derivative of $f(x)$ is denoted by \(f^{\prime\prime}(x)\) or \(\dfrac{d}{dx}\left(\dfrac{dy}{dx}\right)\), which is commonly abbreviated to \(\dfrac{d^{2}y}{dx
^{2}}.\) Thus $f^{\prime\prime}=(f’)^\prime$ or
\[\bbox[#F2F2F2,5px,border:2px solid black]{f^{\prime\prime}(x)=\lim_{\Delta x\to0}\frac{f'(x+\Delta x)-f'(x)}{\Delta x}.\tag{a}}\]
The second derivative is also indicated by \(y^{\prime\prime}\) or \(\dfrac{d^{2}f}{dx^{2}}\).
For example, if \(f(x)=x^{3}-5x^{2}+3x-1\), then the first derivative is \[y'=f'(x)=\frac{dy}{dx}=\frac{df}{dx}=3x^{2}-10x+3,\] and the second derivative of \(f\) is the derivative of \(f'(x)\): \[y''=f''(x)=\frac{d^{2}y}{dx^{2}}=6x-10.\]
In a similar fashion, we can define the third derivative as the derivative of the second derivative. It is denoted by \[y'''=f'''(x)=\frac{d^{3}y}{dx^{3}}=\frac{d^{3}f}{dx^{3}};\]
the fourth derivative is the derivative of the third derivative, and is denoted by
\[y^{(4)}=f^{(4)}(x)=\frac{d^{4}y}{dx^{4}}=\frac{d^{4}f}{dx^{4}},\] and so on. In general, the \(n\)-th derivative of \(y=f(x)\) is indicated by one of the following symbols: \[y^{(n)}=f^{(n)}(x)=\frac{d^{n}y}{dx^{n}}=\frac{d^{n}f}{dx^{n}}.\]
• If $f^{(n)}(x_0)$ exists, then it is said that $f$ is $n$-times differentiable at $x_0$.
• If a function is differentiable, its derivative is not necessarily differentiable. In other words, from the existence of \(f'(x_{0})\), we cannot infer the existence of \(f^{\prime\prime}(x_{0})
\). For instance, see the following example.
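(The example referenced here was not preserved in this copy. A standard example of the phenomenon, possibly the one intended, is \(f(x)=x^{2}\sin(1/x)\) for \(x\neq0\) with \(f(0)=0\): then \(f'(0)=\lim_{h\to0}h\sin(1/h)=0\) exists, and \(f'(x)=2x\sin(1/x)-\cos(1/x)\) for \(x\neq0\), but \(f'\) is not continuous at \(x=0\), so \(f^{\prime\prime}(0)\) does not exist.)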
• In the above example, \(f\) is differentiable (= \(f'(x)\) exists) everywhere. But because \(f'(x)\) is not continuous at \(x=0\), \(f^{\prime\prime}(0)\) does not exist. | {"url":"https://avidemia.com/single-variable-calculus/differentiation/higher-derivatives/","timestamp":"2024-11-10T05:20:20Z","content_type":"text/html","content_length":"87492","record_id":"<urn:uuid:dd115d10-8d72-4376-a757-5c7d7200a95d>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00274.warc.gz"} |
Gambling and Expected Value: Encore (OLG)
In this post on Gambling and Expected Value, we look at the "Encore" lottery offered by the OLG.
Click here to find similar posts on other lotteries and games of chance.
Encore (OLG)
Encore is a lottery offered by the Ontario Lottery and Gaming Corporation (OLG).
How the Game Works
Encore is an add-on lottery you can play if you are already playing another lottery (Lotto 649, Lotto Max, Lottario, Ontario 49, Pick 2, Pick 3, Pick 4, or Daily Keno). The player chooses how many
wagers he or she wishes to make (you can choose to play 1 to 10 combinations per ticket). The player doesn't get to choose their numbers though; a random string of seven digits (from 0 to 9) is
chosen for him for each play. Winnings are determined by matching particular digits in the sequence.
Probabilities and Prizes
There are 22 different ways to win a prize playing Encore, ranging from $2 to $1,000,000 on a $1 wager. Figuring out their respective probabilities isn't too difficult, but does require some care to
avoid overcounting. I've made the table below to help you see how to calculate how many possible sequences of numbers could win a particular prize.
Since the sequence consists of 7 randomly selected digits that can be repeated, there are a total of 10,000,000 (10 raised to the 7th power) possible number combinations that can be played. The
probability of winning a particular prize is the number of ways to get a match divided by the total number of possible plays. For instance, the probability of winning the prize for matching only the
last 4 digits is 891 divided by 10,000,000 (i.e. 0.00891%, or approximately 1 in 11,223). The table below shows the payout and the likelihood of winning for each of the 22 ways a player can win at Encore.
Subtracting the probabilities of the 22 ways to win from 100% gives the probability of losing at Encore. The probability of losing works out to exactly 891 in 1,000, or 89.1%.
Something you should note looking at the table of payouts and probabilities is that the prize doesn't consistently scale to the probability of occurrence. For instance, matching the First 2 and Last
4 digits has the same probability of matching the Last 6 digits, but the former pays only about 0.1% as much as the latter.
The following four combinations were played in an Encore draw where the winning numbers 5554321.
'A' = 5556789
'B' = 5556781
'C' = 7777321
'D' = 4321555
'A' wins a $10 payout for matching only the first 3 digits. 'B' wins a $12 payout matching the first 3 and the last 1 digits. 'C' wins a $10 payout for matching only the last 3 digits. 'D' loses.
Expected Value
The expected value of a game is the sum of the products of the probability of each outcome and the net monetary gain of that outcome. The net monetary gain is the payout less the wager. Encore has 23 possible
outcomes (22 ways to win and 1 way to lose), so the expected value of Encore is the sum of 23 terms. The expected value of Encore is:
(Image: expected value calculation for the OLG lottery "Encore".)
The expected value of Encore is -$0.486 per $1 wagered. This means that, on average, for every dollar you spend playing Encore, you lose 48.6 cents. As always, it's smarter not to gamble at all,
though an expected value better than -$0.50 per $1 wager is slightly better than most lotteries. Therefore, if you are going to buy lotteries anyway, Encore appears to be one of the better choices.
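Since the full 22-row prize table is not reproduced in this copy, the sum can't be re-derived here, but the computation pattern is simple. The numbers in this sketch are hypothetical placeholders, not Encore's actual prize schedule:

def expected_value(outcomes, wager=1.0):
    # outcomes: (probability, payout) pairs covering every case,
    # with probabilities summing to 1; net gain = payout - wager
    return sum(p * (payout - wager) for p, payout in outcomes)

# Hypothetical two-outcome game, for illustration only:
# win $10 with probability 5%, win nothing with probability 95%
print(expected_value([(0.05, 10.0), (0.95, 0.0)]))  # -0.5 per $1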
From our analysis we can draw the following conclusions:
1. Encore has an expected value of slightly better than -$0.50 per $1 wager, which is comparable to other lotteries and typical 50/50 draws.
2. Accordingly, Encore is similar to other lotteries in that it you will generally lose money rapidly if you play.
3. Encore is neither the best choice, nor the worst choice, as far as OLG lotteries go.
4. Most of Encore's 22 ways to win have a very low probability of occurrence.
5. Encore's "22 ways to win" is a clever marketing trick to disguise the fact that there's an 89.1% chance you'll lose playing Encore.
6. Encore's prizes do not scale with their probability of occurrence. Most of the prizes are much smaller than one should expect given their probability of occurrence.
| {"url":"http://alohonyai.blogspot.com/2014/07/gambling-and-expected-value-encore-olg.html","timestamp":"2024-11-05T03:23:50Z","content_type":"text/html","content_length":"73131","record_id":"<urn:uuid:4f55bb90-7ea5-46e0-b3ad-1cabd6934b53>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00680.warc.gz"}
Feature request - Add "Floor" Global and update finish face formulas to include it
Current Foundation Library
Requesting that this be implemented by the Development team as the solution presented below will get overwritten with library updates.
Rounding the width and height dimensions of doors/drawers down to the nearest increment that you manufacture to currently requires playing with the gap prompts until you get it right. Time consuming.
This can also be applied to adjustable shelf calculations.
For example, we cut doors/drawers rounding down to the nearest 1/16".
• 20" wide cabinet, 1/8" left gap, 1/8 center gap, 1/16" right gap
• Options/General/Accuracy set to 1/16"
• The drawing produces doors that are 9 27/32" wide
• For our requirements in this example, the doors should be 9 13/16" wide.
• The work order produces doors that are 9 7/8" wide. This does not leave enough space for the gaps. The work order is not rounding down to the next 1/16" it is rounding to the nearest 1/16".
Add a Global to specify your increment and a "Floor" formula to the width/height calculations.
Same example with "Floor" formula added
• 20" wide cabinet, 1/8" left gap, 1/8 center gap, 1/16" right gap
• Options/General/Accuracy set to 1/16
• "Floor" formula inserted into finish face width/height formulas, set to 1/16"
• The drawing produces doors that are 9 13/16" wide (rounded down to the nearest 1/16")
• The work order produces doors that are 9 13/16" wide
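The arithmetic behind the requested behaviour, as a small illustrative sketch (the function and variable names here are mine, not Toolbox formula syntax):

from fractions import Fraction

def floor_to_increment(x, inc):
    # round x down to the nearest multiple of inc (exact with Fractions)
    return (x // inc) * inc

gaps = Fraction(1, 8) + Fraction(1, 8) + Fraction(1, 16)
door_width = (Fraction(20) - gaps) / 2                   # 9 27/32"
print(floor_to_increment(door_width, Fraction(1, 16)))   # 157/16 = 9 13/16"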
| {"url":"https://community.microvellum.com/portal/en/community/topic/feature-request-add-floor-global-and-update-finish-face-formulas","timestamp":"2024-11-07T12:08:26Z","content_type":"text/html","content_length":"38552","record_id":"<urn:uuid:febf13c0-9e7f-4d7e-b2c3-822ba90020ce>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00628.warc.gz"}
The Association International Uncertainty Computing
The Association of International Uncertainty Computing (AIUC) is a non-profit organization focused on organizing occasional conferences on uncertainty-computing theories, algorithms, and
technologies, in subjects including but not limited to Artificial Intelligence, Fuzzy Sets, Rough Sets, Knowledge Representation, Machine Learning, Quantum Computing, nonlinear issues in
computing problems (numerical or symbolic), Hilbert problems, Symbolic Logic, Mathematical Logic, Logic Programming, etc.
Logic, Set, Number and Algebra hold the keys to the universe. Logic and set theory are most fundamental to all sciences. Uncertainty Computing is the computing algorithms, technology, principles, and
theories to find certainty from the uncertain knowledge or information obtained from our unknown universe. Therefore we consider logic theory, set theory, number theory, and algebra to be
important fields for exploring uncertainty computing.
Hilbert problems, NP-complete problems, Quantum Mechanics, computability, partial differential equations (related to physics) are also to be studied in AIUC.ORG.
The current important task of aiuc.org task force is to develop the International Journal of Uncertainty Computing which is also called Journal of AIUC with the online ISSN 2152-0917 and the print
ISSN 2154-221X. This Journal has been published in on-line format as well as in printed format since 2008. This journal can also be called "Journal of AI and UC".
The Board of Directors of the AIUC are nominated and elected by the AIUC Council Members. There are honor members, lifetime members, and regular members. The honor members and the lifetime members
will enjoy free membership. The initial Steering Committee of AIUC.ORG was established in 2006. Dr. James Kuodo Huang is the current Chair of the Steering Committee. | {"url":"https://findebiz.org/","timestamp":"2024-11-09T08:57:46Z","content_type":"text/html","content_length":"6164","record_id":"<urn:uuid:ffa8f272-53ef-4c9b-be6b-a781721414c0>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00474.warc.gz"}
I may go on to graduate work in math or a subject using serious math
The advice for "I need calculus and more" applies to you, with some additional comment.
First, there are honors versions (MATH 140H and MATH 141H) of MATH 140 and 141. These courses are very similar in content to MATH 140 and 141, but the format is small-section rather than
large-lecture, and the students are honors students. You may want to investigate these as an alternative to MATH 140 and 141.
If you will already have credit for MATH 140-141 when you arrive at UMD, then you will probably be taking sophomore-level math courses when you arrive, and you should consider the special honors
sequence MATH 340-341 (Fall-Spring) which covers the material of the core sophomore (post-calculus) math courses (multivariable calculus, linear algebra and differential equations) in an honors
setting, with enrichment.
This sequence is only available to students who have passed out of Calculus I and II (e.g., with a 4 or 5 on the Calculus BC Advanced Placement Test). The sequence is available by invitation only
from the math department and it is open only to very strong freshman (MATH SAT at least 750) and special high school students.
If you are a very special student who will already have finished some of the sophomore level courses on entering UMD, then you should contact the Coordinator of Undergraduate Advising in Mathematics
who can advise you on your opportunities depending on your background and interests. | {"url":"https://www-math.umd.edu/outreach/high-school/88-math/undergraduate/340-i-may-go-on-to-graduate-work-in-math-or-a-subject-using-serious-math.html","timestamp":"2024-11-11T04:05:36Z","content_type":"text/html","content_length":"151866","record_id":"<urn:uuid:34006d58-26e3-451f-b056-03f2e9965c6d>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00225.warc.gz"}
HV-Symmetric Polyhedra and Their Equivalence to Bipolar Polyhedra
Core Concepts
A polyhedron is HV-symmetric if and only if it contains the origin, meaning it is bipolar; this property has implications for vertex and facet enumeration problems.
Bibliographic Information:
Avis, D. (2024). HV-symmetric polyhedra and bipolarity. arXiv preprint arXiv:2406.03698v2.
Research Objective:
This paper explores the concept of HV-symmetry in polyhedra and its relationship to the well-established concept of bipolarity. The author aims to demonstrate that a polyhedron is HV-symmetric if and
only if it contains the origin, making it bipolar.
The author utilizes mathematical proofs and definitions to establish the equivalence between HV-symmetry and bipolarity in polyhedra. The paper relies on concepts like H-representation,
V-representation, polarity of convex sets, and the bipolar equation. Examples are provided to illustrate the definitions and support the arguments.
Key Findings:
The paper proves that a polyhedron is HV-symmetric if and only if it contains the origin, meaning it is also bipolar. This finding is significant because it simplifies the determination of
HV-symmetry by eliminating the need to compute both V(P) and V(Q) as defined in the paper.
Main Conclusions:
The equivalence between HV-symmetry and bipolarity in polyhedra has implications for computational geometry problems like vertex and facet enumeration. Specifically, when a polyhedron contains the
origin, the lifting process typically used in converting between H-representation and V-representation is unnecessary, potentially leading to faster computation times for these problems.
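As a concrete illustration of the unlifted route, the following sketch (an illustration of the standard polar-duality trick, not code from the paper; it assumes SciPy's Qhull bindings and a bounded, full-dimensional polytope with $b > 0$, i.e. the origin in the interior) enumerates the vertices of an H-polytope through a single convex hull of the scaled facet normals:

import numpy as np
from scipy.spatial import ConvexHull

def vertices_via_polarity(A, b):
    # Vertices of P = {x : A x <= b}, assuming P is bounded, full-dimensional
    # and b > 0 (origin in the interior).  The polar P° is conv(a_i / b_i),
    # and every facet {y : <v, y> = 1} of P° yields a vertex v of P = (P°)°,
    # so one convex hull call replaces the usual lifting step.
    pts = A / b[:, None]
    hull = ConvexHull(pts)
    # Qhull reports facets as n·y + c <= 0; rescaling to <v, y> <= 1 gives
    # v = -n / c.  Qhull triangulates non-simplicial facets, so duplicates
    # can appear; rounding plus np.unique merges them.
    n, c = hull.equations[:, :-1], hull.equations[:, -1]
    return np.unique(np.round(-n / c[:, None], 12), axis=0)

# Example: the square [-1, 1]^2 given by four inequalities.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.ones(4)
print(vertices_via_polarity(A, b))  # the four corners (+-1, +-1)

Whether this unlifted route actually beats a lifted pivot-based enumeration on a given instance is precisely the empirical question the paper leaves open.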
Significance:
This research contributes to a deeper understanding of the properties of polyhedra and their representations. The established link between HV-symmetry and bipolarity simplifies the identification of HV-symmetric polyhedra and offers potential computational advantages in solving geometric problems related to these objects.
Limitations and Future Research:
While the paper proves the equivalence of HV-symmetry and bipolarity, it raises questions about the efficiency of applying this knowledge in algorithms for vertex and facet enumeration. Further
research could investigate whether utilizing the unlifted representation for bipolar polyhedra consistently leads to faster computation times compared to the traditional lifted approach.
"It is well known and often stated that polytopes that contain the origin in their interior and pointed polyhedral cones are HV -symmetric. It seems to be less well known that, more generally, a
polyhedron is HV -symmetric if and only if it contains the origin, in other words it is bipolar."
How might the understanding of HV-symmetry in polyhedra be applied to other areas of computational geometry or computer graphics?
Answer: The insights into HV-symmetry offered by this paper can potentially benefit several areas of computational geometry and computer graphics:
• Collision detection: Determining whether two objects intersect is a fundamental problem in collision detection. For convex objects, algorithms often leverage the objects' H-representations or V-representations. Knowing that a polyhedron is bipolar (and thus HV-symmetric) allows choosing the most efficient representation for the specific algorithm, potentially reducing computational overhead (see the sketch after this list).
• Convex hull computations: Algorithms for computing the convex hull of a point set can be optimized by exploiting HV-symmetry. For instance, if the input points are known to form a bipolar polyhedron, the algorithm can directly compute either the H-representation or the V-representation, as the other can be easily derived.
• Mesh simplification: In computer graphics, simplifying complex meshes while preserving their overall shape is crucial for efficient rendering. Understanding the HV-symmetry of a polyhedron can guide the simplification process; for example, if a mesh represents a bipolar polyhedron, simplification algorithms can prioritize preserving the vertices and facets that contribute significantly to both representations.
• Shape analysis: HV-symmetry can be a valuable tool for analyzing and classifying 3D shapes. The presence or absence of this symmetry, along with properties of the polar polyhedron, can provide insight into an object's structure and potential symmetries; this information can be used in shape recognition, retrieval, and classification tasks.
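To make the first two bullets concrete, here is a minimal sketch (an illustration of the standard primitives only, not code from the paper; the function names are invented for the example) that converts a V-representation to an H-representation with SciPy and uses it for the point-membership test that convex collision checks build on:

import numpy as np
from scipy.spatial import ConvexHull

def h_representation(points):
    # V-rep -> H-rep: Qhull returns facet rows (n, c) with n·x + c <= 0
    # for every point x inside the hull.
    eq = ConvexHull(points).equations
    return eq[:, :-1], eq[:, -1]

def contains(N, c, x, tol=1e-9):
    # x lies in the polytope iff it satisfies every facet inequality.
    return bool(np.all(N @ x + c <= tol))

# Example: the unit cube from its eight corners.
cube = np.array([[i, j, k] for i in (0.0, 1.0)
                 for j in (0.0, 1.0) for k in (0.0, 1.0)])
N, c = h_representation(cube)
print(contains(N, c, np.array([0.5, 0.5, 0.5])))  # True
print(contains(N, c, np.array([1.5, 0.5, 0.5])))  # False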
Could there be cases where leveraging the unlifted representation for bipolar polyhedra in vertex and facet enumeration problems might actually lead to decreased computational efficiency?
Answer: While the paper suggests that using the unlifted representation for bipolar polyhedra in vertex and facet enumeration might seem computationally advantageous, there are situations where it could lead to decreased efficiency:
• Degeneracy: The performance of pivot-based algorithms like reverse search is significantly affected by the presence of degeneracies in the polyhedron. Degeneracies can lead to an exponential increase in the number of bases visited by the algorithm. It is possible that the unlifted representation of a bipolar polyhedron might introduce or exacerbate degeneracies, leading to worse performance than the lifted representation.
• Implementation details: The efficiency gains from using the unlifted representation depend on the specific implementation of the vertex and facet enumeration algorithm. Some implementations might be optimized for handling the lifted representation, and using the unlifted representation might require modifications that could negatively impact performance.
• Problem-specific structure: The structure of the specific bipolar polyhedron being analyzed plays a crucial role. In some cases, the lifted representation might implicitly exploit certain structural properties of the polyhedron that are not readily apparent in the unlifted representation, leading to faster computation.
Therefore, while the unlifted representation can in principle improve efficiency for bipolar polyhedra, a careful empirical analysis considering the factors above is crucial to determine the optimal approach for a given problem instance.
If the presence or absence of the origin within a polyhedron dictates its properties and computational complexity, what other fundamental geometric concepts might hold similar significance in shaping
our understanding of these objects?
Answer: The presence or absence of the origin significantly influences a polyhedron's properties and computational complexity, particularly concerning HV-symmetry and bipolarity. Several other fundamental geometric concepts hold similar significance in shaping our understanding of these objects:
• Convexity: The property of convexity is paramount in the study of polyhedra. Many efficient algorithms and elegant theorems are only applicable to convex polyhedra; non-convex polyhedra often require more complex and computationally intensive techniques.
• Dimensionality: The dimension of the space in which a polyhedron resides plays a crucial role, and algorithms and their complexity often depend on it. For instance, vertex enumeration for a polytope (bounded polyhedron) is polynomial-time solvable in 2D and 3D but becomes NP-hard in higher dimensions.
• Combinatorial structure: The arrangement of vertices, edges, and facets, captured by the face lattice of a polyhedron, significantly influences its properties. Properties like simplicity (each vertex belonging to exactly d facets in d-dimensional space) or simpliciality (each facet being a simplex) can simplify algorithms and lead to stronger theoretical results.
• Symmetry groups: The presence of symmetries, described by the polyhedron's symmetry group, can be exploited for efficient representation, analysis, and manipulation. For example, regular polyhedra, possessing a high degree of symmetry, have well-defined properties and are computationally easier to handle.
• Degeneracies: The presence of degeneracies, such as multiple vertices lying on the same hyperplane or facets with more than d vertices in d-dimensional space, can significantly impact the complexity of algorithms. Handling degeneracies often requires careful consideration and can increase computational cost.
Understanding these fundamental geometric concepts is crucial for developing efficient algorithms, proving theoretical results, and gaining deeper insight into the properties and behavior of polyhedra in various computational geometry and computer graphics applications.
Iberoamerican Webminar of Young Researchers in
Singularity Theory and related topics
This webminar is intended to be an open place for discussions and interactions between young researchers in all aspects of Singularity Theory and related topics. The seminar is open to everybody and
is composed of a series of research talks by leading young and senior researchers. To attend a talk, please join the Mailing list below to receive the meeting link before the talk starts.
• Organizers: Patricio Almirón Cuadros, Pablo Portilla Cuadrado, Juan Viu Sos.
• Timing: Wednesdays biweekly at 17:00 (GMT+2, CEST), 50-minute talk + post-talk discussion.
• Mailing list: Please email iberosing (-at-) ucm (-dot-) es to join the mailing list.
IberoSing International Workshop 2024, 25th-29th November 2024, Univ. Politécnica de Madrid (Spain).
Several talks and 2 minicourses about:
• Lattice cohomology: then and now by Tamás László (Babeş-Bolyai University).
• Singularities of maps and regular homotopy by Roberto Giménez Conejero (Mid Sweden University).
Previous events:
IberoSing International Workshop 2023, 06th-10th November 2023, Univ. de Granada (Spain).
Several talks and 2 minicourses about:
• Homological Mirror Symmetry by Prof. Helge Ruddat (Univ. of Stavanger).
• Invariants of singularities via D-modules Prof. Mircea Mustaţă (Univ. of Michigan).
IberoSing International Workshop 2022, 24th-27th October 2022, Univ. Complutense de Madrid (hybrid event).
Several talks and 3 minicourses about:
• “Floer Homology towards the Zariski conjecture” by Javier Fernández de Bobadilla/Tomasz Pełka (BCAM).
• “The Monodromy Conjecture” by Willem Veys (KU Leuven).
• “Lipschitz normal embedding of singular spaces” by Lorenzo Fantini (Centre de Mathématiques Laurent Schwartz-École Polytechnique de Paris).
IberoSing Special Summer Edition 2022 (12th July 2022)
Programme for 12th July 2022:

9:30-10:30: Maria Alberich Carramiñana (Universitat Politècnica de Catalunya)
Valuative trees over valued fields (Video)
Abstract: Given a valued field (K,v) we study a tree-model of all equivalence classes of valuations on K[x], whose restriction to K is equivalent to v. This is a joint work with J. Guàrdia, E. Nart and J. Roé.

10:30-11:00: Break

11:00-12:00: Felix Delgado de La Mata (Universidad de Valladolid)
On the topological type of the image of a curve (Video)
Abstract: Let $(Z,z)$ be the germ of a normal surface singularity and $\varphi =(f,g): (Z,z)\to ({\mathbb C}^2,0)$ a finite analytic morphism. The goal of the talk is to describe the topological type (equisingularity type) of the image by $\varphi$ of a curve $\delta \subset (Z,z)$ from the use of iterated "pencils" recursively defined from the initial $\langle f,g \rangle$. A case of particular interest is the one where $\delta = C(\varphi)$ is the critical locus of $\varphi$ and hence its image is the discriminant of the morphism. The results generalize those obtained in previous works in the case where the source space $(Z,z)$ is the plane $({\mathbb C}^2,0)$, and they extend to the reduced curve case. Again the fundamental tool is the analysis of the "pencil" over the normal surfaces. This is a joint work in progress with Hélène Maugendre.

12:10-13:10: Jean-Michel Granger (Université d'Angers, France)
Set-theoretic complete intersection singularities of space curves (Video)
Abstract: We deform monomial space curves in order to construct new series of examples of set-theoretic complete intersections where the ideal of the reduced curve is not a complete intersection. As a by-product we describe an inverse to Herzog's construction of minimal generators of non-complete intersection numerical semigroups with three generators. This is a work in common with Mathias Schulze.
Past talks & mini-courses (2021-2022)
01 June 2022 at 17h: Alex Hof (University of Wisconsin-Madison, USA)
Milnor Fiber Consistency via Flatness
Abstract: Given a holomorphic family of function germs defining hypersurface singularities, we can ask whether the Milnor fiber varies consistently; in the isolated case, it is well-known that the answer is always yes (in the sense that the family defines a fibration above the complement of the discriminant), and this allows us to obtain a distinguished basis of vanishing cycles for a singularity by perturbing it slightly. In the non-isolated case, this is not always true, and there has long been interest in finding conditions under which this kind of consistency does occur. We give a powerful algebraic condition which is sufficient for this purpose, namely that the analogous statement will hold so long as the critical locus of the family, considered as an analytic scheme, is flat over the parameter space.

18 May 2022 at 17h: Nacho Breva Ribes (Universitat de València, Spain)
First steps on the simplicity of augmentations
Abstract: The operation of augmentation can be found in almost all known classifications of simple map-germs. In general, the process of augmenting a simple singularity does not necessarily yield a simple augmentation. In this talk we describe how to obtain the versal unfolding of an augmentation. This will allow us to characterise the simplicity of an augmentation in the case that the augmented map-germ has $\mathscr{A}_e$-codimension 1 or that the augmenting function is a Morse function. We also give a list of simple map-germs from $\mathbb{C}^4$ to $\mathbb{C}^4$. These, we conjecture, are all the simple augmentations that appear in these dimensions.

04 May 2022 at 17h: Celia del Buey (Univ. Autónoma de Madrid, Spain)
Yuan's correspondence for Galois ring extensions of exponent one
Abstract: Yuan presents an exponent one inseparable Galois theory for commutative ring extensions of prime characteristic $p\neq 0$. We say that an extension of exponent one, $A\subseteq C$, is a Galois extension if $C$ is finitely generated projective as an $A$-module and locally has a $p$-basis. Yuan proves that this local condition is equivalent to the hypothesis $C[Der_A(C)]=Hom_A(C,C)$. Using this characterization, he establishes a correspondence between the intermediate rings between $C$ and $A$ over which $C$ locally admits a $p$-basis, and the restricted Lie subalgebras of $Der_A(C)$ which are also $C$-module direct summands of $Der_A(C)$. Given a Galois extension $A\subseteq C$, whose local $p$-basis has $n$ elements, and a fixed integer $0
In this talk, we will present Yuan's theory, these incipient results on the representability of Yuan's functor, and some problems we are working on. This is joint work with Diego Sulca (FAMAF, Universidad Nacional de Córdoba, Argentina) and Orlando Villamayor (Universidad Autónoma de Madrid).

20 April 2022 at 17h: Fernández-Pérez (Univ. Federal de Minas Gerais, Brazil)
On Milnor and Tjurina number of Foliations
Abstract: In this talk, I will show the relationship between the Milnor and Tjurina number of a foliation in the complex plane. Such numbers are similar to the classic Milnor and Tjurina numbers for singular curves. This work is in collaboration with Evelia García Barroso (Universidad de La Laguna, Spain) and Nancy Saravia Molina (PUCP, Peru).

23 Mar 2022 at 17h: Paolo Aceto (Lab. Paul Painlevé, Univ. Lille, France)
Definite fillings of lens spaces
Abstract: Motivated by the study of smoothings of cyclic quotient singularities as well as symplectic fillings of lens spaces, we consider an analogue problem in a purely topological setting. We look at smooth, definite fillings of lens spaces and consider the question of which intersection forms can be realized by such fillings. We discuss various constructions and an obstruction based on Donaldson's diagonalization theorem. Finally, we present a complete classification of the lens spaces which bound a unique negative-definite intersection form (up to stabilizations). We discuss consequences for smoothings of singularities as well as embeddings of lens spaces in certain
This is joint work with Duncan McCoy and JungHwan Park.

09 Mar 2022 at 17h: Iván Rasskin (IMAG, Univ. Montpellier, France)
Construction of knots with sphere packings
Abstract: How many spheres are needed to construct a knotted necklace? And with spheres of different sizes? Behind these harmless questions lies a deep connection between Knot Theory, sphere packings, polytopes, Number Theory and Lorentz geometry. In this talk, we will see how these different theories are connected, and we will use them to describe two methods for the construction of knotted necklaces. The first method, based on the Koebe-Andreev-Thurston Circle Packing Theorem, will allow us to establish that the minimum number of spheres necessary to construct a necklace (or necklaces) in the shape of a given knot (or link) is less than 5 times its crossing number. In the second method, we use a generalization of a famous fractal called the Apollonian packing to improve the bound given by the first method for rational knots and links. This is a joint work with Jorge Ramírez Alfonsín.

23 Feb 2022 at 17h: Simone Marchesi (Universitat de Barcelona, Spain)
On the stability of logarithmic tangent sheaves
Abstract: Given a hypersurface $D$ in $P^N$ and its associated logarithmic tangent sheaf $T_D$, we could follow two alternative paths which are completely different and yet equally interesting. Indeed, we can analyze when $D$ is free or it induces a stable sheaf. In this talk, we will deal with the stability problem, tackling a wide set of hypersurfaces by relating it to their degree and the dimension of the singular locus. Furthermore, we will show that stability also holds for hypersurfaces defined by determinants. Finally, for this last set, we will study the moduli map from the quotient which describes the matrices whose determinant defines $D$, and the moduli space of semistable sheaves on $P^N$ that contains $T_D$. This is a joint work with Daniele Faenzi.

09 Feb 2022 at 17h: Beatriz Pascual Escudero (Univ. Carlos III Madrid, Spain)
Using Algebraic Geometry to detect robustness in Reaction Networks (Slides, Video)
Abstract: In Biochemistry, Ecology or Epidemiology, among other fields, Reaction Networks represent interactions between species, as for example proteins involved in some cellular process. The evolution of the concentrations of these species in the network is often modeled by an autonomous ODE system, which can involve a large number of unknown parameters. However, under certain kinetic assumptions, these ODEs are polynomials with a very particular structure, which can be used to understand several aspects of their dynamics. The equilibria of the system then happen to be the real positive part of an algebraic variety. Motivated by this kind of dynamical systems, we will see how it is possible to use Algebraic Geometry to study the property of ACR (Absolute Concentration Robustness) in systems with this type of structure: a biological system has ACR for some species if the concentration of this species is identical at any possible equilibrium that the network admits. In particular, this concentration must be independent of the initial conditions. While some classes of networks with ACR have been described, as well as some techniques to check ACR for a given network, finding networks with this property is a difficult task in general. We provide a practical criterion that networks must satisfy for having ACR and which, for certain classes of networks, characterizes the property. This is based on joint work with E. Feliu.

26 Jan 2022 at 17h: Antoni Rangachev (International Center for Mathematical Sciences, Bulgaria)
The special fiber of a conormal space
Abstract: Let (X,0) be a complex analytic germ. I will show that if (X,0) is normal and the codimension one polar variety of (X,0) is empty, then (X,0) is smooth. In particular, normal surfaces with no polar curve are smooth. I will discuss how these observations lead to a notion of generalized smoothability. If time permits I will discuss applications to rigidity.

15 Dec 2021 at 17h: David Massey (Northeastern Univ., USA)
Milnor Fibers of Hypersurfaces with Line Singularities
Abstract: The homotopy-type of the Milnor fiber of a hypersurface with an isolated singularity is completely determined by the Milnor number, which is effectively algebraically calculable. The next easiest case is a hypersurface with a smooth curve of singularities which, after a local analytic change of coordinates, becomes a line of singularities. I will discuss the extent to which the 0- and 1-dimensional Lê numbers for a line singularity are good generalizations of the Milnor number for an isolated singularity. This will involve definitions and properties of the relative polar curve of Hamm, Lê, and Teissier from the 1970's and the Lê numbers which I defined in 1990. Hopefully, we will get to my generalization with Lê in 2006 of an important 1983 result of Siersma on line singularities.

01 Dec 2021 at 17h: Eva Elduque (Univ. Autónoma de Madrid, Spain)
Eigenspace Decomposition of Mixed Hodge Structures on Alexander Modules
Abstract: In previous work jointly with Geske, Herradón Cueto, Maxim and Wang, we constructed a mixed Hodge structure (MHS) on the torsion part of Alexander modules, which generalizes the MHS on the cohomology of the Milnor fiber for weighted homogeneous polynomials. The cohomology of a Milnor fiber carries a monodromy action, whose semisimple part is an isomorphism of MHS. The natural question of whether this result still holds for Alexander modules was then posed. In this talk, we will talk about the solution to this question, as well as some consequences and explicit computations. Joint work with Moisés Herradón Cueto.

17 Nov 2021 at 17h: Brian Hepler (Univ. Wisconsin-Madison, USA)
Vanishing cycles for irregular local systems
Abstract: Motivated by the recent results of D'Agnolo-Kashiwara in dimension one, we give a generalization of the notion of vanishing cycles to the setting of enhanced ind-sheaves on an arbitrary complex manifold $X$. Specifically, we show that there are two distinct (but Verdier-dual) functors that deserve the name of "irregular" vanishing cycles associated to an arbitrary holomorphic function $f$ on $X$. Loosely, these functors capture the two distinct ways in which an irregular local system on the complement of the hypersurface $V(f)$ can be extended across that hypersurface. From this perspective, we give a (conjectural) interpretation of the enhanced perverse vanishing cycles object in terms of two distinct (but Verdier-dual) notions from the theory of Stokes-filtered local systems with poles along a divisor $D$: as the sheaf of sections with greater than rapid decay along $D$, and as the sheaf of sections with moderate growth along $D$. The duality obtained recovers a version of the duality between de Rham cohomology and the rapid decay homology of Bloch-Esnault-Hien.

03 Nov 2021 at 16h: Joaquín Moraga (Princeton Univ., USA)
Reductive quotient singularities
Abstract: The study of quotients by reductive groups is an important topic in algebraic geometry. It manifests when studying moduli spaces, orbit spaces, and G-varieties. Many important classes of singularities, such as rational singularities, are preserved under quotients by reductive groups. In this talk, we will show that the singularities of the MMP are preserved under reductive quotients. As an application, we show that many good moduli spaces, such as the moduli of smoothable K-polystable varieties, have klt type singularities.

20 Oct 2021 at 17h: María Elenice Rodrigues Hernandes (Univ. Estadual de Maringá, Brazil)
The analytic classification of plane curves
Abstract: In this talk I will present a solution to the problem of the analytic classification of germs of plane curves with several branches in a fixed topological class. The algebraic approach of this work follows precursory ideas of Oscar Zariski. This is a joint work with Marcelo Escudeiro Hernandes.

06 Oct 2021 at 17h: Enrique Artal (Universidad de Zaragoza, Spain)
How can one check if a tuple of curves is a Zariski tuple? (Slides, Video)
Abstract: The origin of this talk is an open question in a recent joint work with J.I. Cogolludo and J. Martín Morales in Collect. Math. (MR4129536), where for any even degree (at least 6) we present candidates of Zariski tuples of irreducible curves (of increasing size). Before dealing with this actual problem, I will describe invariants which detect topological properties of embeddings of plane curves, define combinatorics, and explain known strategies to detect Zariski tuples.
IberoSing Special hybrid workshop for PhD students in singularities (27th-28th October 2021)
IberoSing Special Summer Edition 2021 (23rd-25th June 2021)
Talks each day were separated by breaks and discussion.

June 23rd:

17:00-18:00: Guillaume Rond (Université Aix Marseille, France)
A few facts about elimination theory in local analytic geometry (Slides, Video)
Abstract: Elimination theory covers the methods to eliminate variables in systems of equations. From a geometrical point of view, this concerns the methods to determine the image of a set defined by equations under some linear projection. Here we consider the case of power series equations. We will recall briefly some results about the elimination theory for polynomial equations, then we will give examples emphasizing the differences with the case of power series equations. Finally we will focus on the case of convergent power series equations and investigate the following question: what happens when we eliminate variables of convergent power series equations without paying attention to the convergence of the series involved? We will give examples, and we will present both the algebraic and geometric perspectives on the problems.

18:15-19:15: Manuel Gonzalez Villa (CIMAT-Guanajuato, Mexico)
On a quadratic form associated with the nilpotent part of the monodromy of a curve (Video)
Abstract: Joint work with Lilia Alanís-López, Enrique Artal Bartolo, Christian Bonatti, Xavier Gómez-Mont, and Pablo Portilla Cuadrado. We study the nilpotent part N of certain pseudo-periodic automorphisms of surfaces appearing in singularity theory. We associate a quadratic form Q defined on the first (relative to the boundary) homology group of the Milnor fiber F of any germ of analytic curve on a normal surface. Using the twist formula and techniques from mapping class group theory, we prove that the form Q obtained after killing ker N is positive definite, and that its restriction to the absolute homology group of F is even whenever the Nielsen-Thurston graph of the monodromy automorphism is a tree. The form Q is computable in terms of the Nielsen-Thurston or the dual graph of the semistable reduction, as illustrated with several examples. Numerical invariants associated to Q are able to distinguish plane curve singularities with different topological types but the same spectral pairs or Seifert form. Finally, we discuss a generic linear germ defined on a superisolated surface with non-smooth ambient space.

19:30-20:30: Laura Starkston (UC Davis Mathematics, USA)
Unexpected fillings, singularities, and plane curve arrangements (Slides, Video)
Abstract: I will discuss joint work with Olga Plamenevskaya studying symplectic fillings of links of certain complex surface singularities, and comparing symplectic fillings with complex smoothings. We develop characterizations of the symplectic fillings using planar Lefschetz fibrations and singular braided surfaces. This provides an analogue of de Jong and van Straten's work which characterizes the complex smoothings in terms of decorated complex plane curves. We find differences between symplectic fillings and complex smoothings that had not previously been found in rational complex surface singularities.

June 24th:

17:00-18:00: Beata Gryszka (University of Cracow, Poland)
On some regularity condition (Slides, Video)
Abstract: We will present a theorem which says that if $\mathbb{K}$ is a field of characteristic zero, a function $f \colon \mathbb{K}^{n} \rightarrow \mathbb{K}$ has a rational representation and the restriction of $f$ to every vector plane contained in $\mathbb{K}^{n}$ is regular, then $f$ is regular at the origin. This theorem is a positive answer to the question of Wojciech Kucharz, which was formulated for a real closed field. During the talk we will also show that if $\mathbb{K}$ is uncountable and the restriction of $f \colon \mathbb{K}^{n} \rightarrow \mathbb{K}$ to every affine plane is regular, then $f$ is regular. In this theorem we do not have to assume that $f$ has a rational representation. In the case $\mathbb{K} = \mathbb{R}$, the theorem follows directly from the result obtained by J. Kollár, W. Kucharz and K. Kurdyka in 2017.

18:15-19:15: Arturo Giles Flores (University of Aguascalientes, Mexico)
On the 5th Whitney cone of a complex analytic curve (Slides, Video)
Abstract: For a germ of complex analytic variety (X,0), Whitney gave 6 possible definitions of tangent vectors, the sets of which define "tangent cones" and coincide with the tangent space when the germ is smooth. We will start with a quick review of how these spaces are built and the equisingularity data they carry. We will then present a procedure to calculate the 5th Whitney cone of a curve. As a byproduct we obtain bounds on the number of irreducible components of the cone and a set of numbers called auxiliary multiplicities which characterize biLipschitz equisingularity of curves. This is joint work with Otoniel N. Silva and Jawad Snoussi.

June 25th:

17:00-18:00: Christopher Heng Chiu (University of Vienna, Austria)
Embedding codimension of the space of arcs (Slides, Video)
Abstract: In this talk we aim to study the local geometry of arc spaces and relate it to the singularities of the underlying algebraic varieties. To that avail, we introduce a notion of embedding codimension that is applicable to the non-Noetherian setting. Our main result characterizes arcs whose generic point maps to the smooth locus as those with finite embedding codimension. We will then relate our work to the theorem of Drinfeld, Grinberg and Kazhdan as well as the study of Mather discrepancies in birational geometry. This is joint work with Tommaso de Fernex and Roi Docampo.

18:15-19:15: Daniel Duarte (Universidad Autónoma de Zacatecas, Mexico)
The module of Kähler high order differentials and the Hasse-Schmidt algebra (Slides, Video)
Abstract: It was recently proved by T. de Fernex and R. Docampo that the module of differentials of the algebra of Hasse-Schmidt derivations of a ring can be described in terms of the module of differentials of the ring. This result was then applied to find a projectivization of induced maps on jet schemes. In this talk, we explore the analogous statements for the module of high order differentials. This is joint work with Paul Barajas.
Past talks & mini-courses (2020-2021)
16 June 2021 at 17h: Beatriz Molina-Samper (UNAM, Mexico)
Nodal blocks, partial separatrices and dicritical components
Abstract: It is classically known that there are germs of codimension one foliations in ambient dimension three that do not have an invariant surface; the property they share is that they are dicritical foliations. Thus, the question of how transcendental the leaves of the foliation can be arises in these cases. Brunella's Alternative in a local version conjectures that each leaf must contain at least an analytic germ of curve passing through the origin. In this talk we introduce some of the ingredients that allow us to deal with this problem, and we also present the family of foliations for which we want to give an answer to this question.

9 June 2021 at 17h: Christian Muñoz Cabello (Universitat de València, Spain)
Singularities of frontals
Abstract: A smooth mapping $f\colon N^n \to Z^{n+1}$ is frontal if there exists a nowhere-vanishing $1$-form $\nu$ on $Z$ such that $f^*\nu=0$. Since frontals can be obtained as Legendrian projections of parametrized Legendre submanifolds, the problem of classifying frontals is equivalent to that of classifying Legendre submanifolds under Legendre equivalence. In this joint work with J.J. Nuño-Ballesteros and R. Oset-Sinha, we explore a more direct approach to the classification of frontals, based on the fact that the $\mathscr{A}$-orbit of any given frontal map is contained within the space of frontal maps. One of the consequences of this approach is that many of the classic results from Mather's theory of $\mathscr{A}$-equivalence can be adapted to the frontal case.

2 June 2021 at 17h: Farid Tari (ICMC-USP, São Carlos, Brazil)
On k-folding map-germs and hidden symmetries of surfaces in the Euclidean 3-space II
Abstract: Given any smooth surface $M$ in $\mathbb{R}^3$ (or a complex surface in $\mathbb{C}^3$) we associate, to any point on $M$, any integer $k>1$ and any plane, a holomorphic map-germ of the form $F^k(x,y)=(x,y^k, f(x,y))$. Map-germs of this kind are called $k$-folding map-germs. In these two talks we describe the interplay between the topology of $k$-folding map-germs and the extrinsic differential geometry of $M$. In the second session, we introduce the topological classification of $k$-folding map-germs exhibited by generic surfaces, and show how their occurrence relates to old and new robust features of $M$. This is a joint work with Guillermo Peñafort Sanchís (U. Valencia).

26 May 2021 at 17h: Guillermo Peñafort Sanchís (Universitat de València, Spain)
On k-folding map-germs and hidden symmetries of surfaces in the Euclidean 3-space I
Abstract: Given any smooth surface $M$ in $\mathbb{R}^3$ (or a complex surface in $\mathbb{C}^3$) we associate, to any point on $M$, any integer $k>1$ and any plane, a holomorphic map-germ of the form $F^k(x,y)=(x,y^k, f(x,y))$. Map-germs of this kind are called $k$-folding map-germs. In these two talks we describe the interplay between the topology of $k$-folding map-germs and the extrinsic differential geometry of $M$. In the first session, we introduce topological invariants that control the topological triviality of families of map-germs, and show how they are computed for $k$-folding mappings. This is a joint work with Farid Tari (ICMC-USP).

19 May 2021 at 17h: Nhan Nguyen (Basque Center for Applied Mathematics, Bilbao, Spain)
Link criterion for Lipschitz normal embedding of definable sets
Abstract: In this talk, we will present a link criterion for normal embedding of definable sets in o-minimal structures. Namely, we prove that given a definable germ $(X, 0)\subset (\mathbb{R}^n,0)$ with $(X\setminus\{0\},0)$ connected and a continuous definable function $\rho: (X,0) \to \mathbb{R}_{\geq 0}$ such that $\rho(x) \sim \|x\|$, then $(X,0)$ is Lipschitz normally embedded (LNE) if and only if $(X,0)$ is link Lipschitz normally embedded (LLNE) with respect to $\rho$. This is a generalization of Mendes-Sampaio's result for the subanalytic case.

12 May 2021 at 17h: Marcelo Escudeiro Hernandes (Univ. Estadual de Maringá, Brazil)
Relating analytic invariants of a plane branch and their semiroots
Abstract: In this talk we present relations among analytical invariants of irreducible plane curves and their semiroots. More specifically, we will explore the Tjurina number of a plane branch and the set of values of Kähler differentials of the local ring associated to the curve. This is a joint work with Marcelo Rodrigues Osnar de Abreu.

21 April 2021 at 17h: María Pe Pereira (Univ. Complutense de Madrid, Spain)
Moderately Discontinuous Algebraic Topology
Abstract: In the works [1] and [2] we develop a new metric algebraic topology, called Moderately Discontinuous Homology and Homotopy, in the context of subanalytic germs in R^n (with a supplementary metric structure) that satisfies the analogues of the usual theorems in Algebraic Topology: long exact sequences, the relative case, Mayer-Vietoris, Seifert-van Kampen for special coverings... This theory captures bilipschitz information, or in other words, quasi-isometric invariants. The typical examples to which it applies are subanalytic germs with the inner metric (the length metric induced by the euclidean metric) and with the outer metric (the restriction of the euclidean metric). A subanalytic germ is topologically a cone over its link, and the moderately discontinuous theory captures the different speeds, with respect to the distance to the origin, at which the topology of the link collapses towards the origin. In this talk, I will present the most important concepts in the theory and some results and applications obtained so far.
References: [1] (with J. Fernández de Bobadilla, S. Heinze, E. Sampaio) Moderately discontinuous homology. To appear in Comm. Pure App. Math. Available at arXiv:1910.12552 or at https://arxiv.org/pdf/1910.12552.pdf [2] (with J. Fernández de Bobadilla, S. Heinze) Moderately discontinuous homotopy. Submitted. Available at arXiv:2007.01538 or at https://arxiv.org/pdf/2007.01538.pdf

14 April 2021 at 17h: Hussein Mourtada (Institut de Mathématiques de Jussieu, Paris, France)
On the notion of quasi-ordinary singularities in positive characteristics: Teissier singularities and their resolution
Abstract: A singularity $(X,0)$ of dimension $d$ is quasi-ordinary with respect to a finite projection $p: (X,0)\to C^d$ if the discriminant of the projection is a normal crossing divisor. These singularities are at the heart of Jung's approach to resolution of singularities (in characteristic 0). In positive characteristic, they are not useful from the point of view of resolution of singularities, since their resolution problem is almost as difficult as the resolution problem in general. I will discuss a new notion of singularities, Teissier singularities, which are candidates to play the role of quasi-ordinary singularities in positive characteristic. This is a joint work with Bernd Schober.

24 Mar 2021 at 17h: Octave Curmi (Alfréd Rényi Institute of Mathematics, Budapest, Hungary)
A new proof of Gabrielov's rank Theorem
Abstract: This talk concerns Gabrielov's rank Theorem, a fundamental result in local complex and real-analytic geometry, proved in the 1970's. Contrasting with the algebraic case, it is not in general true that the analytic rank of an analytic map (that is, the dimension of the analytic-Zariski closure of its image) is equal to the generic rank of the map (that is, the generic dimension of its image). This phenomenon is involved in several pathological examples in local real-analytic geometry. Gabrielov's rank Theorem provides a formal condition for the equality to hold. Despite its importance, the original proof is considered very difficult. There is no alternative proof in the literature, besides a work from Tougeron, which is itself considered very difficult. I will present a new work in collaboration with André Belotto da Silva and Guillaume Rond, where we provide a complete proof of Gabrielov's rank Theorem, for which we develop formal-geometric techniques, inspired by ideas from Gabrielov and Tougeron, which clarify the proof. I will start with some fundamental examples of the phenomenon at hand, and expose the main ingredients of the strategy of this difficult proof.

17 Mar 2021 at 17h: Dachs-Cadefau (Martin Luther University Halle-Wittenberg, Germany)
Mixed multiplier ideals and equisingularity class
Abstract: Järvilehto in his thesis presented a formula on how to infer the topological type of a unibranched curve based on its associated jumping numbers. Later, Tucker presented an example showing that this cannot be done if we drop the unibranched condition. In this talk we study whether we can infer the topological type of a tuple of ideals from its associated jumping walls. From those results, one can infer some properties of the jumping walls.

10 Mar 2021 at 17h: André Belotto da Silva (Université Aix-Marseille, France)
Three dimensional Strong Sard Conjecture in sub-Riemannian geometry (Slides, Video)
Abstract: Given a totally nonholonomic distribution of rank two $\Delta$ on a three-dimensional manifold $M$, it is natural to investigate the size of the set of points $\mathcal{X}^x$ that can be reached by singular horizontal paths starting from a same point $x \in M$. In this setting, the Sard conjecture states that $\mathcal{X}^x$ should be a subset of the so-called Martinet surface of 2-dimensional Hausdorff measure zero. I will present a reformulation of the conjecture in terms of the behavior of a singular foliation. By exploring this geometrical framework, in a recent work in collaboration with A. Figalli, L. Rifford and A. Parusinski, we show that the strong version of the conjecture holds for three dimensional analytic varieties, that is, the set $\mathcal{X}^x$ is a countable union of semi-analytic curves. Next, by studying the regularity of the solutions of the set $\mathcal{X}^x$, we show that sub-Riemannian geodesics are all $C^1$. Our methods rely on resolution of singularities of surfaces, vector-fields and metrics; regularity analysis of Poincaré transition maps; and on a symplectic argument, concerning a transversal metric of an isotropic singular foliation.

03 Mar 2021 at 17h: Julie Decaup (UNAM, Cuernavaca, Mexico)
Simultaneous Monomialization
Abstract: In my talk, I will explain what simultaneous monomialization is and its relation with the resolution of singularities.

24 Feb 2021 at 17h: Roberto Tomas Villaflor Loyola (IMPA, Rio de Janeiro, Brazil)
Periods of algebraic cycles and Hodge locus (Slides, Video)
Abstract: The Hodge locus was introduced by Grothendieck in 1966 to study the Hodge conjecture in families (aka the Variational Hodge conjecture). Even for surfaces, where the Hodge conjecture is known to hold, its components are far from being well understood. In this special case the Hodge locus coincides with the classical Noether-Lefschetz locus, which was studied at the end of the 80's and the beginning of the 90's by Green, Voisin and Harris, among others. Their main tool was the use of infinitesimal variations of Hodge structures (IVHS). In this talk I will survey recent results about the computation of periods of algebraic cycles in hypersurfaces, their relation with IVHS and some applications to the study of the Hodge locus and the Variational Hodge conjecture.

17 Feb 2021 at 17h: José Seade Kuri (UNAM, Mexico)
On the boundary of the Milnor fiber for non-isolated singularities (Slides, Video)
Abstract: Let $f: (\mathbb C^{n+1},p) \to (\mathbb C,0)$ be a holomorphic function germ with critical point at $p$, set $V= f^{-1}(0)$, let $L_f = V \cap \mathbb S_\varepsilon$ be its link, and recall that $L_f$ determines fully the topology of $V$. The Milnor fibers of $f$ can be regarded as the family of local non-critical levels $F_t:= f^{-1}(t) \cap \mathbb B_\varepsilon$ with $t \ne 0$, which degenerate to the special fiber $F_0:= V \cap \mathbb B_\varepsilon$ as $t$ approaches $0$. There is a vast literature studying how this degeneration $F_t \leadsto F_0$ takes place. Simultaneously, as the $F_t$ degenerate to $F_0$, their boundaries $\partial F_t$ "converge" to the link $L_f = \partial F_0$. If $p$ is an isolated critical point of $f$, all the $\partial F_t$ are ambient isotopic to $L_f$. Yet, if $p$ is a non-isolated critical point, then the $\partial F_t$ with $t \ne 0$ are a family of real analytic manifolds converging to the link $L_f$, which now is singular. In this talk we study the degeneration $\partial F_t \leadsto L_f$. This is joint work with Aurelio Menegon and Marcelo Aguilar, and it springs from previous work by Randell, Siersma, Michel-Pichon-Weber, Némethi-Szilard and Fernández de Bobadilla-Menegon.

10 Feb 2021 at 17h: Alberto Castaño Domínguez (Universidad de Sevilla, Spain)
Hodge ideals of some free divisors (Slides, Video)
Abstract: Hodge ideals were recently introduced by Popa and Mustata to study, mainly with birational techniques, the Hodge filtration on the sheaf of meromorphic functions on a variety along a divisor. They measure the difference between its Hodge and pole order filtrations, the latter always containing the former. They are also a generalization of multiplier ideals, since the zeroth Hodge ideal is the adjoint ideal of the divisor. Up to now, most results dealt with isolated singularities; in this talk I will comment on a joint work with Ch. Sevenheck and L. Narváez Macarro, where we study and compute Hodge ideals for certain free divisors, by means of Hodge modules and D-module theory.

03 Feb 2021 at 17h: Roberto Giménez Conejero (Universitat de València, Spain)
Monodromy of germs of analytic functions without fixed points (Slides, Video)
Abstract: In this joint work with J.J. Nuño-Ballesteros and Lê Dung Tráng we prove that, given $f:(X,x)\rightarrow (\mathbb{C}, 0)$ such that $f\in\mathfrak{m}^2 O_{X,x}$, there is a geometric local monodromy of $f$ without fixed points, and we give an application of this fact in a broad context. A geometric monodromy appears every time we have a locally trivial fibration over $S^1$, say $f:U\rightarrow S^1$. Broadly speaking, it is a map of a fiber $F=f^{-1}(x_0)$ onto itself that is defined by taking $F$ to give a loop around $S^1$. This is the situation for $f:(X,x)\rightarrow (\mathbb{C}, 0)$ with $f\in\mathfrak{m}^2 O_{X,x}$ and the fibration induced by taking a small enough circumference around $0$; the monodromy in this case is called the local geometric monodromy. Finally, we use it to prove that, in a broad context, the critical points of a family of functions from a family of complex analytic sets cannot split along the family. This generalizes two theorems of the second coauthor, one stated for $\mathbb{C}^n$ instead of $X$ and the other for hypersurfaces, and gives an alternative proof of a result of A'Campo that the Lefschetz number is zero.

27 Jan 2021 at 17h: Bárbara Karolline de Lima Pereira (UFSCar, São Carlos, Brazil)
The Bruce Roberts Number of a Function on an Isolated Hypersurface Singularity (Slides, Video)
Abstract: Let $(X,0)$ be an isolated hypersurface singularity defined by $\phi\colon(\mathbb{C}^n,0)\to(\mathbb{C},0)$ and $f\colon(\mathbb{C}^n,0)\to\mathbb{C}$ such that the Bruce-Roberts number $\mu_{BR}(f,X)$ is finite. In this work we prove that $$\mu_{BR}(f,X)=\mu(f)+\mu(X\cap f^{-1}(0),0)+\mu(X,0)-\tau(X,0),$$ where $\mu$ and $\tau$ are the Milnor and Tjurina numbers, respectively, of a function or an isolated complete intersection singularity. We also prove that the logarithmic characteristic variety $LC(X,0)$ is Cohen-Macaulay; both results generalize [1]. This is a joint work with J.J. Nuño-Ballesteros (Universitat de València, Spain), B. Oréfice-Okamoto (UFSCar, Brazil) and J.N. Tomazella (UFSCar, Brazil).
References: [1] J.J. Nuño Ballesteros, B. Oréfice-Okamoto, J.N. Tomazella, The Bruce-Roberts number of a function on a weighted homogeneous hypersurface, Q. J. Math. 64 (2013), no. 1, 269-280.

20 Jan 2021 at 17h: Baldur Sigurðsson (UNAM, Cuernavaca, Mexico)
Newton nondegenerate Weil divisors in toric varieties (Slides, Video)
Abstract: We introduce Newton nondegenerate Weil divisors in toric affine varieties, present formulas for their geometric genus and canonical divisors, and provide conditions on their Newton polyhedron to be Gorenstein. We prove that if such a Weil divisor of dimension 2 is normal and Gorenstein, and the link is a rational homology sphere, then the geometric genus is given by the minimal path cohomology, a topological invariant. This is joint work with András Némethi.

13 Jan 2021 at 17h: Evelia García Barroso (Universidad de La Laguna, Spain)
Contact exponent and the Milnor number of plane curve singularities
Abstract: We investigate properties of the contact exponent (in the sense of Hironaka [3]) of plane algebroid curve singularities over algebraically closed fields of arbitrary characteristic. We prove that the contact exponent is an equisingularity invariant and give a new proof of the stability of the maximal contact. Then we prove a bound for the Milnor number and determine the equisingularity class of algebroid curves for which this bound is attained. We do not use the method of Newton's diagrams. Our tool is the logarithmic distance developed in [1]. This is a joint work with Arkadiusz Płoski (see [2]).
References: [1] García Barroso, E. and A. Płoski. An approach to plane algebroid branches. Rev. Mat. Complut., 28 (1) (2015), 227-252. [2] García Barroso, E. and A. Płoski. Contact exponent and Milnor number of plane curve singularities. Analytic and Algebraic Geometry, 3. T. Krasinski, Stanisław Spodzieja (Eds.), Lodz University Press (2019), 93-109. http://dx.doi.org/10.18778/8142-814-9.08 [3] Hironaka, H. Introduction to the theory of infinitely near singular points, Memorias del Instituto Jorge Juan 28, Madrid 1974.

16 Dec 2020 at 17h: Guillem Blanco (KU Leuven, Belgium)
Yano's conjecture
Abstract: In 1982, T. Yano proposed a conjecture about the generic $b$-exponents of an irreducible plane curve singularity. Given any holomorphic function $f : (\mathbb{C}^2, \boldsymbol{0}) \longrightarrow (\mathbb{C}, 0)$ defining an irreducible plane curve, the conjecture gives an explicit formula for the generic $b$-exponents of the singularity in terms of the characteristic sequence of $f$. In this talk, we will present a proof of Yano's conjecture.

09 Dec 2020 at 17h: Aurélio Menegon Neto (UF de Paraiba, João Pessoa, Brazil)
Lê's vanishing polyhedron for mixed functions (Slides, Video)
Abstract: We will talk about Lê's vanishing polyhedra for analytic maps and we will use them to prove a join theorem for mixed functions. This provides a tool for understanding the topology of some families of real analytic singularities, which is still a somewhat unexplored area in Singularity Theory. This is a joint work with José Luis Cisneros-Molina.

23 Nov - 02 Dec 2020 at 17h: Eva Elduque & Moisés Herradón (U. of Michigan / Louisiana State, USA)
Mini-course (4 sessions): Mixed Hodge Structures on Alexander Modules II
Abstract: This second half will consist of an overview of the construction of the mixed Hodge structure on Alexander modules using mixed Hodge complexes, together with a discussion of some of its desirable properties, such as its relation to other well-known mixed Hodge structures. We will see that the covering map $U^f \to U$ induces a mixed Hodge structure morphism $A_*(U^f;\mathbb Q)\to H_*(U;\mathbb Q)$. As applications of this fact, we can understand the mixed Hodge structure on the Alexander modules better, plus we can draw conclusions about the monodromy action on $A_*(U^f;\mathbb Q)$ that don't involve Hodge structures. For instance, we can show that this action is always semisimple on $A_1(U^f;\mathbb Q)$. Time permitting, we will also discuss the relation to the limit mixed Hodge structure in the case where $f$ is

18 Nov 2020 at 17h: Jose I. Cogolludo-Agustín (Universidad de Zaragoza, Spain)
Applications of Alexander Modules to the topology of curve complements (Slides, Video)
Abstract: In this talk I will present an accessible view of applications of the Alexander module of a plane curve complement to its topology. In particular, I will describe different strategies to use these modules to solve two problems: the Zariski pair problem and Serre's quasi-projectivity problem. The first one arises when trying to determine whether or not two plane curves that have the same degree, the same irreducible components, and the same topological type of singularities might have non-homeomorphic complements. The second one aims at deciding whether or not a given finitely presented group is the fundamental group of a quasi-projective variety (such as the complement of a plane curve). To illustrate the Zariski-pair type of problems, Rybnikov's famous example will be discussed, where two line arrangements with the same combinatorics are shown to have non-homeomorphic complements. As an example of Serre's problem, a characterization of the quasi-projective even Artin groups will be presented.

26 Oct - 11 Nov 2020 at 17h: Eva Elduque & Moisés Herradón (U. of Michigan / Louisiana State, USA)
Mini-course (6 sessions): Mixed Hodge Structures on Alexander Modules I
Abstract: This is a course about our recent paper arXiv:2002.01589v3 (joint with Christian Geske, Laurențiu Maxim and Botong Wang), on the construction and properties of a canonical mixed Hodge structure on the torsion part of the Alexander modules of a smooth connected complex algebraic variety. The course will roughly be divided in two halves. The first half of the course will cover the necessary background material. We will give a historical introduction to (pure and mixed) Hodge structures, and the techniques developed to study them, focusing mainly on Deligne's mixed Hodge complexes. For this, we will need to introduce some basic concepts about sheaves. We will also give an introduction to Alexander modules on smooth algebraic varieties. For our purposes, they are defined as follows: let $U$ be a smooth connected complex algebraic variety and let $f\colon U\to \mathbb C^*$ be an algebraic map inducing an epimorphism in fundamental groups. The pullback of the universal cover of $\mathbb C^*$ by $f$ gives rise to an infinite cyclic cover $U^f$ of $U$. The Alexander modules of $(U,f)$ are by definition the homology groups of $U^f$. The action of the deck group $\mathbb Z$ on $U^f$ induces a $\mathbb Q[t^{\pm 1}]$-module structure on $H_*(U^f;\mathbb{Q})$, whose torsion submodule we call $A_*(U^f;\mathbb Q)$. For the background in Hodge theory, we will follow Peters and Steenbrink's text Mixed Hodge Structures. For the sheaf theory, possible references include Maxim's Intersection Homology & Perverse Sheaves and Dimca's Sheaves in Topology.

21 Oct 2020 at 17h: Sorea (SISSA, Trieste, Italy)
The shapes of level curves of real polynomials near strict local minima
Abstract: We consider a real bivariate polynomial function vanishing at the origin and exhibiting a strict local minimum at this point. We work in a neighbourhood of the origin in which the non-zero level curves of this function are smooth Jordan curves. Whenever the origin is a Morse critical point, the sufficiently small levels become boundaries of convex disks. Otherwise, these level curves may fail to be convex. The aim of this talk is two-fold. Firstly, to study a combinatorial object measuring this non-convexity; it is a planar rooted tree. And secondly, we want to characterise all possible topological types of these objects. To this end, we construct a family of polynomial functions with non-Morse strict local minima realising a large class of such trees.

14 Oct 2020 at 17h: Irma Pallarés (BCAM, Bilbao, Spain)
The Brasselet-Schürmann-Yokura conjecture on $L$-classes (Slides, Video)
Abstract: The Brasselet-Schürmann-Yokura conjecture is a conjecture on characteristic classes of singular varieties, which predicts the equality between the Hodge L-class and the Goresky-MacPherson L-class for compact complex algebraic varieties that are rational homology manifolds. In this talk, we will illustrate our technique used in the proof of the conjecture by explaining the simple case of $3$-folds with an isolated singularity. This is a joint work with Javier Fernández de Bobadilla.

07 Oct 2020 at 17h: Edwin León-Cardenal (CIMAT, Zacatecas, Mexico)
Motivic zeta functions for ${\mathbb Q}$-Gorenstein varieties
Abstract: This is a joint work with Jorge Martín-Morales, Wim Veys & Juan Viu-Sos. The study of zeta functions of hypersurfaces allows one to determine some invariants of the singularity defining the hypersurface. A common strategy is to use a classical embedded resolution of the singularity, which gives a list of possible 'poles' from which some invariants can be read off. The list is usually very large, and a major and difficult problem (closely connected with the Monodromy Conjecture) is determining the true poles. In this work we propose to use a partial resolution of singularities to deal with this problem. We use an embedded Q-resolution, where the final ambient space may contain quotient singularities. This machinery allows us to give some explicit formulas for motivic and topological zeta functions in terms of Q-resolutions, generalizing in particular some results of Veys for curves and providing in general a reduced list of candidate poles.
This webminar is sponsored by Instituto de Matemática Interdisciplinar (IMI)
- Website designed by Juan Viu Sos - | {"url":"https://iberosing.github.io/","timestamp":"2024-11-02T02:51:10Z","content_type":"text/html","content_length":"127441","record_id":"<urn:uuid:25bf4a9c-7ddf-4857-9bdb-d328d425ac68>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00356.warc.gz"} |
Beliefs of Teaching Knowledge - Theoretical Background and Relevant Research
2 Theoretical Background and Relevant Research
2.2 Beliefs of Teaching Knowledge
Over the last few decades, a growing body of research has been conducted to elucidate teachers’ mathematical knowledge needed for teaching (Hill et al., 2008). However, most of the extant studies
have focused on the ways teachers’ knowledge and beliefs influence student performance (Hill et al., 2005; Rockoff, Jacob, Kane, & Steiger, 2011) and instructional practice (Ball, 1990; Ben-Peretz,
2011; Fennema &
Franke, 1992; Mapolelo & Akinsola, 2015; Wilkins, 2008). In particular, only a small portion of these studies have focused on teachers’ views and understanding of the knowledge needed for teaching
mathematics (Hatisaru, 2018; Mosvold & Fauskanger, 2013).
Research on teachers’ beliefs about teaching knowledge, while limited, has yielded some valuable findings (Ferguson & Brownlee, 2018; Fives & Buehl, 2008; Hofer, 2002; Mosvold & Fauskanger, 2013;
Sinatra & Kardash, 2004). Fives and Buehl (2008), for instance, described the concept of personal epistemology in the context of studies about teaching knowledge. The authors employed qualitative and
quantitative methods to examine pre-service teachers’ and practicing teachers’ beliefs about teaching knowledge and teaching ability. While
the authors provided valuable insights for developing a framework to conceptualize teachers’ beliefs about teacher knowledge, they called for further investigations using longitudinal and
cross-sectional methodologies to explore this topic further, as “such studies would indicate whether these beliefs are developmental in nature and change as one experiences the profession” (Fives &
Buehl, 2008, p. 172).
Drawing upon the work of Fives and Buehl (2008) and Philipp’s (2007) concept of belief, Mosvold and Fauskanger (2013) explored the epistemic beliefs that teachers have about the knowledge needed to
teach mathematical definitions. The researchers gathered pertinent data via focus-group interviews involving 15 pre-service and in-service teachers in Norway, which was subjected to content and
inductive analysis. Their findings revealed that, while some teachers believed that knowledge of definitions is an integral part of their mathematical knowledge for teaching, others opined that the
mathematical definitions are important for higher grades but are not necessary for lower-grade students. The participating teachers were, however, aware of the cultural differences in accepted
mathematical definitions.
In a subsequent study, Mosvold and Fauskanger (2014) focused specifically on the domain of mathematical horizon content knowledge (HCK).
In this context, the authors discussed the beliefs pre-service and practicing teachers have about the knowledge at the mathematical horizon for teaching. A significant finding that emerged from this
study was that teachers did not seem to emphasize HCK in their education and practice. When discussing aspects of broader content, participants tended to focus mainly on whether a particular
mathematical content was directly related to the curriculum for a specific grade level. This investigation illustrates difficulties encountered when investigating teachers’ beliefs about mathematical
knowledge for teaching, as this is a complex phenomenon that some teachers might find difficult to articulate.
In other studies, focus was primarily given to specific characteristics of teaching knowledge. For instance, Leikin and Zazkis (2010) interviewed secondary school teachers about their usage of
advanced mathematical knowledge acquired during undergraduate studies at colleges or universities. The authors adopted a qualitative approach based on grounded theory (Strauss & Corbin, 1990), aiming
to identify common themes in the teachers’ data. Their findings indicate that most teachers acknowledge the relevance of advanced mathematical knowledge but have difficulties in generating specific
problems or recalling situations in which advanced mathematics knowledge can be useful (Leikin & Zazkis, 2010). In particular, only a few participants were able to provide content-specific examples
for the purposes and advantages of their advanced mathematical knowledge for student learning, such as personal confidence, and the ability to make connections and respond to students’ questions
(Leikin & Zazkis, 2010).
Based on these findings, the authors called for a more articulate relationship between advanced mathematical knowledge and mathematical knowledge for teaching.
While the research briefly reviewed in the preceding sections is relevant to the understanding of how teaching knowledge influences the quality of teaching, how teaching knowledge functions in the
teaching-learning process of pre-service teachers during teacher training remains to be established. Extant studies on this topic suggest that teaching knowledge should be examined through the lens
of pre-service teachers’
perceptions of knowledge domains (Kilic, 2015), their self-perceptions of the tasks of teaching (O’Meara, Prendergast, Cantley, Harbison, &
O’Hara, 2019), and their views on and understanding of the certainty of teaching knowledge (Ferguson & Brownlee, 2018). Empirical evidence shows that teachers’ beliefs have a strong influence on the
way they approach students' specificities and learning needs (Givvin, Stipek, Salmon, & MacGyvers, 2001), comprehend mathematical knowledge (Cady & Rearden, 2007), and develop their identity as teachers (Ponte, 2011). Thus, there is a need to examine whether pre-service teachers
truly understand the teaching tasks and the knowledge needed to carry out these tasks during mathematics instruction.
Influenced by the works of Mosvold and Fauskanger (2013) and Fives, Lacatena, and Gerard (2015), the present study focuses on the pre-service teachers’ understanding of the knowledge defined as
relevant to the practice of mathematics teaching. Thus, its aim is to elucidate how pre-service teachers develop their understanding of the knowledge necessary to teach mathematics throughout teacher education.
Having presented the theoretical background of this study, it is now important to clarify the concept of belief and the reasons for adopting understanding as the focus of this study.
2.3 Clarification of the Terms: Belief and Understanding | {"url":"https://9pdf.net/article/beliefs-teaching-knowledge-theoretical-background-relevant-research.z1dl8798","timestamp":"2024-11-03T20:31:55Z","content_type":"text/html","content_length":"66832","record_id":"<urn:uuid:247e9f72-9887-4539-ad2a-aed6640f3094>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00890.warc.gz"} |
This website includes slides, tutorial sheets and solutions for the various courses I teach at Brunel University and Imperial College London.
Finite Element Analysis, Masters Level
A masters level course on finite element analysis (FEA) teaching students to write a 2D FEA solver from scratch. Lectures from 2019 are available:
Applied Fluid Mechanics and CFD, Third Year
A course which teaches students further fluid mechanics theory and how to solve problems with Computational Fluid Dynamics (CFD). My lectures cover the fundamental equations of fluid dynamics in both
integral and differential form, an overview of partial differential equations with some simple analytical solutions before explaining how to apply numerical techniques.
Programming, Second Year
A course which aims to teach best practice in engineering programming. This covers basic programming in MATLAB (loops, branching, flow control), functions, best-practice software design. Applications
include interpolation with Newton's divided difference and Lagrange polynomials, root finding, numerical integration and solving differential equations in 1D and 2D.
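To give a flavour of these applications (an illustrative Python sketch, not taken from the course materials), Newton's divided-difference interpolation fits in a few lines:

```python
import numpy as np

def divided_differences(x, y):
    """Return the Newton divided-difference coefficients for nodes (x, y)."""
    n = len(x)
    coef = np.array(y, dtype=float)
    for j in range(1, n):
        # In-place column update: entry i becomes f[x_{i-j}, ..., x_i]
        coef[j:] = (coef[j:] - coef[j - 1:-1]) / (x[j:] - x[:n - j])
    return coef

def newton_eval(coef, x_nodes, t):
    """Evaluate the Newton-form polynomial at t using Horner's scheme."""
    result = coef[-1]
    for c, xn in zip(coef[-2::-1], x_nodes[-2::-1]):
        result = result * (t - xn) + c
    return result

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(x)
c = divided_differences(x, y)
print(newton_eval(c, x, 1.5), np.sin(1.5))  # interpolant vs exact value
```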
Aerodynamics and CFD, Second Year
A second year course at Brunel on Aerodynamics with an introduction to fluid dynamics. We cover the fundamental equations of aerodynamics, turbulence and transition, boundary layer theory, finite
aerofoil theory, horseshoe vortices with Biot-Savart Law and supersonic flow.
A Complete 1D Navier-Stokes Solver on one page. A Jupyter notebook to explain the complete discretisation of the Navier Stokes equations in 1D, explaining in the simplest possible case (1D) case how
we can discretise our equations, issues with oscillations, boundary conditions and the fractional step pressure solver.
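The notebook itself should be consulted for the full fractional-step treatment; purely as a flavour of explicit 1D finite-difference time-stepping, here is a hedged sketch using the viscous Burgers equation (a common 1D stand-in for the momentum equation) with made-up parameters, not the notebook's own code. The central-difference advection term is exactly the kind of scheme that produces the oscillations mentioned above:

```python
import numpy as np

# Explicit solve of u_t + u u_x = nu u_xx on [0, 1) with periodic boundaries.
nx, nt, nu = 100, 500, 0.01
dx = 1.0 / nx
dt = 0.2 * dx**2 / nu                  # conservative step for the diffusive limit
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.sin(2 * np.pi * x) + 1.0        # illustrative initial condition

for _ in range(nt):
    up = np.roll(u, -1)                # u_{i+1} (periodic wrap)
    um = np.roll(u, 1)                 # u_{i-1}
    u = u - dt * u * (up - um) / (2 * dx) + nu * dt * (up - 2 * u + um) / dx**2

print(u.min(), u.max())
```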
Multi-Scale Modelling
Here are the notes for the continuum part of the multi-scale modelling course I teach. This is for masters students who have a background in a mathematical subject. Slides for the lectures, part one
notes and part two notes, as well as background notes.
The lectures are available:
• Part one video, introduction to the continuum, differential equations and numerical solutions.
• Part two video, review of part one, more differential equations and an overview of the steps which lead to the Navier-Stokes equation.
• A white-board derivation video of the Navier-Stokes equation considering the link to molecular systems.
The content includes:
• Where the continuum fits into the wider modelling hierarchy
• Understand the Continuum assumption
• Partial differential equations and numerical solutions
• Two dimensional vector fields
• The Navier-Stokes Equation
□ Assumptions that lead to it
□ Key terms and their meaning (with some extensions)
□ Simplifications and solutions
• Link to the molecular dynamics equations
• Numerical solutions to the Navier Stokes equation
and the aims for the course were as follows:
• State the Continuum assumption, specifically for continuous fields and how this underpins fluid dynamics
• Understand three dimensional fields, vector calculus and partial differential equations
• Be able to solve basic differential equations numerically
• State the Navier-Stokes Equation, key assumptions, the meaning of the terms and how to simplify and solve.
• Understand how to treat the various terms in a numerical solutions to the Navier-Stokes equation
• Understand where the continuum modelling fits into the hierarchy and links to the molecular and plant scales
Intro Course
In order to address the lack of general Python teaching here at Imperial, I put together and gave a three part introduction course through the HPC support here at Imperial. This class was aimed at
beginners and also for those who want to switch from Matlab to Python.
• Introduction to Python for scientific computing, 3/3/17 (Video) (Slides) (Solutions)
□ Motivation for using Python.
□ Introduction to programming in Python
□ Python concepts (lists, iterators, etc) and discussion of the differences to other languages.
□ Scientific libraries numpy and matplotlib.
□ Examples of usage for scientific problems.
• Further details of the Python language, 10/3/17 (Video) (Slides) (Solutions)
□ More on Python data structures: concepts like references, immutable, lists, data organisation with Dictionaries and numpy arrays.
□ Use of functions and design of interfaces.
□ Introduction to classes and objects.
□ Structuring a project, importing modules and writing tests.
□ Examples of usage for scientific problems.
• Python libraries, 17/3/17 (Video) (Slides) (Solutions)
□ Using Python to read files (ascii, binary, hp5) and plot.
□ Running parameter studies by calling executables repeatedly with subprocess.
□ Designing a basic Graphical User Interface.
□ Unit testing frameworks and version control.
□ Other libraries and how to wrap your own code from fortran, c++, etc
Feedback from the course was very positive, summarised here, although given the large volume of material and range of students' backgrounds, many students felt the pace was too fast.
HPC Summer School 2017
Given the interest in this course, I ran again as part of the summerschool. This was split over two days:
Feedback from the course was positive, summarised here. The modular and object oriented approach taught in these courses is the basis for open-source visualisation software, pydataview.
Rolls-Royce 2017
I was employed by Rolls-Royce to deliver a course over two days at their headquarters in Darsbury back in 2017. The feedback was very positive from this course, with 35 candidates rating the
"professional competence of the trainer" as 2.89 out of 3, a rating of 2.7 out of 3 for "how satisfied were you with the organisation of the training" and an "overall rating for the course" of 2.7
out of 3. The full summary is here. Please contact me if you would be interested in organising teaching.
How do you graph y=1/3x+2 by plotting points? | HIX Tutor
How do you graph #y=1/3x+2# by plotting points?
Answer 1
graph{y = 1/3 * x + 2 [-10, 10, -5, 5]}
x | 0 | -6 | 9
y | 2 | 0 | 5
Just take any three convenient values, substitute each for either x or y, and solve for the other value to get the other coordinate. Then plot the points and join them. Should be easy for you!
Warning: Do this only when the equation is linear.
Answer 2
To graph the equation y = (1/3)x + 2 by plotting points, you can choose values for x, plug them into the equation to find corresponding values for y, and then plot the points on the coordinate plane.
Here's how you can do it:
1. Choose values for x. You can choose any values you like, but it's often helpful to pick values that are easy to work with. Let's choose x = 0, x = 3, and x = -3.
2. Plug each value of x into the equation to find the corresponding values of y:
□ When x = 0: y = (1/3)(0) + 2 = 2
□ When x = 3: y = (1/3)(3) + 2 = 1 + 2 = 3
□ When x = -3: y = (1/3)(-3) + 2 = -1 + 2 = 1
3. Plot the points (0, 2), (3, 3), and (-3, 1) on the coordinate plane.
4. Draw a straight line through the points. This line represents the graph of the equation y = (1/3)x + 2.
You can also find additional points by choosing more values for x and repeating the process, but three points are typically sufficient to accurately sketch the graph.
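If you want to check your plot with a computer, here is a short illustrative Python/matplotlib snippet (not part of the original answer) that draws the same three points and the line:

```python
import numpy as np
import matplotlib.pyplot as plt

# The three points worked out above for y = (1/3)x + 2
xs = [0, 3, -3]
ys = [(1/3) * x + 2 for x in xs]   # gives 2, 3, 1

t = np.linspace(-6, 9, 100)        # a range wide enough to show the line
plt.plot(t, t / 3 + 2, label="y = (1/3)x + 2")
plt.scatter(xs, ys, color="red", zorder=3, label="plotted points")
plt.axhline(0, color="gray", lw=0.5)
plt.axvline(0, color="gray", lw=0.5)
plt.legend()
plt.show()
```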
| {"url":"https://tutor.hix.ai/question/how-to-do-you-graph-y-1-3x-2-by-plotting-points-8f9af9108c","timestamp":"2024-11-14T04:26:00Z","content_type":"text/html","content_length":"577413","record_id":"<urn:uuid:fbe75705-a824-437b-9780-d5c7cfb24ae6>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00013.warc.gz"} |
Motor Control University
The figure below represents a classical three-arm two-level inverter feeding a three-phase machine with star connection.
Modelling of the inverter
Transistors are considered as switches. The switches on each arm have complementary states. The control signal for the top switch of the arm, denoted \(s_k\) with \(k\in\{a,b,c\}\), represents the state of the \(k\) arm and is defined as follows: \[ s_{k}=\left\{ \begin{matrix} 1 & \textrm{if the high switch is on},\\ 0 & \textrm{if the high switch is off}. \end{matrix} \right. \] The binary state \(s_k\) of the upper switches produces the line-to-ground voltages \(v_{aM},v_{bM},v_{cM}\) such that: \[ \left[\begin{matrix} v_{aM}\\ v_{bM}\\ v_{cM} \end{matrix} \right] = V_{DC} \left[\begin{matrix} s_{a}\\ s_{b}\\ s_{c} \end{matrix} \right]. \label{eq:etataligne} \] So the three arms can generate three independent line-to-ground voltages. The states of the three switches give \(2^3\) possible configurations listed below:
\(s_a\) 0 1 1 0 0 0 1 1
\(s_b\) 0 0 1 1 1 0 0 1
\(s_c\) 0 0 0 0 1 1 1 1
In the case of a balanced star connected machine, we have: \[ \mathbf{1}_3^\intercal i_{abc} = 0, \] and: \[ \sum_{k}v_{k N} = 0, \quad k\in\{a,b,c\}. \] In addition, \[ v_{k N} = v_{k M} + v_{MN}. \] From (4) and (5) one has: \[ \sum_{k}(v_{k M} + v_{MN}) = 0, \qquad \sum_{k}v_{k M} = -3v_{MN}, \] and from (5) and (6), \[ v_{k N} = v_{k M} + v_{MN} = \frac{1}{3}\left(2v_{k M}-\sum_{j\neq k} v_{jM}\right), \quad j\in\{a,b,c\}. \] Finally, from (2) and (7) one obtains: \[ \underbrace{\left[\begin{matrix} v_{aN}\\ v_{bN}\\ v_{cN} \end{matrix} \right]}_{v_{abc N}} = \frac{V_{DC}}{3} \underbrace{\left[ \begin{matrix} 2 & -1 & -1\\ -1 & 2 & -1\\ -1 & -1 & 2 \end{matrix} \right]}_{M} \underbrace{\left[\begin{matrix} s_{a}\\ s_{b}\\ s_{c} \end{matrix} \right]}_{s_{abc}}. \label{eq:M32} \] Remark: Note that the three-phase voltages \(v_{abc N}\) correspond to the voltages \(v_{abc}\) to be applied to the motor in the rest of this course.
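As a quick numerical check of equation (8) (an illustrative Python sketch, not part of the original page), the phase voltages for all \(2^3\) switch configurations can be tabulated:

```python
import numpy as np
from itertools import product

V_DC = 1.0  # per-unit DC-bus voltage (illustrative value)
M = np.array([[ 2, -1, -1],
              [-1,  2, -1],
              [-1, -1,  2]])

# Enumerate the eight switch states s_abc and the resulting phase voltages v_abcN
for s in product((0, 1), repeat=3):
    v = V_DC / 3 * M @ np.array(s)
    print(s, np.round(v, 3))
# The two states (0,0,0) and (1,1,1) give zero phase voltage (the zero vectors).
```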
Pulse Width Modulation (PWM)
Here we consider the use of the inverter with PWM only. The study of the various forms of PWM has been the subject of much research over the decades (Holtz1992). The aim here is not to review the literature, but to describe the Space Vector Modulation (SVM) solution, in a very simple form to be implemented experimentally. By simply modifying an algorithm, SVM makes it possible to apply higher voltages to the machine phases. The model below represents the motor, associated with the inverter on the hardware side and the modulation on the software side.
Average model: For each ideal commutation, the duty cycle applied over a period \(T_{\rm PWM}\) is defined by: \[ \rho_k = \frac{t_{k,\rm ON}}{T_{\rm PWM}}, \] where \(t_{k,\rm ON}\) is the time for which \(s_k=1\). The duty cycle \(\rho_k\) takes values in the interval \([0,1]\). The average voltage value \(v_{k,M}\) is given by: \[ \langle v_{k,M}(t)\rangle_{T_{\rm PWM}} = \frac{1}{T_{\rm PWM}}\int_0^{T_{\rm PWM}}v_{k,M}(\tau)\,d\tau = \rho_k V_{\rm DC}. \] On average, the voltage \(v_{abc}\) applied to the motor by the PWM, considering the inverter above, will be: \[ v_{abc} = \frac{V_{DC}}{3}M \rho_{abc}. \]
Modulation with harmonic injection:
The inverter is used to apply a desired voltage \(v_{abc}\) to the motor, denoted \(v_{abc}^\#\) (see above), while only the line-to-ground voltages are directly controlled (instead of the neutral-to-phase voltages). A simple solution would be to invert the \(M\) matrix given in equation (8), in order to define the duty cycles to be applied to the machine as a function of the desired \(v_{abc}\) voltages. However, the matrix \(M\) is not invertible, because \(\det{M} = 0\). This means that adding the same constant to all the duty cycles has no influence on the voltage between phase and neutral.
A universal representation of PWM is called carrier-based PWM. The duty-cycle \(\rho_{abc}\) can be written as (Vidal2013, Zhou2002): \[ \rho_{abc}= \frac{1}{V_{\rm DC}}v_{abc}^\# + \mathbf{1}\,\lambda, \] where \(\lambda(t)\) is the injected harmonic and \(v_i^\#, i\in\{a,b,c\}\), are the fundamental signals to be applied to the motor. The injected harmonic is referred to as the zero sequence signal.
To ensure that the duty cycles \(\rho_{abc}\) are in the range \(0<\rho_{k}<1\), the zero sequence voltage can take values in the range (Bowes1997, Hava1998): \[ -\frac{\min(v_{abc}^\#)}{V_{\rm DC}} < \lambda < 1-\frac{\max(v_{abc}^\#)}{V_{\rm DC}}. \] The diagram of the three-phase PWM with carrier is shown in figure below:
A centred aligned PWM is considered. Note \(T_{\rm d}\), the dead time to avoid short circuits in the inverter. The centred aligned PWM prevents all switchings from taking place at the same instant.
Sine PWM is found by taking: \[ \lambda=\frac{1}{2}. \] The maximum voltage, without over-modulation, that can be reached with this configuration is \(V_{\rm max}= \frac{V_{\rm DC}}{2}.\)
Space Vector Modulation (SVM) is found by taking: \[ \lambda=\frac{1}{2}\left(1 -\frac{\min(v_{abc}^\#) +\max(v_{abc}^\#)}{V_{\rm DC}}\right). \] The maximum voltage, without over-modulation, that can be reached with this configuration is \(V_{\rm max} = \frac{V_{\rm DC}}{\sqrt{3}}.\)
Note that simply adding the functions \(\min\) and \(\max\) increases the achievable phase voltage by \(15.47\%\) (corresponding to the change from \(\frac{V_{\rm DC}}{2}\) to \(\frac{V_{\rm DC}}{\sqrt{3}}\)).
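For readers who want to experiment, the min/max injection is a one-liner; the following Python sketch (illustrative values only, not part of the original page) computes SVM duty cycles for a balanced reference at the \(V_{\rm DC}/\sqrt{3}\) limit:

```python
import numpy as np

def duty_cycles_svm(v_ref, V_DC):
    """Carrier-based SVM duty cycles via min/max zero-sequence injection.

    v_ref is the array of the three reference phase voltages v_abc^#.
    """
    lam = 0.5 * (1.0 - (v_ref.min() + v_ref.max()) / V_DC)  # zero-sequence term
    return v_ref / V_DC + lam

V_DC = 400.0                                    # illustrative DC-bus voltage
amp = V_DC / np.sqrt(3)                         # SVM linear-range limit
v_ref = amp * np.cos(np.array([0.0, -2 * np.pi / 3, 2 * np.pi / 3]))
print(duty_cycles_svm(v_ref, V_DC))             # all values stay within [0, 1]
```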
(Bowes1997) Bowes, S.-R., & Lai, Y.-S. (1997). The relationship between space-vector modulation and regular-sampled PWM. IEEE Transactions on Industrial Electronics, 44(5), 670–679. https://doi.org/
(Hava1998) Hava, A.-M., Kerkman, R.-J., & Lipo, T.-A. (1998). Carrier-based PWM-VSI overmodulation strategies: analysis, comparison, and design. IEEE Transactions on Power Electronics, 13(4),
674–689. https://doi.org/10.1109/63.704136
(Holtz1992) Holtz, J. (1992). Pulsewidth modulation-a survey. IEEE Transactions on Industrial Electronics, 39(5), 410–420. https://doi.org/10.1109/41.161472
(Vidal2013) Vidal, P.-E., Cailhol, S., Rotella, F., Berkoune, K., Llor, A., & Fadel, M. (2013). Generalized inverses applied to pulse width modulation for static conversion: A first study. 2013 15th
European Conference on Power Electronics and Applications (EPE), 1–10. https://doi.org/10.1109/EPE.2013.6634683
(Zhou2002) Zhou, K., & Wang, D. (2002). Relationship between space-vector modulation and three-phase carrier-based PWM: a comprehensive analysis [three-phase inverters]. IEEE Transactions on
Industrial Electronics, 49(1), 186–196. https://doi.org/10.1109/41.982262 | {"url":"https://ctrl-elec.fr/mcu_electric_motor_field_oriented_control_modulation.html","timestamp":"2024-11-02T18:58:15Z","content_type":"text/html","content_length":"16288","record_id":"<urn:uuid:e4bb1af7-d897-4370-808e-768067f355a6>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00029.warc.gz"} |
Laplace-Domain Hybrid Distribution Model Based FDIA Attack Sample Generation in Smart Grids
State Grid Shanghai Municipal Electric Power Company, Shanghai 200122, China
College of Computer Science and Technology, Shanghai University of Electric Power, Shanghai 201306, China
Author to whom correspondence should be addressed.
Submission received: 1 August 2023 / Revised: 24 August 2023 / Accepted: 29 August 2023 / Published: 30 August 2023
False data injection attack (FDIA) is a deliberate modification of measurement data collected by the power grid using vulnerabilities in power grid state estimation, resulting in erroneous judgments
made by the power grid control center. As a symmetrical defense scheme, FDIA detection usually uses machine learning methods to detect attack samples. However, existing detection models for FDIA
typically require large-scale training samples, which are difficult to obtain in practical scenarios, making it difficult for detection models to achieve effective detection performance. In light of
this, this paper proposes a novel FDIA sample generation method to construct large-scale attack samples by introducing a hybrid Laplacian model capable of accurately fitting the distribution of data
changes. First, we analyze the large-scale power system sensing measurement data and establish the data distribution model of symmetric Laplace distribution. Furthermore, a hybrid Laplace-domain
symmetric distribution model with multi-dimensional component parameters is constructed, which can induce a deliberate deviation in the state estimation from its safe value by injecting into the
power system measurement. Due to the influence of the multivariate parameters of the hybrid Laplace-domain distribution model, the sample deviation generated by this model can not only obtain an
efficient attack effect, but also effectively avoid the recognition of the FDIA detection model. Extensive experiments are carried out over IEEE 14-bus and IEEE 118-bus test systems. The
corresponding results unequivocally demonstrate that our proposed attack method can quickly construct large-scale FDIA attack samples and exhibit significantly higher resistance to detection by
state-of-the-art detection models, while also offering superior concealment capabilities compared to traditional FDIA approaches.
1. Introduction
As an advanced energy supply and management system, the smart grid integrates sensor, communication and control technologies to improve energy efficiency, reliability and security [ ], and combines modern Internet of Things (IoT) technology with the traditional power grid system to achieve effective and reliable power distribution and promote the development of clean energy. Nowadays, it is an indispensable infrastructure for various services such as the smart home [ ], smart health care [ ] and smart transportation [ ]. However, with the rapid development of the smart grid, its security threats are increasingly prominent. The smart grid is essentially a cyber-physical system (CPS), which can also be called an energy CPS (e-CPS), combining computing, communication and control (3Cs) capabilities with the physical world of the traditional grid. Despite the advantages of reliability and efficiency, the insufficient level of security measures leads to a greater threat situation. According to one survey, the economic losses caused by malicious network attacks to the U.S. economy in 2016 are estimated to be between USD 5.7 billion and 109 billion [ ], and many loopholes still exist in the development of the smart grid. Among them, FDIA is a serious threat. By tampering with the data in the smart grid, these attacks seriously damage the reliability and good operating performance of the grid, and may lead to serious consequences and security risks [ ].
In recent years, FDIA attacks have aroused widespread concern in the research community. By injecting false data into the smart grid, an attacker can interfere with the normal operation of the power grid, which may lead to energy imbalance, wrong decision making and control, misjudgment by the monitoring system, and damage to the reliability and resilience of the whole grid. For example, an FDIA attacker can tamper with sensor data to make the smart grid misjudge the current energy demand, resulting in energy distribution imbalance and power supply interruption [ ]. In addition, an FDIA may lead to errors in the control system of the smart grid, thereby affecting the safe and reliable supply of energy. In order to make an FDIA successful, the attacker needs an in-depth understanding of the smart grid, including its data characteristics, communication protocols, attack sparsity, attack specificity, the minimum attack vector, the need to remain unobservable, and the impact of attacks in the smart grid environment [ ]. It is, however, difficult for attackers to access the real-time topology configuration and structure of the power grid system in reality, which increases the difficulty of an attack and greatly reduces its effect.
Aiming at the threats posed by FDIA in the smart grid and their countermeasures, many researchers have invested considerable effort. They focus on detection and defense methods for FDIA and have put forward a variety of different solutions. For example, Huang et al. proposed a deep reinforcement learning FDIA detection method, which uses state attention to solve the problem of state feature extraction in existing reinforcement learning detection methods [ ]. Mahi et al. proposed a novel prediction-aided anomaly detection, using the sequence-to-sequence architecture of an autoencoder based on CNN-LSTM to combat FDIA [ ]. However, the above research deals with the detection of traditional, single FDIAs. Due to the limitations of this attack, in reality we do not often encounter the conditions of the traditional FDIA, but may instead suffer from novel and complex attack modes. Therefore, traditional FDIA technology has gradually lost its effect against the smart grid. With the progress of technology and the increasing complexity of the smart grid system, we are faced with new forms of FDIA, which are more hidden, more advanced and more destructive [ ]. In addition, we can understand the current research status in this field by surveying the literature.
In [ ], Tu et al. proposed an optimal attack strategy using Kullback–Leibler (KL) divergence, in which the attacker maintains a fixed stealth level and reduces the performance of the system by modifying the control input. In [ ], Zhang et al. designed a self-generated FDI attack on the measurement signal that remains invisible. In [ ], Xiao et al. proposed an optimal stealth attack on damaged sensors using the backward recursive Riccati equation. In [ ], Sushree et al. studied FDI attacks on sensors, actuators and physical systems, and designed bounded and unbounded errors of FDI (False Data Injection) attacks in some cases to improve their effectiveness. In [ ], Sun et al. proposed a closed-form expression for the optimal Gaussian sparse attack, and two target observation selection strategies were introduced to find the vulnerable observations in the system, in which the algorithm adopts different trade-offs between performance and calculation time. In [ ], Wang et al. proposed a new attack method to reduce the detection effect of an FDI detector based on SVM (Support Vector Machine). The core of this method is to construct attack vectors that cannot be detected by the SVM-based FDI detector, so as to avoid detection by the power system detector. Although the above methods have shown strong detection performance against FDIA attacks, most of them are based on machine learning methods. However, efficient machine learning detection models require large-scale attack samples as training sets, which are difficult to obtain in practical scenarios because FDIA attack nodes are difficult to accurately characterize, making it difficult to obtain FDIA cascading attack samples. Therefore, some mature machine learning algorithm models are difficult to train stably and have weak generalization ability against FDIA attacks without sufficient labeled samples as training data, greatly reducing the efficiency of FDIA attack detection models in smart grids.
Facing the aforementioned problems, we are thus motivated to design an efficient FDIA attack sample generation method by introducing a hybrid Laplace-domain distribution model, which makes the
following novel contributions:
We propose an efficient FDIA attack sample generation method. Our method can quickly construct large-scale attack training samples for FDIA detection models, thereby solving the problem of sparse
attack samples.
By analyzing the measured data changes of each node of the power system, an individual Laplacian distribution can be established sequentially according to the change of the sensing measurement data of each node. A Laplace-domain hybrid distribution can then be constructed to generate FDIA attack samples by combining multiple symmetric Laplace distribution models, which further improves the concealment of the attack samples.
We conduct a large number of experiments with different detection models to verify that our attack samples are more deceptive than other attack samples. The experimental results demonstrate that our method outperforms traditional FDIA attack sample construction schemes in terms of attack strength and covert capability, while guaranteeing a low computational complexity.
The rest of this paper is organized as follows. Section 2 reviews the mechanism of the false data injection attack. In Section 3, we describe the details of the proposed FDIA attack sample generation method. Comprehensive experiments are performed to evaluate the performance of the proposed scheme; the experimental results and corresponding discussions are presented in Section 4. Finally, Section 5 concludes the paper.
2. False Data Injection Attack
FDIA refers to an attacker's behavior of introducing false information into the smart grid to disrupt its normal operation or obtain illegal benefits by tampering with or falsifying the data of the power system. This attack may have a serious impact on the operational stability, energy supply reliability and data accuracy of the power system [ ]. The basic principle of constructing an FDIA is to change some state vectors by manipulating a group of measurement vectors of the AC power system [ ]. An attacker can change the real power on the bus to make a series of attack vectors, which are then added to the measurement data so that the estimated state vector is different from the actual state value. Correspondingly, the DC state estimation model can be expressed as
\[ z = Hx + e, \]
where \(z = \{z_1, z_2, \cdots, z_I\}\) is the measurement vector, \(x = \{x_1, x_2, \cdots, x_J\}\) is the state vector, \(e = \{e_1, e_2, \cdots, e_I\}\) is the measurement error vector, \(I\) and \(J\) denote the number of measurements and the number of state data, respectively, and \(H\) is the measurement Jacobian matrix.
The most commonly used state estimation method in power systems is the weighted least squares method, which takes as its objective function the weighted sum of squared differences between the measurement vector \(z\) and the estimated measurements \(H\hat{x}\), and solves for the estimate minimizing this function as the optimal state result:
\[ \hat{x} = \left(H^T W H\right)^{-1} H^T W z, \]
where \(W\) is a diagonal matrix and each of its elements is equal to the inverse of the corresponding measurement accuracy.
Since the measurement data in the power grid are usually subject to incompleteness and anomalies, state estimation is necessary to accurately and effectively monitor the state information and provide support for system safety assessment [ ]. In the process of state estimation, bad data with large errors will cause the calculated state estimate to deviate from the real situation, which seriously affects the judgment of the control center operator on the system state. Researchers have adopted a range of approaches to detect, process and eliminate bad data. In general, the largest normalized residual (LNR) test is commonly used for bad data detection (BDD) in state estimation. The residual \(r\) is defined as
\[ r = z - H\hat{x}. \]
In order to make the injected attack vector invisible, we assume that the measurement error \(e\) follows an ideal normal distribution, and we use \(a = \left[a_1, a_2, \cdots, a_m\right]^T\) to represent the FDIA vector injected by the attacker into the measured values. The actual measurement data are then \(z_a = z + a\), and the error vector of the state variables caused by the FDIA is \(c = \left[c_1, c_2, \cdots, c_n\right]^T\). At this time, the estimated state variable is \(x_a = \hat{x} + c\), and the residual after the attack can be expressed as
\[ r_a = z_a - Hx_a = z + a - H(\hat{x} + c) = z - H\hat{x} + a - Hc. \]
If \(a = Hc\), the following formula holds:
\[ r_a = z_a - Hx_a = z - H\hat{x} = r \leqslant \tau, \]
where \(\tau\) is the threshold value of the LNR test. It can be seen that, when the above condition is met, the FDIA can successfully pass the LNR test, thus causing changes and losses to the power system state estimation. However, attackers in this mode must understand the internal structure of the power grid system and its real-time topology configuration, and also need to find highly sparse attack vectors that meet specific conditions, making the attack rather cost-consuming.
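As a quick numerical illustration of why the condition \(a = Hc\) defeats residual-based BDD, the following is a self-contained Python sketch with randomly generated stand-in values (not data from the paper); it shows the residual is unchanged by the attack:

```python
import numpy as np

rng = np.random.default_rng(0)
I, J = 8, 4                              # measurements and states (illustrative sizes)
H = rng.normal(size=(I, J))              # stand-in measurement Jacobian
x = rng.normal(size=J)                   # true state
z = H @ x + 0.01 * rng.normal(size=I)    # noisy measurements

def wls_estimate(H, z):
    # Unweighted least squares (W = identity) for simplicity
    return np.linalg.solve(H.T @ H, H.T @ z)

c = rng.normal(size=J)                   # desired state deviation
a = H @ c                                # stealthy attack vector: a = Hc
z_a = z + a

r  = np.linalg.norm(z   - H @ wls_estimate(H, z))
ra = np.linalg.norm(z_a - H @ wls_estimate(H, z_a))
print(r, ra)                             # identical residual norms: the attack passes BDD
```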
3. Proposed Method
3.1. The Framework of Proposed Scheme
The primary objective of the proposed framework is to generate attack samples by closely observing and fitting the distribution of the original data. Firstly, the interactive data from various
sensors within the virtual power plant are collected and processed, and then utilized to calculate the power flow, while incorporating standard Laplace distribution noise to simulate the power system
measurement data. Subsequently, a Laplace-domain hybrid distribution model is established to further simulate the measurement data of the power system. This hybrid model can be solved iteratively
based on the changing trends observed in the measurement data to effectively capture the inherent characteristics of the data. Furthermore, the solved mixed model is used to generate the
corresponding attack vector, which is then injected into the measurement data to change the state estimation of the power system, and the proposed Laplace-domain hybrid distribution model can be
further evaluated and corrected by iteratively using different detection models. The complete framework of the proposed attack sample generation scheme is shown in Figure 1.
3.2. Laplace Distribution and Observation
Laplace distribution, also known as double-exponential distribution, is a continuous probability distribution that is widely employed in various fields, including statistics, signal processing and
power system analysis. The Laplace distribution is characterized by its symmetric bell-shaped curve and usually exhibits heavier tails than the normal distribution [ ]. This heavy-tailed property makes the Laplace distribution particularly suitable for modeling data with outliers or extreme values. The Laplace distribution is usually defined by two parameters: the location parameter and the scale parameter, where the former determines the center of the distribution and the latter controls the spread or dispersion. Its probability density function can be written as
\[ f(x) = \frac{1}{2s} e^{-\frac{|x-\mu|}{s}}, \]
where \(x\) is the sample data, \(s\) is the scale parameter and \(\mu\) is the position parameter; the position parameter \(\mu\) determines the center of the distribution, and the scale parameter \(s\) controls its dispersion.
Based on the definition of the above Laplace standard distribution, we analyze the IEEE 14- and IEEE 118-node system data. Due to the fact that different data nodes in the IEEE node system represent
different power system devices, their temporal data have different distribution states. We randomly select the data from four nodes 1, 2, 4, 11 in the IEEE 14-node system and fit their distributions,
as shown in Figure 2. From the figure, we can observe that the actual data distributions for different nodes are highly consistent with standard Laplace distributions with different
parameters. In other words, the data of different nodes in the IEEE node systems almost all conform to the Laplace distribution, although their distribution parameters may differ. Therefore, we are confident that, if multiple Laplacian models with different parameters are combined into a hybrid Laplacian distribution model to fit the interference errors of the measurement data, it is easy to confuse existing FDIA detection models by adding the above interference errors to normal samples, because the distribution of the generated attack samples is approximately the same as that of the normal samples.
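This kind of per-node check is easy to reproduce; the following sketch (with synthetic stand-in data, since the actual node measurements are not included here) fits a Laplace law with SciPy and compares it against a normal fit:

```python
import numpy as np
from scipy.stats import laplace, norm

# Placeholder for the per-node measurement changes; in practice these would be
# the differences of the time series recorded at one bus of the test system.
node_changes = laplace.rvs(loc=0.0, scale=0.8, size=5000, random_state=1)

mu, s = laplace.fit(node_changes)            # MLE: location and scale parameters
print(f"location = {mu:.3f}, scale = {s:.3f}")

# Compare log-likelihoods of Laplace vs normal fits for the same node
ll_lap = laplace.logpdf(node_changes, mu, s).sum()
ll_nrm = norm.logpdf(node_changes, *norm.fit(node_changes)).sum()
print(ll_lap > ll_nrm)                       # heavier tails favour the Laplace fit
```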
3.3. Hybrid Laplace Distribution Model
According to the analysis in Section 3.2, the sensing data of different nodes in the power system generally conform to the standard Laplace distribution, but their parameters may be different. Therefore, a single Laplace distribution model cannot fit the sensing data distribution of all nodes well. Considering the above problem, we design a Laplace-domain hybrid distribution by combining multiple single standard Laplace distribution models [ ]. The proposed hybrid Laplace distribution model is a probability model based on the standard Laplace distribution. It is used to simulate the sensing data distribution in a power system by linearly combining multiple Laplacian distributions through weighting functions. The mathematical modeling process can be defined as follows.
\[ g(x) = \sum_{i=1}^{k} b_i \frac{1}{2 s_i} e^{-\frac{|x-\mu_i|}{s_i}}, \]
where \(b_i \geq 0\) and \(\sum_{i=1}^{k} b_i = 1\); \(s_i\) and \(\mu_i\) represent the component scale and mean value of the \(i\)-th component distribution in the linear hybrid Laplacian distribution, respectively, and \(b_i\) is a weight parameter, \(i = 1, 2, \cdots, k\). Accordingly, the parameter vector of the density function of the hybrid Laplacian distribution can be written as
\[ \vartheta_k = \left( b_1, b_2, \cdots, b_k, s_1, s_2, \cdots, s_k, \mu_1, \mu_2, \cdots, \mu_k \right). \]
Correspondingly, the mixture density above can be further written in parametric form as
\[ g(x \mid \vartheta_k) = \sum_{i=1}^{k} b_i f_i(x, \mu_i, s_i). \]
Furthermore, according to the above established hybrid Laplace distribution model, we need to solve it to obtain the optimal model parameters. Notably, within this model, the measurement data of each
node is addressed individually, and thus the characteristics of power injection changes for each node need to be fitted separately. Moreover, a synchronous injection attack can be utilized to
intentionally obfuscate the detection and identification model. Correspondingly, the solution process of the proposed hybrid distribution model can be described as follows.
Firstly, determine the initial values of the model parameters according to the characteristics of the sensing measurement data; the model parameter vector is then
\[ \vartheta_k^{(0)} = \left( b_1^{(0)}, b_2^{(0)}, \cdots, b_k^{(0)}, s_1^{(0)}, s_2^{(0)}, \cdots, s_k^{(0)}, \mu_1^{(0)}, \mu_2^{(0)}, \cdots, \mu_k^{(0)} \right). \]
Correspondingly, the posterior probability of the samples \(X_1, X_2, \cdots, X_n\) (with \(X_i \in f(s_i^{(0)}, \mu_i^{(0)})\)) under this initial condition can be expressed as:
\[ p_{tj}^{(0)} = \frac{g\left(x_t, b_j^{(0)}, \mu_j^{(0)}, s_j^{(0)}\right)}{\sum_{i=1}^{k} g\left(x_t, b_i^{(0)}, \mu_i^{(0)}, s_i^{(0)}\right)}. \]
The posterior probability of each component of the linear mixture is calculated iteratively: for each component, the probability density of the sample data \(X_i\) belonging to that component is calculated by using the initialized parameters \(s_i\) and \(\mu_i\), and the posterior probability is obtained by multiplication with the prior probability \(b_i\). The posterior probabilities meet the normalization condition, that is, \(\sum_{j=1}^{k} p_{tj}^{(0)} = 1\). For any group of weights with \(\sum_{j=1}^{k} b_j = 1\), the assignment of the samples to the \(k\) components with \(p_{tj}^{(0)}\) can be completed sequentially under the initial value \(\vartheta_k^{(0)}\). Subsequently, the parameters of each component distribution can be obtained by the expectation algorithm:
\[ b_j^{(1)} = \frac{1}{n}\sum_{t=1}^{n} p_{tj}^{(0)}, \qquad \mu_j^{(1)} = \frac{\sum_{t=1}^{n} p_{tj}^{(0)} x_t}{\sum_{t=1}^{n} p_{tj}^{(0)}}, \qquad s_j^{(1)} = \frac{\sum_{t=1}^{n} p_{tj}^{(0)} \left(x_t - \mu_j^{(1)}\right)^2}{\sum_{t=1}^{n} p_{tj}^{(0)}}, \]
where \(b_j^{(1)}\), \(\mu_j^{(1)}\) and \(s_j^{(1)}\) are the component weights, component mean values and component scales updated in the first iteration, respectively. Meanwhile, \(\vartheta_k^{(1)}\) is an estimate obtained from the known \(\vartheta_k^{(0)}\), not the maximum likelihood estimate of the hybrid distribution; \(\vartheta_k^{(1)}\) is the expected result of the first iterative separation.
In order to find the best model parameters, we need to introduce the maximum likelihood estimation (MLE) over the samples \(X_1, X_2, \cdots, X_n\):
\[ L(\vartheta_k) = \prod_{i=1}^{n} \sum_{t=1}^{k} g\left(x_i, b_t, \mu_t, s_t\right). \]
In order to maximize the likelihood function \(L(\vartheta_k)\) under the condition \( p_{tj} = \frac{g(x_t, b_j, \mu_j, s_j)}{\sum_{i=1}^{k} g(x_t, b_i, \mu_i, s_i)} \), we use the derivative method and set the derivative to zero to solve for the corresponding \(s_j\) and \(\mu_j\).
Finally, after \(m\) rounds of iteration, the results of round \(m+1\) can be obtained, that is, the solutions for \(b_j^{(m+1)}\), \(\mu_j^{(m+1)}\) and \(s_j^{(m+1)}\):
\[ b_j^{(m+1)} = \frac{1}{n}\sum_{t=1}^{n} p_{tj}^{(m)}, \qquad \mu_j^{(m+1)} = \frac{\sum_{t=1}^{n} p_{tj}^{(m)} x_t}{\sum_{t=1}^{n} p_{tj}^{(m)}}, \qquad s_j^{(m+1)} = \frac{\sum_{t=1}^{n} p_{tj}^{(m)} \left(x_t - \mu_j^{(m+1)}\right)^2}{\sum_{t=1}^{n} p_{tj}^{(m)}}. \]
In this iterative algorithm for the maximum likelihood estimation of the hybrid model parameters, the likelihood function is monotonically increasing, i.e., \( L(\vartheta_k^{(m+1)}) \geqslant L(\vartheta_k^{(m)}) \). This means that a maximum point of \(L(\vartheta_k)\) can always be found in the iteration process, and a corresponding threshold \(\varepsilon\) can generally be given. When \( L(\vartheta_k^{(m+1)}) - L(\vartheta_k^{(m)}) \leqslant \varepsilon \), the likelihood function has reached its maximum, and accordingly the iteration should stop to obtain the maximum likelihood estimates of the parameters.
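The iteration above can be sketched directly in NumPy. Two illustrative choices in this hedged sketch (not the authors' code): the mean update uses the responsibility-weighted average as in the text, while the scale update uses the standard Laplace maximum-likelihood form (weighted mean absolute deviation) rather than the squared form written above:

```python
import numpy as np

def laplace_pdf(x, mu, s):
    return np.exp(-np.abs(x - mu) / s) / (2.0 * s)

def em_laplace_mixture(x, k=3, iters=100, eps=1e-4, seed=0):
    """EM for a k-component Laplace mixture; returns (weights, means, scales)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    b = np.full(k, 1.0 / k)                     # component weights b_j
    mu = rng.choice(x, size=k, replace=False)   # initial means mu_j
    s = np.full(k, x.std() + 1e-6)              # initial scales s_j
    prev_ll = -np.inf
    for _ in range(iters):
        # E-step: responsibilities p_tj, shape (n, k)
        dens = b * laplace_pdf(x[:, None], mu, s)
        p = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weight and mean updates as in the text; Laplace MLE scale update
        Nj = p.sum(axis=0)
        b = Nj / n
        mu = (p * x[:, None]).sum(axis=0) / Nj
        s = (p * np.abs(x[:, None] - mu)).sum(axis=0) / Nj
        ll = np.log(dens.sum(axis=1)).sum()     # log-likelihood for the stopping rule
        if ll - prev_ll <= eps:
            break
        prev_ll = ll
    return b, mu, s

rng = np.random.default_rng(1)
x = np.concatenate([rng.laplace(-2.0, 0.5, 3000), rng.laplace(1.5, 1.0, 2000)])
print(em_laplace_mixture(x, k=2))
```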
After successfully solving the hybrid Laplace distribution model using the original measurement data from the system, the model is utilized to capture the changing patterns in the measurement data of
each node. Subsequently, an attack vector corresponding to each node can be incorporated into the measurement data of the power system. This manipulation alters the power injection quantities,
thereby indirectly impacting the state estimation process and potentially leading to detrimental consequences for the power system.
4. Experimental Results and Discussions
4.1. Attack Data Generation
In our experiment, the test data are from the IEEE 14-node and IEEE 118-node test systems [ ]. We collect the real load data of the New York Independent System Operator (NYISO) from 1 January 2020 to 1 May 2022. Notably, we collect more than two years of data because the data in this time
range include seasonal changes throughout the year, which help to comprehensively understand long-term load patterns and trends, including different seasons, weather conditions and the potential
impact of energy consumption caused by various factors. After obtaining the data, we simulate the measurement data of the power system through the load flow calculation of the load data and further
obtain the load information of each bus state variable and each node in the IEEE 14-node and IEEE 118-node test systems. The nodes in the system are regarded as different sensor data sources
monitoring the same system.
After completing the modeling of the hybrid Laplace distribution model, we solve the model parameters and fit the changes of the measurement data of the power system. Accordingly, the generation of
attack vectors can be simulated to change the state estimation of the system and destroy the stability and security of the power system.
Figure 3 is a comparison of the normal measurement data variation distribution and the generated measurement data variation distribution on the IEEE 14-node test system [ ], where the abscissa is the number of data samples and the ordinate is the data variation. In order to assess the threat degree of the attack and compare it with other attack methods, we divide the attack vectors into different attack strengths according to the method in [ ]: a ratio of the average power injection deviation \(c\) of less than 10% corresponds to weak attacks, a ratio of \(c\) greater than 10% and less than 30% to moderate attacks, and a ratio of \(c\) greater than 30% to strong attacks. Correspondingly, the attack strength can be calculated as
\[ c = \frac{\sum_{i=1}^{n} \left(c_i - x_i\right)}{m} \times 100\%. \]
In addition, to avoid the bad data detection (BDD) of the power system and improve the concealment of the attack samples, the error vector \(c\) should meet
\[ c = H^{+} a, \]
where \(c_i\) is the power error vector injected at the \(i\)-th node, \(H^{+}\) is the generalized inverse of the Jacobian matrix, \(a\) is the attack vector injected into the measurement data, \(m\) is the size of the data dimension, and \(n\) is the number of samples of the injection attack.
4.2. Experimental Setup
In this section, we introduce the hyper-parameters, the training and testing sets, and the corresponding environment settings. All simulations are performed on an Intel Core i7-8750H CPU, a GTX 1050 Ti GPU and 8 GB of RAM. Power flow calculation and state estimation of the data are performed in Matlab using Matpower, while the establishment and solution of the hybrid Laplace distribution model are performed in Python.
In addition, the number of components of the hybrid distribution model is set to 3, and then the number is changed during experimental comparison. The number of iterations for solving the mixed model
is set to 100, and the threshold value for the absolute difference of the likelihood function is set to 0.0001. The noise error of the simulated measurement data from the power flow calculation is set to 0.25, using Gaussian noise with zero mean and a standard deviation of 1.
In each attack case, we generate 15,000 pieces of data, 7500 of which are normal data and 7500 are attack data. We label the normal data as 0 and the attack data as 1 to facilitate subsequent
experiments. Further, the proportions of the test set, validation set and training set are 0.2, 0.3 and 0.5, respectively, which are used to train and evaluate the detection models. Each data sample contains the active injection power of each node, to measure the impact caused by the injection power error.
4.3. Experimental Results and Discussions
After the attack samples are generated in our experiment, we compare them with the attack samples generated by the traditional FDIA mode. The comparisons use different popular deep learning detection
models for detection. In addition, four metrics—accuracy, precision, recall and \(F_1\) score—are used as the evaluation indicators of our output results in the experiment, which can be defined as follows [ ]:
\[ \text{Accuracy} = \frac{TP + TN}{TP + FP + TN + FN}, \]
where \(TP\), \(FP\), \(TN\) and \(FN\) are true positive, false positive, true negative and false negative, respectively. For the \(F_1\) score, the formula is
\[ F_1 = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}, \qquad \text{Precision} = \frac{TP}{TP + FP}, \qquad \text{Recall} = \frac{TP}{TP + FN}. \]
Obviously, the higher the accuracy and \(F_1\) score, the worse the concealment of the attack sample, and the easier it is to be detected by the detection model.
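For completeness, these metrics can be computed directly from the confusion counts; the following is a small self-contained Python sketch with illustrative label vectors (not the paper's experimental data):

```python
import numpy as np

def fdia_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 for binary labels (1 = attack)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    acc = (tp + tn) / (tp + fp + tn + fn)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1

# Illustrative labels: 0 = normal sample, 1 = attack sample
y_true = [0, 0, 1, 1, 1, 0, 1, 0]
y_pred = [0, 1, 1, 0, 1, 0, 1, 0]
print(fdia_metrics(y_true, y_pred))
```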
We carry out a series of experiments over the IEEE 14- and IEEE 118-node systems to demonstrate the performance of our proposed attack sample generation scheme. Two different FDIA detection
schemes—a CNN-based detection scheme [ ] and an LSTM-based detection scheme [ ]—are used to give the testing results. Three attack levels (weak, moderate, strong) and nine attack strengths,
\(c = 2\%, 5\%, 10\%, 15\%, 20\%, 25\%, 30\%, 40\%, 50\%\), are employed to provide the comparisons. The corresponding experimental results are shown in Table 1, Table 2, Table 3 and Table 4, where Table 1 and Table 2 present the testing results over the IEEE 14-node system, while Table 3 and Table 4 show the testing results over the IEEE 118-node system.
From these tables, we can observe that our scheme can obtain lower detection precision, \(F_1\) score and recall values. To be specific, when the CNN-based FDIA detection scheme is performed over the IEEE 14-node system, for the weak attack strength of 2%, our scheme can obtain an approximate 21% reduction in precision, 16% in recall and 20% in \(F_1\) score. For the strong attack strength of 40%, our LMM-FDIA scheme can still obtain an approximate 9% reduction in precision, 14% in recall and 14% in \(F_1\) score. Similarly, for the IEEE 118-node system, our scheme can also achieve a comparable performance improvement. This implies that our method has a stronger anti-detection capability and can more effectively bypass the detection model to achieve the attack effect. In fact, this result can be easily explained as follows. Because our constructed hybrid Laplace distribution can well simulate a disturbance model that is consistent with the distribution of the original samples, the constructed attack samples can thus match the distribution model of the original samples almost perfectly.
In order to better compare our generated attack samples and the attack samples generated by traditional FDIA, we use the accuracy as the evaluation metric; the experimental results are shown in Figure 4, where the abscissa is the attack strength and the ordinate is the accuracy. As can be seen from these figures, the accuracy of our method is always lower than that of FDIA when the attack intensity
is between 5% and 40%, which means that the attack samples generated by our method are more covert and confusing than traditional FDIA samples, and also implies that our scheme is more difficult to
be detected by the detection model. In addition, we can find an interesting phenomenon that the performance under low attack strength is close to the performance under high attack strength. This is
mainly because, under low attack strength, our generated attack samples and traditional FDIA samples are both difficult to detect due to the scarcity of attack samples, while for high attack
strength, both are easily detected due to the increase in the number of attack samples. Nevertheless, our scheme can still obtain a superior performance compared to traditional FDIA samples.
To gain more insight, we discuss the influence of different model component parameters through a series of experiments. We set the component parameters from 1 to 6, and use the CNN-based and LSTM-based detection models and the \(F_1\) score as measurements. The corresponding experimental results are shown in Figure 5. It can be seen from the results that, when the component parameter is set to 1 or 2, the generated attack samples are easy to detect, which demonstrates that the generated attack samples do
not fit the original sensing data well. When the component parameter is set to 3, the effect of the attack samples is greatly improved and they are difficult for the detection model to recognize, indicating that the characteristics of the data distribution can be well fitted at this time. When the component parameter is set to 4, 5 or 6, the impact on the attack samples gradually stabilizes; the computational burden on memory and runtime, however, significantly increases. Consequently, a smaller parameter can be suggested to maintain the desired effect on attack samples while minimizing computational costs.
Furthermore, we also demonstrate the effect through a qualitative comparison. We compare different FDIA attack methods in Table 5. Most of the FDIA schemes in this table utilize the traditional random disturbance generation method. We compare them in terms of
concealment, model structure complexity, operational costs and runtime. Compared to some existing FDIA attack methods, using hybrid Laplacian models to construct attack samples can more easily bypass
FDIA detection models; it does not need knowledge of the complex internal structure of the power system, and it has a low operation cost and model complexity, as well as high time efficiency. This is mainly because our
model directly constructs a hybrid Laplacian model by combining several individual Laplacian distributions and then utilizes large-scale data samples to train optimized parameters. This makes the
model construction and parameter optimization process simpler and easier to implement. In addition, we further analyze the construction of the model through parameter discussions, fundamentally
simplifying the model structure, thereby greatly reducing the process and complexity of parameter optimization. This is also why our model can demonstrate significant advantages in the qualitative comparison.
4.4. Impact of Noise Error
In order to evaluate the robustness of our attack sample generation method, we conduct a series of experiments to observe the impact of noise errors on the power system measurements. Real power
system measurements often suffer from noise errors, and it is important to understand how these errors affect our method. To simulate these errors [ ], Gaussian noise with varying levels of noise variance is injected into the data of each node in the power system. We test the performance of the two kinds of attack samples using the CNN-based and LSTM-based detection models over the IEEE 14-node test system and present the results in Figure 6.
In this experiment, the noise variance distribution of the measurement data is set to 0.25, 0.35, 0.45, 0.55, respectively. From the figure, it is evident that the accuracy of all models decreases
with the noise levels increasing. This decline in accuracy can be attributed to the increased difficulty in distinguishing between normal and damaged data as the noise variance grows. Consequently,
noiseless data is more likely to be obscured by the noise. Furthermore, the experimental results demonstrate that our method of generating attack samples outperforms the traditional FDIA attack
sample generation method at different noise levels, regardless of the detection model employed. This demonstrates the robustness and applicability of our attack sample generation method compared to
other existing approaches. In conclusion, our method can construct more robust and anti-interference FDIA attack samples, while also having stronger concealment capability. This can provide
large-scale and efficient attack samples for the construction of deep-learning-based FDIA detection models.
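A minimal sketch of the noise-injection step follows (our illustration; the exact procedure in the paper may differ):

```python
import numpy as np

def add_measurement_noise(measurements, variance, seed=0):
    """Inject zero-mean Gaussian noise into every node's measurement stream.

    measurements: array of shape (n_samples, n_nodes), e.g., IEEE 14-node data.
    """
    rng = np.random.default_rng(seed)
    return measurements + rng.normal(0.0, np.sqrt(variance), size=measurements.shape)

# The four noise levels used in the experiment above:
# for var in (0.25, 0.35, 0.45, 0.55):
#     noisy = add_measurement_noise(Z, var)
```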
4.5. Time Complexity Analysis
Time complexity is an estimate of the running time of an algorithm; it indicates how the time needed for the algorithm to execute varies with the size of the problem [ ]. In this experiment, we test the computation time and storage requirements of our hybrid model for various model parameters. In our proposed hybrid model, the parameters are used to control changes in the data distribution. When the parameters undergo small changes, the time complexity of the hybrid model is mainly determined by the number of data samples and the number of iterations, and it does not change on a large scale as the parameters change; e.g., the complexity is $O(kn)$ when the component parameter is set to $k$. Since $k$ is a small constant, the final theoretical complexity therefore maintains only a linear growth of $O(n)$. Meanwhile, because the number of parameters is small, they have no appreciable impact on the storage space, so the space complexity remains constant at $O(1)$. According to the construction of the hybrid model, the actual runtime of our method thus generally grows linearly.
Figure 7 illustrates this relationship, where the x-axis represents the iterations of the model solution and the y-axis stands for the time per round (seconds). The component parameters of the hybrid model are set to 1, 2, 3 and 4, drawn as the dotted lines in the figure. Parameter 1 corresponds to the case of a single Laplace distribution model. Although the running time of this model is the shortest, the generated attack samples are easily detected. When the model parameter is 2, the running time is close to the case of model parameter 3, but the effect of the generated attack samples is far worse than that of the hybrid model with a component parameter of 3. When the component parameter is 4, the effect of the generated attack samples is similar to the case of model parameter 3, but the running time increases significantly. In general, when the model parameter is set to 3, the proposed hybrid model can obtain optimal performance.
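To make the O(kn) per-iteration cost concrete, here is a compact sketch (ours, not the authors' implementation) of one EM-style iteration for a k-component Laplace mixture on one-dimensional data; the responsibility matrix has k times n entries, which is exactly the cost discussed above. For brevity, the location update uses a weighted mean, whereas the exact Laplace maximum-likelihood location is a weighted median.

```python
import numpy as np

def laplace_pdf(x, mu, b):
    return np.exp(-np.abs(x - mu) / b) / (2.0 * b)

def em_step(x, w, mu, b):
    """One EM iteration for a k-component Laplace mixture; cost O(k*n)."""
    dens = np.stack([wk * laplace_pdf(x, mk, bk)
                     for wk, mk, bk in zip(w, mu, b)])        # shape (k, n)
    resp = dens / dens.sum(axis=0, keepdims=True)             # E-step
    w_new = resp.mean(axis=1)                                  # mixing weights
    mu_new = (resp * x).sum(axis=1) / resp.sum(axis=1)         # simplified location
    b_new = (resp * np.abs(x - mu_new[:, None])).sum(axis=1) / resp.sum(axis=1)
    return w_new, mu_new, b_new
```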
5. Conclusions
This paper presents a novel approach for generating FDIA attack samples based on a Laplace-domain hybrid distribution model. The proposed method involves establishing the mixed Laplace distribution and utilizing the EM algorithm to solve the hybrid distribution, thereby effectively capturing the changes observed in power system measurement data. Subsequently, the corresponding attack vector is generated and injected into the measurement data to induce changes in the system’s state estimation. To evaluate the effectiveness of the generated attack samples, we conduct an in-depth analysis of the parameters associated with the different component models and the division of attack intensity. Furthermore, we employ various detection models to assess the performance of the generated attack samples. Extensive experiments are conducted on both the IEEE 14-node and the IEEE 118-node test system datasets. The experimental results unequivocally demonstrate that the attack samples generated by our proposed method exhibit significantly higher resistance to detection by the employed detection models.
While our proposed method has shown good performance in tests with diverse and classical IEEE node datasets, we should note that it is somewhat weak on small samples. This is because more data features may be required to better fit the data distribution and improve the concealment of the FDIA attack samples, while small sample datasets may lead to inaccurate model construction, thereby weakening the attack capability of the generated samples. In addition, the proposed hybrid model is constructed for specific FDIA attack samples, so it lacks universality across the diverse attack types targeting the power system.
In the future, our research will focus on conducting feature engineering by analyzing cascading FDIA attack samples and on building efficient detection models by generating large-scale attack samples, thereby enhancing the overall security and reliability of the smart grid system.
Author Contributions
Conceptualization and Resources: Z.Z.; Methodology and Original draft writing: Y.W.; Software and Validation: N.G. and T.Z.; Review and editing and Supervision: F.L. All authors have read and agreed
to the published version of the manuscript.
This work was supported by the Scientific and Technological Project of the State Grid Shanghai Municipal Electric Power Company (Grant No. B30940220003).
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
1. Victor, C.; Harindra, S.; Brady, J.; Bjorn, V.; Vivek, K. A Review of Visualization Methods for Cyber-Physical Security: Smart Grid Case Study. IEEE Access 2023, 11, 1. [Google Scholar]
2. Thakur, N.; Han, C.Y. An Intelligent Ubiquitous Activity Aware Framework for Smart Home. In Human Interaction, Emerging Technologies and Future Applications III: Proceedings of the 3rd
International Conference on Human Interaction and Emerging Technologies: Future Applications (IHIET 2020), Paris, France, 27–29 August 2020; Ahram, T., Taiar, R., Langlois, K., Choplin, A., Eds.;
Advances in Intelligent Systems and Computing; Springer: Cham, Switzerland, 2021; Volume 1253. [Google Scholar]
3. Vergutz, A.; Noubir, G.; Nogueira, M. Reliability for Smart Healthcare: A Network Slicing Perspective. IEEE Netw. 2020, 34, 91–97. [Google Scholar] [CrossRef]
4. Wu, L.; Zhang, W.; Zhao, W. Privacy Preserving Data Aggregation for Smart Grid with User Anonymity and Designated Recipients. Symmetry 2022, 14, 847. [Google Scholar] [CrossRef]
5. Charalambos, K.; Saraju, P.M. Cybersecurity for the Smart Grid. Computer 2020, 53, 10–12. [Google Scholar]
6. Aoufi, S.; Derhab, A.; Guerroumi, M. Survey of False Data Injection in Smart Power Grid: Attacks, Countermeasures and Challenges. Inf. Secur. Appl. 2020, 54, 102518. [Google Scholar]
7. Qu, Z.; Yang, J.; Lang, Y.; Wang, Y.; Han, X.; Guo, X. Earth-Mover-Distance-Based Detection of False Data Injection Attacks in Smart Grids. Energies 2022, 15, 1733. [Google Scholar] [CrossRef]
8. Haftu, T.; Adnan, A.; Abdun, M. Comprehensive Survey and Taxonomies of False Data Injection Attacks in Smart Grids: Attack Models, Targets, and Impacts. Renew. Sustain. Energy Rev. 2022, 163,
112423. [Google Scholar]
9. Huang, R.; Li, Y.; Wang, X. Attention-Aware Deep Reinforcement Learning for Detecting False Data Injection Attacks in Smart Grids. Electr. Power Energy Syst. 2023, 147, 108815. [Google Scholar]
10. Mahi, A.; Hossain, F.; Anwar, A.; Azam, S. False Data Injection Attack Detection in Smart Grid Using Energy Consumption Forecasting. Energies 2022, 15, 4877. [Google Scholar] [CrossRef]
11. Nafees, M.; Saxena, N.; Cardenas, A.; Grijalva, S.; Burnap, P. Smart Grid Cyber-Physical Situational Awareness of Complex Operational Technology Attacks: A Review. Assoc. Comput. Mach. 2023, 55,
10. [Google Scholar] [CrossRef]
12. Tu, W.; Dong, J.; Zhai, D. Optimal ϵ-stealthy attack in cyber-physical systems. J. Frankl. Inst. 2021, 358, 151–171. [Google Scholar] [CrossRef]
13. Zhang, T.; Ye, D. False Data Injection Attacks With Complete Stealthiness in Cyber-physical Systems: A Self-Generated Approach. Automatica 2020, 120, 109117. [Google Scholar] [CrossRef]
14. Xiao, L. Optimal Attack Strategy Against Fault Detectors for Linear Cyber-Physical Systems. Inf. Sci. 2021, 581, 390–402. [Google Scholar]
15. Sushree, P.; Ashok, K.T. Design of False Data Injection Attacks in Cyber-Physical Systems. Inf. Sci. 2022, 608, 825–843. [Google Scholar]
16. Sun, K.; Li, Z. Sparse Data Injection Attacks on Smart Grid: An Information-Theoretic Approach. IEEE Sens. 2022, 22, 14553–14562. [Google Scholar] [CrossRef]
17. Wang, B.; Zhu, P.; Chen, Y.; Xun, P.; Zhang, Z. False Data Injection Attack Based on Hyperplane Migration of Support Vector Machine in Transmission Network of the Smart Grid. Symmetry 2018, 10,
165. [Google Scholar] [CrossRef]
18. Li, X.; Wang, Y.; Lu, Z. Graph-Based Detection for False Data Injection Attacks in Power Grid. Energy 2023, 263, 125865. [Google Scholar] [CrossRef]
19. Jorjani, M.; Seifi, H.; Varjani, A. A Graph Theory-Based Approach to Detect False Data Injection Attacks in Power System AC State Estimation. IEEE Trans. Ind. Inform. 2020, 17, 2465–2475. [Google
Scholar] [CrossRef]
20. Li, Y.; Wei, Y.; Li, Y.; Dong, Z.; Shahidehpour, M. Detection of False Data Injection Attacks in Smart Grid: A Secure Federated Deep Learning Approach. IEEE Trans. Smart Grid 2022, 99, 1. [Google
Scholar] [CrossRef]
21. Jing, H.; Liu, Y.; Zhao, J. Asymmetric Laplace Distribution Models for Financial Data: VaR and CVaR. Symmetry 2022, 14, 807. [Google Scholar] [CrossRef]
22. Amos, N.; Tomasz, J.K. A Uniform-Laplace Mixture Distribution. Comput. Appl. Math. 2023, 115236. [Google Scholar]
23. Wu, Y.; Sheng, Y.; Guo, N.; Li, F.; Tian, Y.; Su, X. Hybrid Deep Network Based Multi-Source Sensing Data Fusion for FDIA Detection in Smart Grid. In Proceedings of the 2022 Asia Power and
Electrical Technology Conference (APET), Shanghai, China, 11–13 November 2022; pp. 310–315. [Google Scholar]
24. Wu, Y.; Wang, Q.; Guo, N.; Tian, Y.; Li, F.; Su, X. Efficient Multi-Source Self-Attention Data Fusion for FDIA Detection in Smart Grid. Symmetry 2023, 15, 1019. [Google Scholar] [CrossRef]
25. Shen, K.; Yan, W.; Ni, H.; Chu, J. Localization of False Data Injection Attack in Smart Grids Based on SSA-CNN. Information 2023, 14, 180. [Google Scholar] [CrossRef]
26. Sayghe, A.; Anubi, O.M.; Konstantinou, C. Adversarial Examples on Power Systems State Estimation. In Proceedings of the 2020 IEEE Power & Energy Society Innovative Smart Grid Technologies
Conference (ISGT), Washington, DC, USA, 17–20 February 2020; pp. 1–5. [Google Scholar]
27. Mukherjee, D. Data-Driven False Data Injection Attack: A Low-Rank Approach. IEEE Trans. Smart Grid 2022, 13, 2479–2482. [Google Scholar] [CrossRef]
28. Jiao, R.; Xun, G.; Liu, X.; Yan, G. A New AC False Data Injection Attack Method Without Network Information. IEEE Trans. Smart Grid 2021, 12, 5280–5289. [Google Scholar] [CrossRef]
29. Tian, J.; Wang, B.; Wang, Z.; Cao, K.; Li, J.; Ozay, M. Joint Adversarial Example and False Data Injection Attacks for State Estimation in Power Systems. IEEE Trans. Cybern. 2022, 52,
13699–13713. [Google Scholar] [CrossRef]
30. Li, Y.; Li, J.; Qi, J.; Chen, L. Robust Cubature Kalman Filter for Dynamic State Estimation of Synchronous Machines Under Unknown Measurement Noise Statistics. IEEE Access 2019, 7, 29139–29148. [
Google Scholar] [CrossRef]
31. Deng, G.; Qi, N.; Tang, M.; Duan, X. Constructing Dixon Matrix for Sparse Polynomial Equations Based on Hybrid and Heuristics Scheme. Symmetry 2022, 14, 1174. [Google Scholar] [CrossRef]
Figure 2. Actual data distribution and standard Laplace distribution for different node data (node 1, node 2, node 3, node 4) in the IEEE 14-node system.
Figure 3. Variation distribution comparison for normal samples and generated attack samples. (a) Normal measurement data variation. (b) Generated measurement data variation.
Figure 4. Accuracy comparison of two attack sample generation schemes on the IEEE 14- and IEEE 118-node systems. In this experiment, CNN-based FDIA detection model and LSTM-based FDIA detection model
are used to provide the testing results. (a) CNN-based detection over the IEEE 14 system. (b) LSTM-based detection over the IEEE 14 system. (c) CNN-based detection over the IEEE 118 system. (d)
LSTM-based detection over the IEEE 118 system.
Figure 6. Comparison of two attack sample generation schemes under different noise levels. (a) CNN-based detection model. (b) LSTM-based detection model.
Table 1. Performance comparison of two attack sample generation schemes, the traditional FDIA sample generation and FDIA sample generation based on hybrid Laplace model (LMM-FDIA). In this test,
CNN-based FDIA attack detection model is used over the IEEE 14-node system.
Attack Level | Attack Strength | Traditional Scheme (Precision, Recall, $F_1$-Score) | LMM-FDIA Scheme (Precision, Recall, $F_1$-Score)
Weak 2% 0.5485 0.6137 0.5613 0.3330 0.4520 0.3689
Attacks 5% 0.6243 0.6869 0.6408 0.4205 0.6950 0.5179
10% 0.7021 0.7132 0.7004 0.6556 0.6148 0.5499
Moderate 15% 0.7813 0.7954 0.7826 0.6636 0.5541 0.5574
Attacks 20% 0.8372 0.8424 0.8358 0.7351 0.6776 0.6764
25% 0.8478 0.8732 0.8565 0.7794 0.7605 0.7595
Strong 30% 0.8995 0.9094 0.8998 0.8280 0.8143 0.7981
Attacks 40% 0.9494 0.9452 0.9462 0.8541 0.8021 0.8090
50% 0.9754 0.9708 0.9725 0.9281 0.9186 0.9213
Table 2. Performance comparison of two attack sample generation schemes, the traditional FDIA sample generation and FDIA sample generation based on hybrid Laplace model (LMM-FDIA). In this test,
LSTM-based FDIA attack detection model is used over the IEEE 14-node system.
Attack Level | Attack Strength | Traditional Scheme (Precision, Recall, $F_1$-Score) | LMM-FDIA Scheme (Precision, Recall, $F_1$-Score)
Weak 2% 0.5399 0.5417 0.5408 0.4943 0.4686 0.4811
Attacks 5% 0.6290 0.6000 0.6141 0.5000 0.4078 0.4492
10% 0.6704 0.6842 0.6959 0.5044 0.4799 0.4854
Moderate 15% 0.7481 0.8063 0.7761 0.5244 0.4859 0.5045
Attacks 20% 0.8208 0.8307 0.8257 0.5249 0.6261 0.5054
25% 0.8409 0.9023 0.8705 0.5662 0.4706 0.5140
Strong 30% 0.8586 0.9189 0.8877 0.6058 0.5787 0.6041
Attacks 40% 0.9509 0.9421 0.9465 0.6742 0.6495 0.6616
50% 0.9831 0.9569 0.9698 0.7843 0.5867 0.5960
Table 3. Performance comparison of two attack sample generation schemes, the traditional FDIA sample generation and FDIA sample generation based on hybrid Laplace model (LMM-FDIA). In this test,
CNN-based FDIA attack detection model is used over the IEEE 118-node system.
Attack Level | Attack Strength | Traditional Scheme (Precision, Recall, $F_1$-Score) | LMM-FDIA Scheme (Precision, Recall, $F_1$-Score)
Weak 2% 0.5176 0.5485 0.4952 0.4372 0.5854 0.4360
Attacks 5% 0.5818 0.5714 0.5650 0.5933 0.4132 0.3958
10% 0.7241 0.6903 0.7005 0.7976 0.4121 0.5355
Moderate 15% 0.8023 0.7874 0.7912 0.8586 0.5338 0.6525
Attacks 20% 0.8364 0.8263 0.8281 0.8619 0.6291 0.7165
25% 0.8746 0.8481 0.8583 0.9072 0.6810 0.7730
Strong 30% 0.8926 0.8952 0.8917 0.9464 0.7250 0.8162
Attacks 40% 0.9374 0.9376 0.9364 0.9189 0.7970 0.8504
50% 0.9621 0.9693 0.9651 0.9272 0.9047 0.9157
Table 4. Performance comparison of two attack sample generation schemes, the traditional FDIA sample generation and FDIA sample generation based on hybrid Laplace model (LMM-FDIA). In this test,
LSTM-based FDIA attack detection model is used over the IEEE 118-node system.
Attack Level | Attack Strength | Traditional Scheme (Precision, Recall, $F_1$-Score) | LMM-FDIA Scheme (Precision, Recall, $F_1$-Score)
Weak 2% 0.5328 0.1870 0.2769 0.4966 0.6444 0.5610
Attacks 5% 0.5718 0.6027 0.5869 0.5062 0.9808 0.6678
10% 0.7518 0.6126 0.6751 0.6737 0.2288 0.3417
Moderate 15% 0.8008 0.7654 0.7827 0.8398 0.4703 0.6029
Attacks 20% 0.8728 0.7549 0.8096 0.9037 0.6748 0.7726
25% 0.8672 0.8735 0.8703 0.8817 0.7130 0.7884
Strong 30% 0.9129 0.8708 0.8914 0.8338 0.9102 0.8703
Attacks 40% 0.8995 0.9670 0.9320 0.9499 0.8509 0.8977
50% 0.9555 0.9769 0.9661 0.9800 0.9373 0.9581
Table 5. Qualitative comparison of different FDIA attack methods.
Attack Method | Characteristics | Challenges
FDIA [20] | Effectively avoiding BDD detection; low model complexity | Easy to detect by DL model; simple construction of attack vector
AFDIA [26] | Effectively avoiding BDD detection; high success rate | Poor robustness; easy to detect by DL model; large amount of model parameters
D-FDIA [27] | Effectively avoiding BDD detection; low computational burden | Easy to detect by DL model; poor concealment of attack vector
SG-FDIA [28] | Effectively avoiding BDD detection; high time efficiency | Easy to detect by DL model; poor robustness
M-AFDIA [29] | Effectively avoiding DL model detection; strong concealment of attack vectors | Easy to detect by BDD; long running time
S-AFDIA [29] | Effectively avoiding BDD and DL model detection | High model complexity; high operation cost; must obtain comprehensive system information
LMM-FDIA (ours) | Effectively avoiding BDD and DL model detection; low model complexity and running time; strong concealment of attack vector | Poor performance on small samples
Wu, Y.; Zu, T.; Guo, N.; Zhu, Z.; Li, F. Laplace-Domain Hybrid Distribution Model Based FDIA Attack Sample Generation in Smart Grids. Symmetry 2023, 15, 1669. https://doi.org/10.3390/sym15091669
© 2023 by the authors. Open access under the Creative Commons Attribution (CC BY) license.
Article Metrics | {"url":"https://www.mdpi.com/2073-8994/15/9/1669","timestamp":"2024-11-11T04:31:29Z","content_type":"text/html","content_length":"502137","record_id":"<urn:uuid:23315ea9-331a-49fe-9817-89d2fd1c6c7f>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00596.warc.gz"} |
Matematik-Bilgisayar Bölümü / Department of Mathematics and Computer Science
Collections in this community
• Conference papers and presentations of the Department of Mathematics and Computer Science are listed in this collection.
• Book chapters of the Department of Mathematics and Computer Science are listed in this collection.
• Books of the Department of Mathematics and Computer Science are listed in this collection.
• Articles of the Department of Mathematics and Computer Science are listed in this collection.
Recent Submissions
• (MDPI, 2023)
In this work, a computationally efficient method based on data-driven surrogate models is proposed for the design optimization procedure of a Frequency Selective Surface (FSS)-based filtering
antenna (Filtenna). A Filtenna ...
• (UNIV NIS, 2019)
This article deals with the approximation properties of a generalization of an integral type operator in the sense of Favard-Szász type operators including Sheffer polynomials with graphics
plotted using Maple.We investigate ...
• (Elsevier B.V., 2022)
We study the two-sublattice model in the mean field theory by expanding the Gibbs free energy in terms of the magnetizations $M_1$ ($M_{up}$) and $M_2$ ($M_{down}$) with the quadratic coupling $M_1^2 M_2^2$
(quadrupolar interactions) for the ...
• (Springer/Plenum Publishers, 2021)
The T - x phase diagram of a binary system of tetradecane + hexadecane is calculated using the Landau phenomenological model. Expressions derived for the phase lines are fitted to experimental
data from the literature and ...
• (Institute of Electrical and Electronics Engineers Inc., 2021)
In this work, design optimization process of a multiband antenna via the use of Artificial Neural Network (ANN) based surrogate model and meta-heuristic optimizers is studied. For this mean
firstly, by using Latin-Hyper ...
• (Scientific Technical Research Council Turkey, 2021)
Let $G$ be a compact abelian metric group with Haar measure $\lambda$ and $\hat{G}$ its dual with Haar measure $\mu$. Assume that $1 < p_i < \infty$, $p_i' = p_i/(p_i - 1)$ $(i = 1, 2, 3)$ and $\theta \geq 0$. Let $L^{p_i', \theta}(G)$, ...
• (Etamaths Publ., 2021)
In this paper, we introduce a new type of (p; q) exponential function with some properties and a modified (p; q)-Szasz-Mirakyan operators by virtue of this function by investigating approximation
properties. We obtain ...
• (Spie-Soc Photo-Optical Instrumentation Engineers, 2021)
Urbanization at the expense of the natural environment has been increasing in Turkey in recent years. The toll it takes on the ecosystem has an adverse impact on local weather systems and natural
resources. These rapid ...
• (Journal Mathematics & Computer Science-JMCS, 2021)
The aim of this paper is to introduce almost generalized proximal (alpha - psi - phi - theta)-weakly contractive mappings with rational expressions and prove the best proximity point theorems for
such mappings. The main ...
• (Rocky Mt Math Consortium, 2020)
We define the grand Wiener amalgam space by using the classical Wiener amalgam space and the generalized grand Lebesgue space. Moreover we study the inclusions between these spaces and some
applications. Finally we prove ...
• (Amer Inst. Physics, 2020)
In this study, we apply the Landau phenomenological model to describe the magnetic transition and structural phase transition in a metal formate framework (MOF) of the ferromagnetic [(CH3)2NH2][Na0.5Fe0.5(HCOO)3], namely, ...
• (Springer/Plenum Publishers, 2020)
We study the temperature and magnetic field dependence of the magnetization (M) and the inverse susceptibility (chi(-1)) in the metal-organic frameworks, in particular, for (CH3)(2)(NH2FeNiII)
-Ni-III(HCOO)(6) (DMFeNi) and ...
• (Walter De Gruyter GMBH, 2020)
In this paper we study the boundedness of localization operators associated with the Stockwell transform with symbol in L-p acting on the Wiener amalgam space W(L-p, L-q)(R).
• (Springer, 2020)
This paper proposes a novel high-performance dynamic inverse distance weighting based local descriptor (DIDWLD) for facial recognition. Studies proposed thus far have focused on finding local
descriptors that can represent ...
• (Sage, 2020)
Reliable prediction of municipal solid waste (MSW) generation rates is a significant element of planning and implementation of sustainable solid waste management strategies. In this study, the
multi-layer perceptron ...
• (Journal Mathematics & Computer Science-JMCS, 2020)
In this article, a pair of second-order nondifferentiable symmetric dual model in optimization problem is formulated over arbitrary cones. For a differentiable function, we consider the
definition of strongly K-pseudobonvexity ...
• (Kossuth Lajos Tudomanyegyetem, 2020)
We prove that the subgroup of all IA-automorphisms of the automorphism group Aut(N) of a free nilpotent group N of infinite rank is normality-small. As a consequence, every maximal normal
subgroup of the group Aut(N) is a ...
• (2016)
The Raman frequency of a soft mode (238 cm-1) is analyzed as a function of pressure at 20 oC for NH4F using the experimental data from the literature. This analysis is performed for the pressure
dependence of the Raman ...
• (2016)
The Pippard relations ($C_P$ vs. $\alpha_P$ and $\alpha_P$ vs. $\kappa_T$) are examined at various temperatures up to 1200 K at zero pressure (P = 0) for the cubic gauche nitrogen. The specific heat ($C_P$) is related to the thermal expansion ($\alpha_P$) and ...
• (Institute of Physics Publishing, 2012)
DR rate coefficients for He-like, Li-like and Be-like Bismuth ions and KLM resonances are calculated using MCBP approximation. | {"url":"https://arelarsiv.arel.edu.tr/xmlui/handle/20.500.12294/193","timestamp":"2024-11-05T09:56:12Z","content_type":"text/html","content_length":"72748","record_id":"<urn:uuid:f138002c-33d7-4e2e-be38-8c5b1197453c>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00252.warc.gz"} |
How to calculate percent difference with three sums
Per cent difference or percentage difference is used to calculate how much two numbers vary between each other. It is presented as a percentage. Percentage difference is useful in manufacturing,
design or engineering. Calculating the percentage difference between three numbers requires calculating the percentage differences between paired numbers of the three. Finding this result requires no
mathematical knowledge beyond basic arithmetic. You will need knowledge of addition, averaging, division and how to convert a fraction to a percentage.
• Per cent difference or percentage difference is used to calculate how much two numbers vary between each other.
• You will need knowledge of addition, averaging, division and how to convert a fraction to a percentage.
Write down the three sums you are going to use for the problem. We'll use 3, 7 and 10 for this example.
Pick two of the numbers and subtract them from each other and write down the value. For example, subtracting 7 from 3 would result in an answer of -4. Remove any negative signs you may get as a
result of your subtraction. This will leave you with a result of 4. Write this number down; you'll use it later in a division problem.
• Pick two of the numbers and subtract them from each other and write down the value.
• Write this number down; you'll use it later in a division problem.
Ignore the 4 for now and instead add the two numbers you originally picked. Adding 7 and 3 results in a sum of 10. Divide this number by 2 to find the average. The average here is 5.
Divide the difference from Step 2 by the average from Step 3, that is, 4 divided by 5 which results in .8. Use your calculator to solve the problem if necessary.
• Divide the difference from Step 2 by the average from Step 3, that is, 4 divided by 5 which results in .8.
Multiply your result from Step 4 by 100 to get your percentage. The problem here would be .8 multiplied by 100. This results in 80. Write this number down on your paper and draw a percentage sign to the right. This is your percentage difference, which means there is an 80 per cent difference between 3 and 7.
Repeat this process by pairing up the rest of the numbers. For example, you'd solve the problems for the pair 3 and 10 and the pair 7 and 10. Write all your percentage differences down. | {"url":"https://www.ehow.co.uk/how_8365409_calculate-percent-difference-three-sums.html","timestamp":"2024-11-15T01:31:26Z","content_type":"text/html","content_length":"121832","record_id":"<urn:uuid:db89bacb-0347-478b-92bf-e80edf7e6a5c>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00091.warc.gz"} |
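For readers who prefer to automate the arithmetic, here is a short script (ours, not part of the original article) that carries out all three pairings from the example:

```python
from itertools import combinations

def percent_difference(a, b):
    """Absolute difference divided by the average of the two numbers, as a percentage."""
    return abs(a - b) / ((a + b) / 2) * 100

for a, b in combinations([3, 7, 10], 2):
    print(f"{a} vs {b}: {percent_difference(a, b):.1f}%")
# 3 vs 7: 80.0%, 3 vs 10: 107.7%, 7 vs 10: 35.3%
```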
Let $G$ be an affine, connected, reductive group and $X$ a $G$-module. Choose a maximal torus $T\subseteq G$, a Borel $B\subseteq G$ containing $T$ and let $U$ be the unipotent radical of $B$. Denote by $\mathbb{X}$ the character group of $T$. Let $\Lambda\subseteq\mathbb{X}$ be the set of dominant weights of $G$ with respect to these choices. We can decompose the graded coordinate ring $\mathbb{C}[X]=\bigoplus_{\lambda\in\Lambda} V_{(\lambda)}$ into its isotypic components $V_{(\lambda)}$ of weight $\lambda$. Let \[ \Lambda_X=\{ \lambda\in\Lambda \mid V_{(\lambda)}\ne\{0\}\} \] be the set of weights that occur in $\mathbb{C}[X]$. Let $V_{(\lambda)}\cong V_\lambda^{\oplus n_\lambda}$, where $V_\lambda$ is the irreducible module of highest weight $\lambda$. Each $V_\lambda$ has a highest weight vector which is unique up to scaling - let $f_{\lambda 1}, \ldots, f_{\lambda n_\lambda}\in V_{(\lambda)}$ be linearly independent highest weight vectors.
*For each $1\le k\le n_\lambda$, if the function $f:=f_{\lambda k}\in\mathbb{C}[X]$ is reducible, then there exist weights $\lambda_1,\ldots,\lambda_r\in\Lambda_X$ such that $\lambda$ is an $\mathbb
N$-linear combination of the $\lambda_i$.* Indeed, about a year ago this statement was completely unclear to me. However, it's actually not that hard to see and I felt like sharing my proof.
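For orientation, here is a compressed version of one way to see the claim (my sketch; the post's own route is developed below). Since $\mathbb{C}[X]$ is a polynomial ring and hence a UFD, write $f = f_1 \cdots f_r$ with each $f_i$ irreducible. The Borel $B$ acts on $f$ through the character $\lambda$, and because $B$ is connected it cannot permute the factors nontrivially, so it fixes each $f_i$ up to a scalar: \[ b\cdot f_i = \lambda_i(b)\, f_i \quad (b\in B), \qquad \lambda = \lambda_1 + \cdots + \lambda_r. \] Since $U$ admits no nontrivial characters, each $f_i$ is $U$-invariant, i.e. a highest weight vector of weight $\lambda_i$; hence $\lambda_i\in\Lambda_X$ and $\lambda$ is indeed an $\mathbb{N}$-linear combination of elements of $\Lambda_X$.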
I asked a question on mathoverflow, and while researching I stumbled upon a theorem which I did not know before. I consider it quite powerful and the proof is short and sweet, I copied it from this
expository paper by Bryden Cais: Theorem. Let $X$ be a normal, separated, Noetherian scheme and $U\subseteq X$ a nonempty affine open subset. Then, $X\setminus U$ is pure of codimension one. Proof.
We need to show that for each generic point $y\in Y=X\setminus U$, the local ring $\mathcal{O}_{X,y}$ has dimension one. Let $y$ be the generic point of a component of $Y$. Denote by $\mathfrak{m}$
the maximal ideal of the local ring $A:=\mathcal{O}_{X,y}$. Let $S:=\mathrm{Spec}(A)$. The inclusion morphism $f:S\to X$ is affine because $X$ is separated (Lemma 1). Thus, $V:=f^{-1}(U)=S\setminus\{
\mathfrak{m} \}$ is an affine, open subscheme of $S$. The morphism on global sections $A=\mathcal{O}_S(S)\to\mathcal{O}_{V}(V)$, corresponding to the inclusion $V\to S$, is therefore injective. It
can not be surjective because $V\ne S$. Pick some $a\in\mathcal{O}_{V}(V)\setminus A$. If we now had $\dim(A)>1$, then the prime ideals of height one $\mathfrak{p}\subseteq A$ would satisfy $\
mathfrak{p}\ne\mathfrak{m}$, i.e. they would correspond to points contained in $V$. Consequently, we would have $a\in \mathcal{O}_{V,\mathfrak{p}} = A_{\mathfrak{p}}$. Because $A$ is a normal
Noetherian domain, it is the intersection of all localizations at prime ideals of height one - this is Corollary 11.4 in Eisenbud's book. This yields the contradiction that $a\in A$. Lemma 1. If $f:X\
to Y$ is a morphism of schemes with $Y$ separated and $X$ affine, then $f$ is an affine morphism. Proof. This is exercise 3.3.6 in Qing Liu's book. Let $V\subseteq Y$ be an open affine subset.
Proposition 3.9(f) in the same book tells you that there is a closed immersion $f^{-1}(V)\cong X\times_Y V\to X\times V$, so $f^{-1}(V)$ is isomorphic to a closed subscheme of an affine scheme, hence affine. | {"url":"https://blag.nullteilerfrei.de/tag/variety/","timestamp":"2024-11-13T03:22:30Z","content_type":"text/html","content_length":"35305","record_id":"<urn:uuid:687c8de2-69ef-4149-82d5-3b95654f74a2>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00574.warc.gz"}
Shape Optimization of A Parabolic Trough Collector, Geometrical Considerations
International Journal of Engineering Research & Technology (IJERT), Volume 10, Issue 01 (January 2021). Published (First Online): 21-01-2021. ISSN (Online): 2278-0181. DOI: 10.17577/IJERTV10IS010001.
Open access; licensed under a Creative Commons Attribution 4.0 International License.
B. El Ghazzani1*, O. Nait Mensour1,
1. Ait El Cadi1 , A. Ihlal1, K. Bouabid1
1 Laboratory of Materials and Renewable Energy (LMER), University IBN ZOHR, faculty of science BP 8106 – City Dakhla, Agadir, Morocco.
R. Ait El Cadi2
2 Laboratory of Thermodynamics and Energy (LTE), University IBN ZOHR, faculty of science BP 8106 – City Dakhla, Agadir, Morocco.
Abstract: In recent years, solar thermal power plants based on parabolic trough concentrators (PTCs), the most proven type of Concentrating Solar Power (CSP) technology, have been widely deployed in the industrial sector. Indeed, there are various applications for thermal plants in industry, such as desalination, refrigeration and air heating.
The optimization of the mirror surface design is fundamental to characterizing the whole solar energy conversion system.
In this paper, the basic geometrical parameters of a typical PTC are presented and the optical performance of this collector is studied based on these parameters.
Keywords- Solar energy; parabolic trough collectors; solar concentration; rim angle; optical performance.
1. INTRODUCTION
Through an indirect process, concentrating solar power (CSP) is an efficient means to convert solar energy into another type of energy, usually thermal [1]. CSP technologies consist of large areas of mirrors that concentrate sunlight onto a small-aperture receiver. The working fluid inside the receiver, called the heat transfer fluid (HTF), can be heated to a high temperature,
and then the hot fluid can be used to either directly run a thermodynamic power cycle or generate another working fluid at high temperature through a heat exchanger to run the cycle in order
to generate electricity or for another use[2].
Several developing countries like Mexico, Egypt, India, and Morocco are moving to concentrating solar power for electricity[3]. A technology assessment shows that CSP plants could play a
promising role in these countries and generally in Africa and Europe where the level of solar radiation is high, helping to reach ambitious climate protection goals [4][5].
In general, CSP technologies can be classified into four types [6]: parabolic trough, linear Fresnel, central receiver (tower), and dish/engine. This work focuses only on parabolic trough collectors (PTCs), which can be considered the most mature among these concentrating solar technologies. The common concentrating design requires the use of mirrors [7]. As mentioned earlier, the parabolic profile is one of the most widespread because of its construction properties and its reasonably good manufacturing feasibility [8].
The optimization of the mirror surface design is fundamental to characterizing the whole solar energy conversion system [9], since the mirrors are chiefly responsible for the radiation collection. Under real plant working conditions, several misalignment errors between the solar rays and the collector axis arise; some of them are related to the environment where the system is built, while others concern the concentrator itself [10].
First of all, it is useful to describe and explain the most important parameters of a typical parabolic trough collector (PTC) and to make recommendations for reaching good performance. A short description of the PTC energy balance is also presented in this paper. In order to evaluate the optical performance of a PTC, the geometrical parameters are taken into consideration in
this paper.
2. MAIN PTC PARAMETERS
Commercial PTC designs for solar thermal power plants are generally 100 m to 150 m long and have a parabola width of about 6 m, while PTC designs for industrial heat use are remarkably
smaller [11]. Parabolic trough collectors require solar tracking systems to modify their position with the changing apparent sun position in the sky from sunrise to sunset. Movement of this
type of solar collector has only one degree of freedom, on-axis rotation [12].
The most important PTC parameters are the geometric concentration ratio, acceptance angle, rim angle, and peak optical efficiency [13]. The following paragraphs explain these parameters and the recommended values associated with them.
The geometric characteristics of a parabolic trough collector from a lateral plane are depicted in Fig.1 as follows:
Figure 1: Principal geometrical characteristics of a PTC (Cross-sectional view)
☆ f is the parabola focal length;
☆ d is the absorber diameter;
☆ W is the collector width;
☆ r is the distance between the focus and the end point of the profile;
☆ φ is the rim angle of the collector, formed by the normal axis and the straight line from the focus to the end point of the profile;
☆ θ is the angle of the incoming rays with respect to the parabola axis (the incidence angle).
1. The rim angle
The rim angle, φ, is directly related to the concentrator arc length and the focal length, as shown in Fig.1. It can be calculated as a function of the parabola focal distance, f, and the aperture width, W, and can be expressed as follows:
$$\tan\left(\frac{\varphi}{2}\right) = \frac{W}{4f} \quad (1)$$
2. The geometric concentration ratio, Cg
The geometric concentration ratio, Cg, is the ratio between the collector aperture area and the total absorber tube area as presented in Fig.1 and Fig.2(b). The Geometric concentration
ratio, Cg, is given by the equation [13][14]:
$$C_g = \frac{A_{aperture}}{A_{receiver}} = \frac{W \cdot L}{\pi \cdot d \cdot L} = \frac{W}{\pi d} = \frac{\sin\varphi}{\pi \sin\theta} \quad (2)$$
where L is the collector length and θ here denotes the half-acceptance angle.
Figure 2: (a) Geometric concentration ratio, Cg and (b) acceptance angle, and aperture angle, of a parabolic-trough collector. [13]
3. The acceptance angle
The acceptance angle is the maximum angle that can be formed by two rays on a plane transversal to the collector aperture in such a way that, when they are reflected by the parabolic mirrors, they still intercept the absorber tube. Fig.2(b) illustrates this angle.
Small acceptance angles are associated with high concentration ratios, which require the installation of very accurate solar tracking systems and, consequently, entail higher costs. The minimum acceptance angle is 0.53°, which is the average angle subtended by the solar disk as seen from the Earth. Therefore, any PTC with an acceptance angle smaller than 0.53° would always lose a fraction of the direct solar radiation.
In fact, recommended acceptance angles for commercial PTCs are in the range of 1 to 2° [13]. So most commercial PTC designs have acceptance angles within the range of 1–2°, with
geometric concentration ratios of 20 to 30. [13]
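As a quick numerical check of the two relations above (our sketch; the symbols follow the conventions assumed in this text, and the specific collector dimensions are illustrative assumptions, not values from the paper):

```python
import numpy as np

def rim_angle_deg(f, W):
    """Rim angle from Equation (1) as reconstructed above: tan(phi/2) = W / (4 f)."""
    return np.degrees(2.0 * np.arctan(W / (4.0 * f)))

def concentration_ratio(W, d):
    """Geometric concentration ratio of a trough with a cylindrical absorber: W / (pi d)."""
    return W / (np.pi * d)

# Illustrative EuroTrough-like dimensions (assumed, not from the paper):
print(rim_angle_deg(f=1.71, W=5.77))           # ~80 degrees
print(concentration_ratio(W=5.77, d=0.07))     # ~26, consistent with "20 to 30"
```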
4. Optical losses
The importance of optical losses in parabolic-trough collectors comes from the fact that they are about 25% of the total solar flux incident on the PTC aperture plane. Optical losses are
associated with the four parameters presented below (see Fig.3).
○ Reflectivity, ρ, of the collector reflective surface.
○ Intercept factor, γ. A fraction of the direct solar radiation reflected by the mirrors does not reach the active surface of the receiver pipe due to either reflector microscopic imperfections, macroscopic errors in the parabolic-trough concentrator shape (e.g., inaccuracies during assembly) or mechanical deformation, etc. This optical parameter is typically within the 0.91–0.93 range for high-quality PTCs [13].
○ Transmissivity, τ, of the glass cover. This parameter presents the ratio of the radiation passing through the glass cover to the total incident radiation.
○ Absorptivity, α, of the receiver selective coating. This parameter describes the amount of energy absorbed by the steel receiver pipe over the total radiation reaching its outer wall.
Figure 3: Optical losses in a parabolic-trough collector [13]
5. The Incidence Angle Modifier (IAM)
The incidence angle of the direct solar radiation, θ (Fig.4), affects the four optical parameters mentioned earlier and the useful aperture area of the collector. This effect is quantified by the incidence angle modifier, denoted K(θ).
Therefore, in addition to the losses due to the angle of incidence, there are other losses on collectors that can be correlated to the angle of incidence. These losses are due to the
additional reflection and absorption of the glass envelope as the angle of incidence varies. The incidence angle modifier (IAM) corrects these additional losses of reflection and
absorption. This parameter represents an empirical correlation to experimental data on a parabolic trough collector. It includes all optical and geometric losses in a PTC generated by an
incidence angle greater than 0°, without including the cosine effect itself.
Figure 4: The angle between the solar irradiation and the normal vector to the collector aperture plane (Incidence angle). [15]
6. The peak optical efficiency of the PTC, η_opt,0°
The peak optical efficiency is given by the multiplication of these four parameters (reflectivity, intercept factor, glass transmissivity, and absorptivity of the steel pipe) when the incidence angle, θ, of the solar rays onto the PTC aperture plane is 0°, and it is expressed as follows:
$$\eta_{opt,0°} = \rho \cdot \gamma \cdot \tau \cdot \alpha \qquad (\theta = 0°)$$
η_opt,0° is usually in the range of 0.74–0.79 for good-quality, clean parabolic trough collectors [13].
The multiplication of the peak optical efficiency, η_opt,0°, by the incidence angle modifier, K(θ), gives us the part of the direct solar radiation reaching the PTC aperture plane with the
incidence angle that is absorbed by the receiver pipe.
7. Energy balance in a PTC
There are three sources of energy loss in a typical PTC, which are presented as follows:
☆ optical losses due to mirror reflectivity, intercept factor, glass transmissivity and absorptance of the receiver tube when the solar radiation incidence angle is equal to 0°, quantified by η_opt,0°;
☆ additional optical and geometrical losses due to an incidence angle greater than 0°, quantified by K(θ); these additional losses do not exist when the incidence angle is equal to 0°, because K(θ = 0°) = 1;
☆ thermal losses from the receiver pipe to the ambient, P_loss.
Figure 5: Energy balance in a parabolic-trough collector [13]
The net thermal output can be theoretically calculated from the energy balance shown in Fig.5, the direct normal irradiance, DNI, the ambient air temperature, the incidence angle, θ, and the PTC optical, thermal and geometrical parameters, in addition to the soiling factor, Fe, which is calculated as the ratio between the average PTC mirror reflectivity during real operation and the nominal reflectivity when the PTC is completely clean. This net thermal output can be given by the following equation:
$$P_{net} = DNI \cdot A_c \cdot \cos\theta \cdot \eta_{opt,0°} \cdot K(\theta) \cdot F_e - P_{loss} \quad (5)$$
where A_c is the collector aperture area and P_loss denotes the thermal losses from the receiver to the ambient.
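A small sketch that evaluates Equation (5) as reconstructed above (ours; the input numbers are purely illustrative, not taken from the paper):

```python
import numpy as np

def net_thermal_output(dni, area, theta_deg, eta_opt0, iam, fe, p_loss):
    """Net thermal output in W: Equation (5) with DNI in W/m2 and area in m2."""
    return dni * area * np.cos(np.radians(theta_deg)) * eta_opt0 * iam * fe - p_loss

# Illustrative inputs (assumed): a clean, mid-morning operating point.
print(net_thermal_output(dni=850.0, area=550.0, theta_deg=15.0,
                         eta_opt0=0.76, iam=0.97, fe=0.95, p_loss=20e3))
# ~2.96e5 W, i.e. about 296 kW of useful thermal power
```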
Row shadowing
Shadowing between rows particularly occurs at extreme solar positions when the shadow cast by a collector closer to the sun obscures a portion of an adjacent collector. Fig.6 presents the
geometry associated with row shadowing.
Figure 6: Two adjacent collector rows shadowing each other. [15]
The shadowing efficiency is equal to the ratio of the non- shadowed aperture to the total aperture width, w, as shown in Equation 6.
End losses
At the ends of the receiver, the end losses occur. For non- zero incidence angle, some parts at the extremities of the absorber tube are not illuminated by the solar radiation reflected by
the mirror. Fig.7 depicts the end losses by an absorber tube.
Figure 7: End losses by the absorber tube of a PTC. (Modified from [15] )
The end losses appear on the absorber tube in the case of non-zero incidence angle. The end losses parameter is a function of the focal length of the PTC, its proper length, and the incidence
angle. It is given by the following equation [16]:
$$\eta_{end} = 1 - \frac{f \cdot \tan\theta}{L}$$
where L is the collector length.
3. SHAPE OPTIMIZATION OF A PTC, GEOMETRICAL CONSIDERATIONS
From an optical point of view, light is considered as rays carrying power through a transmission medium and interacting with reflective and diffractive surfaces.
The definition of the concentration ratio of the parabolic trough collector assumes a cylindrical absorber that is fully hit by the rays reflected from the mirror. It is clear that the concentration ratio becomes lower when the incidence angle increases.
The trends for a rim angle equal to 90°, for 71° (the value used in the commercial Polytrough 1800), and for 50° (that of the Polytrough 1200) are reported in Fig.8. In fact, the parameter θ is very relevant, and values near 1° are sufficient to cut down the concentration level for all three rim angle values. The variation of the rim angle value also influences the concentration ratio of the PTC for different incidence angle values.
Figure 8: The variation of the geometrical concentration ratio as function of the incidence angle, for rim angles of 90°, 71° and 50°.
Figure 10: The variation of the geometric concentration ratio as function of (f/c), for three values of θ (0.25°, 0.5° and 1°).
According to Equation 1 and Equation 2, for every value of θ, the maximum concentration ratio is reached when the rim angle is 90° and f/c is equal to 0.25, without taking into consideration the concentrator's absolute dimensions.
The related curves are shown for the particular cases of θ equal to 0.25°, 0.5° and 1°. Fig.9 and Fig.10 illustrate these curves, while Fig.11 presents the concentration ratio variation as a function of both the rim angle and the incidence angle.
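The optimum quoted below is easy to reproduce from the reconstructed form of Equation (2) (our sketch):

```python
import numpy as np

def c_r(phi_deg, theta_deg):
    """C_R = sin(phi) / (pi * sin(theta)), maximal at phi = 90 degrees."""
    return np.sin(np.radians(phi_deg)) / (np.pi * np.sin(np.radians(theta_deg)))

for theta in (0.25, 0.5, 1.0):
    print(theta, round(c_r(90.0, theta), 1))   # 72.9, 36.5, 18.2
```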
This concentration ratio is usually about 25, and high concentration ratios are associated with higher working temperatures [13]. Although, theoretically, the maximum is on the order of 72 for a PTC with φ equal to 90°, as presented in the figures below; this maximum decreases as the value of θ increases.
Figure 11: The variation of the geometric concentration ratio as function of rim angle φ and incidence angle θ.
This study makes it possible to define the shape of a parabolic trough collector that optimizes the optical efficiency. The calculations show that a rim angle of 90° is the best in order to obtain a PTC with the optimal concentration ratio.
4. CONCLUSION
In this paper, the main parameters of a typical parabolic trough collector are discussed; then, in order to evaluate the optical performance of a PTC, a study of its shape optimization is performed. The definition of the optimized shape of a parabolic trough collector is possible through this study. The calculations show that a rim angle of 90° is the best in order to obtain a PTC with the optimal concentration ratio. As a next step, a modelling of the parabolic trough collector using the Ray Tracing approach of the Monte Carlo method under the SolTrace software is required in order to validate this conclusion.
Figure 9: The variation of the geometric concentration ratio as function of rim angle φ, for three values of θ.
This work is supported by IRESEN (Institut de Recherche en Energie Solaire et Energies Nouvelles, MOROCCO) in the framework of the project InnoTherm.
1. A. Fernandez-Garcia, E. Zarza, L. Valenzuela, and M. Pérez, Parabolic-trough solar collectors and their applications, Renew. Sustain. Energy Rev., vol. 14, pp. 1695–1721, 2010.
2. R. A. El Cadi et al., Power Generation and Heating Performances of An Organic Rankine Cycle Driven by Parabolic Trough Collectors, vol. 8, no. 10, pp. 115–120, 2019.
3. O. N. Mensour, S. Bouaddi, B. Abnay, B. Hlimi, and A. Ihlal, Mapping and Estimation of Monthly Global Solar Irradiation in Different Zones in Souss-Massa Area, Morocco, Using Artificial Neural Networks, Int. J. Photoenergy, vol. 2017, 2017.
4. P. Viebahn, Y. Lechon, and F. Trieb, The potential role of concentrated solar power (CSP) in Africa and Europe: A dynamic assessment of technology development, cost development and life cycle inventories until 2050, Energy Policy, vol. 39, no. 8, pp. 4420–4430, 2011, doi: 10.1016/j.enpol.2010.09.026.
5. B. El Ghazzani, D. Martinez Plaza, R. Ait El Cadi, B. Abnay, A. Ihlal, and K. Bouabid, Thermal plant based on parabolic trough collectors for industrial process heat generation in Morocco, Renew. Energy, vol. 113, pp. 1261–1275, 2017, doi: 10.1016/j.renene.2017.06.063.
6. D. Mills, Advances in solar thermal electricity technology, Sol. Energy, vol. 76, pp. 19–31, 2004, doi: 10.1016/S0038-092X(03)00102-6.
7. H. Price et al., Advances in Parabolic Trough Solar Power Technology, J. Sol. Energy Eng., vol. 124, no. 2, p. 109, 2002, doi: 10.1115/1.1467922.
8. A. Fernandez-Garcia, E. Rojas, P. Manuel, R. Silva, Q. Hernandez-Escobido, and F. Manzano-Agugliaro, A parabolic-trough collector for cleaner industrial process heat, J. Clean. Prod., 2014, doi: 10.1016/j.jclepro.2014.11.018.
9. J. Xiao, X. Wei, Z. Lu, W. Yu, and H. Wu, A review of available methods for surface shape measurement of solar concentrator in solar thermal power applications, Renew. Sustain. Energy Rev., vol. 16, no. 5, pp. 2539–2544, 2012, doi: 10.1016/j.rser.2012.01.063.
10. K. Lovegrove and J. Pye, Fundamental principles of concentrating solar power (CSP) systems, 2012, pp. 16–67.
11. M. Borunda, O. A. Jaramillo, R. Dorantes, and A. Reyes, Organic Rankine Cycle coupling with a Parabolic Trough Solar Power Plant for cogeneration and industrial processes, Renew. Energy, 2016, doi: 10.1016/j.renene.2015.08.041.
12. A. Lokurlu and F. Richarts, High efficient utilisation of solar energy with newly developed parabolic trough collectors (SOLITEM PTC) for chilling and steam production in a hotel at the Mediterranean coast of Turkey, Int. J. Energy Technol. Policy, vol. 3, pp. 137–146, 2005.
13. E. Zarza, Parabolic trough concentrating solar power (CSP) systems, Woodhead Publ. Ltd., 2012, doi: 10.1533/9780857096173.2.197.
14. G. Pierucci, D. Fontani, P. Sansoni, and M. De Lucia, Shape optimization for parabolic troughs working in non-ideal conditions, Energy Procedia, vol. 57, pp. 2231–2240, 2014.
15. M. J. Wagner and P. Gilman, Technical Manual for the SAM Physical Trough Model, no. June, 2011.
16. F. Lippke, Simulation of the Part Load Behavior of a 30MWe SEGS Plant, Albuquerque, 1995.
{"url":"https://www.ijert.org/shape-optimization-of-a-parabolic-trough-collector-geometrical-considerations","timestamp":"2024-11-11T07:01:55Z","content_type":"text/html","content_length":"83797","record_id":"<urn:uuid:ec0509f4-f5df-48f1-9fc9-e56d692083a9>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00140.warc.gz"}
A pressure of 650 kPa pushes a piston of diameter 0.25 m with V = 5 m/s. What is the volume displacement rate, the force and the transmitted power?
Step by Step Answer:
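The step-by-step answer itself sits behind the site's paywall; the standard textbook calculation is short enough to restate here (our working, treating 650 kPa as the net pressure on the piston face):
Piston face area: A = (π/4)·d² = (π/4)·(0.25 m)² ≈ 0.0491 m².
Volume displacement rate: V̇ = A·V = 0.0491 m² × 5 m/s ≈ 0.245 m³/s.
Force: F = P·A = 650,000 Pa × 0.0491 m² ≈ 31.9 kN.
Transmitted power: Ẇ = F·V = 31,907 N × 5 m/s ≈ 159.5 kW (equivalently, Ẇ = P·V̇).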
{"url":"https://www.solutioninn.com/pressure-of-650-kpa-pushes-piston","timestamp":"2024-11-10T18:09:29Z","content_type":"text/html","content_length":"78068","record_id":"<urn:uuid:db89bacb-0347-478b-92bf-e80edf7e6a5c>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00288.warc.gz"}
Concerns About the Default Cauchy Are Often Exaggerated: A Demonstration with JASP 0.12
Contrary to most of the published literature, the impact of the Cauchy prior width on the t-test Bayes factor is seen to be surprisingly modest. Removing the most extreme 50% of the prior mass can at
best double the Bayes factor against the null hypothesis, the same impact as conducting a one-sided instead of a two-sided test. We demonstrate this with the help of the “Equivalence T-Test” module,
which was added in JASP 0.12.
We recently revised a comment on a scholarly article by Jorge Tendeiro and Henk Kiers (henceforth TK). Before getting to the main topic of this post, here is the abstract:
Tendeiro and Kiers (2019) provide a detailed and scholarly critique of Null Hypothesis Bayesian Testing (NHBT) and its central component –the Bayes factor– that allows researchers to update
knowledge and quantify statistical evidence. Tendeiro and Kiers conclude that NHBT constitutes an improvement over frequentist p-values, but primarily elaborate on a list of eleven ‘issues’ of
NHBT. We believe that several issues identified by Tendeiro and Kiers are of central importance for elucidating the complementary roles of hypothesis testing versus parameter estimation and for
appreciating the virtue of statistical thinking over conducting statistical rituals. But although we agree with many of their thoughtful recommendations, we believe that Tendeiro and Kiers are
overly pessimistic, and that several of their ‘issues’ with NHBT may in fact be conceived as pronounced advantages. We illustrate our arguments with simple, concrete examples and end with a
critical discussion of one of the recommendations by Tendeiro and Kiers, which is that “estimation of the full posterior distribution offers a more complete picture” than a Bayes factor
hypothesis test.
In section 3, “Use of default Bayes factors”, we address the common critique that the default Cauchy distribution (on effect size for a t-test) is so wide that the results are meaningless:
(…) in our experience the adoption of reasonable non-default prior distributions has only a modest impact on the Bayes factor (e.g., Gronau, Ly, & Wagenmakers, 2020). This impact is typically
much smaller than that caused by a change in the statistical model, by variable transformations, by different treatment of outliers, and so forth. To explain why the impact of prior distributions
is often surprisingly modest, consider TK’s critique that the default prior for the t-test –a Cauchy distribution centered at zero with scale parameter .707– is too wide. Specifically, this
distribution assigns 50% of its mass to values larger than |.707|: if this is unrealistically wide, maybe the default prior distribution is of limited use, and the resulting Bayes factor
misleading? Indeed, we ourselves have been worried in the past that the default Cauchy distribution is too wide, despite literature reviews showing that large effect sizes occur more often than
one may expect (e.g., Aczel, 2018, slide 20; Wagenmakers, Wetzels, Borsboom, Kievit, & van der Maas, 2013). However, we recently realized that the impact of the ‘wideness’ is much more modest
than one may intuit.
Consider two researchers, A and B, who analyze the same data set. Researcher A uses the default zero-centered Cauchy prior distribution with interquartile range of .707; researcher B uses the
same prior distribution, but truncated to have mass only within the interval from −.707 to +.707. Assume that, in a very large sample, the observed effect is relatively close to zero. Researcher
A reports a Bayes factor of 3.5 against the null hypothesis. It is now clear that the truncated default prior used by researcher B will provide better predictive performance, because no prior
mass is ‘wasted’ on large values of effect size that are inconsistent with the data. As it turns out, truncating the default Cauchy to its interquartile range increases the predictive performance
of the alternative hypothesis by a factor of at most 2. This means that the Bayes factor for B’s truncated alternative hypothesis versus A’s default ‘overly wide’ alternative hypothesis is at
most 2; consequently, B will report a Bayes factor against the null hypothesis that cannot be any larger than 2 × 3.5 = 7. This means that the potential predictive benefit of truncating the
default distribution to its interquartile range is just as large as the potential predictive benefit of conducting a one-sided test instead of a two-sided test.5 In other words, suppose a very
large data set has an effect size of 0.3 with almost all posterior mass ranging from 0.2 to 0.4; the predictive benefit of knowing in advance the direction of the effect is just as large as the
predictive benefit of knowing in advance that it falls within the prior interquartile range; consequently, the Bayes factor from a one-sided default Cauchy distribution is virtually identical to
the Bayes factor from a two-sided default Cauchy distribution that is truncated to the [−.707,+.707] interval.
A Demonstration with JASP
We now provide a concrete demonstration using the “Equivalence T-Test” module from JASP 0.12. We open JASP, and from the Data Library, in the category “T-tests”, select the “Kitchen Roll” data. In
Descriptives, we plot the data for “mean_NEO” across the two conditions given by “Rotation”:
There does not appear to be a large effect here. An independent-samples Bayesian t-test with a default two-sided Cauchy prior on effect size yields the following result:
The results show that 95% of the posterior mass under H1 falls in the interval from -0.503 to 0.233, well inside the default Cauchy’s interquartile range (i.e., [−.707,+.707]). So these data are
suitable to test our claim that truncating the default Cauchy to its interquartile range will give a predictive benefit that is at most 2. In other words, the Bayes factor for the truncated
alternative hypothesis versus the default untruncated alternative hypothesis is at most 2. Note that this involves an overlapping hypothesis test: the truncated hypothesis is a restricted case of the
untruncated hypothesis. This means that the only reason that the truncated hypothesis can predict the data better is because it is more parsimonious than the untruncated hypothesis.
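Why at most 2? For overlapping (nested) hypotheses of this kind, a standard identity from the encompassing-prior approach applies (e.g., the Hoijtink references below); we spell out the derivation here for completeness:

\[\text{BF}_{\text{trunc},\text{full}} = \frac{p(\text{data} \mid H_{\text{trunc}})}{p(\text{data} \mid H_{\text{full}})} = \frac{\Pr(\delta \in [-.707, +.707] \mid \text{data})}{\Pr(\delta \in [-.707, +.707])} \leq \frac{1}{0.5} = 2,\]

that is, the ratio of posterior to prior mass inside the interval. Because the default Cauchy assigns exactly 50% of its prior mass to its interquartile range, and posterior mass can be at most 1, the Bayes factor for the truncated versus the untruncated hypothesis can never exceed 2. As we will see below, the posterior mass inside the interval is here 99.9%, so the identity predicts 0.999/0.5 ≈ 1.998.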
To confirm this, click on the large + sign on the top right of the JASP screen to view all modules, and activate the “Equivalence T-Test” module. From the module, select the Bayesian Independent
Samples T-test; drag mean_NEO to the “Variables” field and drag Rotation to the “Grouping variables” field. Then define the “Equivalence region” to range from −.707 to .707 and tick “Prior and
posterior mass”. These are the resulting output tables:
The second table confirms that almost all posterior mass (i.e., 99.9%) falls inside the specified interval (defined as the interquartile range of the default Cauchy). The first row of the first table
gives the Bayes factor for the hypothesis that the effect falls inside of the interval (i.e., the truncated hypothesis) versus the hypothesis that the effect could fall anywhere (i.e., the
untruncated hypothesis). The Bayes factor for this overlapping hypothesis test is 1.997 — close to its theoretical upper bound of 2.
Although not of immediate interest here, one may also consider a non-overlapping hypothesis test, one that compares the hypothesis that the effect falls inside of the interval against the hypothesis
that the effect falls outside of the interval. The third row of the first table above shows that the associated Bayes factor is about 775, that is, the observed data are 775 times more likely to
occur under the hypothesis that the effect falls inside of the specified interval than under the hypothesis that the effect falls outside of that interval. This highlights that different questions
may evoke dramatically different answers; here, the question “is it inside of the interval instead of anywhere?” yields a BF of almost 2, whereas the question “is it inside of the interval or outside
of the interval?” yields a BF of about 775. The reason for the discrepancy is that the hypothesis “it is anywhere” can actually account very well for an effect size inside the interval, whereas this
is impossible for the more risky hypothesis “it is outside of the interval”.
Note that the Bayesian equivalence test demonstrated here is based on the work by Morey & Rouder (2011) and more generally on the work by Herbert Hoijtink, Irene Klugkist, and associates (e.g.,
Hoijtink, 2011; Hoijtink, Klugkist, & Boelen, 2008).
References
Hoijtink, H. (2011). Informative hypotheses: Theory and practice for behavioral and social scientists. Boca Raton, FL: Chapman & Hall/CRC.
Hoijtink, H., Klugkist, I., & Boelen, P. (2008) (Eds). Bayesian evaluation of informative hypotheses. New York: Springer.
Morey, R. D., & Rouder, J. N. (2011). Bayes factor approaches for testing interval null hypotheses. Psychological Methods, 16, 406-419.
van Ravenzwaaij, D., & Wagenmakers, E.-J. (2020). Advantages masquerading as ‘issues’ in Bayesian hypothesis testing: A commentary on Tendeiro and Kiers (2019). Manuscript submitted for publication.
Tendeiro, J. N., & Kiers, H. A. L. (in press). A review of issues about Null Hypothesis Bayesian Testing. Psychological Methods.
About The Authors
Eric-Jan Wagenmakers
Eric-Jan (EJ) Wagenmakers is professor at the Psychological Methods Group at the University of Amsterdam.
Don van Ravenzwaaij
Don van Ravenzwaaij is an Associate Professor at the University of Groningen. In May 2018, he was awarded an NWO Vidi grant (a 5-year fellowship) for improving the evaluation of statistical evidence in the
field of biomedicine. The first pillar of his research is about proper use of statistical inference in science. The second pillar is about the advancement and application of response time models to
speeded decision making.
Jill de Ron
Jill de Ron is a Research Master student in psychology at the University of Amsterdam. | {"url":"https://www.bayesianspectacles.org/concerns-about-the-default-cauchy-are-often-exaggerated-a-demonstration-with-jasp-0-12/","timestamp":"2024-11-08T11:39:06Z","content_type":"text/html","content_length":"55731","record_id":"<urn:uuid:4819bb96-6907-46b5-b9e3-e813f6d6301f>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00226.warc.gz"} |
[no subject]
1. In a well-designed probability sample, the sum of non-normalized
probability weights is an estimate of population
size. In the sampling literature, the standard symbol for
population size is "N", and the symbol for sample size is "n". So, Rita's
usage is correct.
2. -aweights- are _not_ restricted to integers.
See http://www.stata.com/statalist/archive/2004-11/msg00274.html for an
interesting thread about the relationship of analytic weights and
probability weights.
On Oct 29, 2012, at 6:15 PM, JVerkuilen (Gmail) wrote:
On Mon, Oct 29, 2012 at 5:44 PM, Rita Luk <[email protected]> wrote:
> Hi JVerkuilen,
> I am slow to pick up what you say. For example my data has a sample size of only 5 obs, with the sampling wt variable wtpp:
> caseid wtpp
> 1. 60.74
> 2. 700.38
> 3. 139.64
> 4. 9671.57
> 5. 1545.32
> Sum of wtpp= N= 12117.65
> According to what you said, what does the analytical weight look like? In addition to being normalized to sum to N, does the aweight need to be integer?
In the case you discuss I think what would happen is that each of the
wtpp numbers would be divided by 12117.65 and then multiplied by 5.
The number you list as N is not N. N is 5.
Usually aweights are used when you have several means and their
sampling variances and want to generate an average mean weighted by
sampling variance. The sampling variances have each means N built in.
Hopefully if I'm wrong someone will chime in.
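[To make JVerkuilen's recipe concrete — a worked example added for illustration, using Rita's five weights: each normalized weight is wtpp*5/12117.65, so obs 1 becomes 60.74*5/12117.65 ≈ 0.0251 and obs 4 becomes 9671.57*5/12117.65 ≈ 3.9907. The five rescaled weights sum to 5 by construction, and a weighted mean is unchanged by the rescaling because the constant factor n/sum(wtpp) cancels in sum(w*x)/sum(w).]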
Date: Mon, 29 Oct 2012 17:11:59 -0400
From: "JVerkuilen (Gmail)" <[email protected]>
Subject: Re: st: aweight
I have only what you saw but my guess is as follows:
(1) Start with a vector a which gives the analytic weights, and n from
the sample.
(2) Generate a vector w = a/sum(a) , which is normalized to sum to 1.
(3) Generate a vector f = n*w, which has the weighting structure of
vector a, but is rescaled to sum to n and thus can be used as if they
were frequencies.
On Mon, Oct 29, 2012 at 4:47 PM, Rita Luk <[email protected]> wrote:
> Hi Statalist,
> Where can I find the computation detail of analytical weights (aweight) ?
> In User guide 20.22.2, it says: If you specify aweights, they are: 1. normalized to sum to N and then 2. inserted in the ... as fweights.
> What does it mean (in formula) to normalize the weight to sum to N? Where can I find the formula for the normalization.
> I am working on point estimates of descriptive statistics (mean, median and histogram) using survey data and am not concerned about the variance at the moment. In particular, I want to use non-svy commands with weights (let's leave the svy commands for now). I read comments from Steven Joel Hirsch Samuels and Austin Nichols and know that I will arrive at the same weighted mean, median or histogram using either of the following 2 methods:
> Suppose I have a survey data set with sampling weight variable wtpp (the sum of wtpp over the entire sample equals the total population of the target population)
> 1. converted sampling wt to integer and use as freq weight: gen double myfw=round(wtpp*100), then tabstat xvar [fw=myfw], s(mean median)
> 2. Use aweight : tabstat xvar [aw=wtpp], s(mean median)
> I know these methods give the same estimates. But why do they give the same estimates? Hence my question on the formula for the normalization of the weight, given wtpp.
> I appreciate if any one can help me on this.
> Rita Luk|University of Toronto|33 Russell St. T5. Toronto,ON. M5S2S1. Canada|T: (416) 535-8501 x4727
Added example to show that the status of being the initial object is dependent on the category considered. Also added the dual version of the example from terminal object about the identity morphism
in over categories.
(I would also like to know: is there a reason that no extra page Field exists for the category of fields, but only a section on fields? So far, I have already dead-linked it twice, on initial object and
on determinant. I certainly would like to create it, and have already written a bit for it, but first want to make sure that there isn't a reason it doesn't exist yet.)
(For some reason unknown to me, correcting some errors merely a minute after the original edit was registered as a new edit in the logs. Edit: I just noticed that the original comment is posted
here twice; maybe it was automatically copied into the box again.)
Algorithms and pseudocode in Specialist Mathematics
From an algorithm to pseudocode
Consider a mathematical problem. Can you devise a straightforward algorithm for its solution? Perhaps you can visualize it with a flowchart. Afterward, create a pseudocode representation of your
algorithm. Finally, implement this pseudocode in Python, using the TI-Nspire CAS platform.
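As an illustration of that workflow (an example of our own, not taken from the resource itself), here is pseudocode for the bisection method followed by a direct Python translation; it should run in any standard Python, including the TI-Nspire's Python environment.

Pseudocode:
  input: function f, interval [a, b] with f(a)*f(b) < 0, tolerance tol
  while b - a > tol:
      m <- (a + b) / 2
      if f(a) * f(m) <= 0 then b <- m else a <- m
  output: (a + b) / 2

def bisection(f, a, b, tol=1e-6):
    # Approximate a root of f on [a, b]; assumes f(a) and f(b) differ in sign
    while b - a > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m  # the root lies in the left half
        else:
            a = m  # the root lies in the right half
    return (a + b) / 2

print(bisection(lambda x: x**2 - 2, 0, 2))  # prints approximately 1.4142135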
Slicks top draw
This week's top draw picks....
OK, it's about time I looked at this on a league by league basis to see if we can find an advantage..
Premier league...
20 draw picks, 4 correct
Championship
39 draw picks, 5 correct
League 1
39 draw picks, 10 correct
League 2
34 draw picks, 15 correct
hopefully that's a definite trend occurring
Now let's see how the bottom draw compares,
Correct top draw picks last weekend...
Same draws all around this week in top, bottom and crossover draws; the top draw fares worst though as it took 16 picks to pick 3 out of a total of 6 draws.
This week's top draw picks....
Correct top draw picks....
3 correct from 13 picks out of a total of 10 draws, not great but it did nab two draws the bottom draw missed.
So combined top and bottom draw gives 6 correct from 19 picks out of a total of 10 draws, very near to the 1 in 3 minimum I'm looking for.
This week's top draw picks....
Quite a few expected this week barring League 2 according to the bookies.
Correct top draw picks....
Shocking from the top draw last week: 4 draws from 18 picks out of a total of 12 draws.
I'm gonna wind up the top draws me thinks now the season is coming to a close. The system just isn't hitting enough draws.
I'll still do the crossovers though in the bottom draw as the ratio of picks to draws is much better.
I wasn't going to do any more top draw picks but with the absence of enough data so soon in the season to do the bottom draw picks, I'll do them to see how I diddle...
Rather than posting all the leagues and draws circled I'll post the draws straight off the bat.....
Forgot about this, after the event I know, but last night's Hull game would have also hit the criteria
Newcastle v Liverpool should also be on the list.
There's 20 draws this week in the top draw which is a bit ridiculous but I'll post them anyway for the stats...
Now we're 5 games in, I've also done the crossovers from the bottom draw....
It looks like I've missed Newport from the main list.
I missed week 1 so I'll start on Week 2 draws week beginning Fri 18th Aug
Correct picks..
3 from 17. Shocking lol
Week 3
4 from 17 from a total of 9 draws.
Near misses: 1 draw from 2 picks
I've started the bottom draw and crossover picks this week so hopefully we get more accuracy. | {"url":"https://betnod.com/threads/slicks-top-draw.11116/page-2","timestamp":"2024-11-02T19:21:09Z","content_type":"text/html","content_length":"115176","record_id":"<urn:uuid:e34d6be3-2924-4c30-b27c-1a5e5c7b620d>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00273.warc.gz"} |
Tutorial 6: Transformers and Multi-Head Attention
In this tutorial, we will discuss one of the most impactful architectures of the last 2 years: the Transformer model. Since the paper Attention Is All You Need by Vaswani et al. had been published in
2017, the Transformer architecture has continued to beat benchmarks in many domains, most importantly in Natural Language Processing. Transformers with an incredible amount of parameters can generate
long, convincing essays, and opened up new application fields of AI. As the hype of the Transformer architecture seems not to come to an end in the next years, it is important to understand how it
works, and have implemented it yourself, which we will do in this notebook.
Despite the huge success of Transformers in NLP, we will not include the NLP domain in our notebook here. Why? Firstly, the Master AI at UvA offers many great NLP courses that will take a closer look
at the application of the Transformer architecture in NLP (NLP2, Advanced Topics in Computational Semantics). Secondly, assignment 2 already takes a closer look at character-level language
generation, on which you could easily apply our transformer architecture. Finally, and most importantly, there is so much more to the Transformer architecture. NLP is the domain the Transformer
architecture has been originally proposed for and had the greatest impact on, but it also accelerated research in other domains, recently even Computer Vision. Thus, we focus here on what makes the
Transformer and self-attention so powerful in general. In Tutorial 15, we will discuss the application of Transformers in Computer Vision.
Below, we import our standard libraries. Similarly as in Tutorial 5, we will use PyTorch Lightning as an additional framework. If you are not familiar with PyTorch Lightning, please make sure to have
read Tutorial 5 carefully.
## Standard libraries
import os
import numpy as np
import random
import math
import json
from functools import partial
## Imports for plotting
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('svg', 'pdf') # For export
from matplotlib.colors import to_rgb
import matplotlib
matplotlib.rcParams['lines.linewidth'] = 2.0
import seaborn as sns
## tqdm for loading bars
from tqdm.notebook import tqdm
## PyTorch
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data as data
import torch.optim as optim
## Torchvision
import torchvision
from torchvision.datasets import CIFAR100
from torchvision import transforms
# PyTorch Lightning
try:
    import pytorch_lightning as pl
except ModuleNotFoundError: # Google Colab does not have PyTorch Lightning installed by default. Hence, we do it here if necessary
    !pip install --quiet pytorch-lightning>=1.4
    import pytorch_lightning as pl
from pytorch_lightning.callbacks import LearningRateMonitor, ModelCheckpoint
# Path to the folder where the datasets are/should be downloaded (e.g. CIFAR10)
DATASET_PATH = "../data"
# Path to the folder where the pretrained models are saved
CHECKPOINT_PATH = "../saved_models/tutorial6"
# Setting the seed
pl.seed_everything(42)
# Ensure that all operations are deterministic on GPU (if used) for reproducibility
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
print("Device:", device)
Two pre-trained models are downloaded below. Make sure to have adjusted your CHECKPOINT_PATH before running this code if not already done.
import urllib.request
from urllib.error import HTTPError
# Github URL where saved models are stored for this tutorial
base_url = "https://raw.githubusercontent.com/phlippe/saved_models/main/tutorial6/"
# Files to download
pretrained_files = ["ReverseTask.ckpt", "SetAnomalyTask.ckpt"]
# Create checkpoint path if it doesn't exist yet
os.makedirs(CHECKPOINT_PATH, exist_ok=True)
# For each file, check whether it already exists. If not, try downloading it.
for file_name in pretrained_files:
    file_path = os.path.join(CHECKPOINT_PATH, file_name)
    if "/" in file_name:
        os.makedirs(file_path.rsplit("/",1)[0], exist_ok=True)
    if not os.path.isfile(file_path):
        file_url = base_url + file_name
        print(f"Downloading {file_url}...")
        try:
            urllib.request.urlretrieve(file_url, file_path)
        except HTTPError as e:
            print("Something went wrong. Please try to download the file from the GDrive folder, or contact the author with the full output including the following error:\n", e)
The Transformer architecture
In the first part of this notebook, we will implement the Transformer architecture by hand. As the architecture is so popular, there already exists a PyTorch module nn.Transformer (documentation) and
a tutorial on how to use it for next token prediction. However, we will implement it here ourselves, to get through to the smallest details.
There are of course many more tutorials out there about attention and Transformers, and they are worth exploring if you are interested in the topic and would like yet another perspective after this one.
What is Attention?
The attention mechanism describes a recent new group of layers in neural networks that has attracted a lot of interest in the past few years, especially in sequence tasks. There are a lot of
different possible definitions of “attention” in the literature, but the one we will use here is the following: the attention mechanism describes a weighted average of (sequence) elements with the
weights dynamically computed based on an input query and elements’ keys. So what does this exactly mean? The goal is to take an average over the features of multiple elements. However, instead of
weighting each element equally, we want to weight them depending on their actual values. In other words, we want to dynamically decide on which inputs we want to “attend” more than others. In
particular, an attention mechanism has usually four parts we need to specify:
• Query: The query is a feature vector that describes what we are looking for in the sequence, i.e. what would we maybe want to pay attention to.
• Keys: For each input element, we have a key which is again a feature vector. This feature vector roughly describes what the element is “offering”, or when it might be important. The keys should
be designed such that we can identify the elements we want to pay attention to based on the query.
• Values: For each input element, we also have a value vector. This feature vector is the one we want to average over.
• Score function: To rate which elements we want to pay attention to, we need to specify a score function \(f_{attn}\). The score function takes the query and a key as input, and output the score/
attention weight of the query-key pair. It is usually implemented by simple similarity metrics like a dot product, or a small MLP.
The weights of the average are calculated by a softmax over all score function outputs. Hence, we assign those value vectors a higher weight whose corresponding key is most similar to the query. If
we try to describe it with pseudo-math, we can write:
\[\alpha_i = \frac{\exp\left(f_{attn}\left(\text{key}_i, \text{query}\right)\right)}{\sum_j \exp\left(f_{attn}\left(\text{key}_j, \text{query}\right)\right)}, \hspace{5mm} \text{out} = \sum_i \
alpha_i \cdot \text{value}_i\]
Visually, we can show the attention over a sequence of words as follows:
For every word, we have one key and one value vector. The query is compared to all keys with a score function (in this case the dot product) to determine the weights. The softmax is not visualized
for simplicity. Finally, the value vectors of all words are averaged using the attention weights.
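To make this concrete, consider a tiny worked example with numbers of our choosing. Let the query be \(q=(1,0)\), the keys \(k_1=(1,0)\) and \(k_2=(0,1)\), and the values \(v_1=(10,0)\) and \(v_2=(0,10)\). With the dot product as score function, \(f_{attn}(k_1,q)=1\) and \(f_{attn}(k_2,q)=0\), so the softmax weights are \(\alpha_1=e^1/(e^1+e^0)\approx 0.73\) and \(\alpha_2\approx 0.27\), and the output is \(0.73\cdot v_1+0.27\cdot v_2\approx(7.3,2.7)\): the element whose key matches the query dominates the weighted average.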
Most attention mechanisms differ in terms of what queries they use, how the key and value vectors are defined, and what score function is used. The attention applied inside the Transformer
architecture is called self-attention. In self-attention, each sequence element provides a key, value, and query. For each element, we perform an attention layer where based on its query, we check
the similarity of the all sequence elements’ keys, and returned a different, averaged value vector for each element. We will now go into a bit more detail by first looking at the specific
implementation of the attention mechanism which is in the Transformer case the scaled dot product attention.
Scaled Dot Product Attention
The core concept behind self-attention is the scaled dot product attention. Our goal is to have an attention mechanism with which any element in a sequence can attend to any other while still being
efficient to compute. The dot product attention takes as input a set of queries \(Q\in\mathbb{R}^{T\times d_k}\), keys \(K\in\mathbb{R}^{T\times d_k}\) and values \(V\in\mathbb{R}^{T\times d_v}\)
where \(T\) is the sequence length, and \(d_k\) and \(d_v\) are the hidden dimensionality for queries/keys and values respectively. For simplicity, we neglect the batch dimension for now. The
attention value from element \(i\) to \(j\) is based on its similarity of the query \(Q_i\) and key \(K_j\), using the dot product as the similarity metric. In math, we calculate the dot product
attention as follows:
The matrix multiplication \(QK^T\) performs the dot product for every possible pair of queries and keys, resulting in a matrix of the shape \(T\times T\). Each row represents the attention logits for
a specific element \(i\) to all other elements in the sequence. On these, we apply a softmax and multiply with the value vector to obtain a weighted mean (the weights being determined by the
attention). Another perspective on this attention mechanism offers the computation graph which is visualized below (figure credit - Vaswani et al., 2017).
One aspect we haven’t discussed yet is the scaling factor of \(1/\sqrt{d_k}\). This scaling factor is crucial to maintain an appropriate variance of attention values after initialization. Remember
that we intialize our layers with the intention of having equal variance throughout the model, and hence, \(Q\) and \(K\) might also have a variance close to \(1\). However, performing a dot product
over two vectors with a variance \(\sigma^2\) results in a scalar having \(d_k\)-times higher variance:
\[q_i \sim \mathcal{N}(0,\sigma^2), k_i \sim \mathcal{N}(0,\sigma^2) \to \text{Var}\left(\sum_{i=1}^{d_k} q_i\cdot k_i\right) = \sigma^4\cdot d_k\]
If we do not scale down the variance back to \(\sim\sigma^2\), the softmax over the logits will already saturate to \(1\) for one random element and \(0\) for all others. The gradients through the
softmax will be close to zero so that we can’t learn the parameters appropriately. Note that the extra factor of \(\sigma^2\), i.e., having \(\sigma^4\) instead of \(\sigma^2\), is usually not an
issue, since we keep the original variance \(\sigma^2\) close to \(1\) anyways.
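We can verify this scaling argument empirically; the following quick sanity check is our own addition and not part of the original pipeline:

# The variance of a dot product of two standard-normal vectors grows linearly
# with d_k; dividing by sqrt(d_k) restores it to approximately 1
for d_k in [4, 64, 256]:
    q = torch.randn(10000, d_k)
    k = torch.randn(10000, d_k)
    dots = (q * k).sum(dim=-1)
    print(f"d_k={d_k:3d}: Var(qk)={dots.var().item():7.2f}, Var(qk/sqrt(d_k))={(dots / math.sqrt(d_k)).var().item():.2f}")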
The block Mask (opt.) in the diagram above represents the optional masking of specific entries in the attention matrix. This is for instance used if we stack multiple sequences with different lengths
into a batch. To still benefit from parallelization in PyTorch, we pad the sentences to the same length and mask out the padding tokens during the calculation of the attention values. This is usually
done by setting the respective attention logits to a very low value.
After we have discussed the details of the scaled dot product attention block, we can write a function below which computes the output features given the triple of queries, keys, and values:
def scaled_dot_product(q, k, v, mask=None):
    d_k = q.size()[-1]
    attn_logits = torch.matmul(q, k.transpose(-2, -1))
    attn_logits = attn_logits / math.sqrt(d_k)
    if mask is not None:
        attn_logits = attn_logits.masked_fill(mask == 0, -9e15)
    attention = F.softmax(attn_logits, dim=-1)
    values = torch.matmul(attention, v)
    return values, attention
Note that our code above supports any additional dimensionality in front of the sequence length so that we can also use it for batches. However, for a better understanding, let’s generate a few
random queries, keys, and value vectors, and calculate the attention outputs:
seq_len, d_k = 3, 2
q = torch.randn(seq_len, d_k)
k = torch.randn(seq_len, d_k)
v = torch.randn(seq_len, d_k)
values, attention = scaled_dot_product(q, k, v)
print("Q\n", q)
print("K\n", k)
print("V\n", v)
print("Values\n", values)
print("Attention\n", attention)
Q
 tensor([[ 0.3367,  0.1288],
        [ 0.2345,  0.2303],
        [-1.1229, -0.1863]])
K
 tensor([[ 2.2082, -0.6380],
        [ 0.4617,  0.2674],
        [ 0.5349,  0.8094]])
V
 tensor([[ 1.1103, -1.6898],
        [-0.9890,  0.9580],
        [ 1.3221,  0.8172]])
Values
 tensor([[ 0.5698, -0.1520],
        [ 0.5379, -0.0265],
        [ 0.2246,  0.5556]])
Attention
 tensor([[0.4028, 0.2886, 0.3086],
        [0.3538, 0.3069, 0.3393],
        [0.1303, 0.4630, 0.4067]])
Before continuing, make sure you can follow the calculation of the specific values here, and also check it by hand. It is important to fully understand how the scaled dot product attention is calculated.
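As a small addition of ours, we can also exercise the optional mask argument. Masking out the last sequence element, e.g. because it is a padding token, forces its attention weights to (almost) zero:

# Zero entries in the mask are filled with a very low logit before the softmax,
# so the last column of the attention matrix becomes (almost) zero
mask = torch.ones(seq_len, seq_len)
mask[:, -1] = 0
_, masked_attention = scaled_dot_product(q, k, v, mask=mask)
print(masked_attention)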
Multi-Head Attention
The scaled dot product attention allows a network to attend over a sequence. However, often there are multiple different aspects a sequence element wants to attend to, and a single weighted average
is not a good option for it. This is why we extend the attention mechanisms to multiple heads, i.e. multiple different query-key-value triplets on the same features. Specifically, given a query, key,
and value matrix, we transform those into \(h\) sub-queries, sub-keys, and sub-values, which we pass through the scaled dot product attention independently. Afterward, we concatenate the heads and
combine them with a final weight matrix. Mathematically, we can express this operation as:
\[\begin{split}\begin{split} \text{Multihead}(Q,K,V) & = \text{Concat}(\text{head}_1,...,\text{head}_h)W^{O}\\ \text{where } \text{head}_i & = \text{Attention}(QW_i^Q,KW_i^K, VW_i^V) \end{split}\end
We refer to this as Multi-Head Attention layer with the learnable parameters \(W_{1...h}^{Q}\in\mathbb{R}^{D\times d_k}\), \(W_{1...h}^{K}\in\mathbb{R}^{D\times d_k}\), \(W_{1...h}^{V}\in\mathbb{R}^
{D\times d_v}\), and \(W^{O}\in\mathbb{R}^{h\cdot d_v\times d_{out}}\) (\(D\) being the input dimensionality). Expressed in a computational graph, we can visualize it as below (figure credit -
Vaswani et al., 2017).
How are we applying a Multi-Head Attention layer in a neural network, where we don’t have an arbitrary query, key, and value vector as input? Looking at the computation graph above, a simple but
effective implementation is to set the current feature map in a NN, \(X\in\mathbb{R}^{B\times T\times d_{\text{model}}}\), as \(Q\), \(K\) and \(V\) (\(B\) being the batch size, \(T\) the sequence
length, \(d_{\text{model}}\) the hidden dimensionality of \(X\)). The consecutive weight matrices \(W^{Q}\), \(W^{K}\), and \(W^{V}\) can transform \(X\) to the corresponding feature vectors that
represent the queries, keys, and values of the input. Using this approach, we can implement the Multi-Head Attention module below.
# Helper function to support different mask shapes.
# Output shape supports (batch_size, number of heads, seq length, seq length)
# If 2D: broadcasted over batch size and number of heads
# If 3D: broadcasted over number of heads
# If 4D: leave as is
def expand_mask(mask):
    assert mask.ndim >= 2, "Mask must be at least 2-dimensional with seq_length x seq_length"
    if mask.ndim == 3:
        mask = mask.unsqueeze(1)
    while mask.ndim < 4:
        mask = mask.unsqueeze(0)
    return mask
class MultiheadAttention(nn.Module):

    def __init__(self, input_dim, embed_dim, num_heads):
        super().__init__()
        assert embed_dim % num_heads == 0, "Embedding dimension must be 0 modulo number of heads."

        self.embed_dim = embed_dim
        self.num_heads = num_heads
        self.head_dim = embed_dim // num_heads

        # Stack all weight matrices 1...h together for efficiency
        # Note that in many implementations you see "bias=False" which is optional
        self.qkv_proj = nn.Linear(input_dim, 3*embed_dim)
        self.o_proj = nn.Linear(embed_dim, embed_dim)

        self._reset_parameters()

    def _reset_parameters(self):
        # Original Transformer initialization, see PyTorch documentation
        nn.init.xavier_uniform_(self.qkv_proj.weight)
        self.qkv_proj.bias.data.fill_(0)
        nn.init.xavier_uniform_(self.o_proj.weight)
        self.o_proj.bias.data.fill_(0)

    def forward(self, x, mask=None, return_attention=False):
        batch_size, seq_length, _ = x.size()
        if mask is not None:
            mask = expand_mask(mask)
        qkv = self.qkv_proj(x)

        # Separate Q, K, V from linear output
        qkv = qkv.reshape(batch_size, seq_length, self.num_heads, 3*self.head_dim)
        qkv = qkv.permute(0, 2, 1, 3) # [Batch, Head, SeqLen, Dims]
        q, k, v = qkv.chunk(3, dim=-1)

        # Determine value outputs
        values, attention = scaled_dot_product(q, k, v, mask=mask)
        values = values.permute(0, 2, 1, 3) # [Batch, SeqLen, Head, Dims]
        values = values.reshape(batch_size, seq_length, self.embed_dim)
        o = self.o_proj(values)

        if return_attention:
            return o, attention
        else:
            return o
One crucial characteristic of the multi-head attention is that it is permutation-equivariant with respect to its inputs. This means that if we switch two input elements in the sequence, e.g. \(X_1\
leftrightarrow X_2\) (neglecting the batch dimension for now), the output is exactly the same besides the elements 1 and 2 switched. Hence, the multi-head attention is actually looking at the input
not as a sequence, but as a set of elements. This property makes the multi-head attention block and the Transformer architecture so powerful and widely applicable! But what if the order of the input
is actually important for solving the task, like language modeling? The answer is to encode the position in the input features, which we will take a closer look at later (topic Positional encodings
Before moving on to creating the Transformer architecture, we can compare the self-attention operation with our other common layer competitors for sequence data: convolutions and recurrent neural
networks. Below you can find a table by Vaswani et al. (2017) on the complexity per layer, the number of sequential operations, and maximum path length. The complexity is measured by the upper bound
of the number of operations to perform, while the maximum path length represents the maximum number of steps a forward or backward signal has to traverse to reach any other position. The lower this
length, the better gradient signals can backpropagate for long-range dependencies. Let’s take a look at the table below:
Layer Type | Complexity per Layer | Sequential Operations | Maximum Path Length
--- | --- | --- | ---
Self-Attention | \(O(n^2\cdot d)\) | \(O(1)\) | \(O(1)\)
Recurrent | \(O(n\cdot d^2)\) | \(O(n)\) | \(O(n)\)
Convolutional | \(O(k\cdot n\cdot d^2)\) | \(O(1)\) | \(O(\log_k(n))\)
Self-Attention (restricted) | \(O(r\cdot n\cdot d)\) | \(O(1)\) | \(O(n/r)\)

\(n\) is the sequence length, \(d\) is the representation dimension and \(k\) is the kernel size of convolutions. In contrast to recurrent networks, the self-attention layer can parallelize all its
operations making it much faster to execute for smaller sequence lengths. However, when the sequence length exceeds the hidden dimensionality, self-attention becomes more expensive than RNNs. One way
of reducing the computational cost for long sequences is by restricting the self-attention to a neighborhood of inputs to attend over, denoted by \(r\). Nevertheless, there has been recently a lot of
work on more efficient Transformer architectures that still allow long dependencies, of which you can find an overview in the paper by Tay et al. (2020) if interested.
Transformer Encoder
Next, we will look at how to apply the multi-head attention block inside the Transformer architecture. Originally, the Transformer model was designed for machine translation. Hence, it got an
encoder-decoder structure where the encoder takes as input the sentence in the original language and generates an attention-based representation. On the other hand, the decoder attends over the
encoded information and generates the translated sentence in an autoregressive manner, as in a standard RNN. While this structure is extremely useful for Sequence-to-Sequence tasks with the necessity
of autoregressive decoding, we will focus here on the encoder part. Many advances in NLP have been made using pure encoder-based Transformer models (if interested, models include the BERT-family, the
Vision Transformer, and more), and in our tutorial, we will also mainly focus on the encoder part. If you have understood the encoder architecture, the decoder is a very small step to implement as
well. The full Transformer architecture looks as follows (figure credit - Vaswani et al., 2017):
The encoder consists of \(N\) identical blocks that are applied in sequence. Taking as input \(x\), it is first passed through a Multi-Head Attention block as we have implemented above. The output is
added to the original input using a residual connection, and we apply a consecutive Layer Normalization on the sum. Overall, it calculates \(\text{LayerNorm}(x+\text{Multihead}(x,x,x))\) (\(x\) being
\(Q\), \(K\) and \(V\) input to the attention layer). The residual connection is crucial in the Transformer architecture for two reasons:
1. Similar to ResNets, Transformers are designed to be very deep. Some models contain more than 24 blocks in the encoder. Hence, the residual connections are crucial for enabling a smooth gradient
flow through the model.
2. Without the residual connection, the information about the original sequence is lost. Remember that the Multi-Head Attention layer ignores the position of elements in a sequence, and can only
learn it based on the input features. Removing the residual connections would mean that this information is lost after the first attention layer (after initialization), and with a randomly
initialized query and key vector, the output vector for position \(i\) has no relation to its original input. All outputs of the attention are likely to represent similar/same information, and
there is no chance for the model to distinguish which information came from which input element. An alternative option to residual connection would be to fix at least one head to focus on its
original input, but this is very inefficient and does not have the benefit of the improved gradient flow.
The Layer Normalization also plays an important role in the Transformer architecture as it enables faster training and provides small regularization. Additionally, it ensures that the features are in
a similar magnitude among the elements in the sequence. We are not using Batch Normalization because it depends on the batch size which is often small with Transformers (they require a lot of GPU
memory), and BatchNorm has been shown to perform particularly badly in language as the features of words tend to have a much higher variance (there are many very rare words which need to be considered for
a good distribution estimate).
Additionally to the Multi-Head Attention, a small fully connected feed-forward network is added to the model, which is applied to each position separately and identically. Specifically, the model
uses a Linear\(\to\)ReLU\(\to\)Linear MLP. The full transformation including the residual connection can be expressed as:
\[\begin{split}\begin{split} \text{FFN}(x) & = \max(0, xW_1+b_1)W_2 + b_2\\ x & = \text{LayerNorm}(x + \text{FFN}(x)) \end{split}\end{split}\]
This MLP adds extra complexity to the model and allows transformations on each sequence element separately. You can imagine that this allows the model to “post-process” the new information added by the
previous Multi-Head Attention, and prepare it for the next attention block. Usually, the inner dimensionality of the MLP is 2-8\(\times\) larger than \(d_{\text{model}}\), i.e. the dimensionality of
the original input \(x\). The general advantage of a wider layer instead of a narrow, multi-layer MLP is the faster, parallelizable execution.
Finally, after looking at all parts of the encoder architecture, we can start implementing it below. We first start by implementing a single encoder block. Additionally to the layers described above,
we will add dropout layers in the MLP and on the output of the MLP and Multi-Head Attention for regularization.
class EncoderBlock(nn.Module):

    def __init__(self, input_dim, num_heads, dim_feedforward, dropout=0.0):
        """
        Inputs:
            input_dim - Dimensionality of the input
            num_heads - Number of heads to use in the attention block
            dim_feedforward - Dimensionality of the hidden layer in the MLP
            dropout - Dropout probability to use in the dropout layers
        """
        super().__init__()

        # Attention layer
        self.self_attn = MultiheadAttention(input_dim, input_dim, num_heads)

        # Two-layer MLP
        self.linear_net = nn.Sequential(
            nn.Linear(input_dim, dim_feedforward),
            nn.Dropout(dropout),
            nn.ReLU(inplace=True),
            nn.Linear(dim_feedforward, input_dim)
        )

        # Layers to apply in between the main layers
        self.norm1 = nn.LayerNorm(input_dim)
        self.norm2 = nn.LayerNorm(input_dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, mask=None):
        # Attention part
        attn_out = self.self_attn(x, mask=mask)
        x = x + self.dropout(attn_out)
        x = self.norm1(x)

        # MLP part
        linear_out = self.linear_net(x)
        x = x + self.dropout(linear_out)
        x = self.norm2(x)

        return x
Based on this block, we can implement a module for the full Transformer encoder. Additionally to a forward function that iterates through the sequence of encoder blocks, we also provide a function
called get_attention_maps. The idea of this function is to return the attention probabilities for all Multi-Head Attention blocks in the encoder. This helps us in understanding, and in a sense,
explaining the model. However, the attention probabilities should be interpreted with a grain of salt as it does not necessarily reflect the true interpretation of the model (there is a series of
papers about this, including Attention is not Explanation and Attention is not not Explanation).
class TransformerEncoder(nn.Module):

    def __init__(self, num_layers, **block_args):
        super().__init__()
        self.layers = nn.ModuleList([EncoderBlock(**block_args) for _ in range(num_layers)])

    def forward(self, x, mask=None):
        for l in self.layers:
            x = l(x, mask=mask)
        return x

    def get_attention_maps(self, x, mask=None):
        attention_maps = []
        for l in self.layers:
            _, attn_map = l.self_attn(x, mask=mask, return_attention=True)
            attention_maps.append(attn_map)
            x = l(x)
        return attention_maps
Positional encoding
We have discussed before that the Multi-Head Attention block is permutation-equivariant, and cannot distinguish whether an input comes before another one in the sequence or not. In tasks like
language understanding, however, the position is important for interpreting the input words. The position information can therefore be added via the input features. We could learn an embedding for
every possible position, but this would not generalize to dynamic input sequence lengths. Hence, the better option is to use feature patterns that the network can identify from the features and
potentially generalize to larger sequences. The specific pattern chosen by Vaswani et al. are sine and cosine functions of different frequencies, as follows:
\[\begin{split}PE_{(pos,i)} = \begin{cases} \sin\left(\frac{pos}{10000^{i/d_{\text{model}}}}\right) & \text{if}\hspace{3mm} i \text{ mod } 2=0\\ \cos\left(\frac{pos}{10000^{(i-1)/d_{\text{model}}}}\
right) & \text{otherwise}\\ \end{cases}\end{split}\]
\(PE_{(pos,i)}\) represents the position encoding at position \(pos\) in the sequence, and hidden dimensionality \(i\). These values, concatenated for all hidden dimensions, are added to the original
input features (in the Transformer visualization above, see “Positional encoding”), and constitute the position information. We distinguish between even (\(i \text{ mod } 2=0\)) and uneven (\(i \text
{ mod } 2=1\)) hidden dimensionalities where we apply a sine/cosine respectively. The intuition behind this encoding is that you can represent \(PE_{(pos+k,:)}\) as a linear function of \(PE_
{(pos,:)}\), which might allow the model to easily attend to relative positions. The wavelengths in different dimensions range from \(2\pi\) to \(10000\cdot 2\pi\).
The positional encoding is implemented below. The code is taken from the PyTorch tutorial about Transformers on NLP and adjusted for our purposes.
class PositionalEncoding(nn.Module):

    def __init__(self, d_model, max_len=5000):
        """
        Inputs:
            d_model - Hidden dimensionality of the input.
            max_len - Maximum length of a sequence to expect.
        """
        super().__init__()

        # Create matrix of [SeqLen, HiddenDim] representing the positional encoding for max_len inputs
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        pe = pe.unsqueeze(0)

        # register_buffer => Tensor which is not a parameter, but should be part of the modules state.
        # Used for tensors that need to be on the same device as the module.
        # persistent=False tells PyTorch to not add the buffer to the state dict (e.g. when we save the model)
        self.register_buffer('pe', pe, persistent=False)

    def forward(self, x):
        x = x + self.pe[:, :x.size(1)]
        return x
To understand the positional encoding, we can visualize it below. We will generate an image of the positional encoding over hidden dimensionality and position in a sequence. Each pixel, therefore,
represents the change of the input feature we perform to encode the specific position. Let’s do it below.
encod_block = PositionalEncoding(d_model=48, max_len=96)
pe = encod_block.pe.squeeze().T.cpu().numpy()
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(8,3))
pos = ax.imshow(pe, cmap="RdGy", extent=(1,pe.shape[1]+1,pe.shape[0]+1,1))
fig.colorbar(pos, ax=ax)
ax.set_xlabel("Position in sequence")
ax.set_ylabel("Hidden dimension")
ax.set_title("Positional encoding over hidden dimensions")
ax.set_xticks([1]+[i*10 for i in range(1,1+pe.shape[1]//10)])
ax.set_yticks([1]+[i*10 for i in range(1,1+pe.shape[0]//10)])
You can clearly see the sine and cosine waves with different wavelengths that encode the position in the hidden dimensions. Specifically, we can look at the sine/cosine wave for each hidden dimension
separately, to get a better intuition of the pattern. Below we visualize the positional encoding for the hidden dimensions \(1\), \(2\), \(3\) and \(4\).
fig, ax = plt.subplots(2, 2, figsize=(12,4))
ax = [a for a_list in ax for a in a_list]
for i in range(len(ax)):
    ax[i].plot(np.arange(1,17), pe[i,:16], color=f'C{i}', marker="o", markersize=6, markeredgecolor="black")
    ax[i].set_title(f"Encoding in hidden dimension {i+1}")
    ax[i].set_xlabel("Position in sequence", fontsize=10)
    ax[i].set_ylabel("Positional encoding", fontsize=10)
    ax[i].tick_params(axis='both', which='major', labelsize=10)
    ax[i].tick_params(axis='both', which='minor', labelsize=8)
    ax[i].set_ylim(-1.2, 1.2)
As we can see, the patterns for hidden dimensions \(1\) and \(2\) only differ in the starting angle. The wavelength is \(2\pi\), hence the repetition after position \(6\). Hidden dimensions \(3\) and \(4\) share the next frequency, with a wavelength roughly one and a half times as long.
Learning rate warm-up
One commonly used technique for training a Transformer is learning rate warm-up. This means that we gradually increase the learning rate from 0 on to our originally specified learning rate in the
first few iterations. Thus, we slowly start learning instead of taking very large steps from the beginning. In fact, training a deep Transformer without learning rate warm-up can make the model
diverge and achieve a much worse performance on training and testing. Take for instance the following plot by Liu et al. (2019) comparing Adam-vanilla (i.e. Adam without warm-up) vs Adam with a warm-up:
Clearly, the warm-up is a crucial hyperparameter in the Transformer architecture. Why is it so important? There are currently two common explanations. Firstly, Adam uses the bias correction factors
which however can lead to a higher variance in the adaptive learning rate during the first iterations. Improved optimizers like RAdam have been shown to overcome this issue, not requiring warm-up for
training Transformers. Secondly, the iteratively applied Layer Normalization across layers can lead to very high gradients during the first iterations, which can be solved by using Pre-Layer
Normalization (similar to Pre-Activation ResNet), or replacing Layer Normalization by other techniques (Adaptive Normalization, Power Normalization).
Nevertheless, many applications and papers still use the original Transformer architecture with Adam, because warm-up is a simple, yet effective way of solving the gradient problem in the first
iterations. There are many different schedulers we could use. For instance, the original Transformer paper used an exponential decay scheduler with a warm-up. However, the currently most popular
scheduler is the cosine warm-up scheduler, which combines warm-up with a cosine-shaped learning rate decay. We can implement it below, and visualize the learning rate factor over epochs.
class CosineWarmupScheduler(optim.lr_scheduler._LRScheduler):

    def __init__(self, optimizer, warmup, max_iters):
        self.warmup = warmup
        self.max_num_iters = max_iters
        super().__init__(optimizer)

    def get_lr(self):
        lr_factor = self.get_lr_factor(epoch=self.last_epoch)
        return [base_lr * lr_factor for base_lr in self.base_lrs]

    def get_lr_factor(self, epoch):
        lr_factor = 0.5 * (1 + np.cos(np.pi * epoch / self.max_num_iters))
        if epoch <= self.warmup:
            lr_factor *= epoch * 1.0 / self.warmup
        return lr_factor
# Needed for initializing the lr scheduler
p = nn.Parameter(torch.empty(4,4))
optimizer = optim.Adam([p], lr=1e-3)
lr_scheduler = CosineWarmupScheduler(optimizer=optimizer, warmup=100, max_iters=2000)
# Plotting
epochs = list(range(2000))
plt.plot(epochs, [lr_scheduler.get_lr_factor(e) for e in epochs])
plt.ylabel("Learning rate factor")
plt.xlabel("Iterations (in batches)")
plt.title("Cosine Warm-up Learning Rate Scheduler")
In the first 100 iterations, we increase the learning rate factor from 0 to 1, whereas for all later iterations, we decay it using the cosine wave. Pre-implementations of this scheduler can be found
in the popular NLP Transformer library huggingface.
PyTorch Lightning Module
Finally, we can embed the Transformer architecture into a PyTorch lightning module. From Tutorial 5, you know that PyTorch Lightning simplifies our training and test code, as well as structures the
code nicely in separate functions. We will implement a template for a classifier based on the Transformer encoder. Thereby, we have a prediction output per sequence element. If we would need a
classifier over the whole sequence, the common approach is to add an additional [CLS] token to the sequence, representing the classifier token. However, here we focus on tasks where we have an output
per element.
Additionally to the Transformer architecture, we add a small input network (maps input dimensions to model dimensions), the positional encoding, and an output network (transforms output encodings to
predictions). We also add the learning rate scheduler, which takes a step each iteration instead of once per epoch. This is needed for the warmup and the smooth cosine decay. The training,
validation, and test step is left empty for now and will be filled for our task-specific models.
class TransformerPredictor(pl.LightningModule):

    def __init__(self, input_dim, model_dim, num_classes, num_heads, num_layers, lr, warmup, max_iters, dropout=0.0, input_dropout=0.0):
        """
        Inputs:
            input_dim - Hidden dimensionality of the input
            model_dim - Hidden dimensionality to use inside the Transformer
            num_classes - Number of classes to predict per sequence element
            num_heads - Number of heads to use in the Multi-Head Attention blocks
            num_layers - Number of encoder blocks to use.
            lr - Learning rate in the optimizer
            warmup - Number of warmup steps. Usually between 50 and 500
            max_iters - Number of maximum iterations the model is trained for. This is needed for the CosineWarmup scheduler
            dropout - Dropout to apply inside the model
            input_dropout - Dropout to apply on the input features
        """
        super().__init__()
        self.save_hyperparameters()
        self._create_model()

    def _create_model(self):
        # Input dim -> Model dim
        self.input_net = nn.Sequential(
            nn.Dropout(self.hparams.input_dropout),
            nn.Linear(self.hparams.input_dim, self.hparams.model_dim)
        )
        # Positional encoding for sequences
        self.positional_encoding = PositionalEncoding(d_model=self.hparams.model_dim)
        # Transformer
        self.transformer = TransformerEncoder(num_layers=self.hparams.num_layers,
                                              input_dim=self.hparams.model_dim,
                                              dim_feedforward=2*self.hparams.model_dim,
                                              num_heads=self.hparams.num_heads,
                                              dropout=self.hparams.dropout)
        # Output classifier per sequence element
        self.output_net = nn.Sequential(
            nn.Linear(self.hparams.model_dim, self.hparams.model_dim),
            nn.LayerNorm(self.hparams.model_dim),
            nn.ReLU(inplace=True),
            nn.Dropout(self.hparams.dropout),
            nn.Linear(self.hparams.model_dim, self.hparams.num_classes)
        )

    def forward(self, x, mask=None, add_positional_encoding=True):
        """
        Inputs:
            x - Input features of shape [Batch, SeqLen, input_dim]
            mask - Mask to apply on the attention outputs (optional)
            add_positional_encoding - If True, we add the positional encoding to the input.
                                      Might not be desired for some tasks.
        """
        x = self.input_net(x)
        if add_positional_encoding:
            x = self.positional_encoding(x)
        x = self.transformer(x, mask=mask)
        x = self.output_net(x)
        return x

    def get_attention_maps(self, x, mask=None, add_positional_encoding=True):
        """
        Function for extracting the attention matrices of the whole Transformer for a single batch.
        Input arguments same as the forward pass.
        """
        x = self.input_net(x)
        if add_positional_encoding:
            x = self.positional_encoding(x)
        attention_maps = self.transformer.get_attention_maps(x, mask=mask)
        return attention_maps

    def configure_optimizers(self):
        optimizer = optim.Adam(self.parameters(), lr=self.hparams.lr)

        # Apply lr scheduler per step
        lr_scheduler = CosineWarmupScheduler(optimizer,
                                             warmup=self.hparams.warmup,
                                             max_iters=self.hparams.max_iters)
        return [optimizer], [{'scheduler': lr_scheduler, 'interval': 'step'}]

    def training_step(self, batch, batch_idx):
        raise NotImplementedError

    def validation_step(self, batch, batch_idx):
        raise NotImplementedError

    def test_step(self, batch, batch_idx):
        raise NotImplementedError
Experiments
After having finished the implementation of the Transformer architecture, we can start experimenting and apply it to various tasks. In this notebook, we will focus on two tasks: parallel
Sequence-to-Sequence, and set anomaly detection. The two tasks focus on different properties of the Transformer architecture, and we go through them below.
Sequence to Sequence
A Sequence-to-Sequence task represents a task where the input and the output is a sequence, not necessarily of the same length. Popular tasks in this domain include machine translation and
summarization. For this, we usually have a Transformer encoder for interpreting the input sequence, and a decoder for generating the output in an autoregressive manner. Here, however, we will go back
to a much simpler example task and use only the encoder. Given a sequence of \(N\) numbers between \(0\) and \(M\), the task is to reverse the input sequence. In Numpy notation, if our input is \(x\)
, the output should be \(x\)[::-1]. Although this task sounds very simple, RNNs can have issues with it because the task requires long-term dependencies. Transformers are built to support these, and
hence, we expect them to perform very well.
First, let’s create a dataset class below.
class ReverseDataset(data.Dataset):

    def __init__(self, num_categories, seq_len, size):
        super().__init__()
        self.num_categories = num_categories
        self.seq_len = seq_len
        self.size = size

        self.data = torch.randint(self.num_categories, size=(self.size, self.seq_len))

    def __len__(self):
        return self.size

    def __getitem__(self, idx):
        inp_data = self.data[idx]
        labels = torch.flip(inp_data, dims=(0,))
        return inp_data, labels
We create an arbitrary number of random sequences of numbers between 0 and num_categories-1. The label is simply the tensor flipped over the sequence dimension. We can create the corresponding data
loaders below.
dataset = partial(ReverseDataset, 10, 16)
train_loader = data.DataLoader(dataset(50000), batch_size=128, shuffle=True, drop_last=True, pin_memory=True)
val_loader = data.DataLoader(dataset(1000), batch_size=128)
test_loader = data.DataLoader(dataset(10000), batch_size=128)
Let’s look at an arbitrary sample of the dataset:
inp_data, labels = train_loader.dataset[0]
print("Input data:", inp_data)
print("Labels: ", labels)
Input data: tensor([9, 6, 2, 0, 6, 2, 7, 9, 7, 3, 3, 4, 3, 7, 0, 9])
Labels: tensor([9, 0, 7, 3, 4, 3, 3, 7, 9, 7, 2, 6, 0, 2, 6, 9])
During training, we pass the input sequence through the Transformer encoder and predict the output for each input token. We use the standard Cross-Entropy loss to perform this. Every number is
represented as a one-hot vector. Remember that representing the categories as single scalars decreases the expressiveness of the model extremely as \(0\) and \(1\) are no more closely related than \(0\)
and \(9\) in our example. An alternative to a one-hot vector is using a learned embedding vector as it is provided by the PyTorch module nn.Embedding. However, using a one-hot vector with an
additional linear layer as in our case has the same effect as an embedding layer (self.input_net maps one-hot vector to a dense vector, where each row of the weight matrix represents the embedding
for a specific category).
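The claimed equivalence is easy to verify numerically; the check below is our own addition (our input_net additionally has a bias term, which merely adds a constant shared across all categories):

# A bias-free Linear layer applied to one-hot vectors performs exactly the
# same lookup as nn.Embedding when the two share their weights
num_categories, hidden_dim = 10, 4
linear = nn.Linear(num_categories, hidden_dim, bias=False)
embedding = nn.Embedding(num_categories, hidden_dim)
embedding.weight.data = linear.weight.data.T # same parameters, transposed layout
tokens = torch.randint(num_categories, size=(3,))
one_hot = F.one_hot(tokens, num_classes=num_categories).float()
print(torch.allclose(linear(one_hot), embedding(tokens))) # => True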
To implement the training dynamic, we create a new class inheriting from TransformerPredictor and overwriting the training, validation and test step functions.
class ReversePredictor(TransformerPredictor):

    def _calculate_loss(self, batch, mode="train"):
        # Fetch data and transform categories to one-hot vectors
        inp_data, labels = batch
        inp_data = F.one_hot(inp_data, num_classes=self.hparams.num_classes).float()

        # Perform prediction and calculate loss and accuracy
        preds = self.forward(inp_data, add_positional_encoding=True)
        loss = F.cross_entropy(preds.view(-1,preds.size(-1)), labels.view(-1))
        acc = (preds.argmax(dim=-1) == labels).float().mean()

        # Logging
        self.log(f"{mode}_loss", loss)
        self.log(f"{mode}_acc", acc)
        return loss, acc

    def training_step(self, batch, batch_idx):
        loss, _ = self._calculate_loss(batch, mode="train")
        return loss

    def validation_step(self, batch, batch_idx):
        _ = self._calculate_loss(batch, mode="val")

    def test_step(self, batch, batch_idx):
        _ = self._calculate_loss(batch, mode="test")
Finally, we can create a training function similar to the one we have seen in Tutorial 5 for PyTorch Lightning. We create a pl.Trainer object, running for \(N\) epochs, logging in TensorBoard, and
saving our best model based on the validation accuracy. Afterward, we test our model on the test set. An additional parameter we pass to the trainer here is gradient_clip_val. This clips the norm of the
gradients for all parameters before taking an optimizer step and prevents the model from diverging if we obtain very high gradients at, for instance, sharp loss surfaces (see many good blog posts on
gradient clipping, like DeepAI glossary). For Transformers, gradient clipping can help to further stabilize the training during the first few iterations, and also afterward. In plain PyTorch, you can
apply gradient clipping via torch.nn.utils.clip_grad_norm_(...) (see documentation). The clip value is usually between 0.5 and 10, depending on how aggressively you want to clip large gradients.
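For reference, here is a minimal plain-PyTorch sketch of where the clipping call sits in a training step. This is illustrative only: model, optimizer, and loss are assumed to already exist, and max_norm=5.0 is an arbitrary example value within the usual range.
optimizer.zero_grad()
loss.backward()
# Rescale all gradients together so that their total norm does not exceed max_norm
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)
optimizer.step()
After having explained this, let's implement the training function: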
def train_reverse(**kwargs):
    # Create a PyTorch Lightning trainer with the generation callback
    root_dir = os.path.join(CHECKPOINT_PATH, "ReverseTask")
    os.makedirs(root_dir, exist_ok=True)
    trainer = pl.Trainer(default_root_dir=root_dir,
                         callbacks=[ModelCheckpoint(save_weights_only=True, mode="max", monitor="val_acc")],
                         accelerator="gpu" if str(device).startswith("cuda") else "cpu",
                         devices=1,
                         max_epochs=10,          # illustrative value; adjust to taste
                         gradient_clip_val=5)    # illustrative clip value within the usual 0.5-10 range
    trainer.logger._default_hp_metric = None  # Optional logging argument that we don't need

    # Check whether pretrained model exists. If yes, load it and skip training
    pretrained_filename = os.path.join(CHECKPOINT_PATH, "ReverseTask.ckpt")
    if os.path.isfile(pretrained_filename):
        print("Found pretrained model, loading...")
        model = ReversePredictor.load_from_checkpoint(pretrained_filename)
    else:
        model = ReversePredictor(max_iters=trainer.max_epochs*len(train_loader), **kwargs)
        trainer.fit(model, train_loader, val_loader)

    # Test best model on validation and test set
    val_result = trainer.test(model, val_loader, verbose=False)
    test_result = trainer.test(model, test_loader, verbose=False)
    result = {"test_acc": test_result[0]["test_acc"], "val_acc": val_result[0]["test_acc"]}

    model = model.to(device)
    return model, result
Finally, we can train the model. In this setup, we will use a single encoder block and a single head in the Multi-Head Attention. This is chosen because of the simplicity of the task, and in this
case, the attention can actually be interpreted as an “explanation” of the predictions (compared to the other papers above dealing with deep Transformers).
# The single layer and single head match the setup described above;
# the remaining hyperparameter values are restored here and should be taken as illustrative.
reverse_model, reverse_result = train_reverse(input_dim=train_loader.dataset.num_categories,
                                              model_dim=32,
                                              num_heads=1,
                                              num_classes=train_loader.dataset.num_categories,
                                              num_layers=1,
                                              dropout=0.0,
                                              lr=5e-4,
                                              warmup=50)
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Found pretrained model, loading...
/home/phillip/anaconda3/envs/nlp1/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py:45: UserWarning: The dataloader, test dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 16 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
warnings.warn(*args, **kwargs)
The warning from PyTorch Lightning regarding the number of workers can be ignored for now. As the dataset is so simple and __getitem__ finishes in negligible time, we don't need subprocesses to provide us the data (in fact, more workers can slow down the training, as we have communication overhead among processes/threads). First, let's print the results:
print(f"Val accuracy: {(100.0 * reverse_result['val_acc']):4.2f}%")
print(f"Test accuracy: {(100.0 * reverse_result['test_acc']):4.2f}%")
Val accuracy: 100.00%
Test accuracy: 100.00%
As we would have expected, the Transformer can correctly solve the task. However, what does the attention in the Multi-Head Attention block look like for an arbitrary input? Let's try to visualize it below.
data_input, labels = next(iter(val_loader))
inp_data = F.one_hot(data_input, num_classes=reverse_model.hparams.num_classes).float()
inp_data = inp_data.to(device)
attention_maps = reverse_model.get_attention_maps(inp_data)
The object attention_maps is a list of length \(N\) where \(N\) is the number of layers. Each element is a tensor of shape [Batch, Heads, SeqLen, SeqLen], which we can verify below.
attention_maps[0].shape
torch.Size([128, 1, 16, 16])
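As an additional sanity check (hypothetical, not part of the original notebook), every row of an attention map should sum to 1, since the softmax turns each query's attention weights into a probability distribution over the keys:
row_sums = attention_maps[0].sum(dim=-1)
print(torch.allclose(row_sums, torch.ones_like(row_sums)))  # True - each row is a valid distribution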
Next, we will write a plotting function that takes as input the sequences, attention maps, and an index indicating for which batch element we want to visualize the attention map. We will create a
plot where over rows, we have different layers, while over columns, we show the different heads. Remember that the softmax has been applied for each row separately.
def plot_attention_maps(input_data, attn_maps, idx=0):
    if input_data is not None:
        input_data = input_data[idx].detach().cpu().numpy()
    else:
        input_data = np.arange(attn_maps[0][idx].shape[-1])
    attn_maps = [m[idx].detach().cpu().numpy() for m in attn_maps]

    num_heads = attn_maps[0].shape[0]
    num_layers = len(attn_maps)
    seq_len = input_data.shape[0]
    fig_size = 4 if num_heads == 1 else 3
    fig, ax = plt.subplots(num_layers, num_heads, figsize=(num_heads*fig_size, num_layers*fig_size))
    if num_layers == 1:
        ax = [ax]
    if num_heads == 1:
        ax = [[a] for a in ax]
    for row in range(num_layers):
        for column in range(num_heads):
            ax[row][column].imshow(attn_maps[row][column], origin='lower', vmin=0)
            ax[row][column].set_xticks(list(range(seq_len)))
            ax[row][column].set_xticklabels(input_data.tolist())
            ax[row][column].set_yticks(list(range(seq_len)))
            ax[row][column].set_yticklabels(input_data.tolist())
            ax[row][column].set_title(f"Layer {row+1}, Head {column+1}")
    fig.subplots_adjust(hspace=0.5)
    plt.show()
Finally, we can plot the attention map of our trained Transformer on the reverse task:
plot_attention_maps(data_input, attention_maps, idx=0)
The model has learned to attend to the token at the flipped index of its own position. Hence, it actually does what we intended. However, we also see that it pays some attention to values close to the flipped index. This is because the model doesn't need perfect, hard attention to solve this problem, but is fine with this approximate, noisy attention map. The attention to close-by indices is caused by the similarity of the positional encodings at nearby positions, which is exactly what we intended when designing the positional encoding.
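To see where this similarity comes from, one can compute a sinusoidal positional encoding from scratch and compare positions via their dot products. The sketch below is self-contained and illustrative; it does not touch the trained model:
import math
import torch

def sinusoidal_pe(seq_len, d_model):
    pe = torch.zeros(seq_len, d_model)
    position = torch.arange(seq_len, dtype=torch.float).unsqueeze(1)
    div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
    pe[:, 0::2] = torch.sin(position * div_term)
    pe[:, 1::2] = torch.cos(position * div_term)
    return pe

pe = sinusoidal_pe(16, 32)
sim = pe @ pe.T  # dot-product similarity between all pairs of positions
print(sim[0, :5])  # similarity is highest at the position itself and decays (with oscillation) for farther positions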
Set Anomaly Detection
Besides sequences, sets are another data structure that is relevant for many applications. In contrast to sequences, the elements in a set are unordered. RNNs can only be applied to sets by assuming an order in the data, which biases the model towards a non-existent order. Vinyals et al. (2015) and other papers have shown that the assumed order can have a significant impact on the model's performance, and hence, we should avoid using RNNs on sets. Ideally, our model should be permutation-equivariant/invariant such that the output is the same no matter how we sort the elements in a set.
Transformers offer the perfect architecture for this as the Multi-Head Attention is permutation-equivariant, and thus, outputs the same values no matter in what order we enter the inputs (inputs and
outputs are permuted equally). The task we are looking at for sets is Set Anomaly Detection which means that we try to find the element(s) in a set that does not fit the others. In the research
community, the common application of anomaly detection is performed on a set of images, where \(N-1\) images belong to the same category/have the same high-level features while one belongs to another
category. Note that category does not necessarily have to relate to a class in a standard classification problem, but could be the combination of multiple features. For instance, on a face dataset,
this could be people with glasses, male, beard, etc. An example of distinguishing different animals can be seen below. The first four images show foxes, while the last represents a different animal.
We want to recognize that the last image shows a different animal, but it is not relevant which class of animal it is.
In this tutorial, we will use the CIFAR100 dataset. CIFAR100 has 100 classes with 600 images each, at a resolution of 32x32, similar to CIFAR10. The larger number of classes requires the model to attend to specific features in the images instead of coarse features as in CIFAR10, making the task harder. We will show the model a set of 9 images of one class, and 1 image from another class. The task is to find the image that is from a different class than the others. Using the raw images directly as input to the Transformer is not a good idea, because, unlike a CNN, the Transformer is not translation invariant, and it would first need to learn to detect image features from high-dimensional input. Instead, we will use a pre-trained ResNet34 model from the torchvision package to obtain
high-level, low-dimensional features of the images. The ResNet model has been pre-trained on the ImageNet dataset which contains 1 million images of 1k classes and varying resolutions. However,
during training and testing, the images are usually scaled to a resolution of 224x224, and hence we rescale our CIFAR images to this resolution as well. Below, we will load the dataset, and prepare
the data for being processed by the ResNet model.
# ImageNet statistics
DATA_MEANS = np.array([0.485, 0.456, 0.406])
DATA_STD = np.array([0.229, 0.224, 0.225])
# As torch tensors for later preprocessing
TORCH_DATA_MEANS = torch.from_numpy(DATA_MEANS).view(1,3,1,1)
TORCH_DATA_STD = torch.from_numpy(DATA_STD).view(1,3,1,1)
# Resize to 224x224, and normalize to ImageNet statistic
transform = transforms.Compose([transforms.Resize((224,224)),
                                transforms.ToTensor(),  # convert PIL images to tensors before normalizing
                                transforms.Normalize(DATA_MEANS, DATA_STD)])
# Loading the training dataset.
train_set = CIFAR100(root=DATASET_PATH, train=True, transform=transform, download=True)
# Loading the test set
test_set = CIFAR100(root=DATASET_PATH, train=False, transform=transform, download=True)
Files already downloaded and verified
Files already downloaded and verified
Next, we want to run the pre-trained ResNet model on the images, and extract the features before the classification layer. These are the most high-level features, and should sufficiently describe the
images. CIFAR100 has some similarity to ImageNet, and thus we are not retraining the ResNet model in any form. However, if you would want to get the best performance and have a very large dataset, it
would be better to add the ResNet to the computation graph during training and finetune its parameters as well. As we don’t have a large enough dataset and want to train our model efficiently, we
will extract the features beforehand. Let’s load and prepare the model below.
import os
os.environ["TORCH_HOME"] = CHECKPOINT_PATH
pretrained_model = torchvision.models.resnet34(weights='IMAGENET1K_V1')
# Remove classification layer
# In some models, it is called "fc", others have "classifier"
# Setting both to an empty sequential represents an identity map of the final features.
pretrained_model.fc = nn.Sequential()
pretrained_model.classifier = nn.Sequential()
# To GPU
pretrained_model = pretrained_model.to(device)
# Only eval, no gradient required
for p in pretrained_model.parameters():
p.requires_grad = False
We will now write an extraction function for the features below. This cell requires access to a GPU, as the model is rather deep and the images relatively large. The GPUs on GoogleColab are sufficient, but running this cell can take 2-3 minutes. Once it has run, the features are saved to disk so they don't have to be recalculated every time you run the notebook. However, this requires >150MB of free disk space. Hence, it is recommended to run this only on a local computer if you have enough free disk space and a GPU (GoogleColab is fine for this). If you do not have a GPU, you can download the features from the GoogleDrive folder.
def extract_features(dataset, save_file):
    if not os.path.isfile(save_file):
        data_loader = data.DataLoader(dataset, batch_size=128, shuffle=False, drop_last=False, num_workers=4)
        extracted_features = []
        for imgs, _ in tqdm(data_loader):
            imgs = imgs.to(device)
            feats = pretrained_model(imgs)
            extracted_features.append(feats)  # collect batch features before concatenating
        extracted_features = torch.cat(extracted_features, dim=0)
        extracted_features = extracted_features.detach().cpu()
        torch.save(extracted_features, save_file)
    else:
        extracted_features = torch.load(save_file)
    return extracted_features
train_feat_file = os.path.join(CHECKPOINT_PATH, "train_set_features.tar")
train_set_feats = extract_features(train_set, train_feat_file)
test_feat_file = os.path.join(CHECKPOINT_PATH, "test_set_features.tar")
test_feats = extract_features(test_set, test_feat_file)
Let’s verify the feature shapes below. The training should have 50k elements, and the test 10k images. The feature dimension is 512 for the ResNet34. If you experiment with other models, you likely
see a different feature dimension.
print("Train:", train_set_feats.shape)
print("Test: ", test_feats.shape)
Train: torch.Size([50000, 512])
Test: torch.Size([10000, 512])
As usual, we want to create a validation set to detect when we should stop training. In this case, we will split the training set into 90% training and 10% validation. However, the difficulty here is that we need to ensure the validation set has the same number of images for all 100 labels. Otherwise, we would have a class imbalance, which is not good for creating the image sets. Hence, we take 10% of the images from each class and move them into the validation set. The code below does exactly this.
## Split train into train+val
# Get labels from train set
labels = train_set.targets
# Get indices of images per class
labels = torch.LongTensor(labels)
num_labels = labels.max()+1
sorted_indices = torch.argsort(labels).reshape(num_labels, -1) # [classes, num_imgs per class]
# Determine number of validation images per class
num_val_exmps = sorted_indices.shape[1] // 10
# Get image indices for validation and training
val_indices = sorted_indices[:,:num_val_exmps].reshape(-1)
train_indices = sorted_indices[:,num_val_exmps:].reshape(-1)
# Group corresponding image features and labels
train_feats, train_labels = train_set_feats[train_indices], labels[train_indices]
val_feats, val_labels = train_set_feats[val_indices], labels[val_indices]
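As a quick, hypothetical sanity check (not in the original notebook), every class should contribute the same number of images to the validation split:
print(val_labels.bincount())  # 100 entries, each equal to num_val_exmps (50 for CIFAR100's 500 train images per class)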
Now we can prepare a dataset class for the set anomaly task. We define an epoch to be one pass in which each image has appeared exactly once as the "anomaly". Hence, the length of the dataset is the number of images in it. For the training set, each time we access an item with __getitem__, we sample a random class that is different from the class of the image at the corresponding index idx. In a second step, we sample \(N-1\) images of this sampled class. The set of 10 images is finally returned. The randomness in __getitem__ lets us see a slightly different set during each iteration. However, we can't use the same strategy for the test set, as we want the test dataset to be the same every time we iterate over it. Hence, we sample the sets in the __init__ method and return those in __getitem__. The code below implements exactly this dynamic.
class SetAnomalyDataset(data.Dataset):

    def __init__(self, img_feats, labels, set_size=10, train=True):
        """
        Inputs:
            img_feats - Tensor of shape [num_imgs, img_dim]. Represents the high-level features.
            labels - Tensor of shape [num_imgs], containing the class labels for the images
            set_size - Number of elements in a set. N-1 are sampled from one class, and one from another one.
            train - If True, a new set will be sampled every time __getitem__ is called.
        """
        self.img_feats = img_feats
        self.labels = labels
        self.set_size = set_size-1  # The set size is here the size of correct images
        self.train = train

        # Tensors with indices of the images per class
        self.num_labels = labels.max()+1
        self.img_idx_by_label = torch.argsort(self.labels).reshape(self.num_labels, -1)

        if not train:
            self.test_sets = self._create_test_sets()

    def _create_test_sets(self):
        # Pre-generates the sets for each image for the test set
        num_imgs = self.img_feats.shape[0]
        test_sets = [self.sample_img_set(self.labels[idx]) for idx in range(num_imgs)]
        test_sets = torch.stack(test_sets, dim=0)
        return test_sets

    def sample_img_set(self, anomaly_label):
        """
        Samples a new set of images, given the label of the anomaly.
        The sampled images come from a different class than anomaly_label.
        """
        # Sample class from 0,...,num_classes-1 while skipping anomaly_label as class
        set_label = np.random.randint(self.num_labels-1)
        if set_label >= anomaly_label:
            set_label += 1

        # Sample images from the class determined above
        img_indices = np.random.choice(self.img_idx_by_label.shape[1], size=self.set_size, replace=False)
        img_indices = self.img_idx_by_label[set_label, img_indices]
        return img_indices

    def __len__(self):
        return self.img_feats.shape[0]

    def __getitem__(self, idx):
        anomaly = self.img_feats[idx]
        if self.train:  # If train => sample
            img_indices = self.sample_img_set(self.labels[idx])
        else:  # If test => use pre-generated ones
            img_indices = self.test_sets[idx]

        # Concatenate images. The anomaly is always the last image for simplicity
        img_set = torch.cat([self.img_feats[img_indices], anomaly[None]], dim=0)
        indices = torch.cat([img_indices, torch.LongTensor([idx])], dim=0)
        label = img_set.shape[0]-1

        # We return the indices of the images for visualization purposes. "Label" is the index of the anomaly
        return img_set, indices, label
Next, we can set up our datasets and data loaders below. Here, we will use a set size of 10, i.e. 9 images from one category + 1 anomaly. Feel free to change it if you want to experiment with the set size.
SET_SIZE = 10
test_labels = torch.LongTensor(test_set.targets)
train_anom_dataset = SetAnomalyDataset(train_feats, train_labels, set_size=SET_SIZE, train=True)
val_anom_dataset = SetAnomalyDataset(val_feats, val_labels, set_size=SET_SIZE, train=False)
test_anom_dataset = SetAnomalyDataset(test_feats, test_labels, set_size=SET_SIZE, train=False)
train_anom_loader = data.DataLoader(train_anom_dataset, batch_size=64, shuffle=True, drop_last=True, num_workers=4, pin_memory=True)
val_anom_loader = data.DataLoader(val_anom_dataset, batch_size=64, shuffle=False, drop_last=False, num_workers=4)
test_anom_loader = data.DataLoader(test_anom_dataset, batch_size=64, shuffle=False, drop_last=False, num_workers=4)
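Before visualizing anything, we can fetch a single set and inspect its shape (an illustrative check, not part of the original notebook):
img_set, indices, label = train_anom_dataset[0]
print(img_set.shape)  # torch.Size([10, 512]) - 9 same-class feature vectors plus 1 anomaly
print(label)          # 9 - the anomaly is always placed at the last position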
To understand the dataset a little better, we can plot below a few sets from the test dataset. Each row shows a different input set, where the first 9 are from the same class.
def visualize_exmp(indices, orig_dataset):
    images = [orig_dataset[idx][0] for idx in indices.reshape(-1)]
    images = torch.stack(images, dim=0)
    images = images * TORCH_DATA_STD + TORCH_DATA_MEANS

    img_grid = torchvision.utils.make_grid(images, nrow=SET_SIZE, normalize=True, pad_value=0.5, padding=16)
    img_grid = img_grid.permute(1, 2, 0)

    plt.title("Anomaly examples on CIFAR100")
    plt.imshow(img_grid)
    plt.axis('off')
    plt.show()
_, indices, _ = next(iter(test_anom_loader))
visualize_exmp(indices[:4], test_set)
We can already see that for some sets the task might be easier than for others. Difficulties can especially arise if the anomaly is in a different but visually similar class (e.g. train vs bus, flour vs worm, etc.).
After having prepared the data, we can take a closer look at the model. Here, we perform a classification over the whole set. For the prediction to be permutation-equivariant, we output one logit per image. Over these logits, we apply a softmax and train the anomaly image to have the highest score/probability. This is a bit different from a standard classification layer, as the softmax is applied over images, not over output classes in the classical sense. However, if we swap two images in their position, we effectively swap their position in the output softmax as well. Hence, the prediction is equivariant with respect to the input. We implement this idea below in a subclass of the Transformer Lightning module.
class AnomalyPredictor(TransformerPredictor):

    def _calculate_loss(self, batch, mode="train"):
        img_sets, _, labels = batch
        preds = self.forward(img_sets, add_positional_encoding=False)  # No positional encodings as it is a set, not a sequence!
        preds = preds.squeeze(dim=-1)  # Shape: [Batch_size, set_size]
        loss = F.cross_entropy(preds, labels)  # Softmax/CE over set dimension
        acc = (preds.argmax(dim=-1) == labels).float().mean()
        self.log(f"{mode}_loss", loss)
        self.log(f"{mode}_acc", acc, on_step=False, on_epoch=True)
        return loss, acc

    def training_step(self, batch, batch_idx):
        loss, _ = self._calculate_loss(batch, mode="train")
        return loss

    def validation_step(self, batch, batch_idx):
        _ = self._calculate_loss(batch, mode="val")

    def test_step(self, batch, batch_idx):
        _ = self._calculate_loss(batch, mode="test")
Finally, we write our train function below. It has the exact same structure as the one for the reverse task, hence not much explanation is needed here.
def train_anomaly(**kwargs):
    # Create a PyTorch Lightning trainer with the generation callback
    root_dir = os.path.join(CHECKPOINT_PATH, "SetAnomalyTask")
    os.makedirs(root_dir, exist_ok=True)
    trainer = pl.Trainer(default_root_dir=root_dir,
                         callbacks=[ModelCheckpoint(save_weights_only=True, mode="max", monitor="val_acc")],
                         accelerator="gpu" if str(device).startswith("cuda") else "cpu",
                         devices=1,
                         max_epochs=100,         # illustrative value; adjust to taste
                         gradient_clip_val=2)    # illustrative clip value within the usual 0.5-10 range
    trainer.logger._default_hp_metric = None  # Optional logging argument that we don't need

    # Check whether pretrained model exists. If yes, load it and skip training
    pretrained_filename = os.path.join(CHECKPOINT_PATH, "SetAnomalyTask.ckpt")
    if os.path.isfile(pretrained_filename):
        print("Found pretrained model, loading...")
        model = AnomalyPredictor.load_from_checkpoint(pretrained_filename)
    else:
        model = AnomalyPredictor(max_iters=trainer.max_epochs*len(train_anom_loader), **kwargs)
        trainer.fit(model, train_anom_loader, val_anom_loader)
        model = AnomalyPredictor.load_from_checkpoint(trainer.checkpoint_callback.best_model_path)

    # Test best model on validation and test set
    train_result = trainer.test(model, train_anom_loader, verbose=False)
    val_result = trainer.test(model, val_anom_loader, verbose=False)
    test_result = trainer.test(model, test_anom_loader, verbose=False)
    result = {"test_acc": test_result[0]["test_acc"], "val_acc": val_result[0]["test_acc"], "train_acc": train_result[0]["test_acc"]}

    model = model.to(device)
    return model, result
Let’s finally train our model. We will use 4 layers with 4 attention heads each. The hidden dimensionality of the model is 256, and we use a dropout of 0.1 throughout the model for good
regularization. Note that we also apply the dropout on the input features, as this makes the model more robust against image noise and generalizes better. Again, we use warmup to slowly start our
model training.
# Hyperparameters match the setup described above (4 layers, 4 heads, hidden dim 256, dropout 0.1);
# the learning rate and warmup values are restored here and should be taken as illustrative.
anomaly_model, anomaly_result = train_anomaly(input_dim=train_anom_dataset.img_feats.shape[-1],
                                              model_dim=256,
                                              num_heads=4,
                                              num_classes=1,
                                              num_layers=4,
                                              dropout=0.1,
                                              input_dropout=0.1,
                                              lr=5e-4,
                                              warmup=100)
GPU available: True, used: True
WARNING: Logging before flag parsing goes to stderr.
I1109 10:43:31.036801 139648634296128 distributed.py:49] GPU available: True, used: True
TPU available: False, using: 0 TPU cores
I1109 10:43:31.038146 139648634296128 distributed.py:49] TPU available: False, using: 0 TPU cores
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
I1109 10:43:31.039162 139648634296128 accelerator_connector.py:385] LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Found pretrained model, loading...
/home/phillip/anaconda3/envs/nlp1/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py:45: UserWarning: Your test_dataloader has `shuffle=True`, it is best practice to turn this off for validation and test dataloaders.
warnings.warn(*args, **kwargs)
We can print the achieved accuracy below.
print(f"Train accuracy: {(100.0*anomaly_result['train_acc']):4.2f}%")
print(f"Val accuracy: {(100.0*anomaly_result['val_acc']):4.2f}%")
print(f"Test accuracy: {(100.0*anomaly_result['test_acc']):4.2f}%")
Train accuracy: 97.77%
Val accuracy: 94.38%
Test accuracy: 94.30%
With ~94% validation and test accuracy, the model generalizes quite well. Note that you might see slightly different scores depending on the computer/device you are running this notebook on. This is because, despite setting the seed before generating the test dataset, it is not the same across platforms and numpy versions. Nevertheless, we can conclude that the model performs
quite well and can solve the task for most sets. Before trying to interpret the model, let’s verify that our model is permutation-equivariant, and assigns the same predictions for different
permutations of the input set. For this, we sample a batch from the test set and run it through the model to obtain the probabilities.
inp_data, indices, labels = next(iter(test_anom_loader))
inp_data = inp_data.to(device)

anomaly_model.eval()  # disable dropout so both forward passes below are deterministic
with torch.no_grad():
    preds = anomaly_model.forward(inp_data, add_positional_encoding=False)
    preds = F.softmax(preds.squeeze(dim=-1), dim=-1)

    # Permute the input data
    permut = np.random.permutation(inp_data.shape[1])
    perm_inp_data = inp_data[:, permut]
    perm_preds = anomaly_model.forward(perm_inp_data, add_positional_encoding=False)
    perm_preds = F.softmax(perm_preds.squeeze(dim=-1), dim=-1)

assert (preds[:, permut] - perm_preds).abs().max() < 1e-5, "Predictions are not permutation equivariant"

print("Preds\n", preds[0, permut].cpu().numpy())
print("Permuted preds\n", perm_preds[0].cpu().numpy())
Preds
 [5.4543594e-05 1.4208173e-04 6.6922468e-05 7.6413504e-05 7.7112330e-05
8.7848457e-05 6.6820685e-05 9.9929154e-01 7.3219831e-05 6.3545609e-05]
Permuted preds
[5.4543532e-05 1.4208158e-04 6.6922395e-05 7.6413417e-05 7.7112243e-05
8.7848362e-05 6.6820678e-05 9.9929142e-01 7.3219751e-05 6.3545544e-05]
You can see that the predictions are almost exactly the same, differing only because of slight numerical imprecision inside the network operations.
To interpret the model a little more, we can plot the attention maps inside the model. This will give us an idea of what information the model is sharing/communicating between images, and what each head might represent. First, we need to extract the attention maps for the test batch above, and determine the discrete predictions for simplicity.
attention_maps = anomaly_model.get_attention_maps(inp_data, add_positional_encoding=False)
predictions = preds.argmax(dim=-1)
Below, we write a plotting function which shows the images in the input set, the model's prediction, and the attention maps of the different heads in the different layers of the Transformer. Feel free to explore the attention maps for other input examples as well.
def visualize_prediction(idx):
    visualize_exmp(indices[idx:idx+1], test_set)
    print("Prediction:", predictions[idx].item())
    plot_attention_maps(input_data=None, attn_maps=attention_maps, idx=idx)
Depending on the random seed, you might see a slightly different input set. For the version on the website, we compare 9 tree images with a volcano. We see that multiple heads, for instance, Layer 2 Head 1, Layer 2 Head 3, and Layer 3 Head 1, focus on the last image. Additionally, the heads in Layer 4 all seem to ignore the last image and assign a very low attention probability to it. This shows that the model has indeed recognized that the image doesn't fit the setting, and hence predicted it to be the anomaly. Heads 2-4 of Layer 3 seem to take a slightly weighted average of all images. This might indicate that the model extracts the "average" information of all images, to compare it to the individual image features.
Let’s try to find where the model actually makes a mistake. We can do this by identifying the sets where the model predicts something else than 9, as in the dataset, we ensured that the anomaly is
always at the last position in the set.
mistakes = torch.where(predictions != 9)[0].cpu().numpy()
print("Indices with mistake:", mistakes)
Indices with mistake: [36 49]
As our model achieves ~94% accuracy, we only have a small number of mistakes in a batch of 64 sets. Still, let's visualize one of them, for example the last one:
for i, p in enumerate(preds[mistakes[-1]].cpu().numpy()):
    print(f"Image {i}: {100.0*p:4.2f}%")
Image 0: 0.06%
Image 1: 1.63%
Image 2: 89.63%
Image 3: 0.01%
Image 4: 0.01%
Image 5: 0.01%
Image 6: 0.01%
Image 7: 0.01%
Image 8: 0.01%
Image 9: 8.63%
In this example, the model confuses a palm tree with a building, giving a probability of ~90% to image 2, and only 8% to the actual anomaly. The difficulty here is that the picture of the building has been taken at a similar angle to the palms. Meanwhile, image 2 shows a rather unusual palm with a different color palette, which is why the model fails on this set. Nevertheless, in general, the model performs quite well.
In this tutorial, we took a closer look at the Multi-Head Attention layer, which uses a scaled dot product between queries and keys to find correlations and similarities between input elements. The Transformer architecture is based on the Multi-Head Attention layer and applies multiple of them in a ResNet-like block. The Transformer is a very important, recent architecture that can be applied to many tasks and datasets. Although it is best known for its success in NLP, there is much more to it. We have seen its application on sequence-to-sequence tasks and set anomaly detection. Its property of being permutation-equivariant if we do not provide any positional encodings allows it to generalize to many settings. Hence, it is important to know the architecture, but also its possible issues, such as the gradient problem during the first iterations, which is solved by learning rate warm-up. If you are interested in continuing your study of the Transformer architecture, please have a look at the blog posts listed at the beginning of the tutorial notebook.
What is a Number Key?
Number key
A number key may refer to any of the following:
1. A number key is any key on the keyboard with a number. On computer keyboards, there are ten number keys (1 through 0) above the top row of letters. Computer keyboards with a numeric keypad also
have ten additional keys on the right side of the keyboard.
What are the symbols on the number keys?
Each number key also has a symbol, accessed by pressing Shift and the number key at the same time. For example, if you hold down Shift and press the number two on a U.S. keyboard, you'll get the at sign (@). Below are the ten numbers and their associated symbols on U.S. keyboards. Keyboards in other parts of the world may have different symbols.
1 = ! (exclamation mark)
2 = @ (at sign)
3 = # (number sign)
4 = $ (dollar sign)
5 = % (percent)
6 = ^ (caret)
7 = & (ampersand)
8 = * (asterisk)
9 = ( (open parenthesis)
0 = ) (close parenthesis)
What row are the number keys on?
The number keys are on the number row, which is above the top row of letter keys and below the function key row.
What keys are on the number row?
If you go from the left side of the keyboard to the right side, the number row has the following keys: ` (backquote), 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, - (hyphen), = (equals), and Backspace.
2. A number key sometimes describes the Num Lock key. | {"url":"http://hkci.net/number-key.html","timestamp":"2024-11-14T15:41:10Z","content_type":"text/html","content_length":"11553","record_id":"<urn:uuid:3f4a95f8-720f-4719-a0ba-07a7be2a66b6>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00360.warc.gz"} |
How to use the LCM function
What is the LCM function?
The LCM function calculates the least common multiple.
1. Introduction
What is the least common multiplier?
The least common multiple is the smallest positive integer that is a multiple of all integer arguments.
When is the LCM function useful?
Use the LCM function to find a common denominator when adding or comparing fractions with different denominators.
What is a fraction?
A fraction has a numerator and a denominator:
• Numerator - The top number in a fraction.
• Denominator - The bottom number.
For example:
• The numerator is 6, meaning 6 parts.
• The denominator is 7, meaning the whole was split into 7 equal parts.
How is the least common multiple calculated?
The least common multiple of two numbers is their product divided by their greatest common divisor.
For example,
least common multiple(number1,number2) = (number1*number2) / greatest common divisor(number1,number2).
least common multiple(6,8) = 6*8/2 = 48/2 = 24
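These identities are easy to verify programmatically. As an illustrative aside (not part of the Excel article), the same numbers in Python:
import math

print(math.gcd(6, 8))           # 2
print(6 * 8 // math.gcd(6, 8))  # 24
print(math.lcm(6, 8))           # 24 (math.lcm is available in Python 3.9+)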
What is the greatest common divisor?
The greatest common divisor of two or more integers is the largest integer that divides them all. For two numbers number1 and number2, it is the largest integer that divides both number1 and number2
without remainder.
For example:
greatest common divisor(8,12) = 4
This is because 4 is the largest number that divides both 8 and 12.
greatest common divisor(18,24) = 6
6 is the greatest common factor of 18 and 24.
greatest common divisor(10,15,25) = 5
5 is the largest number that divides 10, 15 and 25.
What is a remainder?
A remainder is the amount left over after dividing two integers.
If you divide 15 by 2 you get 7 with 1 left over: 2*7 equals 14, and 15 - 14 equals 1. The remainder is 1.
What is a divisor?
A divisor is a number that divides into another number either cleanly or leaving a remainder.
dividend / divisor = quotient( + remainder)
What is a dividend?
In division, the dividend is the number being divided.
For example, in the division:
15 / 5 = 3
15 is the dividend 5 is the divisor and 3 is the quotient
What is a quotient?
The quotient is the result when two numbers are divided.
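In Python, all four terms can be read off with divmod (an illustrative aside, not part of the Excel article):
q, r = divmod(15, 2)  # dividend = 15, divisor = 2
print(q, r)           # quotient = 7, remainder = 1
assert 15 == 2 * q + r  # dividend = divisor * quotient + remainder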
What is a factor?
A factor is a number that divides evenly into another number, in other words, without leaving a remainder.
When is the greatest common divisor useful?
The GCD can be used to reduce fractions to their simplest form. For example, greatest common divisor(36,68) = 4, which simplifies 36/68 to 9/17.
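Python's fractions module performs the same reduction automatically (an illustrative aside):
from fractions import Fraction

print(Fraction(36, 68))  # 9/17 - the constructor divides both parts by their gcd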
What other Excel functions are about division, dividend, divisor, and quotient?
Excel Function Description
GCD Returns the greatest common divisor of two numbers
LCM Returns the least common multiple of two numbers
QUOTIENT Returns the integer portion of a division between two numbers
MOD Returns the remainder after dividing num1 by num2
2. Syntax
LCM(number1, [number2], ...)
number1 Required. The number you want to calculate the least common multiple for.
[number2] Optional. Up to 254 additional numbers.
The LCM function returns an error value if any argument is not a number.
3. Example
This example demonstrates how to calculate the least common multiple using the LCM function. The image above shows the arguments in cell range B3:C7 and the results in cell range E3:E7.
The first argument pair is 3 and 4 which are displayed in cells B3:C3 respectively. Cell E3 which contains the formula below returns 12. 12 is the least common multiple based on 3 and 4.
Formula in cell E3:
=LCM(B3, C3)
Let's calculate this manually:
First we need to calculate the greatest common divisor.
1. Divide a by b to get the quotient q and remainder r, such that a = b * q + r.
a = 4, b = 3
4 = 3 * 1 + 1, so q = 1 and r = 1
2. If r ≠ 0, then greatest common divisor(a, b) = greatest common divisor(b, r), since the greatest common divisor of a and b is the same as the greatest common divisor of b and the remainder r.
3. Replace a with b and b with r, and repeat the steps above until the remainder becomes 0.
a = 3, b = 1
3 = 1 * 3 + 0, so q = 3 and r = 0
4. If r = 0, then greatest common divisor(a, b) = b, since b is a divisor of a.
greatest common divisor(4, 3) = greatest common divisor(3, 1) = 1
The greatest common divisor is 1 based on values 4 and 3.
The least common multiple is the product of the two numbers divided by their greatest common divisor.
least common multiple(number1,number2) = (number1*number2) / greatest common divisor(number1,number2).
least common multiple(4,3) = 4*3/1 = 12/1 = 12
Our calculated value 12 matches the result in cell E3.
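The step-by-step procedure used above is the Euclidean algorithm. As an illustrative aside (not part of the Excel article), here is a short Python version of it:
def gcd(a, b):
    # Euclidean algorithm: replace (a, b) with (b, a % b) until the remainder is 0
    while b != 0:
        a, b = b, a % b
    return a

def lcm(a, b):
    return a * b // gcd(a, b)

print(lcm(3, 4))    # 12
print(lcm(24, 46))  # 552
print(lcm(12, 4))   # 12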
The second argument pair is 24 and 46 which are displayed in cells B4:C4 respectively. Cell E4 which contains the formula below returns 552. 552 is the least common multiple based on 24 and 46.
Formula in cell E4:
=LCM(B4, C4)
Let's calculate this manually:
First we need to calculate the greatest common divisor.
1. Divide a by b to get the quotient q and remainder r, such that a = b * q + r.
a = 46, b = 24
46 = 24 * 1 + 22, so q = 1 and r = 22
2. If r ≠ 0, then greatest common divisor(a, b) = greatest common divisor(b, r), since the greatest common divisor of a and b is the same as the greatest common divisor of b and the remainder r.
3. Replace a with b and b with r, and repeat the steps above until the remainder becomes 0.
a = 24, b = 22
24 = 22 * 1 + 2, so q = 1 and r = 2
4. Replace a with b and b with r again.
a = 22, b = 2
22 = 2 * 11 + 0, so q = 11 and r = 0
5. If r = 0, then greatest common divisor(a, b) = b, since b is a divisor of a.
greatest common divisor(46, 24) = 2
The greatest common divisor is 2 based on values 46 and 24.
The least common multiple is the product of the two numbers divided by their greatest common divisor.
least common multiple(number1,number2) = (number1*number2) / greatest common divisor(number1,number2).
least common multiple(46,24) = 46*24/2 = 1104/2 = 552
Our calculated value 552 matches the result in cell E4.
The third argument pair is 12 and 4 which are displayed in cells B5:C5 respectively. Cell E5 which contains the formula below returns 12. 12 is the least common multiple based on 12 and 4.
Formula in cell E5:
=LCM(B5, C5)
Let's calculate this manually:
First we need to calculate the greatest common divisor.
1. Divide a by b to get the quotient q and remainder r, such that a = b * q + r.
a = 12, b = 4
12 = 4 * 3 + 0, so q = 3 and r = 0
2. If r = 0, then greatest common divisor(a, b) = b, since b is a divisor of a.
greatest common divisor(12, 4) = 4
The greatest common divisor is 4 based on values 12 and 4.
The least common multiple is the product of the two numbers divided by their greatest common divisor.
least common multiple(number1,number2) = (number1*number2) / greatest common divisor(number1,number2).
least common multiple(12,4) = 12*4/4 = 12
Our calculated value 12 matches the result in cell E5.
Functions in 'Math and trigonometry' category
The LCM function is one of 62 functions in the 'Math and trigonometry' category.