Climb Foundation
Paper 5 Question 1
Let $f(x) = |x − 2|.$ Sketch $y = f(f(f(|x|)))$ for $x\in \mathbb{R}.$
Why not start by sketching $f(x)?$
How does changing the argument to $|x|$ affect the graph? What would $f(|x|)$ look like?
A good way to think about this graph is to consider the sequence of transformations we are performing. Applying $|\cdot|$ to the argument of the function makes it symmetrical about the y-axis.
Subtracting $2$ shifts the entire graph downwards by $2$ units and applying $|\cdot|$ to the result flips parts of the graph below the x-axis. By sketching the simpler cases, we can deduce the
pattern that $n$ applications of $f$ result in $n$ peaks.
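As a quick check of that pattern (a worked expansion added here, not part of the original hint), writing out the first few compositions gives

$$f(|x|)=\big||x|-2\big|,\qquad f(f(|x|))=\Big|\big||x|-2\big|-2\Big|,\qquad f(f(f(|x|)))=\bigg|\Big|\big||x|-2\big|-2\Big|-2\bigg|.$$

Each application subtracts $2$ and then reflects the parts below the $x$-axis upwards, so every zero of the previous graph becomes a peak of height $2$ in the next one — which is where the $n$-peaks pattern comes from.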
Cone – Definition, Formulas, Examples and Diagrams
A cone is a unique three-dimensional shape with a flat circular face at one end and a pointed tip at another end. The word ‘cone’ is derived from the Greek word ‘konos’, meaning a peak or a wedge.
A traffic signal cone, an ice-cream cone, or a birthday hat are some common examples of a cone.
• Its circular face is the base.
• Above the circular base is the curved surface that narrows to a pointed tip called the vertex (or apex).
• Has no edge.
• Since it has a curved surface, it is not a polyhedron.
• The shape can be generated by rotating a right triangle about one of its legs, so a cone can be described as a rotated triangle (a solid of revolution).
Parts of a Cone
1. Radius (r) – It is the distance from the center of its circular base to any point on the circumference of the base.
2. Height (h) – It is the perpendicular distance from its vertex to the center of the base. For a right circular cone, this lies along the axis of the cone.
3. Slant Height (s) – It is the distance from its vertex to any point on the outer edge (circumference) of its circular base.
Types of Cones – Right vs. Oblique
Based on the position of the vertex with respect to its base, a cone is of two types, as shown in the figure.
Right Circular Cone
It has its vertex aligned directly above the center of the base. The axis coincides with the height and meets the base at a right angle at its center.
Oblique Cone
Its vertex is not aligned directly above the center of its base, so the cone looks tilted.
Since, in practical life, a cone usually means a right circular cone, here we will learn the formulas related to it.
Surface Area of a Cone
The formula of the surface area (or total surface area) of a right circular cone is:
Surface Area (SA) = πr^2 + πrs, here r = radius, s = slant height, π = 3.141, πr^2 = base area, πrs = lateral (curved) surface area of a cone (LSA)
So, we can rewrite the formula as SA = πr^2 + LSA
Find the surface area of a right circular cone with a slant height of 6 cm and radius of 4 cm.
As we know,
Surface Area (SA) = πr^2 + πrs, here r = 4 cm, s = 6 cm, π = 3.141
∴ SA = 3.141 × 4^2 + 3.141 × 4 × 6
= 125.7 cm^2
Calculate the lateral surface area of a cone with a radius of 3 in and slant height of 8 in.
As we know,
Lateral Surface Area (LSA) = πrs, here r = 3 in, s = 8 in, π = 3.141
∴ LSA = 3.141 × 3 × 8
= 75.4 in^2
Find the base area of a cone with a radius of 7.5 cm.
As we know,
Base area = πr^2, here π = 3.141, r = 7.5 cm
= 3.141 × (7.5)^2
= 176.7 cm^2
Volume of a Cone
The formula is:
Volume ${\left( V\right) =\dfrac{1}{3}\pi r^{2}h}$, here r = radius, h = height, π = 3.141
We can compare this with the formula for the volume of a cylinder, which is πr^2h.
Therefore, the volume of a cone is exactly one-third of that of a cylinder with the same base radius and height.
Calculate the volume of a cone with a radius of 7 mm and height of 12 mm.
As we know,
Volume ${\left( V\right) =\dfrac{1}{3}\pi r^{2}h}$, here r = 7 mm, h = 12mm, π = 3.141
∴ ${V=\dfrac{1}{3}\times \pi \times 7^{2}\times 12}$
= 615.7 mm^3
Slant Height
The formula is:
Slant Height ${\left( s\right) =\sqrt{r^{2}+h^{2}}}$, here r = radius, h = height
Find the slant height of a cone with a radius of 6 mm and height of 11 mm.
As we know,
Slant Height ${\left( s\right) =\sqrt{r^{2}+h^{2}}}$, here r = 6 mm, h = 11 mm
${\therefore s=\sqrt{6^{2}+11^{2}}}$
= 12.53 mm
Finding the HEIGHT of a cone when the VOLUME and RADIUS are known
Find the height of a cone with a volume of 65.9 mm^3 and a radius of 3 mm.
∵ Volume ${\left( V\right) =\dfrac{1}{3}\pi r^{2}h}$, here r = radius, h = height, π = 3.141
So, by rearranging the equation, we get,
Height ${h=\dfrac{3V}{\pi r^{2}}}$, here V = 65.9 mm^3, r = 3 mm
${= \dfrac{3\times 65.9}{3.141\times 3^{2}}}$
= 6.99 mm
Finding the RADIUS when the VOLUME and HEIGHT are known
Find the radius of a cone with a volume of 165.4 cm^3 and a height of 7.8 cm.
∵ Volume ${\left( V\right) =\dfrac{1}{3}\pi r^{2}h}$, here r = radius, h = height, π = 3.141
So, by rearranging the equation, we get,
Radius ${\left( r\right) =\sqrt{\dfrac{3V}{\pi h}}}$, here V = 165.4 cm^3, h = 7.8 cm
${= \sqrt{\dfrac{3\times 165.4}{3.141\times 7.8}}}$
= 4.5 cm
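The formulas in this article are straightforward to automate. The short Python sketch below is my own addition (the function names are not from the article); it simply collects the formulas above in one place and reproduces the worked examples:

```python
import math

def slant_height(r, h):
    """Slant height: s = sqrt(r^2 + h^2)."""
    return math.sqrt(r ** 2 + h ** 2)

def lateral_surface_area(r, s):
    """Curved (lateral) surface area: LSA = pi * r * s."""
    return math.pi * r * s

def total_surface_area(r, s):
    """Total surface area: SA = pi*r^2 (base) + pi*r*s (lateral)."""
    return math.pi * r ** 2 + lateral_surface_area(r, s)

def volume(r, h):
    """Volume: V = (1/3) * pi * r^2 * h."""
    return math.pi * r ** 2 * h / 3

def height_from_volume(v, r):
    """Rearranged volume formula: h = 3V / (pi * r^2)."""
    return 3 * v / (math.pi * r ** 2)

def radius_from_volume(v, h):
    """Rearranged volume formula: r = sqrt(3V / (pi * h))."""
    return math.sqrt(3 * v / (math.pi * h))

# Reproducing the worked examples (small differences are rounding only):
print(round(total_surface_area(4, 6), 1))        # 125.7
print(round(lateral_surface_area(3, 8), 1))      # 75.4
print(round(volume(7, 12), 1))                   # 615.8 (article rounds to 615.7)
print(round(slant_height(6, 11), 2))             # 12.53
print(round(height_from_volume(65.9, 3), 2))     # 6.99
print(round(radius_from_volume(165.4, 7.8), 1))  # 4.5
```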
• filipe
I am finishing up my comparative EQ with visual feedback (FFT analysis) and I was able to make my FFT "screens" (arrays) logarithmic. The problem is that my EQ is based on the I03 patch in the
audio examples folder, which is linear. I assume I have to introduce some kind of anti log feature in the patch in order to make the Gain array match up with the FFT arrays. Any ideas of how to
do this?
Thank you very much!
• filipe
Hi everyone
I was wondering if anyone had what is called the Box of Tricks or any existing group of patches for better looking GUIs. It'd be really good to get hold of those, in order to finish the
application I am currently working on.
Thank you very much in advance.
• filipe
I am trying to build an FFT analyser so I can use it as visual feedback for an equaliser.
In order for this FFT graph to continuously average the amplitude values of each frequency, I wanted to be able to average these values as more data gets drawn in the graph.
The idea is to use a tabwrite of a certain block size, and then have several tabreads, with the block size of the initial one, divided by the number of tabreads, and then average the values
outputted by the tabreads.
My biggest problem at the moment is to make the various tabreads read data from one value n to N/x; N being the block size, x the number of tabreads and n the value at which the previous tabread
stops reading.
I have tried using [count] but it is very slow, since it depends on the metro, and I don't seem to make it work with [until] either...
Any suggestions?
Thanks for your time!
• filipe
I have been tweaking Katjav's patch, in order to have a bit more definition on the lower frequencies. I added a high-pass filter (to get rid of DC) and changed the fft analysis for a whole new one.
I wanted to add more "steps" to the log sweep, in order to have a more reliable representation of the fft. The $0-logsweep is made of 2048 steps, which means that there are 2048 values of x and
the corresponding y values. Is there any way of expanding this array, in order for it to have (for example) four times more values (8192)? I feel like, this way, it'd have a much more reliable
graphical representation of the fft.
Thanks very much!
• filipe
Thank you very much. I find that when making that change, the curve is also a lot smoother and much more intuitive.
Thank you.
• filipe
Great, thank you very much! I had seen the link you gave me before, but for some reason the representation of the lower frequencies isn't that accurate and user friendly. I'll try to modify it,
in order to get a more accurate representation.
Thank you very much for your help once again.
• filipe
@emacpher said:
I can't totally picture what you are trying to do (average x previous versions of the FFT?) but how about making a "leaky" FFT table integrator as in the attached patch. Every time you
compute an FFT, compute a weighted sum of the new FFT and the old weighted sum. Basically it is a 1-pole low-pass filter on the FFT value in each bin running at whatever metro rate you choose.
Maybe this can be done at block rate without the tabread~ , but I got a "DSP loop" error when I tried that.
All I was trying to do was to give an "averaged look" to my graphical representation of the FFT. So basically, instead of having an ever-changing graph, the frequency amplitudes will accumulate,
and by the end of the track (for example), you can tell which frequencies/frequency bands are more present.
I think you solved this problem for me emacpher, so thank you ever so much. All I need to do now is adapt it to my application in order for it to work better with music and so on. Any suggestion
on how to make a graphical representation logarithmic, instead of linear?
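Since the thread never settles on a concrete recipe, here is a minimal sketch of the two ideas discussed above — a "leaky" running average of FFT magnitude frames and a logarithmic frequency remapping for display. It is written in Python/NumPy rather than as a Pd patch, and the function names and parameter values are my own illustrative choices:

```python
import numpy as np

def leaky_fft_average(frames, alpha=0.05):
    """One-pole low-pass ('leaky integrator') over successive FFT magnitude
    frames, as suggested in the thread: avg <- (1 - alpha)*avg + alpha*frame."""
    avg = np.zeros_like(frames[0], dtype=float)
    for frame in frames:
        avg = (1.0 - alpha) * avg + alpha * frame
    return avg

def to_log_frequency(magnitudes, sample_rate, n_points=512, f_min=20.0):
    """Resample a linearly spaced magnitude spectrum onto log-spaced
    frequencies so the low end gets as much display room as the high end."""
    lin_freqs = np.linspace(0.0, sample_rate / 2.0, len(magnitudes))
    log_freqs = np.geomspace(f_min, sample_rate / 2.0, n_points)
    return log_freqs, np.interp(log_freqs, lin_freqs, magnitudes)

# Toy usage: average 100 synthetic 1024-bin magnitude frames, then remap.
rng = np.random.default_rng(0)
frames = np.abs(rng.normal(size=(100, 1024)))
averaged = leaky_fft_average(frames, alpha=0.05)
freqs, display = to_log_frequency(averaged, sample_rate=44100)
```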
Understanding the Thermodynamic Properties of Black Holes
Introduction to Black Holes and Thermodynamics
Black holes are one of the most intriguing objects in the universe, attracting significant attention from both astronomers and physicists. Defined as regions in space where gravitational forces are
so overwhelming that nothing, not even light, can escape their grasp, black holes challenge our understanding of physics. They generally form from the remnants of massive stars that undergo
gravitational collapse at the end of their life cycles. The resulting structure is characterized by an event horizon, a boundary beyond which escape is impossible, and a singularity, where matter is
thought to be infinitely dense.
The study of thermodynamics, which deals with heat, work, and the laws governing energy transformation, has become increasingly relevant when discussing black holes. These celestial phenomena seem to
possess entropy, a core concept in thermodynamic principles, indicating that they might obey the same statistical laws that govern other thermodynamic systems. One of the pivotal concepts is that the
entropy of a black hole is proportional to the area of its event horizon, rather than its volume, marking a significant shift in how we understand physical laws at cosmic scales.
The historical relationship between black holes and thermodynamics was revolutionized by the work of several notable physicists, most famously Stephen Hawking. In the 1970s, Hawking’s theoretical
insights led to the conclusion that black holes are not entirely black; they emit radiation, now known as Hawking radiation, due to quantum mechanical effects near the event horizon. This
breakthrough merged classical mechanics with quantum mechanics, creating pathways for scientists to explore connections between gravitational phenomena and thermodynamic laws. Understanding these
relationships is essential for advancing our knowledge in astrophysics and confronting questions about the universe’s origins and ultimate fate.
The Laws of Black Hole Thermodynamics
The study of black holes has led to the formulation of four distinct laws that parallel the principles of classical thermodynamics. These laws serve not only as theoretical constructs but also as
fundamental guidelines that describe the behavior of black holes in relation to thermodynamic processes. Each law sheds light on the unique characteristics of these enigmatic celestial bodies.
The first of these laws, often referred to as the zeroth law of black hole thermodynamics, asserts that black holes have a uniform temperature across their event horizons. This can be likened to how
temperature is uniform in a system at thermal equilibrium in classical thermodynamics. The uniformity refers to the fact that the temperature (set by the surface gravity) is the same at every point on the event horizon; for a non-rotating black hole, that temperature is determined solely by its mass. The implications of this phenomenon are profound, as it indicates that black holes do not merely embody singularities but also possess
thermodynamic properties akin to traditional physical systems.
The first law pertains to the conservation of energy within a black hole system. This law states that the change in the mass of a black hole corresponds directly to the thermal energy emitted and the
work done on or by the black hole. It can be summarized as a relationship between variations in mass, temperature, and entropy. Following this, the second law posits that the entropy of a black hole
is proportional to the area of its event horizon, rather than its volume. This finding intertwines black hole physics with the concept of entropy, suggesting that the total entropy of a system,
including that of black holes, should never decrease.
Lastly, the third law states that it is impossible to reduce the temperature of a black hole to absolute zero. This aligns with the classical assertion that systems cannot reach absolute zero due to
the Heisenberg uncertainty principle, which prevents a system from achieving a state of minimal entropy. Together, these laws collectively establish a framework for understanding black hole
thermodynamics, enriching our comprehension of these complex entities despite their unique properties.
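For reference (an addition of mine rather than part of the original article, stated in geometric units $G=c=1$; conventions vary between texts), the first and second laws summarized above are usually written as

$$\mathrm{d}M \;=\; \frac{\kappa}{8\pi}\,\mathrm{d}A \;+\; \Omega_{H}\,\mathrm{d}J \;+\; \Phi_{H}\,\mathrm{d}Q, \qquad\qquad \delta A \;\ge\; 0,$$

where $M$ is the mass, $\kappa$ the surface gravity (playing the role of temperature), $A$ the horizon area (playing the role of entropy), and $\Omega_{H}$, $\Phi_{H}$, $J$, $Q$ the horizon angular velocity, electric potential, angular momentum and charge of the black hole.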
Entropy and Information Paradox
Entropy, a fundamental concept in thermodynamics, measures the degree of disorder within a system. In the realm of black holes, entropy takes on a unique significance. According to the laws of
thermodynamics, particularly the second law, entropy tends to increase as energy becomes more dispersed. For black holes, this principle is intriguingly linked to their surface area, as articulated
by Jacob Bekenstein and later expanded upon by Stephen Hawking. The Bekenstein-Hawking formula proposes that a black hole’s entropy is proportional to the area of its event horizon, presenting the
surprising finding that a black hole can encapsulate a vast amount of information within its surface area, despite its seemingly simplistic nature.
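Written out explicitly (again my addition for reference; these are the standard expressions consistent with the proportionality just described), the Bekenstein–Hawking entropy of a black hole with horizon area $A$, and the Hawking temperature of a Schwarzschild black hole of mass $M$, are

$$S_{BH} \;=\; \frac{k_{B}\,c^{3}A}{4\,G\hbar}, \qquad\qquad T_{H} \;=\; \frac{\hbar\,c^{3}}{8\pi G M k_{B}},$$

so entropy grows with horizon area rather than volume, and more massive black holes are colder.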
This leads to the famous black hole information paradox, which challenges our understanding of physics. When matter crosses the event horizon, it seemingly disappears from the observable universe,
raising the question: what happens to the information contained within that matter? Hawking’s initial proposal suggested that information is lost forever when a black hole evaporates, a notion that
directly contrasts with the principles of quantum mechanics, which maintain that information cannot be destroyed. This paradox has sparked extensive debate among physicists, leading to multiple
theories regarding the fate of information.
One prominent interpretation is the concept of “information scrambling,” which implies that information is not lost but rather transformed and spread throughout the black hole, ultimately being
released during Hawking radiation. Alternatively, some theorists posited that information may be stored in a two-dimensional form on the event horizon, known as holographic principles. These
discussions underscore the intricate relationship between entropy, information, and the fundamental laws governing our universe.
As research continues, the implications of black hole entropy may extend beyond astrophysics, influencing our comprehension of information theory and quantum mechanics. This ongoing exploration
signifies a monumental endeavor in understanding the universe’s most enigmatic phenomena.
Implications and Future Research in Black Hole Thermodynamics
The study of black hole thermodynamics has far-reaching implications for modern physics, bridging concepts from quantum gravity, quantum field theory, and cosmology. Black holes challenge our
understanding of fundamental principles, leading to profound insights into the interplay between spacetime and thermodynamics. For instance, the identification of temperature and entropy associated
with black holes has prompted theorists to rethink the laws of thermodynamics. These insights encourage further investigation into how black holes function within the broader framework of the universe.
Ongoing research into Hawking radiation, proposed by physicist Stephen Hawking, exemplifies the efforts to demystify black holes. Hawking radiation implies that black holes are not entirely black;
they emit thermal radiation due to quantum effects near the event horizon. This phenomenon has substantial implications for quantum field theory and prompts critical questions regarding the fate of
information swallowed by black holes. Addressing these questions could bridge the gap between quantum mechanics and general relativity, thus leading to a more unified understanding of physics.
Moreover, the exploration of black holes may provide insights into the fundamental laws of the universe. Observations of accretion disks and gravitational waves from merging black holes offer
empirical data that can be reconciled with theoretical predictions. Such studies may illuminate the role of black holes in cosmic evolution, including galaxy formation and the distribution of matter
throughout the universe.
As research progresses, potential discoveries related to black hole thermodynamics could reshape our understanding of the cosmos, shifting paradigms in both theoretical and observational
astrophysics. The implications of these findings extend beyond black holes, prompting inquiries into dark energy and the fabric of spacetime itself. Hence, the future of black hole research not only
enhances our comprehension of these enigmatic entities but also advances our grasp of the universe at large.
Software Model Checking Takes Off
A translator framework enables the use of model checking in complex avionics systems and other industrial settings.
Although formal methods have been used in the development of safety- and security-critical systems for years, they have not achieved widespread industrial use in software or systems engineering.
However, two important trends are making the industrial use of formal methods practical. The first is the growing acceptance of model-based development for the design of embedded systems. Tools such
as MATLAB Simulink^6 and Esterel Technologies SCADE Suite^2 are achieving widespread use in the design of avionics and automotive systems. The graphical models produced by these tools provide a
formal, or nearly formal, specification that is often amenable to formal analysis.
The second is the growing power of formal verification tools, particularly model checkers. For many classes of models they provide a “push-button” means of determining if a model meets its
requirements. Since these tools examine all possible combinations of inputs and state, they are much more likely to find design errors than testing.
Here, we describe a translator framework developed by Rockwell Collins and the University of Minnesota that allows us to automatically translate from some of the most popular commercial modeling
languages to a variety of model checkers and theorem provers. We describe three case studies in which these tools were used on industrial systems that demonstrate that formal verification can be used
effectively on real systems when properly supported by automated tools.
Model-Based Development
Model-based development (MBD) refers to the use of domain-specific, graphical modeling languages that can be executed and analyzed before the actual system is built. The use of such modeling
languages allows the developers to create a model of the system, execute it on their desktops, analyze it with automated tools, and use it to automatically generate code and test cases.
Throughout this article we use MBD to refer specifically to software developed using synchronous dataflow languages such as those found in MATLAB Simulink and Esterel Technologies SCADE Suite.
Synchronous modeling languages latch their inputs at the start of a computation step, compute the next system state and its outputs as a single atomic step, and communicate between components using
dataflow signals. This differs from the more general class of modeling languages that include support for asynchronous execution of components and communication using message passing. MBD has become
very popular in the avionics and automotive industries and we have found synchronous data-flow models to be especially well suited for automated verification using model checking.
Model checkers are formal verification tools that evaluate a model to determine if it satisfies a given set of properties.^1 A model checker will consider every possible combination of inputs and
state, making the verification equivalent to exhaustive testing of the model. If a property is not true, the model checker produces a counterexample showing how the property can be falsified.
There are many types of model checkers, each with their own strengths and weaknesses. Explicit state model checkers such as SPIN^4 construct and store a representation of each state visited. Implicit
state (symbolic) model checkers use logical representations of sets of states (such as Binary Decision Diagrams) to describe regions of the model state space that satisfy the properties being
evaluated. Such compact representations generally allow symbolic model checkers to handle a much larger state space than explicit state model checkers. We have used the BDD-based model checker NuSMV^5 to analyze models with over 10^120 reachable states.
More recent model checkers, such as SAL^11 and Prover Plug-In,^9 use satisfiability modulo theories (SMT) solvers for reasoning about infinite state models containing real numbers and unbounded
arrays. These checkers use a form of induction over the state transition relation to automatically prove that a property holds over all executable paths in a model. While these tools can handle a
larger class of models, the properties to be checked must be written to support inductive proof.
The Translator Framework
As part of NASA’s Aviation Safety Program (AvSP), Rockwell Collins and the University of Minnesota developed a product family of translators that bridge the gaps between some of the most popular
commercial modeling languages and several model checkers and theorem provers.^8 An overview of this framework is shown in Figure 1.
These translators work primarily with the Lustre formal specification language,^3 but this is hidden from the users. The starting point for translation is a design model in MATLAB Simulink/Stateflow
or Esterel Technologies SCADE Suite/Safe State Machines. SCADE Suite produces Lustre models directly. Simulink or Stateflow models can be imported using SCADE Suite or the Reactis^10 tool and a
translator developed by Rockwell Collins. To ensure each Simulink or Stateflow construct has a well-defined semantics, the translator restricts the models that it will accept to those that can be
translated unambiguously into Lustre.
Once in Lustre, the specification is loaded into an abstract syntax tree (AST) and a number of transformation passes are applied to it. Each transformation pass produces a new Lustre AST that is
syntactically closer to the target specification language and preserves the semantics of the original Lustre specification. This allows all Lustre type checking and analysis tools to be used as
debugging aids during the development of the translator. When the AST is sufficiently close to the target language, a pretty printer is used to output the target specification.
We refer to our translator framework as a product family since most transformation passes are reused in the translators for each target language. Reuse of the transformation passes makes it much
easier to support new target languages; we have developed new translators in a matter of days. The number of transformation passes depends on the similarity of the source and target languages and on
the number of optimizations to be made. Our translators range in size from a dozen to over 60 passes.
The translators produce highly optimized specifications appropriate for the target language. For example, when translating to NuSMV, the translator eliminates as much redundant internal state as
possible, making it very efficient for BDD-based model checking. When translating to the PVS theorem prover, the specification is optimized for readability and to support the development of proofs in
PVS. When generating executable C or Ada code, the code is optimized for execution speed on the target processor. These optimizations can have a dramatic effect on the target analysis tools. For
example, optimization passes incorporated into the NuSMV translator reduced the time required for NuSMV to check one model from over 29 hours to less than a second.
However, some optimizations are better incorporated into the verification tools rather than the translator. For example, predicate abstraction^1 is a well-known technique for reducing the size of the
reachable state space, but automating this during translation would require a tight interaction between our translator and the analysis tool to iteratively refine the predicates based on the
counterexamples. Since many model checkers already implement this technique, we have not tried to incorporate it into our translator framework.
We have developed tools to translate the counterexamples produced by the model checkers into two formats. The first is a simple spreadsheet that shows the inputs and outputs of the model for each
step (similar to steps noted in the accompanying table). The second is a test script that can be read by the Reactis tool to step forward and backward through the counterexample in the Reactis simulator.
Our translator framework currently supports input models written in Simulink, Stateflow, and SCADE. It generates specifications for the NuSMV, SAL, and Prover model checkers, the PVS and ACL2 theorem
provers, and C and Ada source code.
A Small Example
To make these ideas concrete, we present a very small example, the mode logic for a simple microwave oven shown in Figure 2. The microwave initially starts in Setup mode. It transitions to Running
mode when the Start button is pressed and the Steps Remaining to cook (initially provided by the keypad entry subsystem) is greater than zero. On transition to Running mode, the controller enters
either the Cooking or Suspended submode, depending on whether Door Closed is true. In Cooking mode, the controller decrements Steps Remaining on each step. If the door is opened in Cooking mode or
the operator presses the Clear button, the controller enters the Suspended submode. From the Suspended submode, the operator can return to Cooking submode by pressing the Start button while the door
is closed, or return to Setup mode by pressing the Clear button. When Steps Remaining decrements to zero, the controller exits Running mode and returns to Setup mode.
Since this model consists only of Boolean values (Start, Clear, Door Closed), enumerated types (mode), and two small integers (Steps Remaining and Steps to Cook range from 0 to 639, the largest value
that can be entered on the keypad) it is well suited for analysis with a symbolic model checker such as NuSMV. A valuable property to check is that the door is always closed when the microwave is
cooking. In CTL^1 (one of the property specification languages of NuSMV), this is written as:
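A CTL property of this form (the identifier names here are illustrative rather than taken from the original model) would be AG (mode = Cooking -> door_closed), read as "in every reachable state, if the mode is Cooking then the door is closed."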
Translation of the model into NuSMV and checking this property takes only a few seconds and yields the counterexample shown in Table 1.
In step 2 of the counterexample, we see the value of Start change from 0 to 1, indicating the start button was pressed. Also in step 2, the door is closed and Steps Remaining takes on the value 1. As
a result, the microwave enters Cooking mode in step 2. In step 3, the door is opened, but the microwave remains in Cooking mode, violating our safety property.
To better understand how this happened, we use Reactis to step through the generated counterexample. This reveals that instead of taking the transition from Cooking to Suspended when the door is
opened, the microwave took the transition from Cooking to Cooking that decrements Steps Remaining because this transition has a higher priority (priority 1) than the transition from Cooking to
Suspended (priority 2). Worse, the microwave would continue cooking with the door open until Steps Remaining becomes zero. Changing the priority of these two transitions and rerunning the model
checker shows that in all possible states, the door is always closed if the microwave is cooking.
While this example is tiny, the two integers (Steps Remaining and Steps to Cook) still push its reachable state space to 9.8 × 10^6 states. Also note that the model checker does not necessarily find
the “best” counterexample. It actually would have been clearer if the Steps Remaining had been set to a value larger than 1 in step 2. However, this counterexample is very typical. In the production
models that we have examined, very few counterexamples are longer than a few steps.
Case Studies
To be of any real value, model checking must be able to handle much larger problems. Three case studies on the application of our tools to industrial examples are described here. A fourth case study
is discussed in Miller et al.^7
ADGS-2100 Window Manager. One of the largest and most successful applications of our tools was to the ADGS-2100 Adaptive Display and Guidance System Window Manager.^13 In modern aircraft, pilots are
provided aircraft status primarily through computerized display panels similar to those shown in Figure 3. The ADGS-2100 is a Rockwell Collins product that provides the heads-down and heads-up
displays and display management software for next-generation commercial aircraft.
The Window Manager (WM) ensures that data from different applications is routed to the correct display panel. In normal operation, the WM determines which applications are being displayed in response
to the pilot selections. However, in the case of a component failure, the WM also decides which information is most critical and routes this information from one of the redundant sources to the most
appropriate display panel. The WM is essential to the safe flight of the aircraft. If the WM contains logic errors, critical flight information could be unavailable to the flight crew.
While very complex, the WM is specified in Simulink using only Booleans and enumerated types, making it ideal for verification using a BDD-based model checker such as NuSMV. The WM is composed of
five main components that can be analyzed independently. These five components contain a total of 16,117 primitive Simulink blocks that are grouped into 4,295 instances of Simulink subsystems. The
reachable state space of the five components ranges from 9.8 × 10^9 to 1.5 × 10^37 states.
Ultimately, 563 properties about the WM were developed and checked, and 98 errors were found and corrected in early versions of the WM model. This verification was done early in the design process
while the design was still changing. By the end of the project, the WM developers were checking the properties after every design change.
CerTA FCS Phase I. Our second case study was sponsored by the U.S. Air Force Research Laboratory (AFRL) under the Certification Technologies for Advanced Flight Critical Systems (CerTA FCS) program
in order to compare the effectiveness of model checking and testing.^12 In this study, we applied our tools to the Operational Flight Program (OFP) of an unmanned aerial vehicle developed by Lockheed
Martin Aerospace. The OFP is an adaptive flight control system that modifies its behavior in response to flight conditions. Phase I of the project concentrated on applying our tools to the Redundancy
Management (RM) logic, which is based almost entirely on Boolean and enumerated types.
The RM logic was broken down into three components that could be analyzed individually. While relatively small (they contained a total of 169 primitive Simulink blocks organized into 23 subsystems,
with reachable state spaces ranging from 2.1 × 10^4 to 6.0 × 10^13 states), the RM logic was replicated in the OFP once for each of the 10 control surfaces on the aircraft, making it a significant
portion of the OFP logic.
To compare the effectiveness of model checking and testing at discovering errors, this project had two independent verification teams, one that used testing and one that used model checking. The
formal verification team developed a total of 62 properties from the OFP requirements and checked these properties with the NuSMV model checker, uncovering 12 errors in the RM logic. Of these 12
errors, four were classified by Lockheed Martin as severity 3 (only severity 1 and 2 can affect the safety of flight), two were classified as severity 4, two resulted in requirements changes, one was
redundant, and three resulted from requirements that had not yet been implemented in the release of the software.
In similar fashion, the testing team developed a series of tests from the same OFP requirements. Even though the testing team invested almost half as much time in testing as the formal verification
team spent in model checking, testing failed to find any errors. The main reason for this was that the demonstration was not a comprehensive test program. While some of these errors could be found
through testing, the cost would be much higher, both to find and fix the errors. In addition, the errors found through model checking tended to be intermittent, near simultaneous, or combinatory
sequences of failures that would be very difficult to detect through testing. The conclusion of both teams was that model checking was shown to be more cost effective than testing in finding design
CerTA FCS Phase II. The purpose of Phase II of the CerTA FCS project was to investigate whether model checking could be used to verify large, numerically intensive models. In this study, the
translation framework and model checking tools were used to verify important properties of the Effector Blender (EB) logic of an OFP for a UAV similar to that verified in Phase I.
The EB is a central component of the OFP that generates the actuator commands for the aircraft’s six control surfaces. It is a large, complex model that repeatedly manipulates a 3 × 6 matrix of
floating point numbers. It inputs 32 floating point inputs and a 3 × 6 matrix of floating point numbers and outputs a 1 × 6 matrix of floating point numbers. It contains over 2,000 basic Simulink
blocks organized into 166 Simulink subsystems, many of which are Stateflow models.
Because of its extensive use of floating point numbers and large state space, the EB cannot be verified using a BDD-based model checker such as NuSMV. Instead, the EB was analyzed using the Prover
SMT-solver from Prover Technologies. Even with the additional capabilities of Prover, several new issues had to be addressed, the hardest being dealing with floating point numbers.
While Prover has powerful decision procedures for linear arithmetic with real numbers and bit-level decision procedures for integers, it does not have decision procedures for floating point numbers.
Translating the floating point numbers into real numbers was rejected since much of the arithmetic in the EB is inherently nonlinear. Also, the use of real numbers would mask floating point
arithmetic errors such as overflow and underflow.
Instead, the translator framework was extended to convert floating point numbers to fixed point numbers using a scaling factor provided by the OFP designers. The fixed point numbers were then
converted to integers using bit-shifting to preserve their magnitude. While this allowed the EB to be verified using Prover’s bit-level integer decision procedures, the results were unsound due to
the loss of precision. However, if errors were found in the verified model, their presence could easily be confirmed in the original model. This allowed the verification to be used as a highly
effective debugging step, even though it did not guarantee correctness.
Determining what properties to verify was also a difficult problem. The requirements for the EB are actually specified for the combination of the EB and the aircraft model, but checking both the EB
and the aircraft model exceeded the capabilities of the Prover Plug-In model checker. After consultation with the OFP designers, the verification team decided to verify whether the six actuator
commands would always be within dynamically computed upper and lower limits. Violation of these properties would indicate a design error in the EB logic.
Even with these adjustments, the EB model was large enough that it had to be decomposed into a hierarchy of components several levels deep. The leaf nodes of this hierarchy were then verified using
Prover Plug-In and their composition was verified using manual proofs. This approach also ensured that unsoundness could not be introduced through circular reasoning since Simulink enforces
the absence of cyclic dependencies between atomic subsystems.
Ultimately, five errors in the EB design logic were discovered and corrected through model checking of these properties. In addition, several potential errors that were being masked by defensive
design practices were found and corrected.
Lessons from the Case Studies
The case studies described here demonstrate that model checking can be effectively used to find errors early in the development process for many classes of models. In particular, even very complex
models can be verified with BDD-based model checkers if they consist primarily of Boolean and enumerated types. Every industrial system we have studied contains large sections that either meet this
constraint or can be made to meet it with some alteration.
For this class of models, the tools are simple enough for developers to use them routinely and without extensive training. In our experience, a single day of training and a low level of ongoing
mentoring are usually sufficient. This also makes it practical to perform model checking early in the development process while a model is still changing.
Running a set of properties after each model revision is a quick and easy way to see if anything has been broken. We encourage our developers to “check your models early and check them often.” The
time spent model checking is recovered several times over by avoiding rework during unit and integration testing.
Since model checking examines every possible combination of input and state, it is also far more effective at finding design errors than testing, which can only check a small fraction of the possible
inputs and states. As demonstrated by the CerTA FCS Phase I case study, it can also be more cost effective than testing.
Future Directions
There are many directions for further research. As illustrated in the CerTA FCS Phase II study, numerically intensive models still pose a challenge for model checking. SMT-based model checkers hold
promise for verification of these systems, but the need to write properties that can be verified through induction over the state transition relation make them more difficult for developers to use.
Most industrial models used to generate code make extensive use of floating point numbers. Other models, particularly those that deal with spatial relationships such as navigation, make extensive use
of trigonometric and other transcendental functions. A sound and efficient way of checking systems using floating point arithmetic and transcendental functions would be very helpful.
It can also be difficult to determine how many properties must be checked. Our experience has been that checking even a few properties will find errors, but that checking more properties will find
more errors. Unlike testing for which many objective coverage criteria have been developed, completeness criteria for properties do not seem to exist. Techniques for developing or measuring the
adequacy of a set of properties are needed.
As discussed in the CerTA FCS Phase II case study, the verification of very large models may be achieved by using model checking on subsystems and more traditional reasoning to compose the
subsystems. Combining model checking and theorem proving in this way could be a very effective approach to the compositional verification of large systems.
The authors wish to thank Ricky Butler, Celeste Bellcastro, and Kelly Hayhurst of the NASA Langley Research Center, Mats Heimdahl, Yunja Choi, Anjali Joshi, Ajitha Rajan, Sanjai Rayadurgam, and Jeff
Thompson of the University of Minnesota, Eric Danielson, John Innis, Ray Kamin, David Lempia, Alan Tribble, Lucas Wagner, and Matt Wilding of Rockwell Collins, Bruce Krogh of CMU, Vince Crum, Wendy
Chou, Ray Bortner, and David Homan of the Air Force Research Lab, and Greg Tallant and Walter Storm of Lockheed Martin for their contributions and support.
This work was supported in part by the NASA Langley Research Center under contract NCC-01001 of the Aviation Safety Program (AvSP) and by the Air Force Research Lab under contract FA8650-05-C-3564 of
the Certification Technologies for Advanced Flight Control Systems program (CerTA FCS) [88ABW-2009-2730].
Figure 1. The translator framework.
Figure 2. Microwave mode logic.
Elementary Algebra, Chapter 2 - Real Numbers - Chapters 1-2 Cumulative Review Problem Set - Page 92, Problem 45
Work Step by Step
To divide by a fraction, we multiply by its reciprocal: flip the fraction to the right of the division sign and replace the division sign with a multiplication sign. Then, we simplify the fractions by cancelling out common factors.
Step 1: $(\frac{6x^{2}y}{11})\div(\frac{9y^{2}}{22})$
Step 2: $(\frac{6x^{2}y}{11})\times(\frac{22}{9y^{2}})$
Step 3: $(\frac{6x^{2}y}{1})\times(\frac{2}{9y^{2}})$
Step 4: $(\frac{6x^{2}}{1})\times(\frac{2}{9y^{2-1}})$
Step 5: $(\frac{6x^{2}}{1})\times(\frac{2}{9y})$
Step 6: $(\frac{2x^{2}}{1})\times(\frac{2}{3y})$
Step 7: $\frac{4x^{2}}{3y}$
Population recovery and partial identification
We study several problems in which an unknown distribution over an unknown population of vectors needs to be recovered from partial or noisy samples, each of which nearly completely erases or
obliterates the original vector. For example, consider a distribution p over a population V ⊆ {0, 1}^n. A noisy sample v′ is obtained by choosing v according to p and flipping each coordinate of v with probability, say, 0.49, independently. The problem is to recover V, p as efficiently as possible from noisy samples. Such problems naturally arise in a variety of contexts in learning, clustering,
statistics, computational biology, data mining and database privacy, where loss and error may be introduced by nature, inaccurate measurements, or on purpose. We give fairly efficient algorithms to
recover the data under fairly general assumptions. Underlying our algorithms is a new structure we call a partial identification (PID) graph for an arbitrary finite set of vectors over any alphabet.
This graph captures the extent to which certain subsets of coordinates in each vector distinguish it from other vectors. PID graphs yield strategies for dimension reductions and re-assembly of
statistical information. The quality of our algorithms (sequential and parallel runtime, as well as numerical stability) critically depends on three parameters of PID graphs: width, depth and cost.
The combinatorial heart of this work is showing that every set of vectors possesses a PID graph in which all three parameters are small (we prove some limitations on their trade-offs as well). We
further give an efficient algorithm to find such near-optimal PID graphs for any set of vectors. Our efficient PID graphs imply general algorithms for these recovery problems, even when loss or noise
are just below the information-theoretic limit! In the learning/clustering context this gives a new algorithm for learning mixtures of binomial distributions (with known marginals) whose running time
depends only quasi-polynomially on the number of clusters. We discuss implications to privacy and coding as well.
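To make the sampling model in the abstract concrete, the following is a small illustrative sketch (my own, not from the paper) of how such noisy samples could be generated:

```python
import numpy as np

def noisy_sample(population, p, flip_prob=0.49, rng=None):
    """Draw one noisy sample: pick a vector v from the population according to
    the distribution p, then flip each coordinate independently with
    probability flip_prob (just below the information-theoretic limit of 1/2)."""
    if rng is None:
        rng = np.random.default_rng()
    v = population[rng.choice(len(population), p=p)]
    flips = rng.random(v.shape) < flip_prob
    return np.where(flips, 1 - v, v)

# Toy population of three 8-bit vectors with a non-uniform distribution.
rng = np.random.default_rng(1)
V = np.array([[0, 0, 0, 0, 1, 1, 1, 1],
              [1, 0, 1, 0, 1, 0, 1, 0],
              [1, 1, 1, 1, 1, 1, 1, 1]])
p = [0.5, 0.3, 0.2]
samples = np.stack([noisy_sample(V, p, rng=rng) for _ in range(5)])
```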
• information recovery
• learning theory
• noisy data
Reinforced Concrete Fundamentals: Analysis & Design of Reinforcement | EngineeringSkills.com
Updated 24 February 2023
Reading time: 15 mins
Reinforced Concrete Fundamentals - Analysis and Design of Steel Reinforcement
In this tutorial we'll explore the role of steel reinforcement in reinforced concrete design and establish the fundamental design equations
In the previous concrete tutorial, we introduced concrete – plain unreinforced concrete. In this tutorial, we’ll see that to take full advantage of concrete as a construction material, we need to
combine it with steel reinforcement. This leads us nicely into the exciting world of reinforced concrete design. We take our first steps towards exploring this vast field in this tutorial.
Fig 1. (a) Steel reinforcement cages for columns, (b) Steel reinforcement cages for beams.
This tutorial is taken from lecture 13 and lecture 14 in my course, Fundamentals of Reinforced Concrete Design to Eurocode 2. After reading this, if you want to dive in further and understand how we
design reinforced concrete for bending and shear, take a look at the course details below.
Fundamentals of Reinforced Concrete Design to Eurocode 2
An introduction to ultimate limit state design for bending and shear with optional calculation automation using Python.
After completing this course...
• You will be able to determine design actions using the Eurocodes Basis of Structural Design (EC0) and Actions on Structures (EC1).
• You will understand balanced section design and how to analyse and safely design singly and doubly reinforced concrete sections.
• You will understand how to apply the Variable Strut Inclination Method for shear reinforcement design.
• You will have developed your own reinforced concrete design codes in Python (the Python pathway is optional in this course).
1.0 Reinforced Concrete Design and Behaviour under Load
Check out the video and lecture notes in the course.
Plain concrete on its own actually has quite limited value as a construction material. This is because concrete is brittle and has a relatively low tensile capacity by comparison to its compression
capacity. A typical concrete with a compressive strength of $30 \:N/mm^2$ would experience cracking under tensile loading at a stress of approximately $2.5-3.5\:N/mm^2$.
For this reason, we introduce steel reinforcement as a means of resisting the tensile forces that develop in the concrete cross-section. Including steel reinforcement also leads to ductile failures,
which are safer than sudden explosive brittle failures.
1.1 Steel Reinforcement
In a reinforced concrete structure, reinforcing steel or ‘rebar’ cages are used to provide the tension capacity required to balance the compression forces that develop in the concrete. We’ll explore
this in much greater detail below. When resisting flexure, the compression forces in the concrete and tension forces in the rebar form a couple that resists the moment generated by applied loading.
Rebar in common use throughout Europe and the UK is high-strength steel with a characteristic yield strength of $500\:N/mm^2$. Although they are notionally circular in cross-section they have a
deformed surface to improve the mechanical bond between the steel and encasing concrete. They are supplied in common diameters; $[6, 8, 10,12,16,20,25,32,40]\:mm$.
Individual bars are bent into shape and tied together to form reinforcement cages. Even today, this is still a manual, labour-intensive task. This, combined with the curing time needed for concrete
leads to longer construction times compared to a similar structure constructed from structural steel for example. However, as we saw in the previous tutorial, concrete structures offer many other
advantages to compensate for slower construction times.
Fig 2. Deformed high-strength steel reinforcement bars.
2.0 Reinforced concrete cross-section analysis
At this point, we’ve developed a good overall understanding of concrete’s strengths and weaknesses and a general understanding that embedded steel reinforcement compensates for concrete’s inherent
weakness in tension. Next, we need to start exploring exactly how the steel and concrete act compositely to resist externally applied loading.
2.1 Stress-resultants and the internal moment of resistance
We’ll start by considering a simply supported beam subject to a uniformly distributed load (UDL). We can think of this loading as generating an external bending moment, $M_{EXT}$.
In response, the beam generates a resistance to $M_{EXT}$ through the development of internal normal stresses in the material. These normal stresses can be represented by stress resultant forces
(simply stress multiplied by the area over which it develops); one in tension, $F_T$ and one in compression, $F_C$. The internal stress resultants, separated by a lever arm, $z$, form a couple, $M_
{INT}$. It is this internal couple or bending moment that resists the externally applied bending moment, Fig. 3. So far, this is classical beam bending theory – nothing specific to reinforced
concrete behaviour.
Fig 3. Simply supported beam, with a cut at section $X X$ , revealing the internal stress resultants, $F_C$ and $F_T$ and corresponding internal bending moment, $M_{INT}$ which resists the
externally applied loading.
As mentioned earlier, concrete is strong in compression but comparatively weak in tension, with a tensile capacity of approximately $1/10^{th}$ of its compression capacity. Therefore we must use
reinforcing steel to provide tensile resistance. In fact, when considering the ultimate moment capacity of a concrete beam, we conservatively assume the concrete has no tensile capacity whatsoever
and that the section is cracked between the extreme tensile fibre and the neutral axis.
2.2 Stress and strain distributions in reinforced concrete
Consider again the simply supported reinforced concrete beam subject to a UDL. The external loading will induce stresses internally within the beam. When we make an imaginary cut (shown above in Fig
3), we expose the internal stresses acting on the exposed cross-section. The cross-section we are referring to is shown in Fig. 4.
Fig 4. The red shaded area is the section under consideration in the following discussion on section analysis.
Before proceeding any further, we also define the ultimate moment of resistance of a reinforced concrete section as the maximum value of the internal moment that can be generated, beyond which either
the concrete would fail in compression or the steel would yield in tension.
Now, consider the cross-section identified in figures 3 and 4. The orientation of the loading is such that the top fibres of the beam will be in compression and the bottom fibres in tension. In Fig.
5 (a) below, we can see an elevation view of the section and its associated dimensions and in (b), we can see how the strain varies with height across the section.
Figures (c), (d) and (e) represent three possible stress distribution diagrams:
• Figure (c) shows a linear stress distribution. As we saw in the previous tutorial, stress is not linearly proportional to strain in plain concrete. However, for relatively low levels of strain,
we can reasonably approximate stress as being linearly proportional to strain. This assumption is synonymous with serviceability limit state analysis.
• Figure (d) is referred to as the rectangular-parabolic stress block and is observed at the point of concrete compression failure. The stresses at the outermost (furthest from the neural axis)
fibres have reached their compression limit $(0.85f_{ck}/1.5=0.567f_{ck})$.
• Figure (e) represents the equivalent rectangular stress block. This stress block is an approximation of the rectangular-parabolic stress block. It yields almost identical numerical results while
being much easier to manage numerically.
Fig 5. (a) Concrete cross-section, (b) strain distribution and (c-e) three possible stress distributions.
In all three stress diagrams, the tensile stress is only developed at the level of the reinforcing steel, i.e. tensile stresses are only developed in the steel. The concrete in all cases is assumed
to be cracked up to the height of the neutral axis and therefore does not provide any tensile resistance. This is a fundamental assumption of ultimate limit state (ULS) section analysis.
2.3 Balanced section design
Now we can start to think about the limits imposed by fundamental material behaviour. Remember, our models of material behaviour dictate that the steel will yield at a strain value of $\epsilon_y=
0.00217$, while the concrete will crush at a compressive strain $\epsilon_{cu,2}=0.0035$. Looking at the strain distribution in Fig. 5(b), using basic geometry (similar triangles), these two strain
limits can be used to determine the position of the neutral axis as a function of the effective depth, $d$ (distance from the top of the compression zone to the centre of the steel reinforcement):
\begin{align*} x=&\frac{d}{1+\frac{0.00217}{0.0035}}\\\\ x=&0.617\:d \end{align*} \tag{1}
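Spelling out the similar-triangles step behind equation (1) (added here for completeness), strain compatibility across the linear strain diagram in Fig. 5(b) gives

$$\frac{\epsilon_{cu,2}}{x}=\frac{\epsilon_{y}}{d-x} \quad\Rightarrow\quad x=\frac{\epsilon_{cu,2}}{\epsilon_{cu,2}+\epsilon_{y}}\,d=\frac{0.0035}{0.0035+0.00217}\,d\approx 0.617\,d,$$

which is equation (1) in a slightly rearranged form.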
If the neutral axis is in this position when the beam is subject to its ultimate load, in theory, the concrete will crush in compression, and the steel will yield in tension simultaneously. This is
referred to as a balanced section design.
This is not a desirable mode of failure. Concrete compression failures are brittle failures providing little or no visible warning signs, whereas yielding of steel is a ductile failure. A ductile
beam failure, one in which the steel yields before the concrete crushes, is preceded by a period of pronounced tensile cracking of the concrete. This provides a warning that a failure is possible,
even likely.
For this reason, we design concrete sections to fail due to yielding of the steel first. Practically we do this by placing limits on the position of the neutral axis, and we do this by controlling
how much reinforcing steel we place in the cross-section. Put simply, if we put too much steel in our beam, the tensile force that can be developed in the steel reinforcement will be larger than the
compressive force that the concrete can resist, so the concrete will fail first – a sudden, explosive, brittle failure.
The material properties and geometry of the section dictate that for a ductile failure (steel yielding first), the neutral axis, $x\le0.617d$. However, the Eurocode specifies that $x\le0.45d$. This
means that when the steel has yielded, the concrete strain is far enough away from its ultimate value to allow further rotation of the section to occur.
To further emphasise the meaning of this limit on neutral axis depth, consider the strain distribution in Fig. 6. If we impose the condition that at the instant of failure, the steel will have
yielded, the bottom strain value is fixed at $\epsilon_{yield}=0.00217$.
Now, if the neutral axis was permitted to be as low as $0.617d$, this would result in the strain in the concrete being equal to the ultimate value of $\epsilon_{cu,2}=0.0035$, or a balanced design,
as discussed above. However, if we limit the neutral axis depth to $0.45d$, the maximum strain in the concrete will be the value shown, $\epsilon_1<0.0035$. This leaves an additional capacity strain
of $\epsilon_2=0.0035−\epsilon_1$.
This allows rotation of the beam to continue after the steel has yielded but before the concrete crushes in compression. This further rotation is referred to as plastic rotation or plastic hinge
behaviour and allows redistribution of moments to take place in the structure, provided it has sufficient structural redundancy.
Fig 6. Limits on neutral axis depth.
3.0 Ultimate Moment Capacity of Reinforced Concrete
Check out the video and lecture notes in the course.
Now that we’ve introduced the concept of balanced section design and understand the limit placed on neutral axis depth, our next task is to develop a set of design equations that respect this limit.
We’ll focus here on so-called singly reinforced concrete sections, taken from a typical beam.
The term ‘singly reinforced’ refers to the fact that reinforcing steel is provided in the tension zone of the beam. Later in your study of reinforced concrete, you’ll see that we can also reinforce
the compression zone, such sections are referred to as doubly reinforced and are covered further in Fundamentals of Reinforced Concrete Design to Eurocode 2.
3.1 Ultimate moment of resistance – singly reinforced concrete section
Let’s start by considering the reinforced concrete beam cross-section shown in Fig. 7 below.
Fig 7. Reinforced concrete section showing stress and strain distribution at the ultimate limit state.
The beam from which the cross-section was obtained is subject to a bending moment that induces compression on the top of the section. The beam is reinforced with steel in the tension zone only.
The strain and stress distribution at failure are also shown. The task now is to determine an expression for the ultimate moment capacity of this section that ensures a ductile failure, i.e. where
the steel yields in tension before the concrete crushes in compression.
Let’s state at the outset that the internal bending moment is equal in magnitude to the externally applied bending moment, thus $M_{INT}=M_{EXT}$. To be consistent with Eurocode nomenclature, we will
refer to external bending moments as design moments (as in, moments we are designing for), denoted by $M_{Ed}$ (where the subscript $E$ stands for the effect of actions). An internal bending moment
shall be referred to as a moment of resistance, denoted by $M_{Rd}$. Therefore, at risk of stating the obvious, we seek to achieve $M_{Rd}\ge M_{Ed}$.
$M_{Rd} = F_{C}\times z = F_T \times z \tag{2}$
where $F_C$ and $F_T$ are the compression and tension stress resultants acting on the cross-section and $z$ is the lever-arm between them.
The compressive stress resultant (in the concrete), $F_C$ is simply the stress magnitude multiplied by the area over which it acts,
$F_C = \overbrace{0.567\:f_{ck}}^{\text{stress}}\:\overbrace{b\:(0.8x)}^{\text{area}} \tag{3}$
Noting that the lever-arm is $z=d−0.4x$, we can obtain an expression for the internal moment of resistance as a function of the neutral axis depth $x$,
$M_{Rd} = [0.567\:f_{ck}\:b\:0.8x]\:[d-0.4\:x] \tag{4}$
Remembering that at the point of failure the neutral axis depth is limited to $x=0.45d$, we can substitute this into Eqn. 4; the expression then simplifies to:
$\boxed{M_{Rd} = 0.167\: f_{ck}\:bd^2} \tag{5}$
This equation can be interpreted as:
$M_{Rd} = K_{lim}\: f_{ck}\:bd^2$
where $K_{lim}=0.167$. This is the maximum bending moment that can be generated in a singly reinforced beam while still respecting the neutral axis depth limit, ensuring a ductile failure.
3.2 Area of steel reinforcement
So far, we’ve established the ultimate moment our beam can safely resist. Notice that this simply required that we state the characteristic cylinder strength and cross-section dimensions. However, we
still need to determine how much steel reinforcement is required to generate this moment of resistance.
Remember, if we don’t put enough steel into the beam, the steel we do provide will yield too soon, i.e. before the ultimate moment capacity can be generated. If, on the other hand, we put too much
steel in, the beam will continue to resist loading until the concrete crushes in compression; this would give a higher moment capacity but a brittle (unsafe) failure. So we need to identify the
specific area of steel required.
Consider again the section in Fig. 7 and Eqn. 4 above. If we replace the lever-arm with $z$ and instead of stating the height of the stress block as $0.8x$, we use the equivalent expression, $2(d−z)$
, we can write an equivalent expression for $M_{Rd}$ in terms of the lever arm $z$, rather than $x$,
\begin{align*}M_{Rd} = &0.567\:f_{ck}\:b\:\overbrace{[2(d-z)]}^{=0.8x}\times z\\\\ M_{Rd} = &1.134\:f_{ck}\:b(d-z)z \end{align*} \tag{6}
Now, letting $K=M_{Ed}/(bd^2f_{ck})$ and imposing the limit $K\le0.167$, rearranging gives a quadratic equation in $z$,
$\Bigg(\frac{z}{d}\Bigg)^2 - \Bigg(\frac{z}{d}\Bigg) + \frac{K}{1.134} = 0; \tag{7}$
Solving this quadratic for $z$, we obtain:
$\boxed{z = d\Bigg[0.5+\sqrt{0.25-\frac{K}{1.134}}\Bigg]} \tag{8}$
Using Eqn. 8, we can obtain an expression for the lever arm, $z$, based on the externally applied design moment, geometry of the beam and its material properties. The required area of steel can now
be simply determined by considering the steel stress at failure:
$M_{Ed} = M_{Rd} = F_T\times z \tag{9}$
Noting that the steel has yielded at failure, the tensile stress resultant is

$F_T = \frac{f_{yk}}{\gamma_s}\:A_s = 0.87\:f_{yk}\:A_s$

and so we have:
$\boxed{A_s = \frac{M_{Ed}}{0.87\:f_{yk}\:z}}$
So in summary, the key equations when designing a singly reinforced beam for a ductile failure are:
\begin{align*} &M_{Rd,max} = 0.167\: f_{ck}\:bd^2\\\\ &K = \frac{M_{Ed}}{bd^2f_{ck}}\\\\ &z = d\Bigg[0.5+\sqrt{0.25-\frac{K}{1.134}}\Bigg]\\\\ &A_s = \frac{M_{Ed}}{0.87\:f_{yk}\:z} \end{align*}
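As a minimal sketch (not taken from the course material, and with the example numbers chosen purely for illustration), these four steps can be chained together in a few lines of Python; practical checks such as minimum steel area and the common cap $z \le 0.95d$ are omitted here:

```python
import math

def singly_reinforced(M_Ed, b, d, f_ck, f_yk):
    """Bending design of a singly reinforced rectangular section.

    M_Ed in Nmm, b and d in mm, f_ck and f_yk in N/mm^2.
    Returns (K, lever arm z in mm, required steel area A_s in mm^2).
    """
    K = M_Ed / (b * d**2 * f_ck)
    if K > 0.167:
        raise ValueError("K > 0.167: a singly reinforced section is not sufficient")
    z = d * (0.5 + math.sqrt(0.25 - K / 1.134))   # lever arm, Eqn. 8
    A_s = M_Ed / (0.87 * f_yk * z)                # required tension steel
    return K, z, A_s

# Illustrative numbers only: M_Ed = 150 kNm, b = 300 mm, d = 450 mm, C30/37 concrete, B500 steel
print(singly_reinforced(150e6, 300, 450, 30, 500))
```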
4.0 Wrapping up and where to next?
In this tutorial, we’ve introduced the role of steel reinforcement in reinforced concrete design. We’ve seen that steel plays a critical role in the development of an internal moment of resistance
and compensates for the concrete’s inherent brittleness and weakness in tension.
We’ve also explored the fundamental mechanical model used to describe the behaviour of the cross-section under load. We’ve seen that to avoid a brittle failure, we must limit the depth of the neutral
axis at the ultimate limit state.
Although we’ve covered a lot of ground and established a strong foundational understanding of reinforced concrete, we’ve really only scratched the surface. There are numerous other cross-section
configurations to explore, not to mention how resistance to shear force is developed.
If you want to explore reinforced concrete design further, take a look at the course below. After completing that course, you’ll have a much more complete understanding of fundamental reinforced
concrete behaviour.
That’s all for now. I’ll see you in the next one.
Fundamentals of Reinforced Concrete Design to Eurocode 2
An introduction to ultimate limit state design for bending and shear with optional calculation automation using Python.
After completing this course...
• You will be able to determine design actions using the Eurocodes Basis of Structural Design (EC0) and Actions on Structures (EC1).
• You will understand balanced section design and how to analyse and safely design singly and doubly reinforced concrete sections.
• You will understand how to apply the Variable Strut Inclination Method for shear reinforcement design.
• You will have developed your own reinforced concrete design codes in Python (the Python pathway is optional in this course).
Dr Seán Carroll
BEng (Hons), MSc, PhD, CEng MIEI, FHEA
Hi, I’m Seán, the founder of EngineeringSkills.com (formerly DegreeTutors.com). I hope you found this tutorial helpful. After spending 10 years as a university lecturer in structural engineering, I
started this site to help more people understand engineering and get as much enjoyment from studying it as I do. Feel free to get in touch or follow me on any of the social accounts.
Dr Seán Carroll's latest courses.
Do you have some knowledge or expertise you'd like to share with the EngineeringSkills community?
Check out our guest writer programme - we pay for every article we publish.
Featured Tutorials and Guides
If you found this tutorial helpful, you might enjoy some of these other tutorials. | {"url":"https://www.engineeringskills.com/posts/reinforced-concrete-design","timestamp":"2024-11-04T11:34:04Z","content_type":"text/html","content_length":"1049138","record_id":"<urn:uuid:29311452-357d-4c67-a2b7-058806e6081c>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00664.warc.gz"} |
Vortices in the New Superconductors
This thesis deals with vortices in stacked long Josephson junctions and in two-gap superconductors. The first part is about Josephson vortices, or fluxons, in stacked long Josephson junctions. The
thesis introduces the model which is related to high-Tc superconductors. Some of the well-known and very important solutions to the non-linear equations are discussed. A possible relationship
between the linear and non-linear modes is investigated numerically. The fluxon-solutions can be made to shuttle back and forth in the junctions and they may emit radiation near the junction edge.
This radiation is typically in the THz range. The main problem is, however, that the radiated power in a single junction is too small for applications. This is usually solved by stacking more
junctions and getting the fluxons in the different junctions to bunch, radiate coherently, and thus increase the emitted power. The main problem is that the vortices repel each other and therefore
prefer to be far apart, preventing coherent radiation. Some different ways of obtaining bunched solutions are discussed. A microwave field is shown to be able to introduce bunching in weakly coupled
systems. This may also be done using a cavity instead of a microwave field. And finally, the very important flux-flow modes are investigated numerically. It is shown, that in some cases the flux-flow
modes spontaneously jump from a triangular fluxon-lattice to a square fluxon-lattice, even in stacks with a strong inductive coupling. The second subject is vortices in two-gap superconductors, such
as MgB₂. These superconductors are investigated through the two-component Ginzburg-Landau theory. The usual Abrikosov vortex is investigated in the two-component version. The equations are solved
in the far-field and the effect of a Josephson-type coupling is considered. The subject of vortex-vortex interaction is briefly discussed in the case of zero Josephson coupling. Due to the added
complexity of having two order parameters, new features arise. A texture vortex solution is found analytically and numerically in the two-component theory for the case of zero magnetic field. The
case of non-zero magnetic field is investigated numerically. The textured vortex seems to be unstable in even a small applied magnetic field.
• Madsen, S. P. (PhD Student), Pedersen, N. F. (Main Supervisor), Tønnesen, O. (Supervisor), Christiansen, P. L. (Supervisor), Sørensen, M. P. (Examiner), Ustinov, A. V. (Examiner) & Hedegård, P.
01/01/2003 → 28/04/2006
Project: PhD | {"url":"https://orbit.dtu.dk/en/publications/vortices-in-the-new-superconductors","timestamp":"2024-11-12T15:16:13Z","content_type":"text/html","content_length":"60533","record_id":"<urn:uuid:14ca0b27-ca74-43d7-b4de-24e78a5acb07>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00595.warc.gz"} |
Neural Networks Part 2: Backpropagation Main Ideas
NOTE: This StatQuest was supported by these awesome people who support StatQuest at the Double BAM level: S. V. Dhulipala, Z. Rosenberg, T. Nguyen, J. Smith, G Heller-Wagner, J. N., S. Shah, H. M.
Chang, S. Özdemir, J. Horn, S. Cahyawijaya, N.Fleming, R., A. Eng, F. Prado, J. Malone-Lee
3 thoughts on “Neural Networks Part 2: Backpropagation Main Ideas”
1. Hi Josh – Thank you for making these awesome videos. These really helped me understand the foundations on Data Science especially on Deep Learning / Neural Network. I have a quick question on
this video regarding the chain rule used at 10:45 of the Neural Networks Part 2: Backpropagation Main Ideas video: why does the chain rule (d SSR / d b3) consist of two parts? One is (d SSR /
d Predicted) and the other one is (d Predicted / d b3). And then we multiply them together.
I got confused because I initially watched your video on Gradient Descent step by step and when we do the chain rule on that video , it only consists of one part which is d SSR / d intercept –>>
observed minus predicted.
The only difference I see is that on this video, the predicted = blue + orange + b3 while on the other video, the predicted = y = b0 + b1x
□ To get a better idea of how The Chain Rule is being applied here, check out the StatQuest on… The Chain Rule: https://youtu.be/wl1myxrtQHQ
☆ Thank you! | {"url":"https://statquest.org/neural-networks-part-2-backpropagation-main-ideas/","timestamp":"2024-11-14T22:02:44Z","content_type":"text/html","content_length":"37449","record_id":"<urn:uuid:e094d6e4-b415-4caa-9640-c3242697b5c6>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00299.warc.gz"} |
Price Elasticity Calculator: Measure Demand Sensitivity to Price Changes
Unlock the power of pricing strategy with our Price Elasticity of Demand Calculator. Discover how small price changes impact consumer behavior, optimize revenue, and gain a competitive edge. From
luxury goods to everyday essentials, master the art of pricing. Ready to revolutionize your business decisions? Calculate now!
Price Elasticity Calculator
How to Use the Price Elasticity of Demand Calculator Effectively
Our Price Elasticity of Demand Calculator is designed to help you quickly and accurately measure how sensitive the demand for a product or service is to changes in its price. Follow these simple
steps to use the calculator effectively:
1. Enter the Initial Price: Input the original price of the product or service in USD.
2. Enter the Final Price: Input the new price of the product or service in USD.
3. Enter the Initial Quantity: Input the original quantity demanded at the initial price.
4. Enter the Final Quantity: Input the new quantity demanded at the final price.
5. Click “Calculate”: The calculator will process your inputs and display the results.
6. Interpret the Results: The calculator will provide the price elasticity of demand value and an interpretation of what it means for your product or service.
Remember, all input values must be positive numbers greater than zero for the calculator to function correctly.
Understanding Price Elasticity of Demand: Definition, Purpose, and Benefits
Price elasticity of demand is a crucial economic concept that measures the responsiveness of consumer demand to changes in price. It provides valuable insights into how price fluctuations affect the
quantity of a product or service that consumers are willing to purchase. Understanding price elasticity is essential for businesses, economists, and policymakers as it helps in making informed
decisions about pricing strategies, market analysis, and economic policies.
The primary purpose of calculating price elasticity of demand is to quantify the relationship between price changes and demand changes. This measurement allows businesses and analysts to:
• Predict how changes in price will affect sales volume
• Optimize pricing strategies to maximize revenue or market share
• Understand consumer behavior and market dynamics
• Assess the competitiveness of products or services in the market
• Make informed decisions about production levels and inventory management
The benefits of using a price elasticity of demand calculator are numerous:
• Quick and accurate calculations without complex manual computations
• Easy comparison of elasticity across different products or time periods
• Instant interpretation of results for better decision-making
• Ability to simulate various pricing scenarios and their potential impacts
• Enhanced understanding of market dynamics and consumer behavior
The Mathematics Behind Price Elasticity of Demand
The price elasticity of demand is calculated using the following formula:
$$ \text{Price Elasticity of Demand} = \frac{\text{Percentage Change in Quantity Demanded}}{\text{Percentage Change in Price}} $$
Our calculator uses the midpoint method to calculate percentage changes, which provides a more accurate result when dealing with large price or quantity changes. The midpoint formulas are:
$$ \text{Percentage Change in Price} = \frac{\text{Final Price} - \text{Initial Price}}{(\text{Final Price} + \text{Initial Price}) / 2} $$

$$ \text{Percentage Change in Quantity} = \frac{\text{Final Quantity} - \text{Initial Quantity}}{(\text{Final Quantity} + \text{Initial Quantity}) / 2} $$
The calculator then takes the absolute value of the result to ensure a positive elasticity value, as we’re primarily interested in the magnitude of the relationship between price and quantity demanded.
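The web tool itself is not shown here, but the midpoint calculation it describes amounts to only a few lines of code. The sketch below (the function name is illustrative, not part of the tool) mirrors the formulas above:

```python
def price_elasticity_midpoint(initial_price, final_price, initial_qty, final_qty):
    """Price elasticity of demand via the midpoint method; returns the absolute value."""
    pct_change_price = (final_price - initial_price) / ((final_price + initial_price) / 2)
    pct_change_qty = (final_qty - initial_qty) / ((final_qty + initial_qty) / 2)
    return abs(pct_change_qty / pct_change_price)

# Example 1 below: price $5,000 -> $5,500, quantity 100 -> 95 units per month
print(round(price_elasticity_midpoint(5000, 5500, 100, 95), 2))  # ~0.54, i.e. inelastic
```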
Interpreting Price Elasticity of Demand Results
The calculated price elasticity of demand value provides crucial information about the nature of demand for a product or service:
• Elastic Demand (Elasticity > 1): A small change in price leads to a large change in quantity demanded. This indicates that consumers are highly sensitive to price changes.
• Inelastic Demand (Elasticity < 1): A large change in price leads to a small change in quantity demanded. This suggests that consumers are less sensitive to price changes.
• Unit Elastic Demand (Elasticity = 1): The change in price leads to a proportional change in quantity demanded.
How the Price Elasticity of Demand Calculator Addresses User Needs
Our calculator addresses several key user needs and solves specific problems related to understanding market dynamics:
1. Simplifying Complex Calculations
Manually calculating price elasticity of demand can be time-consuming and prone to errors, especially when dealing with large datasets or multiple products. Our calculator automates this process,
providing instant and accurate results.
2. Facilitating Quick Decision-Making
In fast-paced business environments, quick access to market insights is crucial. This calculator allows users to rapidly assess the potential impact of price changes on demand, enabling faster and
more informed decision-making.
3. Enhancing Market Understanding
By providing both numerical results and interpretations, the calculator helps users gain a deeper understanding of their market’s price sensitivity, even if they’re not economics experts.
4. Enabling Scenario Analysis
Users can easily input different price and quantity scenarios to see how they affect elasticity, allowing for comprehensive what-if analyses and strategy planning.
5. Improving Pricing Strategies
With a clear understanding of price elasticity, businesses can optimize their pricing strategies to achieve specific goals, such as maximizing revenue or increasing market share.
Practical Applications and Examples
Let’s explore some practical applications of the Price Elasticity of Demand Calculator through examples:
Example 1: Luxury Goods Retailer
A high-end watch retailer wants to understand the impact of a price increase on their bestselling model.
• Initial Price: $5,000
• Final Price: $5,500
• Initial Quantity Sold (per month): 100
• Final Quantity Sold (per month): 95
Using our calculator, we find the price elasticity of demand is approximately 0.54. This indicates that demand is inelastic, suggesting that the price increase will likely result in higher overall
revenue despite the slight decrease in quantity sold.
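For reference, the midpoint arithmetic behind this figure works out as follows:

$$ \%\Delta P = \frac{5500 - 5000}{(5500 + 5000)/2} \approx 0.095, \qquad \%\Delta Q = \frac{95 - 100}{(95 + 100)/2} \approx -0.051, \qquad \left|\frac{-0.051}{0.095}\right| \approx 0.54 $$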
Example 2: Grocery Store Staple
A supermarket chain is considering lowering the price of milk to attract more customers.
• Initial Price: $3.50
• Final Price: $3.00
• Initial Quantity Sold (per day): 1,000
• Final Quantity Sold (per day): 1,300
The calculator reveals a price elasticity of demand of approximately 1.70. This elastic demand suggests that the price decrease could significantly boost sales volume, potentially leading to
increased foot traffic and overall revenue.
Example 3: Subscription-Based Service
A streaming service is evaluating the impact of a price increase on its subscriber base.
• Initial Price: $9.99 per month
• Final Price: $11.99 per month
• Initial Number of Subscribers: 1,000,000
• Final Number of Subscribers: 950,000
The calculated price elasticity of demand is about 0.28, indicating inelastic demand. This suggests that the price increase might be beneficial for the company’s revenue, as the decrease in
subscribers is proportionally less than the increase in price.
Benefits of Using the Price Elasticity of Demand Calculator
1. Informed Pricing Decisions
By understanding how sensitive demand is to price changes, businesses can make more informed decisions about pricing strategies. This can lead to optimized pricing that balances revenue, market
share, and customer satisfaction.
2. Competitive Analysis
Comparing the price elasticity of demand for different products or across competitors can provide valuable insights into market positioning and competitive advantages.
3. Revenue Forecasting
With accurate elasticity measurements, businesses can better predict how price changes will affect their revenue, allowing for more accurate financial forecasting and planning.
4. Market Segmentation
Understanding price elasticity can help identify different market segments based on their price sensitivity, enabling targeted marketing and pricing strategies.
5. Product Development
Insights from price elasticity calculations can inform product development decisions, helping businesses create offerings that align with consumer price sensitivities.
6. Risk Management
By anticipating how price changes might affect demand, businesses can better manage risks associated with pricing decisions and market fluctuations.
Frequently Asked Questions (FAQ)
Q1: What does a negative price elasticity of demand mean?
A: A negative price elasticity of demand is actually the norm and indicates an inverse relationship between price and quantity demanded. As price increases, quantity demanded decreases, and vice
versa. Our calculator displays the absolute value of elasticity for easier interpretation.
Q2: Can price elasticity of demand be zero?
A: In theory, a price elasticity of demand of zero would indicate perfectly inelastic demand, where changes in price have no effect on quantity demanded. In practice, this is extremely rare and
usually only applies to essential goods with no substitutes.
Q3: How often should I recalculate price elasticity of demand?
A: It’s advisable to recalculate price elasticity regularly, especially after significant market changes, new product launches, or shifts in consumer behavior. For most businesses, quarterly or
bi-annual calculations are sufficient, but industries with rapid price fluctuations may benefit from more frequent analysis.
Q4: Can price elasticity of demand be used for services as well as products?
A: Yes, price elasticity of demand can be calculated for both products and services. The principle remains the same – it measures how changes in price affect the quantity demanded, regardless of
whether it’s a physical product or an intangible service.
Q5: How does price elasticity of demand relate to total revenue?
A: The relationship between price elasticity and total revenue is crucial for pricing strategy:
• If demand is elastic (elasticity > 1), lowering prices will increase total revenue.
• If demand is inelastic (elasticity < 1), raising prices will increase total revenue.
• If demand is unit elastic (elasticity = 1), changes in price will not affect total revenue.
Q6: Are there limitations to using price elasticity of demand?
A: While price elasticity of demand is a powerful tool, it has limitations:
• It assumes all other factors affecting demand remain constant (ceteris paribus).
• It may not capture the full complexity of consumer behavior, especially for luxury or highly differentiated products.
• Short-term elasticity may differ from long-term elasticity.
• It doesn’t account for cross-price elasticity (how changes in the price of one good affect demand for another).
Please note that while we strive for accuracy and reliability, we cannot guarantee that the results from our web tool are always correct, complete, or reliable. Our content and tools might have
mistakes, biases, or inconsistencies. Always use professional judgment and consider multiple sources when making important decisions based on these calculations.
Conclusion: Harness the Power of Price Elasticity Insights
Understanding price elasticity of demand is crucial for businesses, economists, and policymakers in today’s dynamic market environment. Our Price Elasticity of Demand Calculator offers a powerful,
user-friendly tool to gain these valuable insights quickly and accurately.
By leveraging this calculator, you can:
• Make data-driven pricing decisions
• Optimize revenue and market share strategies
• Gain deeper insights into consumer behavior and market dynamics
• Enhance competitive analysis and market positioning
• Improve financial forecasting and risk management
Don’t let complex calculations hinder your market understanding. Use our Price Elasticity of Demand Calculator today to unlock valuable insights and drive your business forward. Whether you’re a
small business owner, a corporate strategist, or an economics student, this tool will help you navigate the intricate relationship between price and demand with confidence and precision.
Start calculating price elasticity now and take the first step towards more informed, strategic decision-making in your pricing and market strategies!
Important Disclaimer
The calculations, results, and content provided by our tools are not guaranteed to be accurate, complete, or reliable. Users are responsible for verifying and interpreting the results. Our content
and tools may contain errors, biases, or inconsistencies. We reserve the right to save inputs and outputs from our tools for the purposes of error debugging, bias identification, and performance
improvement. External companies providing AI models used in our tools may also save and process data in accordance with their own policies. By using our tools, you consent to this data collection and
processing. We reserve the right to limit the usage of our tools based on current usability factors. By using our tools, you acknowledge that you have read, understood, and agreed to this disclaimer.
You accept the inherent risks and limitations associated with the use of our tools and services. | {"url":"https://www.pulsafutura.com/price-elasticity-calculator-measure-demand-sensitivity-to-price-changes/","timestamp":"2024-11-03T10:27:39Z","content_type":"text/html","content_length":"111896","record_id":"<urn:uuid:eed8a3f5-b7f4-4c80-b49d-d5eeb43f27cd>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00675.warc.gz"} |
help plz
Jen's phone uses a simple algorithm to count the number of strides she takes. The algorithm looks at the phone's accelerometer measurements, and counts a stride each time the acceleration goes from
below to above $3$ g. Based on the number of strides counted in the $20$-second window shown here, and assuming that Jen travels $140$ cm per stride, what was Jen's average walking speed, in meters
per second, over the $20$-second window? Express your answer as a decimal to the nearest hundredth.
Guest Nov 16, 2022
We need meters per second, not centimeters per second, so convert the stride of 140 cm to meters:
140 cm times 1 m/100 cm = 1.4 meters.
The number of pulses going above the dashed line for the 3 g threshold to count as strides is 19. Therefore, a total assumed distance of 19 × 1.4 meters is covered in 20 seconds, making the average speed
19 × 1.4 / 20 = 19 × 0.07 = 1.33 m/s.
Guest Apr 18, 2023 | {"url":"https://web2.0calc.com/questions/help-plz_98785","timestamp":"2024-11-08T17:25:27Z","content_type":"text/html","content_length":"20674","record_id":"<urn:uuid:73028e4c-31f4-435c-89fd-86a60149735f>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00085.warc.gz"} |
Physics Diagram Explained
Physics 90 is a course offered at some universities and colleges. Unfortunately, I don’t have any information about a specific Physics 90 course. However, I can tell you about the general topics that
are usually covered in a Physics 90 course.
Physics 90 is an advanced course that builds on the concepts learned in introductory physics courses. It covers a wide range of topics, including classical mechanics, electromagnetism, quantum
mechanics, and thermodynamics. The course is designed to provide students with a deeper understanding of the fundamental principles of physics and their applications in the real world.
In classical mechanics, students learn about the motion of objects and the forces that cause them to move. Topics covered include kinematics, dynamics, work and energy, and momentum. Students also
learn about oscillations and waves.
In electromagnetism, students learn about the behavior of electric and magnetic fields. Topics covered include Coulomb’s law, Gauss’s law, Faraday’s law, and Maxwell’s equations. Students also learn
about electromagnetic waves and electromagnetic radiation.
In quantum mechanics, students learn about the behavior of matter and energy at the atomic and subatomic level. Topics covered include wave-particle duality, quantum states, quantum operators, and
quantum mechanics of simple systems.
In thermodynamics, students learn about the relationship between heat, energy, and work. Topics covered include the laws of thermodynamics, entropy, and heat engines.
Physics 90 is a challenging course that requires a strong foundation in mathematics and physics. Students are expected to have a good understanding of calculus, differential equations, and linear
algebra. They should also be familiar with the concepts covered in introductory physics courses. | {"url":"https://chartdiagram.com/physics-diagram-expalined/","timestamp":"2024-11-08T20:46:31Z","content_type":"text/html","content_length":"79447","record_id":"<urn:uuid:51bfecb1-9718-4090-9834-c42de61f33cc>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00014.warc.gz"} |
OASIcs, Volume 46, WPTE'15 (Open Access Series in Informatics, ISSN 2190-6807; publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik; language: English; published 2015-06-17). Contents:

1. OASIcs, Volume 46, WPTE'15, Complete Volume. Yuki Chiba, Santiago Escobar, Naoki Nishida, David Sabel, Manfred Schmidt-Schauß. DOI: 10.4230/OASIcs.WPTE.2015. PDF: https://drops.dagstuhl.de/storage/01oasics/oasics-vol046_wpte2015/OASIcs.WPTE.2015/OASIcs.WPTE.2015.pdf. Keywords: Conference proceedings, Concurrent Programming, Formal Definitions and Theory, Specifying and Verifying and Reasoning about Programs, Semantics of Programming Languages, Mathematical Logic, Grammars and Other Rewriting Systems, Deduction and Theorem Proving.

2. Frontmatter, Table of Contents, Preface, Workshop Organization. Yuki Chiba, Santiago Escobar, Naoki Nishida, David Sabel, Manfred Schmidt-Schauß. Pages i-xvi. DOI: 10.4230/OASIcs.WPTE.2015.i. PDF: https://drops.dagstuhl.de/storage/01oasics/oasics-vol046_wpte2015/OASIcs.WPTE.2015.i/OASIcs.WPTE.2015.i.pdf. Keywords: Frontmatter, Table of Contents, Preface, Workshop Organization.

3. Mechanizing Meta-Theory in Beluga (Invited Talk). Brigitte Pientka. Page 1. DOI: 10.4230/OASIcs.WPTE.2015.1. PDF: https://drops.dagstuhl.de/storage/01oasics/oasics-vol046_wpte2015/OASIcs.WPTE.2015.1/OASIcs.WPTE.2015.1.pdf. Keywords: Type systems, Dependent Types, Logical Frameworks.
Abstract: Mechanizing formal systems, given via axioms and inference rules, together with proofs about them plays an important role in establishing trust in formal developments. In this talk, I will survey the proof environment Beluga. To specify formal systems and represent derivations within them, Beluga provides a sophisticated infrastructure based on the logical framework LF; in particular, its infrastructure not only supports modelling binders via binders in LF, but extends and generalizes LF with first-class contexts to abstract over a set of assumptions, contextual objects to model derivations that depend on assumptions, and first-class simultaneous substitutions to relate contexts. These extensions allow us to directly support key and common concepts that frequently arise when describing formal systems and derivations within them. To reason about formal systems, Beluga provides a dependently typed functional language for implementing inductive proofs about derivations as recursive functions on contextual objects following the Curry-Howard isomorphism. Recently, the Beluga system has also been extended with a totality checker which guarantees that recursive programs are well-founded and correspond to inductive proofs and an interactive program development environment to support incremental proof / program construction. Taken together these extensions enable direct and compact mechanizations. To demonstrate Beluga's strength, we develop a weak normalization proof using logical relations. The Beluga system together with examples is available from http://complogic.cs.mcgill.ca/beluga.

4. Head reduction and normalization in a call-by-value lambda-calculus. Giulio Guerrieri. Pages 3-17. DOI: 10.4230/OASIcs.WPTE.2015.3. PDF: https://drops.dagstuhl.de/storage/01oasics/oasics-vol046_wpte2015/OASIcs.WPTE.2015.3/OASIcs.WPTE.2015.3.pdf. Keywords: sequentialization, lambda-calculus, sigma-reduction, call-by-value, head reduction, internal reduction, (strong) normalization, evaluation, confluence.
Abstract: Recently, a standardization theorem has been proven for a variant of Plotkin's call-by-value lambda-calculus extended by means of two commutation rules (sigma-reductions): this result was based on a partitioning between head and internal reductions. We study the head normalization for this call-by-value calculus with sigma-reductions and we relate it to the weak evaluation of original Plotkin's call-by-value lambda-calculus. We give also a (non-deterministic) normalization strategy for the call-by-value lambda-calculus with sigma-reductions.

5. Towards Modelling Actor-Based Concurrency in Term Rewriting. Adrián Palacios, Germán Vidal. Pages 19-29. DOI: 10.4230/OASIcs.WPTE.2015.19. PDF: https://drops.dagstuhl.de/storage/01oasics/oasics-vol046_wpte2015/OASIcs.WPTE.2015.19/OASIcs.WPTE.2015.19.pdf. Keywords: concurrency, actor model, rewriting.
Abstract: In this work, we introduce a scheme for modelling actor systems within sequential term rewriting. In our proposal, a TRS consists of the union of three components: the functional part (which is specific of a system), a set of rules for reducing concurrent actions, and a set of rules for defining a particular scheduling policy. A key ingredient of our approach is that concurrent systems are modelled by terms in which concurrent actions can never occur inside user-defined function calls. This assumption greatly simplifies the definition of the semantics for concurrent actions, since no term traversal will be needed. We prove that these systems are well defined in the sense that concurrent actions can always be reduced. Our approach can be used as a basis for modelling actor-based concurrent programs, which can then be analyzed using existing techniques for term rewrite systems.

6. Observing Success in the Pi-Calculus. David Sabel, Manfred Schmidt-Schauß. Pages 31-46. DOI: 10.4230/OASIcs.WPTE.2015.31. PDF: https://drops.dagstuhl.de/storage/01oasics/oasics-vol046_wpte2015/OASIcs.WPTE.2015.31/OASIcs.WPTE.2015.31.pdf. Keywords: Concurrency, Process calculi, Pi-calculus, Rewriting, Semantics.
Abstract: A contextual semantics - defined in terms of successful termination and may- and should-convergence - is analyzed in the synchronous pi-calculus with replication and a constant Stop to denote success. The contextual ordering is analyzed, some nontrivial process equivalences are proved, and proof tools for showing contextual equivalences are provided. Among them are a context lemma and new notions of sound applicative similarities for may- and should-convergence. A further result is that contextual equivalence in the pi-calculus with Stop conservatively extends barbed testing equivalence in the (Stop-free) pi-calculus and thus results on contextual equivalence can be transferred to the (Stop-free) pi-calculus with barbed testing equivalence.

7. Formalizing Bialgebraic Semantics in PVS 6.0. Sjaak Smetsers, Ken Madlener, Marko van Eekelen. Pages 47-61. DOI: 10.4230/OASIcs.WPTE.2015.47. PDF: https://drops.dagstuhl.de/storage/01oasics/oasics-vol046_wpte2015/OASIcs.WPTE.2015.47/OASIcs.WPTE.2015.47.pdf. Keywords: operational semantics, denotational semantics, bialgebras, distributive laws, adequacy, theorem proving, PVS, WHILE.
Abstract: Both operational and denotational semantics are prominent approaches for reasoning about properties of programs and programming languages. In the categorical framework developed by Turi and Plotkin both styles of semantics are unified using a single, syntax independent format, known as GSOS, in which the operational rules of a language are specified. From this format, the operational and denotational semantics are derived. The approach of Turi and Plotkin is based on the categorical notion of bialgebras. In this paper we specify this work in the theorem prover PVS, and prove the adequacy theorem of this formalization. One of our goals is to investigate whether PVS is adequately suited for formalizing metatheory. Indeed, our experiments show that the original categorical framework can be formalized conveniently. Additionally, we present a GSOS specification for the simple imperative programming language While, and execute the derived semantics for a small example program. | {"url":"https://drops.dagstuhl.de/entities/volume/OASIcs-volume-46/metadata/doaj-xml","timestamp":"2024-11-05T10:41:27Z","content_type":"application/xml","content_length":"14765","record_id":"<urn:uuid:52b5faf0-bd90-446b-ad32-191f8e43b740>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00250.warc.gz"} |
Quantitative Investing - How Math Drives Effective Portfolio Allocation Strategies – Toni Esteves
Citizens, families, and businesses all face the issue of financial investment. Families sometimes want to preserve their assets from inflation or earn more money without risking their resources.
Companies must navigate more sophisticated investing methods in order to find high-return opportunities while taking reasonable risks. Financial institutions invest on behalf of any investor. The
number of investment opportunities has grown considerably over time as a result of financial market globalization and the development of new investment vehicles. Bonds, equities or stocks, investment
funds (including recently established socially responsible funds), derivatives, and assets resulting from securitization operations are now prominent types of instruments.
The term Modern Portfolio Theory (MPT) refers to the optimization theory and models that aim to optimize asset investments, typically by maximizing the portfolio expected return for a given level of
portfolio risk or minimizing risk for a given level of expected return. The Modern Portfolio Theory (MPT) originates with Harry Markowitz [1],[2] who mathematically formalized the concepts of
expected portfolio return and risk. Markowitz’s mean-variance model for asset portfolio selection involves delineating an efficient frontier, a smooth curve illustrating the trade-off between return
and risk (variance). Portfolios on the efficient frontier can be found through quadratic programming, which offers efficient solvers available in terms of computing time. The solutions generated by
these solvers are optimal, and the selection process may be constrained by practical considerations that can be expressed as linear restrictions. The concept of modern portfolio theory is widely used
as synonymous with models and optimizations aimed at enhancing investment in financial asset portfolios, typically by maximizing the expected return of an investment portfolio or minimizing the
exposure risk of said portfolio given the risk to which it is exposed.
However, like all mathematical models, Markowitz relied on certain assumptions:
• Investors take into account not only the expected return but also the risk when analysing potential capital allocations.
• Risk is assumed to stem from uncertainty about the future price of assets, which is captured by variance.
• Different individuals have different levels of risk tolerance.
• An investor seeking higher returns must also accept higher risk.
• As asset distributions are considered, in theory, to be normal distributions, we assume that the distribution of the resulting portfolio will also be.
But, what about the best portfolio to invest in?
A common misconception is thinking that there’s a definitive “best portfolio.” We should instead consider a range of efficient portfolios tailored to different types of investors.
Portfolio Theory and Efficient Portfolios
Let’s consider the following example and define some observations afterwards:
Let \(R_{A}\) and \(R_{B}\) be the simple returns of assets A and B. In the MPT, we “assume” that \(R_{A}\) and \(R_{B}\) have a joint normal distribution and that the following information is known:
\[\mu_{A} = E[R_{A}]\] \[\sigma^2_{A} = Var(R_{A})\] \[\mu_{B} = E[R_{B}]\] \[\sigma^2_{B} = Var(R_{B})\] \[\sigma_{AB} = Cov(R_{A},R_{B})\] \[\rho_{AB} = Corr(R_{A},R_{B}) = \frac{\sigma_{AB}}{\sigma_{A}\sigma_{B}}\]
As previously mentioned, let’s mathematically formulate the concepts of return and risk of a portfolio considering its variance:
• Portfolio Return: \(\mu_{p} = E[R_{p}] = w_{A}\mu_{A} + w_{B}\mu_{B} + \dots + w_{n}\mu_{n}\)
• Variance: \(\sigma^2_{p} = Var(R_{p}) = w_{A}^2\sigma_{A}^2 + w_{B}^2\sigma_{B}^2 + 2w_{A}w_{B}\sigma_{AB}\)
• Standard Deviation: \(\sigma_{p} = sd(R_{p}) = \sqrt{w_{A}^2\sigma_{A}^2 + w_{B}^2\sigma_{B}^2 + 2w_{A}w_{B}\sigma_{AB}}\)
Since the distributions of the assets are considered to be normal distributions, we assume that the portfolio’s distribution will also be, and it is given by:
\[R_{p} \sim N(\mu_{p}, \sigma^2_{p})\]
Disclaimer: The premise that the returns distributions follow a normal distribution is not necessarily true; it’s just an assumption of the Markowitz model, and in fact, this is one of the reasons
why there are criticisms of this model.
Applying the formulations so far, we have:
• Portfolio equally weighted: \(w_{A} = 0.5\), \(w_{B} = 0.5\)
\(\qquad \mu_{p} = (0.5) \times (0.175) + (0.5) \times (0.055) = \textbf{0.115}\) \(\qquad \sigma^2_{p} = (0.5)^2 \times (0.067) + (0.5)^2 \times (0.013) + 2 \times (0.5)(0.5)(-0.004866) = 0.01751\)
\(\qquad \sigma_{p} = \sqrt{0.01751} = \textbf{0.1323}\)
• Portfolio Delta: \(w_{A} = 1.5\), \(w_{B} = -0.5\)
\(\qquad \mu_{p} = (1.5) \times (0.175) + (-0.5) \times (0.055) = \textbf{0.235}\) \(\qquad \sigma^2_{p} = (1.5)^2 \times (0.067) + (-0.5)^2 \times (0.013) + 2 \times (1.5)(-0.5)(-0.004866) \approx 0.1604\) \(\qquad \sigma_{p} = \sqrt{0.1604} = \textbf{0.4005}\)
So we have:
| Portfolio | Risk (\(\sigma_{p}\)) | Return (\(\mu_{p}\)) |
|---|---|---|
| Equally weighted | 0.1323 | 0.115 |
| Delta | 0.4005 | 0.235 |
Note that the Equally Weighted Portfolio is less risky than the Delta Portfolio; however, the latter has a higher return. It turns out that the ideal portfolio is directly linked to an investor’s
risk appetite.
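As a quick numerical check (a minimal sketch using the return and covariance figures quoted in the calculations above), both portfolios can be reproduced with a few lines of Python:

```python
import numpy as np

mu = np.array([0.175, 0.055])             # expected returns of A and B (figures from the example above)
cov = np.array([[0.067, -0.004866],
                [-0.004866, 0.013]])      # covariance matrix of A and B

def portfolio_stats(w, mu, cov):
    """Expected return and standard deviation of a portfolio with weights w."""
    w = np.asarray(w)
    return w @ mu, np.sqrt(w @ cov @ w)   # w'mu and sqrt(w' Sigma w)

print(portfolio_stats([0.5, 0.5], mu, cov))    # equally weighted: ~ (0.115, 0.13)
print(portfolio_stats([1.5, -0.5], mu, cov))   # Delta (short position in B): ~ (0.235, 0.40)
```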
Markowitz also presented assumptions considering sets of efficient portfolios:
• Portfolio returns are stationary in covariance and ergodic, and they possess a joint normal distribution over the investment horizon.
• Investors know the values of the means, variances, and covariances of the assets.
• Investors are concerned only with the expected return and expected variance of the portfolio.
• We can then define a set of efficient portfolios as those portfolios that have maximum expected return for a given level of risk, which
efficient portfolio is not dominated by any other.
A portfolio is said to be not dominated by any other if both its risk is lower and its expected return is higher.
This can be expressed by the following notation:
\[E(R_{x}) \leq E(R_{y}) \quad \text{and} \quad \sigma(R_{x}) \geq \sigma(R_{y})\]
The notation above expresses the concept of portfolio dominance, where portfolio \(y\) dominates \(x\). Therefore, no rational investor would choose to invest in \(x\) instead of \(y\).
Definition 1: Efficient portfolios are the feasible portfolios that offer the highest expected return for a given level of risk, as measured by portfolio standard deviation. These are the portfolios
that investors are most interested in holding. Graphically, these portfolios start with the global minimum variance portfolio and are located above and to the right of it.
Definition 2: Inefficient portfolios are those for which there exists another feasible portfolio with the same risk but a higher expected return. Graphically, these portfolios are located below and
to the right of the global minimum variance portfolio.
The diagram above depicts both efficient and inefficient portfolios. Efficient portfolios yield the highest expected return for a given standard deviation value. These portfolios are indicated by
green dots, commencing with the global minimum variance portfolio situated at the apex of the Markowitz bullet. Inefficient portfolios, marked by red dots, lie below the global minimum variance
portfolio. For example, within the diagram, two feasible portfolios, 3 and 11, exhibit identical standard deviation but diverge in expected return values. Portfolio 11, boasting a superior expected
return, is categorised as efficient, whereas Portfolio 3, with a lower expected return, is deemed inefficient.
But, how do we find the portfolio with the minimum global variance?
Global Variance Minimum Portfolio
Firstly, we need to determine the optimal values for \(w_{A}\) and \(w_{B}\) that solve the following optimisation problem:
\[\min_{w_{A},\,w_{B}} \;\; w_{A}^2\sigma_{A}^2 + w_{B}^2\sigma_{B}^2 + 2w_{A}w_{B}\sigma_{AB}\]
\[\text{s.t.} \quad w_{A} + w_{B} = 1\]
So, to find the portfolio with the minimum global variance, we need to solve an optimisation problem defined as follows:
What are the values of \(w_{A}\) and \(w_{B}\) that minimise the risk of a portfolio, given by the variance, while ensuring that the sum of these weights is equal to 1?
Note also that the absence of the constraint \(w_{A} + w_{B} = 1\) would simply imply assigning \(w_{A} = w_{B} = 0\), resulting in the minimum possible variance. Thus, we would then encounter what
is known as an unrestricted problem. However, assigning \(w_{A} = w_{B} = 0\) does not satisfy our constraint because we are dealing with a constrained problem. In general and at a high-level
formalisation, we can solve this problem using Lagrange multipliers.
1. The constraint \(w_{A} + w_{B} = 1\) is rewritten in homogeneous form:
\[w_{A} + w_{B} = 1 \Rightarrow w_{A} + w_{B} -1 = 0\]
2. Next, we create the Lagrangian function:
\[L(w_{A}, w_{B}, \lambda) = w_{A}^2\sigma_{A}^2 + w_{B}^2\sigma_{B}^2 + 2w_{A}w_{B}\sigma_{AB} + \lambda(w_{A}+w_{B}-1)\]
3. We must then minimize this function with respect to \(w_{A}\), \(w_{B}\), and \(\lambda\):
\[\frac{\partial}{\partial w_{A}} = 2w_{A}\sigma^2_{A} + 2w_{B}\sigma_{AB} + \lambda = 0\] \[\frac{\partial}{\partial w_{B}} = 2w_{B}\sigma^2_{B} + 2w_{A}\sigma_{AB} + \lambda = 0\] \[\frac{\partial}
{\partial \lambda} = w_{A} + w_{B} - 1 = 0\]
Thus, we obtain the following system:
\[\frac{\partial L(w_{A},w_{B},\lambda)}{\partial w_{A}} = 2w_{A}\sigma^2_{A} + 2w_{B}\sigma_{AB} + \lambda = 0\] \[\frac{\partial L(w_{A},w_{B},\lambda)}{\partial w_{B}} = 2w_{B}\sigma^2_{B} + 2w_
{A}\sigma_{AB} + \lambda = 0\] \[\frac{\partial L(w_{A},w_{B},\lambda)}{\partial \lambda} = w_{A} + w_{B} - 1 = 0\]
Representing the system in matrix notation and performing some operations, we find the optimal weight that minimizes the variance:
\[w^*_{A} = \frac{\sigma^2_{B} - \sigma_{AB}}{\sigma^2_{A} + \sigma^2_{B} - 2 \sigma_{AB}}, \ \ \ w^*_{B}=1-w^*_{A}\]
Using the previously presented data, we have:
\[w^*_{A} = \frac{0.01323 - (-0.004866)}{0.06656 + 0.01323 - 2(-0.004866)} = \textbf{0.2021}, \ \ \ w^*_{B} = \textbf{0.7979}\]
Therefore, the expected return, variance, and standard deviation of the portfolio are:
\(\qquad \mu_{p} = (0.2021) \times (0.175) + (0.7979) \times (0.055) = \textbf{0.07925}\) \(\qquad \sigma^2_{p} = (0.2021)^2 \times (0.067) + (0.7979)^2 \times (0.013) + 2 \times (0.2021)(0.7979)
(-0.004875) = 0.00975\) \(\qquad \sigma_{p} = \sqrt{0.00975} = \textbf{0.09782}\)
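The same weights can be reproduced directly from the closed-form expression derived above (a small sketch, using the inputs quoted in the calculation):

```python
def gmv_weights(var_a, var_b, cov_ab):
    """Minimum-variance weights for two assets subject to w_A + w_B = 1."""
    w_a = (var_b - cov_ab) / (var_a + var_b - 2 * cov_ab)
    return w_a, 1.0 - w_a

w_a, w_b = gmv_weights(0.06656, 0.01323, -0.004866)
print(round(w_a, 4), round(w_b, 4))   # ~0.2021 and ~0.7979, as above
```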
In the figure below, the green dot represents the global minimum variance portfolio.
The main feature of this parabola is the relationship between assets A and B, indicating their inter-correlation. The fundamental premise of MPT aims to encourage diversification by blending portfolio
assets that are not closely correlated, in order to diminish portfolio risk (variance) without compromising returns.
What are the benefits of diversification?
The role of Diversification
The portfolio variance depends on the individual variances of the asset returns and on their correlation. For given weights \(w_{A}\) and \(w_{B}\), the variance of the portfolio return decreases as the correlation \(\rho_{AB}\) decreases from \(1\) to \(-1\).
Let’s examine the extreme cases:
1. \(\overline{\rho}_{AB}\) = 1: When the assets are perfectly positively correlated, the frontier becomes a straight line connecting the points corresponding to portfolios consisting of each asset individually.
\[\sigma^2(x) = w_{A}^2\sigma_{A}^2 + w_{B}^2\sigma_{B}^2 + 2w_{A}w_{B}\sigma_{A}\sigma_{B} = (\sigma_{A}w_{A} + \sigma_{B}w_{B})^2\]
In this case, the portfolio’s standard deviation is a convex combination of the standard deviations of the two assets, with coefficients \(w_{A}\) and \(w_{B}\), \(w_{A} + w_{B} = 1\).
2. \(\overline{\rho}_{AB}\) = -1: When assets are perfectly negatively correlated, the frontier comprises two intersecting lines on the expected return axis (zero standard deviation). This
phenomenon can be elucidated by examining the following formula:
\[\sigma^2(x) = w_{A}^2\sigma_{A}^2 + w_{B}^2\sigma_{B}^2 - 2w_{A}w_{B}\sigma_{A}\sigma_{B} = (\sigma_{A}w_{A} - \sigma_{B}w_{B})^2\]
Since \(w_{B} = 1 - w_{A}\), the standard deviation \(\sigma_{A}w_{A} - \sigma_{B}w_{B}\) becomes null when \(w_{A} = \frac{\sigma_{B}}{\sigma_{A} + \sigma_{B}}\). For values of \(w_{A}\) lower and
greater than \(\frac{\sigma_{B}}{\sigma_{A} + \sigma_{B}}\) two straight lines are obtained.
Now consider \(N\) uncorrelated assets with the same expected return and variance.
\[E[R_{i}] = 0.2\] \[\sigma^2_{i} = 1\] \[cov(i,j) = 0 \ \ \forall \ \ i \neq j\]
If we invest all resources in just one of the assets, then the portfolio performance will be \((R_{p} , \sigma^2_{p}) = (0.2, 1)\), which represents the return and variance, respectively. Now,
consider a portfolio with \(N\) assets and weights \(W_{i} = 1/N\). The expected return of the portfolio equals that of its components.
\[\mu_{p} = \sum_{i=1}^{N}\frac{1}{N} \times 0.2 = 0.2\]
However, the variance is given by:
\[\sigma^2_{p} = \sum_{i=1}^{N}\frac{1}{N^2} \times 1 = \frac{N}{N^2} = \frac{1}{N}\]
where \(N\) is the number of assets in the portfolio.
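The 1/N effect is easy to see numerically; the tiny sketch below uses unit-variance, uncorrelated assets purely for illustration:

```python
import numpy as np

for n in (1, 10, 100, 1000):
    w = np.full(n, 1.0 / n)      # equal weights
    cov = np.eye(n)              # uncorrelated assets with unit variance
    print(n, w @ cov @ w)        # portfolio variance = 1/n
```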
Note that the variance tends towards zero as \(N\) increases, and the portfolio becomes risk-free, assuming that the assets are uncorrelated, which in practice is very unlikely to happen; in reality, there is always some correlation between assets. Generalizing the model proposed by Markowitz, let's assume that there are \(N\) assets that can be part of the portfolio, with expected returns \(\mu_{i} = E[R_{i}]\) and covariances \(\sigma_{ij}\) for \(i, j=1,2,...,N\). A portfolio is then defined by a set of weights \(W = \{ w_{1},...,w_{N}\}\) where \(w_{1} +...+ w_{N} = 1\). Thus, the general model proposed by Markowitz to find the portfolio of minimum variance is given by:
\[\min_{w_{1},\dots,w_{N}} \;\; \sigma^2_{p} = \sum_{i=1}^{N}\sum_{j=1}^{N}w_{i}w_{j}\sigma_{ij}\]
\[\text{s.t.} \quad \sum_{i=1}^{N} w_{i} = 1\]
Note: A subtle change is to sum over \(j \geq i+1\) in the objective so that only the upper triangular part of the covariance matrix is considered.
Also note that
\[\sum_{i=1}^{N}\sum_{j=1}^{N}w_{i}w_{j}\sigma_{ij} = \sum_{i=1}^{N}w^2_{i}\sigma^2_{i} + 2\sum_{i=1}^{N-1}\sum_{j=i+1}^{N}w_{i}w_{j}\sigma_{ij}\]
• \(\sum_{i=1}^{N}w^2_{i}\sigma^2_{i}\) is the sum of each squared weight times the corresponding variance.
• \(2\sum_{i=1}^{N-1}\sum_{j=i+1}^{N}w_{i}w_{j}\sigma_{ij}\) is twice the weight of \(i\) times the weight of \(j\) times their covariance, summed over all pairs.
The system above can be solved using Lagrange multipliers, which yields a system with \(N+1\) equations and \(N+1\) variables. The variables are \(w_{1},...,w_{N}\) and \(\lambda\), and the \(N+1\) equations are the partial derivatives with respect to each of the weights and with respect to \(\lambda\), as we did in the previous section. The above model is referred to as a **Quadratic Programming** model with linear constraints.
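As an illustration of how the quadratic program above can be solved in practice, here is a minimal sketch using the closed-form Lagrangian solution for the single budget constraint, \(w = \Sigma^{-1}\mathbf{1} / (\mathbf{1}^{T}\Sigma^{-1}\mathbf{1})\). The covariance matrix is made up for the example and is not data from the post:

```python
import numpy as np

# Hypothetical covariance matrix for three assets (illustrative values only).
sigma = np.array([
    [0.04, 0.01, 0.00],
    [0.01, 0.09, 0.02],
    [0.00, 0.02, 0.16],
])

ones = np.ones(sigma.shape[0])
sigma_inv_ones = np.linalg.solve(sigma, ones)        # Sigma^{-1} 1
weights = sigma_inv_ones / (ones @ sigma_inv_ones)   # normalise so weights sum to 1

print("minimum-variance weights:", weights)
print("portfolio variance:", weights @ sigma @ weights)
```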
Portfolio Optimization Process (Bonus)
When defining the optimization framework, timing becomes a crucial factor that requires careful consideration. There’s a specific point in time when the portfolio is initially constructed, and assets
are purchased, which is referred to as the investment time. Subsequently, there may be occasions when the existing portfolio undergoes revisions and adjustments to accommodate market changes. During
such times, assets may be bought or sold.
The optimization process typically involves several key steps:
1. Gathering information, beliefs, and methods to estimate the performance of available assets at the target time.
2. Selecting a model that, based on the estimations obtained in the previous phase, generates a portfolio.
3. Assessing the model through ex-post analysis and providing possible feedback to one of the previous phases.
4. Implementing the portfolio.
Measuring the past performance of a portfolio against a benchmark is relatively straightforward using its rate of return, including potential dividends. However, ensuring the future performance of a
portfolio is a more challenging task. The expected return of a portfolio over a certain period should not be a single measure for an asset or portfolio. The rate of return of an asset over a certain
period can be modeled as a random variable, and consequently, the rate of return of a specific portfolio is also a random variable.
The performance issue relates to the trade-off between the expected value of the uncertain rate of return of a portfolio and its level of variability. Variability should measure the associated risk,
the risk of not achieving what is expected. Typically, a high expected return cannot be achieved without a high level of risk. The selection of a specific trade-off level, such as between a portfolio
with high return and high risk and one with low return and low risk, can only be made by the investor. Risk aversion will lead an investor to prefer the latter over the former, while risk propensity
will lead to the former over the latter.
To conclude, the Markowitz model epitomizes the fusion of mathematics and finance, providing investors with a potential instrument to enhance their portfolios in an unpredictable environment, and
although the model’s mathematical foundations may seem complex initially, its fundamental principles are rooted in intuitive concepts that resonate with investors globally.
As we explore the complexities of the Markowitz model, it becomes clear that its robustness stems not just from its mathematical precision but also from its capacity to adjust to real-world
investment situations. Despite criticisms highlighting its assumptions’ limitations, the model’s continued relevance highlights its importance as a guiding principle for wise investment
decision-making. Despite that, it is evident that the portfolio optimization methods employed in the financial industry are based on assumptions that do not align well with the real world. As we strive to deepen our understanding of financial markets, the Markowitz model endures, albeit in a constantly evolving landscape.
However, these tools are often used and trusted by individual investors and advisers as if the underlying assumptions were accurate. As I once read in a statement attributed to an economist: within the models, there are no flaws; the flaws lie in the assumptions.
Final Thoughts
If you've read this post this far, thank you very much. I hope this material has been useful and makes sense to you. If any other topic, related or not to the content of this post, interests you or has left you in doubt, put it in the comments and I will be happy to cover it more clearly in a new post.
For any feedback, whether positive or negative, just get in touch via my Twitter, LinkedIn, GitHub, or in the comments below. Thanks.
[1] Markowitz, H. M. (1952). Portfolio selection. Journal of Finance 7, 1: 77–91
[2] Foundations of Portfolio Theory
[3] Benninga, S. 2000. Financial Modeling. Second Edition. Cambridge, MA: MIT Press.
[4] Bodie, Z., A. Kane, and A. J. Marcus. 2013. Investments. 10th Edition. McGraw-Hill Education.
[5] Elton, E., G. Gruber, S. J. Brown, and W. N. Goetzmann. 2014. Modern Portfolio Theory and Investment Analysis. 9th Edition. New York: Wiley.
[6] Markowitz, H. 1959. Portfolio Selection: Efficient Diversification of Investments. New York: Wiley.
[7] Markowitz, H. 1987. Mean-Variance Analysis in Portfolio Choice and Capital Markets. Cambridge, MA: Basil Blackwell.
[8] Portfolio Optimization Theory Versus Practice
[9] Linear and Mixed Integer Programming for Portfolio Optimization | {"url":"https://www.toniesteves.com/mathematical-portfolio-optimization","timestamp":"2024-11-09T07:22:01Z","content_type":"text/html","content_length":"116211","record_id":"<urn:uuid:8b19370f-e5bc-4d11-a96c-0e3e8affd37e>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00619.warc.gz"} |
The Most Famous Paradox in Physics Nears Its End
The black hole information paradox has fascinated and perplexed theoretical physicists for almost 50 years. However, in a succession of ground-breaking articles, they have come tantalizingly close to solving it. They can now confidently claim that information does indeed escape a black hole. If you enter one, you won't be lost forever. The knowledge required to reconstruct your body will emerge, particle by particle. Most physicists have long assumed that it would, since string theory, their top contender for a unified theory of nature, says so. However, although motivated by string theory, the new computations are entirely independent of it and make no reference to strings. Information escapes through gravity's own mechanisms: merely regular gravity with a thin overlay of quantum effects.
Gravity has taken on an odd new function in this scenario. A black hole's tremendous gravity prevents anything from escaping, according to Einstein's general theory of relativity. This idea was
uncontested in the more complex theory of black holes that Stephen Hawking and his associates came up with in the 1970s. A hybrid technique known as "semiclassical" physics was used by Hawking and
others to describe matter in and around black holes using quantum theory while continuing to describe gravity using Einstein's classical theory. The technique suggested fresh effects near the hole's
edge, but the inside remained completely walled off. Scientists believed Hawking had successfully performed the semiclassical computation. Any future development would have to include gravity as a
quantum phenomenon.
The authors of the new research contest that. They have discovered new gravitational combinations that Hawking did not include but that Einstein's theory allows. These are known as semiclassical
effects. These effects first appear muted, but once the black hole has aged significantly they take center stage. The hole's hermit kingdom changes into a ferociously open system. Not only does knowledge leak out, but anything new that enters is almost instantly regurgitated. Although the new semiclassical theory hasn't yet explained precisely how the information escapes, theorists already have hints of the escape mechanism because of how quickly discoveries have been made over the previous two years.
One of the co-authors, Donald Marolf from the University of California, Santa Barbara, remarked, "That is the most fascinating thing that has happened in this area, in my opinion, since Hawking."
Leading theoretical physicist Eva Silverstein of Stanford University, who was not directly engaged, called the result "a milestone calculation."
The authors say that while you might expect them to rejoice, they also feel disappointed. Had the computation required the deep features of quantum gravity rather than only a light dusting of them, it would have been much harder to complete, but, once completed, it would have shed light on those depths. They worry that they may have solved this specific problem without gaining the broader understanding they had hoped for. In reference to a fully quantum theory of gravity, Geoff Penington of the University of California, Berkeley remarked, "The hope was that if we could answer this question, if we could see the information coming out, we would have had to learn about the microscopic theory in order to do so."
In Zoom calls and webinars, there is a heated discussion over what it all means. The piece is quite mathematical and has a Rube Goldberg feel to it, tying together one calculational trick after
another in a way that is difficult to understand. Wormholes, the holographic principle, emergent space-time, quantum entanglement, and quantum computers are just a few of the modern concepts that
occur in the field of basic physics, making it both fascinating and perplexing.
Moreover, not everyone is persuaded. Some still believe that Hawking was correct and that new physics, such as string theory, must be invoked in order for information to escape. Nick Warner of the University of Southern California remarked, "I'm quite skeptical of folks who come in and say, 'I've got a solution with just quantum mechanics and gravity.' It has already led us in the wrong direction before."
But it seems like everyone can agree on one thing. Space-time appears to break apart near a black hole in some way, suggesting that it is not the foundation of reality but rather an emerging
structure from something deeper. Despite the fact that Einstein defined gravity as the geometry of space-time, his theory also calls for the breakdown of space-time, which is ultimately the reason
why information may evade the gravitational hold that it has on it.
The Key Is in the Curve
Don Page and his family spent Christmas 1992 house-sitting in Pasadena, where they took advantage of the pool and Rose Parade. The break was also utilized by Page, a physicist at the University of
Alberta in Canada, to reflect on how paradoxical black holes actually are. His early research on black holes during his doctoral studies in the 1970s was crucial in helping his advisor Stephen Hawking realize that radiation from black holes is caused by random quantum processes near the hole's edge. A black hole rots from the outside in, to put it simply.
Its shed particles don't seem to have any information about what's within. The hole's mass increases by 100 kilograms if an astronaut weighing 100 kilograms falls in. However, the radiation that the
hole produces—the equivalent of 100 kilograms—is wholly unstructured. The radiation has no characteristics that indicate whether it originated from an astronaut or a lead lump.
That's an issue since the black hole eventually discharges its final bit of energy and vanishes. All that is left is a sizable cloud of amorphous particles that are randomly flitting around. Whatever
fell in would not be recoverable. As a result, the creation and dissolution of black holes are irreversible processes that seem to contradict the principles of quantum physics.
At the time, Hawking and the majority of other theorists came to the same conclusion: if irreversibility violated the then-understood physical rules, so much the worse for those laws. But Page was
troubled because irreversibility would go against time's basic symmetry. In 1980, he diverged from his old mentor and asserted that information must be released from black holes, or at the very least
preserved. That split the physics community in two. Most of the general relativists with whom Page spoke concurred with Hawking. Particle physicists, though, tended to agree with Page.
Page discovered on his Pasadena trip that both groups had overlooked a crucial detail. The mystery was not simply what happened when the black hole's life came to an end, but also what occurred in
the meantime.
He thought about quantum entanglement, a part of the process that has received very little attention. The radiation that is released still has a quantum mechanical connection to where it came from.
Both the radiation and the black hole appear random when measured separately, but when combined, they show a pattern. It's similar to using a password to secure your data. Without the password, the
data is meaningless. If you picked a strong password, it will also be meaningless. But by working together, they can access the data. Maybe information may leave the black hole in a similarly
encrypted fashion, Page reasoned.
Page determined what it would entail for the overall entanglement—also referred to as the entanglement entropy—between the black hole and the radiation. Since the black hole has not yet released any
radiation to become entangled with, the entanglement entropy is 0 at the beginning of the whole process. Since there is no longer a black hole at the conclusion of the operation, provided information
is retained, the entanglement entropy should be zero once more. Page stated, "I started wondering how the radiation entropy might alter in between."
Initially, the entanglement entropy increases as radiation slowly leaks out. Page concluded that this pattern must change. If the entropy is to reach zero at the terminus, it must cease increasing
and begin to decline. The entanglement entropy should follow a curve with an inverted V form over time.
At a point now referred to as the Page time, Page determined that this reversal would have to happen around halfway through the process. This happened a lot sooner than scientists anticipated. At
that time, the black hole is still massive; it is undoubtedly a long way from the subatomic level at which any alleged exotic effects would manifest. The recognized rules of physics need to remain
applicable. Furthermore, nothing in those regulations can make the slope bend downward.
That made the issue even worse. Scientists had previously believed that a quantum theory of gravity only applied in absurdly extreme circumstances, such as when a star shrank to the size of a proton.
Now Page was telling them that under some circumstances, conditions similar to those in your kitchen, quantum gravity mattered.
According to Page's view, the black hole information problem should be classified as a paradox rather than just a conundrum. A contradiction in the semiclassical approximation was made clear. David
Wallace, a philosopher of physics at the University of Pittsburgh, stated that the Page-time paradox "seems to hint to a breakdown of low-energy physics at a place where it has no business breaking
down, because the energies are still low."
On the plus side, Page's explanation of the issue opened the door to a fix. He demonstrated that information leaves the black hole if entanglement entropy follows the Page curve. He converted a
discussion into a computation by doing this. Scientists aren't usually great with words, according to Harvard University's Andrew Strominger. "Sharp equations are ideal for us."
It was now just necessary for physicists to compute the entanglement entropy. They would receive a direct response if they were successful. Does the entropy of entanglement exhibit an inverted V
pattern or not? If it occurs, the black hole keeps the data, proving the particle scientists correct. If not, information is either destroyed or trapped in the black hole, and general relativists are
welcome to take the first doughnut during faculty meetings.
It took theorists over three decades to find out how to accomplish it, despite the fact that Page clearly outlined what physicists needed to do.
The Black Hole From the Inside
In the past two years, scientists have demonstrated that black holes' entanglement entropy truly does follow the Page curve, proving that information may leave the black hole. The analysis was
carried out in phases. Initially, they used concepts from string theory to demonstrate how it might function. Then, in articles released last November, physicists completely severed their ties to
string theory.
When Ahmed Almheiri of the Institute for Advanced Study described a method for examining how black holes evaporate, the work really got going. Almheiri, shortly joined by a number of colleagues, put
into practice a proposal put forward in 1997 by Juan Maldacena, who is currently at IAS. Penington was working on the same problem at the same time.
Think of a cosmos that is enclosed in a border, much like a snow globe. The interior is essentially identical to our universe, with the exception of a large wall surrounding it. It possesses gravity,
matter, and other properties. The border is a form of universe as well. It lacks depth and gravity since it is only a surface. However, lively quantum physics makes up for that, and overall, it is
just as intricate as the inside. Despite how unlike these two universes appear, they complement each other perfectly. Everything on the boundary has a counterpart in the interior, or "bulk." And ever since
Maldacena proposed it, this "AdS/CFT" duality has been string theorists' favorite playground, despite the fact that the geometry of the bulk differs from the geometry of our own world.
According to the logic of this duality, a black hole has a counterpart on the boundary if it exists in the bulk. It clearly maintains information since the border is determined by quantum physics
without the difficulties of gravity. The black hole must preserve it, too.
In order to study how black holes evaporate in AdS/CFT, researchers first had to solve a little issue: black holes don't really evaporate in AdS/CFT. Like steam in a pressure cooker, radiation fills
the small space, and whatever the hole emits ultimately absorbs it again. Jorge Varelas da Rocha, a theoretical physicist at the University Institute of Lisbon, predicted that the system will
approach a stable state.
Almheiri and his coworkers followed Rocha's proposal to install the equivalent of a steam valve on the border to drain off the radiation and stop it from coming back in to address this issue. One of
Almheiri's co-authors, Netta Engelhardt of the Massachusetts Institute of Technology, remarked, "It suctions the radiation out." The scientists created a black hole in the center of the large volume
of space, started to release radiation, and then watched what occurred.
They used on the more detailed knowledge of AdS/CFT that Engelhardt and others, notably Aron Wall at the University of Cambridge, had acquired in the last ten years to track the entanglement entropy
of the black hole. Today, physicists are able to identify which portion of the bulk and which portion of the boundary correspond to one another, as well as which characteristics of the bulk and which
characteristics of the frontier.
The quantum extremal surface, as it is known in physics, is the connection between the two halves of the duality. (These surfaces are generic characteristics; a black hole is not necessary for them
to exist.) In essence, picture blowing a soap bubble in the large. Naturally, the bubble takes on a form that reduces its surface area. Because the laws of geometry might be different from those we
are accustomed with, the form need not be circular, like the bubbles at a kid's birthday party; instead, the bubble is a probe of that geometry. Quantum effects can also lengthen it.
The location of the quantum extremal surface can provide researchers with two crucial pieces of knowledge. The bulk is split into two pieces by the surface, which then aligns each with a different
section of the boundary. Second, a fraction of the entanglement entropy between those two regions of the border is proportional to the surface's area. By connecting a geometric idea (area) with a
quantum concept (entanglement), the quantum extremal surface offers a look into how gravity and quantum theory can merge.
But an odd thing happened when scientists utilized these quantum extremal surfaces to analyze an evaporating black hole. They discovered that the boundary's entanglement entropy increased early on in
the evaporation process, as they had predicted. The authors concluded that the hole's entanglement entropy was increasing since it was the solitary object inside of space. So far, so good for
Hawking's initial estimates.
That abruptly changed. Just inside the black hole's horizon, a quantum extremal surface suddenly appeared. At first, the remainder of the system was unaffected by this surface. But ultimately it took
over as the entropy determining factor, causing a decline. It is compared to a transition like boiling or freezing by the researchers. According to Engelhardt, "We think of this as a shift in phase
akin to thermodynamic phases, between gas and liquid."
It had three meanings. First, Hawking's estimate did not account for the unexpected change, which indicated the emergence of new physics. Second, the cosmos was split in half by the extremal surface.
The border was represented by one component. The other was a world of here-be-dragons, about which the border had no information, suggesting that radiation leaking from the system was having an
impact on the informational content of the realm.
Third, the quantum extremal surface's location was very important. It was situated just within the black hole's horizon. As the hole shrank, the quantum extremal surface and the entanglement entropy shrank with it, producing the downward slope Page expected. It was the first time a computation had done that.
The team was able to demonstrate that the entanglement entropy matched the Page curve and so proved that black holes do indeed emit information. The quantum entanglement that makes this possible
allows it to leak out in a highly encrypted form. In fact, the information is so securely encrypted that it appears the black hole has not divulged anything. However, the black hole ultimately
reaches a breaking point where the data can be deciphered. The study, which was published in May 2019, demonstrated all of this using new theoretical techniques that quantify entanglement geometrically.
The math has to be simplified to the bare minimum even using these tools. For instance, there was just one spatial dimension in the majority of this AdS/CFT world. The black hole was a brief line
segment rather than a large black ball. However, the scholars maintained that gravity is gravity and that what applies to this underdeveloped Lineland should apply to the rest of the universe. (In
April 2020, researchers from Osaka University Koji Hashimoto, Norihiro Iizuka, and Yoshinori Matsuo examined black holes in a more accurate flat geometry and proved the findings are still true.)
Almheiri and a different group of colleagues moved on to studying the radiation in August 2019. They discovered that the black hole and the radiation it emits both exhibit the same Page curve,
indicating that data must be sent from one to the other. The computation just states that it is transferred; it makes no mention of how.
They learned as part of their research that the cosmos goes through a perplexing reconfiguration. The black hole is in the center of space at this point, and radiation is escaping. The calculations
predict that once enough time has elapsed, particles located deep in the black hole's interior are counted as part of the radiation rather than of the hole itself. They have not physically moved; they have simply been reassigned.
This is noteworthy because normally these inner particles would increase the entropy of the entanglement between the radiation and the black hole. They no longer contribute to the entropy if they are
no longer a member of the black hole, which explains why it starts to fall.
The authors referred to this inner core of the radiation as an "island" and found its existence "surprising." What does it imply that particles may exist in a black hole but not be a part
of it? The physicists were able to establish that information is kept, but in doing so they simply managed to make the riddle greater. Almheiri and the others would gaze off into the distance
whenever I questioned them about what it meant since they were at a loss for words.
Go through the Wormholes
The AdS/CFT duality, often known as the snow globe world, has been assumed in the computations thus far. While this is a vital test case, it is ultimately somewhat fabricated. The next stage was to
have a broader view of black holes.
The scientists used a theory that Richard Feynman created in the 1940s. It is the mathematical representation of a fundamental quantum mechanical concept and is known as the path integral: Everything
that is possible does occur. A particle traveling from point A to point B in quantum physics travels all conceivable pathways, which are added together to form a weighted total. Generally speaking,
but not always, the highest-weighted path is the one you would anticipate from standard classical physics. The particle can abruptly switch from one route to another if the weights are altered, which
is not feasible according to conventional physics.
Because the path integral predicts particle motion so well, it was proposed as a quantum theory of gravity in the 1950s. That required swapping out a single space-time geometry for a variety of
potential forms. To humans, space-time appears to have a single, well defined geometry; for instance, it is just curved enough near Earth such that things often orbit its center. However, alternative
forms, including ones that are more curvier, are latent in quantum gravity and can emerge under the correct situations. This theory was first advanced by Feynman in the 1960s, and by Hawking in the
1970s and 1980s. The gravitational path integral was difficult to implement even with their considerable genius, so physicists abandoned it in favor of other theories of quantum gravity. John
Preskill of the California Institute of Technology remarked, "We never really understood how to explain exactly what it is, and guess what, we still don't."
What are "all" potential forms, to begin with? That meant all topologies to Hawking. It's possible that space-time can form knots that resemble doughnuts or pretzels. The increased connectedness
makes "wormholes" or tunnels connecting otherwise distant places and times. There are several varieties of them.
Similar to the beloved science fiction authors' portals, spatial wormholes connect different star systems. Little universes that sprout from our own and eventually reconnect with it are known as
space-time wormholes. Although general relativity allows for these structures, astronomers have never observed either type. However, the theory has a strong track record of making predictions that
later turn out to be correct, such as those regarding black holes and gravitational waves. Although the experts doing the new analysis of black holes did not unanimously agree with Hawking that these
strange forms belonged in the mix, they took the notion proviso.
They only focused on the topologies that were crucial to an evaporating black hole since they could not practically take into account all potential topologies, which are figuratively uncountable.
These are referred to as saddle points for mathematical reasons, and they have a calm-appearing geometry. In the end, the teams were unable to complete the whole summation of forms since it was above
their capabilities. To find the saddle points, they mainly employed the path integral as a tool.
The entanglement entropy was calculated after the route integral was applied to the black hole and its radiation. The logarithm of a matrix, or collection of numbers, is what this quantity is
described as. Even under the best of circumstances, the computation is challenging, but in this instance the scientists didn't really have the matrix, which would have necessitated determining the
route integral. As a result, they were forced to carry out a surgery on an unknown amount. They used another mathematical ruse to do it.
They discovered that entropy can be calculated without knowing the entire matrix. Instead, they can consider repeatedly measuring the black hole and integrating those data in a way that preserves the
information they want. This "replica trick" was first used in relation to gravity in 2013 and dates back to the 1970s' research of magnets.
Tom Hartman of Cornell University, one of the authors of the new study, compared the replica technique to determining if a coin is fair. Normally, you'd throw it repeatedly to determine whether it
had a 50/50 chance of landing on each side. But imagine you're unable to accomplish it for any reason. The "replicas" are two identical coins that you toss in its place, and you observe how
frequently they land on the same side. The coins are fair if this happens just 50% of the time; any more often indicates bias. Even if you are still unsure about each individual probability, you can still draw a conclusion about the randomness. This is
comparable to not knowing the entire black hole matrix but nevertheless calculating its entropy.
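A minimal sketch of the coin version of the replica trick (purely illustrative; the probabilities and names here are ours, not from the article): the matching frequency of two identical coins reveals bias without ever estimating each face's probability directly.

```python
import numpy as np

rng = np.random.default_rng(1)

def match_rate(p_heads, n_flips=100_000):
    """Fraction of trials in which two identical 'replica' coins land the same way."""
    a = rng.random(n_flips) < p_heads
    b = rng.random(n_flips) < p_heads
    return np.mean(a == b)  # equals p^2 + (1-p)^2 in expectation

for p in (0.5, 0.7):
    print(p, round(match_rate(p), 3))  # 0.5 for a fair coin, larger if biased
```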
Despite being a trick, it invokes actual physics. The gravitational path integral cannot tell the replica black holes apart from real ones; it takes them literally. Some of the latent topologies included in the
gravitational route integral are therefore activated. The outcome is a brand-new saddle point with numerous black holes connected by wormholes in space-time. It faces up against the regular geometry
of a solitary black hole encircled by a Hawking radiation mist.
In essence, the amount of entanglement entropy determines how heavily weighted the wormholes and the single black hole are. Wormholes are given a low weighting since they have a lot, making them
initially uninteresting. However, compared to the Hawking radiation, their entropy is decreasing. The dynamics of the black hole are eventually controlled by the wormholes, which eventually emerge as
the dominant of the two. In classical general relativity, changing from one geometry to another is not conceivable since it is a fundamentally quantum process. The two primary findings of the
investigation are the additional geometric configuration and the transition procedure that accesses it.
The two groups of physicists, known informally as the West Coast and East Coast groups because of their locations, published their research in November 2019, demonstrating how they were able to reproduce the Page curve using this technique. This allowed them to establish that the informational content of anything that falls into the black hole eventually leaves it in the radiation. Even a fervent opponent of string theory can accept the gravitational path integral; string theory need not be true. Even so, the analysis, however sophisticated, doesn't yet explain how the information escapes.
Building Blocks of Space-Time
These calculations show that the radiation contains a wealth of data. You ought to be able to determine what dropped into the black hole some way by measuring it. Yet how?
The West Coast group of theorists conjured up the idea of directing the radiation into a quantum computer. A computer simulation is a physical system in and of itself; a quantum simulation is
particularly similar to the phenomenon it is modeling. Therefore, the scientists conjured up the idea of gathering all the radiation, putting it into a huge quantum computer, and then simulating the
black hole in its entirety.
And the narrative took an amazing turn as a result. The quantum computer also gets extremely entangled with the black hole since the radiation is so intricately linked to the source of it.
Entanglement between the real and simulated black holes in the simulation results in a geometric connection. Simply said, a wormhole connects the two. According to Douglas Stanford, a theoretical
physicist at Stanford and a member of the West Coast team, "there's the actual black hole and then there's the simulated one in the quantum computer, and there can be a replica wormhole linking
those." This concept is an illustration of the hypothesis put out by Stanford researchers Maldacena and Leonard Susskind in 2013 that quantum entanglement may be viewed as a wormhole. In turn, the
wormhole offers a covert passageway via which information can leave the interior.
How literally to interpret all of these wormholes has been a hotly contested topic among theorists. Although the wormholes seem to have a fragile relationship to reality since they are so deeply
buried in the equations, they do have real-world repercussions. "There's something plainly correct about these wormholes," said Raghu Mahajan, a physicist at Stanford, "but it's impossible to say
what's real and what's unphysical."
Mahajan and others believe that the wormholes are a result of new, nonlocal physics rather than being actual portals that exist somewhere in the cosmos. Wormholes allow events at one place to
directly affect another place, without a particle, force, or other influence having to travel across the intervening distance. This is an example of what physicists refer to as nonlocality. Almheiri
said, "They seem to imply that you have nonlocal effects that come in. The island and radiation are treated as one system in the black hole computations, which amounts to a failure of the notion of
"location." We've long known that nonlocal effects of some type must play a role in gravity, and this is one of them, according to Mahajan. "What you believed to be independent is not actually
This seems quite shocking at first. General relativity was developed by Einstein specifically to do away with nonlocality in physics. It takes time for gravity to extend throughout space. Like every
other interaction in nature, it must spread from one area to another at a finite rate. But over the years, physicists have realized that the symmetries that relativity is built on give rise to a new
type of nonlocal phenomena.
Marolf and Henry Maxfield, both of Santa Barbara, explored the nonlocality suggested by the new black hole estimates this past February. They discovered that the general relativity's symmetries have
considerably more significant impacts than previously thought, which may explain why space-time has the hall-of-mirrors appearance observed in black hole investigations.
All of this supports the hypothesis of many physicists that space-time is not the fundamental unit of existence but rather develops from a deeper process that is neither spatial nor temporal. That
was the main takeaway from the AdS/CFT duality, according to many. Although they avoid endorsing either string theory or the duality, the new calculations essentially say the same thing. Wormholes
appear because they are the only way for the path integral to express how space is collapsing. They serve as geometry's way of expressing the nongeometric nature of the universe.
The conclusion of the first
Physicists who are not involved in the work, or even in string theory, say they are impressed, albeit cautiously. Since those computations are extremely difficult, Daniele Oriti of the Ludwig Maximilian University of Munich stated, "hats off to them."
However, some people find the analysis's shaky foundation of idealizations unsettling, particularly the limitation of the cosmos to less than three spatial dimensions. The 1980s enthusiasm
surrounding the path integral, which was sparked by Hawking's work, eventually died down in part due to theorists' unease with the growing body of approximations. Are modern physicists making the
same mistake? Expert on the gravitational path integral Renate Loll of Radboud University in the Netherlands remarked, "I see people making the same hand-waving arguments that were made 30 years
ago." She has claimed that in order for the integral to produce accurate findings, wormholes must be explicitly disallowed.
Additionally, some are concerned that the writers have misunderstood the replica technique. The authors go farther than earlier references to the move by proposing that copies might be
gravitationally coupled. Steve Giddings of Santa Barbara remarked, "It's unclear how that fits into the context of quantum principles, but they are postulating that any geometries linking distinct
copies are acceptable."
Some don't believe that semiclassical theory can provide a solution, because of the calculation's uncertainties. "If you limit yourself to quantum physics and gravity, there is no good option," Warner stated. He has defended theories in which stringy effects stop black holes from ever forming. But the result is essentially the same: space-time changes phase and assumes a significantly different structure.
If for no other reason than the complexity and rawness of the current work, skepticism is justified. It will take some time for physicists to process it and determine if the arguments have a fatal
defect or whether they are sound. Even the scientists who led the project didn't think it was possible to solve the information dilemma without a complete quantum theory of gravity. In fact, they
believed that the paradox would serve as their pivot point for obtaining that more in-depth theory. "If you had asked me two years ago, I would have said the Page curve is far off," Engelhardt added. "We'll require some sort of [deeper] comprehension of quantum gravity."
Do the new calculations, provided they pass examination, really put an end to the black hole information paradox? The most current research demonstrates how to compute the Page curve precisely,
demonstrating how information might escape a black hole. It appears that the information dilemma has been solved as a result. There is no longer a logical inconsistency that renders the black hole
hypothesis absurd.
But in terms of understanding black holes, this is only the beginning of the end. The exact steps used by information to spread have still not been sketched out by theorists. Raphael Bousso from
Berkeley said, "I don't know why, but we can now compute the Page curve. Astronauts who inquire as to whether they can escape a black hole can receive the affirmative from physicists. However, the
unsettling response will be "No clue" if the astronauts inquire as to how to proceed. | {"url":"https://www.thesciverse.com/2023/12/the-most-famous-paradox-in-physics.html","timestamp":"2024-11-07T03:52:49Z","content_type":"application/xhtml+xml","content_length":"477533","record_id":"<urn:uuid:4e25ba6c-6444-4110-a4b1-02f0eaf6da1c>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00510.warc.gz"} |
Glossary | STAT 200
95% Rule
On a normal distribution approximately 95% of data will fall within two standard deviations of the mean; this is an abbreviated form of the Empirical Rule
Alternative Hypothesis
The statement that there is some difference in the population(s), denoted as \(H_a\) or \(H_1\)
Association
A relationship between variables
Bar chart
Graphical representation for categorical data in which vertical (or sometimes horizontal) bars are used to depict the number of experimental units in each category; bars are separated by space.
Penn State Fall 2017 Undergraduate Enrollments
Bias
The systematic favoring of certain outcomes.
Binomial random variable
A specific type of discrete random variable that counts how often a particular event occurs in a fixed number of tries or trials.
Blinding
Procedure employed in research to prevent bias in which the participants and/or the researchers interacting with the participants do not know which treatment each case is receiving.
Bootstrapping
A resampling procedure for constructing a sampling distribution using data from a sample.
Case
An experimental unit from which data are collected
Categorical variable
Names or labels (i.e., categories) with no logical order or with a logical order but inconsistent differences between groups, also known as qualitative.
Causation
Changes in one variable can be attributed to changes in a second variable.
Clustered Bar Chart
Each bar represents one combination of the two categorical variables (i.e., one cell in a contingency table). This is also known as a side-by-side bar chart.
Complement
The probability that the event does not occur. The complement of \(P(A)\) is \(P(A^C)\). This may also be written as \(P(A')\).
In the diagram below we can see that \(A^{C}\) is everything in the sample space that is not A.
Complement of A
Conditional Probability
The probability of one event occurring given that it is known that a second event has occurred. This is communicated using the symbol \(\mid\) which is read as "given."
For example, \(P(A \mid B)\) is read as "the probability of A given B."
Confidence Interval
A range computed using sample statistics to estimate an unknown population parameter with a stated level of confidence.
Confounding Variable
Characteristic that varies between cases and is related to both the explanatory and response variables; also known as a lurking variable or a third variable.
Continuous variable
Characteristic that varies and can take on any value and any value between values.
Control Group
A level of the explanatory variable that does not receive an active treatment; they may receive no treatment or a placebo.
Convenience Sampling
A method of obtaining a sample from a population by ease of accessibility; such a sample is not random and may not be representative of the intended population.
Correlation
A measure of the direction and strength of the relationship between two variables.
Deviation
An individual score minus the mean.
Discrete variable
Characteristic that varies and can only take on a set number of values.
Disjoint Events
Two events that do not occur at the same time. These are also known as mutually exclusive events.
In the Venn diagram below event A and event B are disjoint events because the two do not overlap.
Mutually Exclusive
Double-Blind Study
Research study in which neither the participants nor the researchers interacting with them know which cases have been assigned to which treatment groups.
Empirical Rule
On a normal distribution about 68% of data will be within one standard deviation of the mean, about 95% will be within two standard deviations of the mean, and about 99.7% will be within three
standard deviations of the mean.
Experimental Research Design
A study in which the researcher manipulates the treatments received by subjects and collects data; also known as a scientific study
Explanatory Variable
Variable that is used to explain variability in the response variable, also known as an independent or predictor variable, it explains variations in the response variable; in an experimental
study, it is manipulated by the researcher.
Frequency Table
A table containing the counts of how often each category occurs.
Summary Statistics
Campus Count Percent
University Park 40835 48.5%
Commonwealth Campuses 29388 34.9%
PA College of Technology 5465 6.5%
World Campus 8513 10.1%
Total 84201 100.0%
Penn State Fall 2017 Undergraduate Enrollments
Independent Events
Unrelated events. The outcome of one event does not impact the outcome of the other event.
Independent Groups
Cases in each group are unrelated to one another.
Inferential Statistics
Statistical procedures that use data from an observed sample to make a conclusion about a population.
Interquartile range (IQR)
The difference between the first and third quartiles.
Intersection
The overlap of two or more events; it is symbolized by the character \(\cap\).
\(P(A \cap B)\) is read as "the probability of A and B."
Intersection of A and B
Least squares method
Method of constructing a regression line which makes the sum of squared residuals as small as possible for the given data.
Left Skewed
A distribution in which the lower values (towards the left on a number line) are more spread out than the higher values. This is also known as negatively skewed.
Margin of Error
Half of the width of a confidence interval; equal to the multiplier times the standard error.
Mean
The numerical average; calculated as the sum of all of the data values divided by the number of values.
The sample mean is represented as \(\overline{x}\) ("x-bar") and the population mean is denoted as the Greek letter \(\mu\) ("mu"). The formula is the same for the sample mean and the population mean.
Median
The middle of the distribution that has been ordered from smallest to largest; for distributions with an even number of values, this is the mean of the two middle values.
Mode
The most frequently occurring value(s) in the distribution; may be used with quantitative or categorical variables.
Non-Response Bias
Systematic favoring of certain outcomes that occurs when the individuals who choose participate in a study differ from the individuals who choose to not participate.
Normal Distribution
One specific type of symmetrical distribution. This is also known as a bell-shaped distribution.
Null Hypothesis
The statement that there is not a difference in the population(s), denoted as \(H_0\)
Observational Research Design
A study in which the researcher collects data without performing any manipulations; also known as a non-experimental study
Odds
Express risk by comparing the likelihood of an event happening to the likelihood it does not happen. Note that the interpretation of odds is different from the interpretation of risk/probability.
P-value
Given that the null hypothesis is true, the probability of obtaining a sample statistic as extreme or more extreme than the one in the observed sample, in the direction of the alternative hypothesis.
Paired Groups
Cases in each group are meaningfully matched with one another; also known as dependent samples or matched pairs.
Parameter
A measure concerning a population (e.g., population mean).
Percentile
Proportion of a distribution less than a given value.
Pie chart
Graphical representation for categorical data in which a circle is partitioned into “slices” on the basis of the proportions of each category.
Pie Chart of Campus
Penn State Fall 2017 Undergraduate Enrollments
Placebo Group
A group that receives what, to them, appears to be a treatment, but actually is neutral and does not contain any active treatment (e.g., a sugar pill in a medication study).
Point Estimate
Sample statistic that serves as the best estimate for a population parameter.
Population
The entire set of possible cases.
Quantitative variable
Numerical values with magnitudes that can be placed in a meaningful order with consistent intervals, also known as numerical.
Random Assignment
The act of randomly assigning cases to different levels of the explanatory variable.
Range
The difference between the maximum and minimum values.
Relative Risk
Relative risk compares the risk of a particular outcome in two different groups.
Representative Sample
A subset of the population from which data are collected that accurately reflects the population.
Residual
The difference between an observed y value and the predicted y value. In other words, \(y-\widehat y\). On a scatterplot, this is the vertical distance between the line of best fit and the
observation. In a sample this may be denoted as \(e\) or \(\widehat \epsilon\) ("epsilon-hat") and in a population this may be denoted as \(\epsilon\) ("epsilon").
Response Bias
Systematic favoring of certain outcomes that occurs when participants do not respond truthfully; they may do so to align with social norms or to appease the researcher.
Response Variable
Also known as the dependent or outcome variable, its value is predicted or its variation is explained by the explanatory variable; in an experimental study, this is the outcome that is measured
following manipulation of the explanatory variable.
Right Skew
A distribution in which the higher values (towards the right on a number line) are more spread out than the lower values. This is also known as positively skewed.
Risk
The probability that an event will occur. It may be written as a decimal, a fraction, or a percent.
Sample
A subset of the population from which data are collected.
Sampling Bias
Systematic favoring of certain outcomes due to the methods employed to obtain the sample.
Sampling Distribution
Distribution of sample statistics with a mean approximately equal to the mean in the original distribution and a standard deviation known as the standard error.
Scatterplot
A graphical representation of two quantitative variables in which the explanatory variable is on the x-axis and the response variable is on the y-axis.
Segmented Bar Chart
Also known as a stacked bar chart, one categorical variable is represented on the x-axis while the second categorical variable is denoted within the bars.
Simple Random Sampling
A method of obtaining a sample from a population in which every member of the population has an equal chance of being selected.
Single Boxplot
Graph displaying data from one quantitative variable. Also known as a "box-and-whisker plot." The box represents the middle 50% of observed values. The bottom of the box is the first quartile
(25th percentile) and the top of the box is the third quartile (75th percentile). The line in the middle of the box is the median (50th percentile). The lines, also known as whiskers, extend to
the lowest and highest values that are not outliers. Outliers are symbolized using asterisks or circles.
Single-Blind Study
Research study in which the participants do not know the treatment group that they have been assigned to.
Skewed Distribution
A distribution in which values are more spread out on one side of the center than on the other.
Standard Deviation
Roughly the average difference between individual data values and the mean. The standard deviation of a sample is denoted as \(s\). The standard deviation of a population is denoted as \(\sigma\) ("sigma").
Standard Error
Standard deviation of a sampling distribution.
Statistic
A measure concerning a sample (e.g., sample mean).
Statistical literacy
“People’s ability to interpret and critically evaluate statistical information and data-based arguments appearing in diverse media channels, and their ability to discuss their opinions regarding
such statistical information” (Gal, as cited by Rumsey, 2002)
Rumsey, D. J. (2002). Statistical literacy as a goal for introductory statistics courses. Journal of Statistics Education, 10(3). Retrieved from: http://www.amstat.org/publications/jse/v10n3/
Statistical significance
Sample statistics vary from the specified population parameters to the extent that it is unlikely that the results obtained were due to random sampling error, rather we conclude that the
differences observed in the sample were due to actual differences in the population.
Statistics
The art and science of answering questions and exploring ideas through the processes of gathering data, describing data, and making generalizations about a population on the basis of a smaller sample.
Sum of Squared Deviations
Deviations squared and added together. This is also known as the sum of squares or SS.
Sum of squared Residuals
The sum of all of the residuals squared: \(\sum (y-\widehat{y})^2\).
Symmetrical Distribution
A distribution that is similar on both sides of the center.
Two-Way Table
A display of counts for two categorical variables in which the rows represent one variable and the columns represent a second variable. Also known as a contingency table.
Type I Error
Rejecting \(H_0\) when \(H_0\) is really true, denoted by \(\alpha\) ("alpha") and commonly set at .05.
Type II Error
Failing to reject \(H_0\) when \(H_0\) is really false, denoted by \(\beta\) ("beta").
Union
A union contains the area in A or B and is symbolized by \(\cup\). Note that this also includes the overlap of A and B (i.e., the intersection).
\(P(A \cup B)\) is read as "the probability of A or B."
Union of A and B
Variable
Characteristic of cases that can take on different values (in other words, something that can vary).
Variance
Approximately the average of all of the squared deviations; for a sample represented as \(s^{2}\).
Venn diagram
A visual representation in which the sample space is depicted as a box and events are represented as circles within the sample space.
z Distribution
A bell-shaped distribution with a mean of 0 and standard deviation of 1, also known as the standard normal distribution.
z Score
Distance between an individual score and the mean in standard deviation units; also known as a standardized score.
Piezoelectric actuator for micro robot used in nanosatellite
Nanosatellites of the CubeSat standard (10×10×10 cm, with a mass of 1-10 kg) were designed to reduce cost and development time and to maximize science return. However, the small size of the spacecraft imposes substantial mass, volume, and power constraints. A key challenge remains the miniaturization of the various robots used to manipulate functional objects in nanosatellites, such as cameras, laser sources, and mirrors. In particular, precision positioning of the manipulated object is an important task for robots used in nanosatellites. In this paper the authors present the design of a robot driven by piezoelectric actuators. Investigations of the robot are presented, and they demonstrate the ability to improve the accuracy of the movement of the robot arm using two bending bimorph-type piezoelectric actuators and a 3DOF rotary piezoelectric motor.
1. Introduction
As a result of budget reductions and advances in microelectronics, nanosatellites weighing 1-10 kg have been actively developed. Their missions are wide-ranging: remote sensing, communications, science, technology demonstration and military applications. Nanosatellites generally measure 10 cm×10 cm×10 cm [1]. To reach the aims of these missions, special equipment is used in the nanosatellites. One example is a piezoelectric robotic arm, which is usually responsible for manipulating objects such as cameras, laser beams, mirrors and other optical elements.
Piezoelectric bending actuators are widely investigated. They are used in many different applications, such as precision-movement mechatronic systems, optical devices, medical equipment, space technologies and others [2-4]. Precision positioning of the manipulated object is an important task for robots used in nanosatellites. A very exhaustive review of robotic micro- and nano-manipulation can be found in [5, 6, 7, 8].
The piezoelectric effect generates only small deformations of a piezoelectric bimorph, which is why the deflection angle of such an actuator reaches only 0.01-0.5° [9]. Recent advances in smart materials have made it possible to produce actuators that generate sufficiently high force and dynamic/static deflection in millimetre-scale mechanical structures [10, 11].
In this paper the authors present the design of a micro robotic arm using two bending bimorph-type piezoelectric actuators and a 3DOF rotary piezoelectric motor that enables the robot arm to rotate 360 degrees around the $z$ axis and 170 degrees around the $x$ and $y$ axes. The robot consists of a piezoelectric cylinder, a ferromagnetic sphere, two cantilever piezoelectric bending actuators [12, 13] and a manipulated object (camera, laser source or other). The actuator composed of two bending bimorphs improves the accuracy of the positioning angles of the robotic arm. Such a piezoelectric bending actuator can be characterized by low price and simple design.
2. Design of piezoelectric robot for nanosatellites
When the electrodes of the piezoelectric cylinder 1 are connected to the electrical voltage according to the electrode configuration shown in Fig. 1(b), a travelling or standing wave is created on the contacting elements 3. As a result, the ferromagnetic sphere (rotor) 4 starts rotating around the $x$, $y$ or $z$ axis (Fig. 1(a)). The sphere-shaped rotor 4 of the robot arm can rotate 360 degrees around the $z$ axis and 170 degrees around the $x$ and $y$ axes. For precision positioning of the manipulated object, the arm of the robot is fabricated from two piezoelectric cantilever bending actuators bonded together in series and is fixed on the surface of the sphere 4. The first piezoelectric cantilever bimorph 5 creates a bending movement around the $y$ axis and the second piezoelectric bimorph 6 around the $x$ axis (Fig. 1(a)).
Fig. 1Piezoelectric robot: a) scheme of the robot; b) piezoelectric cylinder electrodes configuration, here: 1 – piezoelectric cylinder, 2 – permanent magnet, 3 – friction contact elements, 4 –
ferromagnetic sphere-rotor, 5 – first piezoelectric bimorph, 6 – second piezoelectric bimorph, 7 – manipulated object, 8 – configuration of the piezoelectric cylinder electrodes; c) geometrical
parameters of 2D actuator, here: 1 – first piezoelectric bimorph, 2 – second piezoelectric bimorph, l1 – length of first bimorph, l2 – length of second bimorph, w1 – width of first bimorph, w2 –
width of second bimorph, t1 – thickness of first bimorph, t2 – thickness of second bimorph
For the 2D bending actuator (Fig. 1(c)) two piezoelectric bimorphs were used. Dimensions of the actuators are presented in Fig. 2: ${l}_{1}$ – length of first bimorph (50 mm), ${l}_{2}$ – length of
second bimorph (40 mm), ${w}_{1}$ – width of first bimorph (7.8 mm), ${w}_{2}$ – width of second bimorph (2 mm), ${t}_{1}$ – thickness of first bimorph (1.8 mm), ${t}_{2}$ – thickness of second
bimorph (0.8 mm).
3. Modelling of bimorph actuators
A harmonic analysis of the 2D piezoelectric actuator (Fig. 2) was performed using the finite element method (FEM). FEM was used for the numerical modelling of the actuator: to carry out the modal-frequency
and harmonic response analyses and to calculate the displacements of the free tip of the bimorph-type cantilever. The driving force of the piezoelectric actuator is obtained from the
piezoelectric ceramic plate. The FE discretization of this element usually consists of a few layers of finite elements. Therefore the nodes coupled with the electrode layers have potential values that are
known in advance, while the nodal potentials of the remaining elements are calculated during the analysis. The dynamic equation of the piezoelectric actuator is derived from the principle of minimum
potential energy by means of variational functionals and in this case can be expressed as follows [14]:
$\left[M\right]\left\{\ddot{u}\right\}+\left[C\right]\left\{\dot{u}\right\}+\left[K\right]\left\{u\right\}+\left[{T}_{1}\right]\left\{{\phi }_{1}\right\}+\left[{T}_{2}\right]\left\{{\phi }_{2}\right\}=\left\{F\right\},$
${\left[{T}_{1}\right]}^{T}\left\{u\right\}-\left[{S}_{11}\right]\left\{{\phi }_{1}\right\}-\left[{S}_{12}\right]\left\{{\phi }_{2}\right\}=\left\{{Q}_{1}\right\},$
${\left[{T}_{2}\right]}^{T}\left\{u\right\}-{\left[{S}_{12}\right]}^{T}\left\{{\phi }_{1}\right\}-\left[{S}_{22}\right]\left\{{\phi }_{2}\right\}=\left\{0\right\},$
where $\left[M\right]$, $\left[K\right]$, $\left[T\right]$, $\left[S\right]$, $\left[C\right]$ are the mass, stiffness, electro-elasticity, capacity and damping matrices, respectively; $\left\{u\right\}$,
$\left\{F\right\}$, $\left\{Q\right\}$ are the vectors of nodal displacements, structural mechanical forces and charge; and $\left\{{\phi }_{1}\right\}$, $\left\{{\phi }_{2}\right\}$ are, respectively, the vectors
of nodal electrical potentials known in advance and those calculated during the numerical simulation.
Natural frequencies and modal shapes of the actuator are derived from the modal solution of the piezoelectric system [14, 15]:
$\mathrm{d}\mathrm{e}\mathrm{t}\left(\left[{K}^{*}\right]-{\omega }^{2}\text{\hspace{0.17em}}\left[M\right]\right)=\left\{0\right\},$
where $\left[{K}^{*}\right]$ is the modified stiffness matrix, which depends on the nodal potential values of the piezoelectric elements.
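As a purely editorial illustration of this modal solution (the toy example below is not the authors' ANSYS model), the natural frequencies of any discretized structure follow from the same generalized eigenvalue problem $\mathrm{det}\left(\left[K\right]-{\omega }^{2}\left[M\right]\right)=0$. The two-degree-of-freedom mass and stiffness matrices here are invented solely for demonstration:

```python
import numpy as np
from scipy.linalg import eigh

# Toy two-degree-of-freedom system (values invented for illustration only):
M = np.diag([0.5, 0.5])                      # mass matrix, kg
K = np.array([[2.0e4, -1.0e4],
              [-1.0e4,  1.0e4]])             # stiffness matrix, N/m

# Generalized eigenvalue problem K v = w^2 M v, i.e. det(K - w^2 M) = 0.
w_squared, mode_shapes = eigh(K, M)
natural_frequencies_hz = np.sqrt(w_squared) / (2.0 * np.pi)
print("Natural frequencies (Hz):", natural_frequencies_hz)
```

In the paper the matrices instead come from the ANSYS discretization described below, so the numbers in this sketch have no physical relation to the actuator.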
The harmonic response analysis of the piezoelectric actuator is carried out by applying a sinusoidally varying voltage to the electrodes of the piezoelectric elements. Structural mechanical loads are not used in our
case, so $\left\{F\right\}=\left\{0\right\}$. Equivalent mechanical forces arise because of the inverse piezoelectric effect and can be calculated as follows [15]:
$\left\{F\right\}=\left[T\right]\left\{{\varphi }_{1}\right\},$
here $\left\{{\varphi }_{1}\right\}=\left\{U\right\}\mathrm{s}\mathrm{i}\mathrm{n}\left({\omega }_{k}t\right)$, where $\left\{U\right\}$ is the vector of voltage amplitudes applied to the nodes coupled
with the electrodes.
The structural displacements of the piezoelectric actuator obtained from the harmonic response analysis are used to determine the displacement of the tip.
The FEM software ANSYS v.13 was used to perform the numerical modelling of the two bimorph actuators. The aim of the modelling was to perform modal-frequency, deformation and harmonic response analyses of
the actuators. The finite element model was built from SOLID5 and SOLID45 finite elements [16, 17]. It was assumed that the polarization direction of the piezoelectric ceramic is constant within each finite
element and is oriented along the $x$ axis for the first bimorph and along the $y$ axis for the second bimorph (Fig. 2(a)). The material properties of the finite element models are given in Table 1.
One end of the bimorph actuator was clamped, so all mechanical degrees of freedom of the corresponding nodes were set to zero. The electrode layers on the top and bottom of the actuator were not considered
in the FEM model. Electrodes were created by grouping surface nodes of the FEM model, and a harmonic excitation voltage of $U=$ 12 V was applied.
Table 1. Parameters of the piezoelectric material
Material: PZT-5H
Piezoelectric constants (m/V×10^-12): ${d}_{31}$ = –265, ${d}_{33}$ = 585
Coupling factor ${k}_{p}$: 0.68
Density $\rho$ (kg/m^3): 7550
Dielectric constant ${K}_{3}^{T}$: 3400
Mechanical quality factor ${Q}_{m}$: 65
Young’s modulus $E$ (GPa): 60
Relative dielectric constant ${\epsilon }^{T}/{\epsilon }^{o}$: 3200
Fig. 2. Investigated piezoelectric actuator: a) structure of 2D actuator used in FEM, b) FEM for numerical modelling
Fig. 3. Directional deformations of 2D actuator: a) first piezoelectric bimorph in x direction, b) second piezoelectric bimorph in y direction
The modal shapes and resonant frequencies of the actuators were calculated and a harmonic response analysis was performed. The first out-of-plane bending mode is the mode of interest; its natural
frequency was found to be 216 Hz for the first actuator and 234 Hz for the second actuator (Fig. 4(a)).
The peaks in the graphs (Fig. 4(a)) indicate the resonant vibrations at the first bending mode. The resonance amplitudes are 334 µm and 645 µm for the first and second actuator, respectively.
Fig. 4. Numerical modelling: a) frequency response of piezoelectric actuators, b) amplitude of the tip deflection for actuators as a function of applied electric voltage at low frequency. Here 1 –
first piezoelectric bimorph, 2 – second piezoelectric bimorph
4. Experimental investigations
A prototype of the 2D piezoelectric bending actuator (Fig. 4(b)) was made for experimental investigation of its dynamic and precision positioning characteristics. Two commercial piezoelectric bimorphs
were used: 1 – of type CMBP09 (Noliac A/S, Denmark) and 2 – of type 2 (Johnson Matthey Catalysts GmbH, Germany). The experimental setup is presented in Fig. 4(a). The total response of the actuator
consists of two components: a fast inertial response that dominates the dynamics in the short-travel, high-frequency range, and a hysteretic response due to dipole domain switching in
piezoelectric materials, which resembles a nonlinear relaxation process.
Therefore two characteristics of the piezoelectric actuator were investigated during the experiments: the frequency responses and the hysteresis of the displacement. Measurements were made at the tip of the second
piezoelectric bimorph while a 12 Vpp harmonic signal was applied. The aim of these experiments was to investigate the dynamics and precision positioning of the manipulated object. Fast
response is one of the characteristic features of piezoelectric actuators: a rapid drive voltage change results in a rapid position change. This property is especially welcome in dynamic
applications of the manipulator such as scanning, image stabilization, vibration cancellation systems, etc. A piezoelectric bending bimorph can reach its nominal displacement in approximately 1/3 of the
period of the resonant frequency, with significant overshoot [18]. If the voltage rises fast enough to excite a resonant oscillation in the piezoelectric actuator, ringing and overshoot will occur
[19]. For the fastest settling, switched operation is not the best solution. If the input signal rise time is limited to one period of the resonant frequency, the overshoot can be reduced
significantly. Pre-shaped input signals (optimized for minimum resonance excitation) reduce the time needed to reach a stable position.
The experimental setup is shown in Fig. 5. It consists of: 1 – the investigated 2D piezoelectric bending actuator, 2 – holder of the piezoelectric actuator, 3 – laser displacement sensor LK-G82, 4 – laser
sensor controller LK-G3001PV, 5 – signal generator Agilent 33220A, 6 – voltage amplifier EPA-104, 7 – PC with analog-digital converter.
Fig. 5. Experimental investigation: a) setup and b) structural scheme of the experimental setup
Fig. 6. Frequency response of the piezoelectric 2D bending actuator used in experiments: a) frequency responses of the actuator tip when the second bimorph is excited: 1 – in y direction, 2 – in x
direction; b) frequency responses of the actuator tip when the first bimorph is excited: 1 – in x direction, 2 – in y direction
From the amplitude-frequency curves it can be seen that the investigated actuator reaches its resonance at a frequency of 210 Hz (amplitude – 275 µm) when the second bimorph is excited (Fig. 6(a)). In this case
the movement of the tip is generated in the $y$ direction. When the first bender vibrates, the second bimorph moves as well: the tip of the second bender then resonates
at a frequency of 230 Hz and reaches amplitudes of 600 µm in the $x$ direction. For precision positioning, a piezoelectric bending bimorph can reach its nominal displacement in approximately 1/3 of the
period of the resonant frequency. The operating frequency range of the investigated actuator for precision positioning is therefore 0-70 Hz in the $y$ direction and 0-80 Hz in the $x$ direction.
During the experimental investigations of the generated displacements, a hysteresis of approximately 25 % was measured for the tip of the piezoelectric actuator in the $xy$-plane (at 20 Hz). Thus a control system
with feedback must be applied for precision positioning of the manipulated object.
Fig. 7. Flexural deflection of the actuator’s tip under a sinusoidal input voltage at a frequency of 20 Hz: a) hysteresis of the first bimorph; b) hysteresis of the second bimorph
5. Conclusions
A piezoelectric-driven 2D scanning actuator using two piezoelectric bimorph cantilevers bonded in series and placed in perpendicular directions has successfully been designed, fabricated and tested.
Two bending modes of scanning operation have been investigated. By applying 12 Vpp, the tip of the actuator can achieve horizontal and vertical resonant vibration amplitudes of 275 µm and 600 µm in the
bending mode. During the experimental investigation a hysteresis of approximately 25 % was measured for the piezoelectric actuator in the $xy$-plane. For precision positioning of the manipulated object a
control system with feedback must be applied. Development and implementation of model-based manipulator control systems are currently ongoing research.
• Crook M. R. NPS cubesat launcher design, process and requirements. Naval Postgraduate School.
• Shaffer J. J., Fried D. L. Bender-bimorph scanner analysis. Applied Optics, Vol. 9, Issue 4, 1970, p. 933-937.
• Muralt P., Pohl D. W., Denk W. Wide-range, low-operating-voltage, bimorph STM: applications as potentiometer. Journal of Research and Development, Vol. 30, Issue 5, 1986, p. 443-450.
• Lanyi S., Ozvold M. Improved wide-range bimorph scanners. Ultramicroscopy, 1992, p. 1664-1667.
• Fukuda T., Arai F., Dong L. Assembly of nanodevices with carbon nanotubes through nanorobotic manipulations. Proc. IEEE, Vol. 91, Issue 11, 2003, p. 1803-1818.
• Xi N., Li W. Recent development in nanoscale manipulation and assembly. Transactions on Automation Science and Engineering, Vol. 3, Issue 3, 2006, p. 194-198.
• Smits J. G., Choi W. The constituent equations of piezoelectric heterogeneous bimorphs. IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, Vol. 38, Issue 3, 1991, p. 256-270.
• Ansari M. Z., Cho C. Deflection, frequency, and stress characteristics of rectangular, triangular, and step profile microcantilevers for biosensors. Sensors, 2009, p. 6046-6057.
• Kelly Lee J. Piezoelectric bimorph optical beam scanners: analysis and construction. Applied Optics, Vol. 18, Issue 4, 1979, p. 454-459.
• Ohtuka Y., Nishikawa H., Koumura T., Hattori T. 2-Dimensional optical scanner applying a torsional resonator with 2 degree of freedom. Proceedings of the IEEE Micro Electro Mechanical Systems
(MEMS), Amsterdam, Netherlands, 1995, p. 306-309.
• Ikeda M., Totani H., Akiba A., Goto H., Matsumoto M., Yada T. PZT thin film actuator driven micro optical scanning sensor by 3D integration of optical and mechanical devices. Proceedings of the
IEEE Micro Electro Mechanical Systems (MEMS), Orlando, USA, 1999, p. 435-440.
• Yorinaga M., Makino D., Kawaguchi K., Naito M. A piezoelectric fan using PZT ceramics. Japanese Journal of Applied Physics, Vol. 24, 1985, p. 203-205.
• Yoo J. H., Hong J. I., Cao W. Piezoelectric ceramic bimorph coupled to thin film metal plate as cooling fan for electronic devices. Sensors and Actuators A, Vol. 79, 2000, p. 8-12.
• Frangi A., Corigliano A., Binci M., Faure P. Finite element modelling of a rotating piezoelectric ultrasonic motor. Ultrasonics, Vol. 43, Issue 9, 2005, p. 747-755.
• Sitti M., Campolo D., Yan J., Fearing R. Development of PZT and PZN-PT based unimorph actuators for micromechanical flapping mechanisms constant magnet motors. USA, 2001.
• Frangi A., Corigliano A., Binci M., Faure P. Finite element modelling of a rotating piezoelectric ultrasonic motor. Ultrasonics, Vol. 43, Issue 9, 2005, p. 747-755.
• Sitti M., Campolo D., Yan J., Fearing R. Development of PZT and PZN-PT Based Unimorph Actuators for Micromechanical Flapping Mechanisms Constant magnet motors. USA, 2001.
• Uchino K. Piezoelectric Actuators and Ultrasonic Motors. Springer, 1997, p. 336.
• Barrett R. C., Quate C. F. Optical scan-correction system applied to atomic force microscopy. Review of Scientific Instruments, Vol. 62, Issue 6, 1991, p. 1393-1399.
About this article
22 December 2013
piezoelectric actuator
Postdoctoral fellowship is being funded by European Union Structural Funds project “Postdoctoral Fellowship Implementation in Lithuania” within the framework of the Measure for Enhancing Mobility of
Scholars and Other Researchers and the Promotion of Student Research (VP1-3.1-ŠMM-01) of the Program of Human Resources Development Action Plan.
Copyright © 2014 JVE International Ltd.
This is an open access article distributed under the
Creative Commons Attribution License
, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. | {"url":"https://www.extrica.com/article/14948","timestamp":"2024-11-09T00:24:10Z","content_type":"text/html","content_length":"128513","record_id":"<urn:uuid:8c81bada-8f85-4fa4-aeb1-ca243375a1cb>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00156.warc.gz"} |
I Need Help With Math
Chile i need help with math to you not the only one
There is 520 in Balan’s colony!
see the attached graph
Step-by-step explanation:
The axes and units for the graph are given in the problem statement. The range of the graph is also given, but figuring the domain requires a little arithmetic. We assume the walking pace is
constant, so the graph will be piecewise linear.
The vertical (y) axis is labeled with what the graph is modeling, "distance from home (in meters)." The horizontal (x) axis is labeled with what the model is in terms of "time (in minutes)."
The range of the graph is 0 to 400 meters, the range of distances Sam may be from home.
The domain of the graph is 0 to 9 minutes. The time it takes to complete the trip is 4 minutes to the shop, 2 minutes at the shop, and 3 minutes coming home, for a total of 4+2+3 = 9 minutes.
We are given that points on the graph will be (0, 0), (4, 400), (6, 400), and (9, 0). That is Sam is ...
at home at 0 minutes; 400 meters from home (at the shop) after 4 minutes; remaining 400 meters from home for 2 more minutes (until 6 minutes); at home after 3 more minutes (at 9 minutes).
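(Editorial aside: if you want to reproduce the attached graph yourself, a minimal plotting sketch, assuming a standard Python/matplotlib setup, is shown here.)

```python
import matplotlib.pyplot as plt

# Points taken from the answer: (time in minutes, distance from home in meters)
times = [0, 4, 6, 9]
distances = [0, 400, 400, 0]

plt.plot(times, distances, marker="o")        # piecewise-linear walk
plt.xlabel("time (in minutes)")
plt.ylabel("distance from home (in meters)")
plt.title("Sam's trip to the shop and back")
plt.show()
```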
Since we assume walking is at a steady pace, we connect these dots with straight lines. The resulting graph is attached. | {"url":"https://diemso.unix.edu.vn/question/i-need-help-with-math-v7so","timestamp":"2024-11-15T00:12:25Z","content_type":"text/html","content_length":"75429","record_id":"<urn:uuid:1d575783-1935-4776-86ab-7b9138bb0457>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00431.warc.gz"} |
Topological states of quantum matter
Electrons in graphene can be described by the relativistic Dirac equation for massless fermions and exhibit a host of unusual properties. The surfaces of certain band insulators—called topological
insulators—can be described in a similar way, leading to an exotic metallic surface on an otherwise ‘ordinary’ insulator.
Figure 1: When we think of topology, we normally think of objects that cannot be simply transformed into each other, such as a rubber band and a Möbius strip (top). The metallic surface of a
topological insulator is different from an ordinary surface because its metallic nature is protected by certain symmetry invariants. In this sense, it cannot be simply transformed into the surface of
a normal insulator. The sketches (bottom) show the electronic structure (energy versus momentum) for a “trivial” insulator (left) and a strong topological insulator (right), such as ${\text{Bi}}_
{1-x}{\text{Sb}}_{x}$. In both cases, there are allowed electron states (black lines) introduced by the surface that lie in the bulk band gap (the bulk valence and conduction bands are indicated by
the green and blue lines, respectively). In the trivial case, even a small perturbation (say, changing the chemistry of the surface) can open a gap in the surface states, but in the nontrivial case,
the conducting surface states are protected. Note that in the topological insulator, the surface states are linear in momentum and meet at an odd number of points in $k$-space.
Most quantum states of matter are categorized by the symmetries they break. For example, the crystallization of water into ice breaks translational symmetry or the magnetic ordering of spins breaks
rotational symmetry. However, the discovery in the early 1980s of the integer and fractional quantum Hall effects has taught us that there is a new organizational principle of quantum matter. In the
quantum Hall state, an external magnetic field perpendicular to a two-dimensional electron gas causes the electrons to circulate in quantized orbits. The “bulk” of the electron gas is an insulator,
but along its edge, electrons circulate in a direction that depends on the orientation of the magnetic field. The circulating edge states of the quantum Hall state are different from ordinary states
of matter because they persist even in the presence of impurities. The reason for this is best expressed mathematically (it is related to the quantization of Berry’s phases, see, for example, Physics
Today August 2003 [1]), and is not intuitively obvious, but the effect—circulating current—is real and measurable.
In the last few years, a number of theorists realized that the same “robust” conducting edge states that are found in the quantum Hall state could be found on the boundary of two-dimensional band
insulators with large spin-orbit effect, called topological insulators. In these insulators, spin-orbit effects take the role of an external magnetic field, with spins of opposite sign
counter-propagating along the edge [2-5]. In 2006, my colleagues and I predicted this effect (later confirmed) on the edge of $HgTe$ quantum wells [2,3]—the first experimentally realized quantum spin
Hall state. In 2007 Liang Fu and Charles Kane of the University of Pennsylvania predicted that a three-dimensional form of the topological insulator with conducting surface states could exist in
$\text{Bi}_{1-x}\text{Sb}_{x}$, an alloy in which spin-orbit effects are large [6]. Earlier this year, photoemission measurements of the surface of $\text{Bi}_{1-x}\text{Sb}_{x}$ supported this picture [7], strongly suggesting that
$\text{Bi}_{1-x}\text{Sb}_{x}$ is the first realization of a topological insulator in three dimensions and that its surface is a topological metal in two dimensions. Now, in an article appearing in the current issue of
Physical Review B [8], the same authors and Jeffrey Teo present a detailed calculation of the electronic structure of the surface states in this material that can be directly tested in future experiments.
To understand why the surface of $\text{Bi}_{1-x}\text{Sb}_{x}$ is exotic, it helps to think about what a surface is like in a “normal” insulator. Recall that the surface and bulk states of electrons inside crystalline
solids are described by wave functions obtained from solving Schrödinger’s equation. This quantum mechanical framework predicts that there are gaps in the electronic energy spectrum where no wave
solutions are possible inside the bulk crystal. If the Fermi level lies inside this energy gap (or “band gap”), the solid is insulating. However, dangling bonds or a reorganization of atoms on the
surface can introduce states that have energies that lie within the forbidden energy gap, but are restricted to move around the two-dimensional surface. In most situations these conducting surface
states are very fragile and their existence depends on the details of the surface geometry and chemistry. In contrast, in a topological insulator, these surface states are protected, that is, their
existence does not depend on how the surface is cut or distorted. Again, the reason for this is, at its root, mathematical, and lies in the fact that the Hamiltonian describing the surface states is
invariant to small perturbations.
The concept of a topological insulator is perhaps confusing, because when we think of two objects as topologically distinct, we imagine the difference between say, a Möbius strip and a rubber band
(Fig. 1). We can’t deform one into the other. The same is true for the Hamiltonian that describes a topological insulator: the Hamiltonian permits conducting states that circulate along the edge (in
a two-dimensional insulator) or the surface (in the three-dimensional case) and no simple deformation to the edge (or surface) can destroy these conducting states. Moreover, the conducting states are
real and can be measured, and in the case of the quantum spin Hall state, are naturally spin polarized, which can have interesting applications in spintronics.
What’s special about the surface of $\text{Bi}_{1-x}\text{Sb}_{x}$ that gives it these properties? It turns out that the surface states of this alloy are similar to the two-dimensional states in graphene. Near the Fermi
level, electrons and holes in graphene are described by energy states that are linear in momentum. Electrons with a constant velocity are conveniently described by the relativistic Dirac equation for
massless fermions. (The electrons in graphene are not actually massless; the linear bands result from the atomic structure of this two-dimensional system.) In two-dimensional $k$-space, the dispersion
relation looks like two cones that meet at discrete (Dirac) points at the Fermi level. However, while graphene has an even number of Dirac points at the Fermi level, $\text{Bi}_{1-x}\text{Sb}_{x}$ has an odd number.
Kramers theorem tells us that the degeneracy of states with an even number of electrons that obeys time reversal symmetry will always be lifted. For this reason, the surface states in graphene are
easily destroyed because a gap will open (they are “topologically trivial”), while the surface states of $\text{Bi}_{1-x}\text{Sb}_{x}$ are said to be “topologically protected” (see Fig. 1). In fact, in graphene, if one
distorts the energies of the two carbon atoms in one unit cell relative to each other, the Dirac points disappear immediately. In contrast, the massless Dirac states on the surface of $\text{Bi}_{1-x}\text{Sb}_{x}$ are
robust, even if the surface itself is slightly imperfect or possesses impurities.
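For readers who want the dispersion written out explicitly (an editorial addition using the standard low-energy textbook form, not a result from Teo et al.’s calculation), the surface states near a Dirac point obey $E_{\pm }\left(\mathbf{k}\right)=\pm \hbar {v}_{F}\left|\mathbf{k}\right|$, where ${v}_{F}$ is the Fermi velocity; plotting $E$ against the two components of $\mathbf{k}$ produces exactly the pair of cones described above.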
In their paper, Teo et al. use a tight-binding model (a well-established method for determining the band structure in an insulator) that they solve numerically to determine the electronic structure
on a particular $\text{Bi}_{1-x}\text{Sb}_{x}$ surface. The model reproduces the surface structure of $\text{Bi}_{1-x}\text{Sb}_{x}$ and the authors can determine which surfaces will behave as topological metals. However, the paper also
makes general symmetry arguments that are model-independent and could potentially be applied to determine if other materials are good candidates for topological insulators.
Topological quantum states of matter are very rare and until recently the quantum Hall state provided the only experimentally realized example. The application of topology to physics is an exciting
new direction that was first initiated in particle physics and quantum field theory. However, there are only a few topological effects that have been experimentally tested in particle physics.
Topological states of quantum matter now offer a new laboratory to test some of the most profound ideas in mathematics and physics. In 2007, the theoretical prediction and experimental observation of
the quantum spin Hall state—a topological insulator in two dimensions—in $HgTe$ quantum wells was highlighted as one of the top ten breakthroughs among all sciences [2,3,9].
Topological states of quantum matter are generally described by topological field theories. Readers may already be familiar with Maxwell’s field theory describing the electromagnetic fields and
Einstein’s field theory describing the gravitational fields. These field theories depend on the geometry of the underlying space. In contrast, topological field theories do not depend on the
geometry, but only on the topology of the underlying space.
One of the most striking predictions of topological field theory is the so-called topological magnetoelectric effect, where an electric field induces a magnetic field along the same direction inside
a topological insulator, with a constant of proportionality given by odd multiples of the fine structure constant [13]. Such a prediction can be readily tested in $\text{Bi}_{1-x}\text{Sb}_{x}$. Although the
tight-binding model that the authors use to calculate the electronic band structure for $\text{Bi}_{1-x}\text{Sb}_{x}$ is more complicated than that for $HgTe$, and there are some quantitative disagreements with the
first-principles calculations, its essential properties can be understood with a simple topological field theory.
Now that two topological states of quantum matter have been experimentally discovered—the quantum Hall and the quantum spin Hall states—one may naturally wonder about how they would fit into a bigger
unifying picture. For example, the periodic table gives an organizational principle of all elements, and symmetry principles fit all elementary particles into their right places. The paper from the
Kane group suggests that what we know about topological insulators may be just the tip of the iceberg and that other classification schemes exist as well. Once we discover the deeper organizational
principle of topological states of quantum matter, we may be able to predict many more, each with its own unique and beautiful properties. | {"url":"https://physics.aps.org/articles/v1/6","timestamp":"2024-11-05T17:06:22Z","content_type":"text/html","content_length":"41149","record_id":"<urn:uuid:503a13e9-f23b-43c5-b1f3-d7eca9a1ff6f>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00000.warc.gz"} |
5 Best Methods to Convert Complex Number to Polar Coordinate Values in Python
Problem Formulation: This article addresses the conversion of complex numbers, which are expressed as a combination of a real and an imaginary part (a + bi), into polar coordinates, which
represent the same number as a radius (r) and an angle (θ). For example, converting the complex number 3 + 4i should result in a radius of 5 and an angle of 0.927 radians.
Method 1: Using ‘cmath’ module
This method utilizes Python’s ‘cmath’ module, which provides access to mathematical functions for complex numbers. The cmath.polar() function directly converts a complex number object into a polar
coordinate pair, returning a tuple with the magnitude (radius) and the phase (angle in radians).
Here’s an example:
import cmath
# Convert a complex number to polar coordinates
complex_num = 3 + 4j
polar_coordinates = cmath.polar(complex_num)
print("Polar Coordinates:", polar_coordinates)
Polar Coordinates: (5.0, 0.9272952180016122)
This method is straightforward and concise. In a single line, the cmath.polar() function performs the conversion, making it an excellent choice for simplicity and readability.
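If you also need the reverse direction, the same module provides cmath.rect() for converting polar coordinates back to a complex number; the short round-trip below is an editorial addition rather than part of the original article:

```python
import cmath

r, phi = cmath.polar(3 + 4j)    # (5.0, 0.9272952180016122)
restored = cmath.rect(r, phi)   # approximately (3+4j), up to floating-point rounding
print(restored)
```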
Method 2: Manual calculation using ‘math’ module
For those who want to understand the underlying mathematics, manual calculation using the ‘math’ module is an option. This involves computing the magnitude using Pythagoras’ theorem and the phase
using the arctangent function math.atan2().
Here’s an example:
import math
# Define the complex number
real_part = 3.0
imaginary_part = 4.0
# Calculate magnitude and phase
magnitude = math.sqrt(real_part**2 + imaginary_part**2)
phase = math.atan2(imaginary_part, real_part)
print("Polar Coordinates:", (magnitude, phase))
Polar Coordinates: (5.0, 0.9272952180016122)
This approach allows full control over the conversion process and is useful for educational purposes or environments where ‘cmath’ might not be available, though it’s a bit more involved than Method 1.
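A small related note (editorial addition): for the magnitude alone, math.hypot() performs the Pythagorean step in one call and handles very large components more robustly than squaring them directly:

```python
import math

magnitude = math.hypot(3.0, 4.0)  # 5.0
```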
Method 3: Using ‘numpy’ library
The ‘numpy’ library is commonly used for mathematical and scientific computing in Python. It can handle arrays of complex numbers and includes a function numpy.angle() to compute the phase and
numpy.abs() for magnitude.
Here’s an example:
import numpy as np
# Define the complex number as a numpy array
complex_num = np.array(3 + 4j)
# Calculate magnitude and phase
magnitude = np.abs(complex_num)
phase = np.angle(complex_num)
print("Polar Coordinates:", (magnitude, phase))
Polar Coordinates: (5.0, 0.9272952180016122)
This method benefits from numpy’s performance optimizations, especially when processing large arrays of complex numbers. However, it requires numpy to be installed, which might not always be the case.
Method 4: Using ‘scipy’ library
The ‘scipy’ library extends ‘numpy’ with additional utilities, including the scipy.spatial.transform module, which provides tools for working with rotations in three dimensions, although it’s a
bit of an overkill for simple conversions.
Here’s an example:
# Scipy does not directly provide an easier method than numpy
# This is mostly included for academic interest.
In short, scipy builds on top of numpy and does not add a dedicated complex-to-polar helper, so reaching for it here only brings in a heavier dependency without simplifying the code; Methods 1-3 are the practical choices.
Bonus One-Liner Method 5: Using Lambda Function
For a quick, on-the-fly conversion, you can define a lambda function that wraps the functionality of the cmath.polar() function into a one-liner that can be reused easily.
Here’s an example:
import cmath
to_polar = lambda complex_num: cmath.polar(complex_num)
print("Polar Coordinates:", to_polar(3 + 4j))
Polar Coordinates: (5.0, 0.9272952180016122)
This one-liner is concise and Pythonic, offering the simplicity of cmath.polar() with the flexibility of a lambda function.
• Method 1: Using cmath. Straightforward and concise. Requires no extra libraries.
• Method 2: Manual calculation. Offers a deeper understanding of the mathematics. Slightly more complex and verbose.
• Method 3: Using numpy. Optimized for large arrays of complex numbers. Requires numpy to be installed.
• Method 4: Using scipy. More features than necessary for this task, and less straightforward than numpy or cmath methods.
• Method 5: Lambda function. Pythonic and convenient for quick conversions, but not as clear for beginners. | {"url":"https://blog.finxter.com/5-best-methods-to-convert-complex-number-to-polar-coordinate-values-in-python/","timestamp":"2024-11-12T05:35:02Z","content_type":"text/html","content_length":"71277","record_id":"<urn:uuid:2234c4e2-7429-4e4e-9bb0-c0a1f38d618c>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00719.warc.gz"} |
Your Loan Could Be Just A
Few Steps Away!
Find A Loan In 3 Minutes.
Get A Decision Online In Minutes With No Paperwork.
Get Your Funds Now
Repayment Terms
• Our lenders give you as much as 72 months to repay your loan. View Terms Below.
• Lending Period: 61 Days to 72 months
• Payment Options: Once to twice a month
• Maximum APR: From 5.99% to 35.99%
Representative Repayment Examples
1. If you borrowed $2,000 over a 24 month period and the loan had a 8% arrangement fee ($160), your monthly repayments would be $97.69, with a total pay back amount of $2,344.58 which including the
8% fee paid from the loan amount, would have a total cost of $344.58. Effective Representative APR : 15.748%.
2. If you borrowed $3,000 over a 36 month period and the loan had a 8% arrangement fee ($240), your monthly repayments would be $101.53, with a total pay back amount of $3,655.07 which including the
8% fee paid from the loan amount, would have a total cost of $655.07. Effective Representative APR : 13.312%.
3. If you borrowed $4,000 over a 48 month period and the loan had a 8% arrangement fee ($320), your monthly repayments would be $105.46, with a total pay back amount of $5,062.26 which including the
8% fee paid from the loan amount, would have a total cost of $1,062.26. Effective Representative APR : 12.067%. | {"url":"https://myquickloan.site/","timestamp":"2024-11-06T07:03:04Z","content_type":"text/html","content_length":"7170","record_id":"<urn:uuid:ffd8d182-faa1-460b-81b1-929a933a7a1c>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00319.warc.gz"} |
DCCXVI in Hindu Arabic Numerals
DCCXVI = 716
Place-value chart (thousands / hundreds / tens / units):
1: M / C / X / I
2: MM / CC / XX / II
3: MMM / CCC / XXX / III
4: CD / XL / IV
5: D / L / V
6: DC / LX / VI
7: DCC / LXX / VII
8: DCCC / LXXX / VIII
9: CM / XC / IX
(For digits 4-9 only the hundreds, tens and units forms are listed, since 4000 and above require the overline notation.)
DCCXVI is a valid Roman numeral. Here we will explain how to read, write and convert the Roman numeral DCCXVI into the correct Arabic numeral format. Please have a look at the Roman numeral table
given below for a better understanding of the Roman numeral system. As you can see, each letter is associated with a specific value.
Symbol Value
I 1
V 5
X 10
L 50
C 100
D 500
M 1000
How to write Roman Numeral DCCXVI in Arabic Numeral?
The Arabic numeral representation of Roman numeral DCCXVI is 716.
How to convert Roman numeral DCCXVI to Arabic numeral?
If you are familiar with the Roman numeral system, then converting the Roman numeral DCCXVI to an Arabic numeral is very easy. Converting DCCXVI to its Arabic numeral representation involves splitting
the numeral up into place values as shown below.
D + C + C + X + V + I
500 + 100 + 100 + 10 + 5 + 1
As per the rule, a higher-valued numeral should always precede a lower-valued one for the representation to be purely additive, as it is here. We then add all of the converted values to get the correct
Arabic numeral. The Roman numeral DCCXVI should be used when you are representing an ordinal value; in any other case, you can use 716 instead of DCCXVI. For any numeral conversion, you can also use our
Roman-to-number converter tool given above.
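For readers who prefer code, here is a short Python sketch of the same procedure (an editorial addition, not the converter tool mentioned above). It implements the standard rule: add each symbol's value, except that a smaller symbol written before a larger one is subtracted:

```python
ROMAN_VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(numeral: str) -> int:
    """Convert a Roman numeral string to its Arabic (integer) value."""
    total = 0
    for i, ch in enumerate(numeral):
        value = ROMAN_VALUES[ch]
        # A smaller value before a larger one (e.g. IV, XC) is subtracted.
        if i + 1 < len(numeral) and value < ROMAN_VALUES[numeral[i + 1]]:
            total -= value
        else:
            total += value
    return total

print(roman_to_int("DCCXVI"))  # 716
```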
Current Date and Time in Roman Numerals
The current date and time written in roman numerals is given below. Romans used the word nulla to denote zero because the roman number system did not have a zero, so there is a possibility that you
might see nulla or nothing when the value is zero. | {"url":"https://romantonumber.com/dccxvi-in-arabic-numerals","timestamp":"2024-11-14T02:08:34Z","content_type":"text/html","content_length":"89573","record_id":"<urn:uuid:531c3dd8-2bb8-4e13-b3a7-86839d002b9c>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00529.warc.gz"} |
Monty Hall
Should I switch, or stay?
Let’s begin our review all the way back at the beginning, with our very first email about risk…
Everyone comes to situations of risk with pre-existing opinions. We all use our experiences to try and make sense of how the world works. These are our priors.
Priors act as a map of the territory of reality; we survey our past experiences, build abstract mental models from them, and then use those mental models to help us understand the world.
But priors can be misleading, even when they’re based on real experiences. Why?
For one, we often mistake group-indexed averages for individually-indexed averages.
For another, we often mistake uncertainty for risk.
Risk is a situation in which the variables (how likely a scenario is to happen, what we stand to lose or gain if it does) are known.
When we face risk, our best tool for decision-making is statistical analysis.
Imagine playing Russian Roulette; there’s one gun, one bullet, and 6 chambers. You can calculate your odds of success or failure.
Uncertainty is a situation in which the variables are unknown. Imagine a version of Russian Roulette where you don’t get to know how many bullets there are, or even how many chambers in the gun.
Not only can you not calculate your odds in this scenario, trying to do so will only give you a false sense of confidence.
When we face uncertainty, our best decision-making tool is game theory.
Mistaking risk for certainty is called the zero-risk illusion.
This is what happens when we get a positive result on a medical test, and convince ourselves there’s no way the test could be wrong.
But there’s a more subtle (and often more damaging) illusion to think about:
Mistaking uncertainty for risk. This is known as the calculable-risk illusion.
To understand how we get to this illusion, we have to understand a bit about derivatives.
Because the world is infinitely complex, we often can’t interact directly with the things we care about (referred to as the underlying).
For example, we may care about the health of a company – how happy their employees are, how big their profit margin is, how much money they have in savings.
But it’s hard to really get a grip on all those variables.
To get around problems like this, we often look at some other metric (referred to as the derivative) that we believe is correlated with the thing we care about.
For example: we care about the health of the company (the underlying). But because that’s so complex, we choose to pay attention to the stock price of the company instead (the derivative). That’s
because we believe that the two are correlated: if the health of the company improves, the stock price will rise.
The relationship between the underlying and the derivative is called the basis.
If you understand the basis, you can use a derivative to understand the underlying.
But the world is complicated. We often DON’T really understand the basis. Maybe we mistook causation for correlation. Or maybe we DID understand the basis, but it changed over time.
The problem is that we often fail to re-examine our assumptions about how the world works.
This puts us in a situation where we mistake uncertainty for risk. We think we have enough information to calculate the odds. We think we can use statistical analysis to figure out the right thing to
The problem is that we often don’t have enough information. This is the “Turkey Problem”: every single data point tells us the farmer treats us well.
And that’s true…right up until Thanksgiving Day.
We cruise along, comforted by seemingly-accurate mathematical models of the world…only to be shocked when the models blow up and everything falls apart.
That’s the calculable-risk illusion.
This is how our maps can stop matching our territory.
OK – so we know that when situations are uncertain (and that’s a lot of, if not most of the time), we’re supposed to use game theory.
What are some examples of using game theory to help make decisions?
One example is the Common Knowledge Game.
Common knowledge games are situations in which we act based on what we believe other people believe.
Like a beauty contest where voting for the winning contestant wins you money, it’s not about whom you like best (first-order decision making)…
Or whom you think other people like best (second-order decision making)…
But whom you think other people will think other people like best (third-order decision making).
So: how do we know what other people know?
As in the case of the eye-color tribe, a system’s static equilibrium is shattered when public statements are made.
Information is injected into the system in such a way that everyone knows that everyone else knows.
Our modern equivalent is the media. We have to ask ourselves where other people think other people get their information.
Whatever statements come from these sources will affect public behavior…
…Not because any new knowledge is being created, but because everyone now knows that everyone else heard the message.
(This, by the way, is why investors religiously monitor the Federal Reserve. It’s not because the Fed tells anyone anything new about the state of the economy. It’s because it creates “common
Whew! That’s a lot of stuff.
Let’s try to bring all these different ideas together in one fun example:
The Monty Hall Problem.
Monty Hall was famous television personality, best-known as the host of the game show Let’s Make a Deal.
Let’s Make a Deal featured a segment that became the setting for a famous logic problem…
One that excellently displays how our maps can become disconnected from the territory.
The problem was popularized by Marilyn vos Savant in her column in Parade magazine. Here’s the problem as she formulated it:
Suppose you are on a game show, and you’re given the choice of three doors.
Behind one door is a car, behind the others, goats.
The rules are that you can pick any door you want, and you’ll also get a chance to switch if you want.
You pick a door, say number 1, and the host, who knows what’s behind the doors, opens another door, say number 3, which has a goat.
He says to you, “Do you want to pick door number 2?”
Is it to your advantage to switch your choice of doors?
Take a minute to think it through and come up with your own answer.
Let’s start by asking ourselves:
Is this a scenario of risk or uncertainty?
The answer is risk.
We know the odds, and can calculate our chances to win. That means statistical analysis is our friend.
So how do we calculate our odds?
The typical line of reasoning will go something like this:
Each door has a 1/3 probability of having the car behind it.
One door has been opened, which eliminates 1/3 of my chances.
Therefore, the car must be behind one of these two doors. That means I have a 50/50 chance of having picked the right door.
That means there’s no difference between sticking with this door or switching.
While this conclusion seems obvious (and believe me, this is the conclusion I came to)…
It turns out to be wrong. 🙂
Remember our discussion of medical tests?
To figure out how to think about our risk level, we imagined a group of 1,000 people all taking the same tests.
We then used the false positive rate to figure out how many people would test positive that didn’t have the disease.
Let’s apply a similar tool here.
Imagine three people playing this game. Each person picks a different door.
I’ll quote here from the book Risk Savvy, where I first learned about the Monty Hall Problem:
Assume the car is behind door 2.
The first contestant picks door 1. Monty’s only option is to open door 3, and he offers the contestant the opportunity to switch.
Switching to door 2 wins.
The second contestant picks door 3. This time, Monty has to open door 1, and switching to door 2 again wins.
Only the third contestant who picks door 2 will lose when switching.
Now it is easier to see that switching wins more often than staying, and we can calculate exactly how often: in two out of three cases.
This is why Marilyn recommended switching doors.
It becomes easier to imagine the potential outcomes if we picture a large group of people going through the same situation.
In this scenario, the best answer is to always switch.
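(Editorial aside: if the enumeration above still feels suspicious, a quick simulation makes the two-out-of-three figure hard to argue with. The sketch below assumes the standard rules, i.e. the host always opens a goat door you didn't pick and always offers the switch, which, as we'll see in a moment, is exactly the assumption real life may violate.)

```python
import random

def play(switch: bool) -> bool:
    """One round of the standard Monty Hall game; True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Host opens a door that is neither the player's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
print("stay  :", sum(play(False) for _ in range(trials)) / trials)  # roughly 0.33
print("switch:", sum(play(True) for _ in range(trials)) / trials)   # roughly 0.67
```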
Here’s an interesting twist, though:
Should you actually use this strategy on Let’s Make a Deal?
This is where the calculable-risk illusion rears it’s ugly head.
In the beginning of our discussion, I said the Monty Hall Problem was an example of risk. Our odds are calculable, and we understand the rules.
That’s why statistical analysis is helpful.
But reality is often far more complicated than any logic puzzle.
The question we need to ask in real life is: Will Monty ALWAYS give me the chance to switch?
For example, Monty might only let me switch if I chose the door with the car behind it.
If that’s the case, always switching is a terrible idea!
The real Monty Hall was actually asked about this question in The New York Times.
Hall explicitly said that he had complete control over how the game progressed, and that he used that power to play on the psychology of the contestant.
For example, he might open their door immediately if it was a losing door, might offer them money to not switch from a losing door to a winning door, or might only allow them the opportunity to
switch if they had a winning door.
Hall in his own words:
“After I showed them there was nothing behind one door, [Contestants would think] the odds on their door had now gone up to 1 in 2, so they hated to give up the door no matter how much money I
offered. By opening that door we were applying pressure.”
“If the host is required to open a door all the time and offer you a switch, then you should take the switch…But if he has the choice whether to allow a switch or not, beware. Caveat emptor. It all
depends on his mood.”
You can see this play out in this specific example, taken again from Risk Savvy:
After one contestant picked door 1, Monty opened door 3, revealing a goat.
While the contestant thought about switching to door 2, Monty pulled out a roll of bills and offered $3,000 in cash not to switch.
“I’ll switch to it,” insisted the contestant.
“Three thousand dollars,” Monty Hall repeated, “Cash. Cash money. It could be a car, but it could be a goat. Four thousand.”
The contestant resisted the temptation. “I’ll try the door.”
“Forty-five hundred. Forty-seven. Forty-eight. My last offer: Five thousand dollars.”
“Let’s open the door.” The contestant again rejected the offer.
“You just ended up with a goat,” Monty Hall said, opening the door.
And he explained: “Now do you see what happened there? The higher I got, the more you thought that the car was behind door 2. I wanted to con you into switching there, because I knew the car was
behind 1. That’s the kind of thing I can do when I’m in control of the game.“
What’s really happening here?
The contestant is committing the calculable-risk illusion.
They’re mistaking risk for uncertainty.
They think the game is about judging the probability that their door contains either car or goat.
But it isn’t.
The game is about understanding Monty Hall’s personality.
Whenever we shift from playing the game to playing the player, we have made the move from statistical analysis to game theory.
Instead of wondering what the probabilities are, we need to take into account:
1. Monty’s past actions, his personality, his incentives (to make the TV show dramatic and interesting)…
2. As well as what HE knows (which door has a car behind it)…
3. And what HE knows WE know… (that he knows which door has a car behind it)
4. And how that might change his behavior (since he knows we know he knows where the goats are, and he expects us to expect him to offer money if we picked the right door, he might do the opposite).
The map-territory problem can get us if we refuse to use statistical analysis where it’s warranted..
And when we keep using statistical analysis when it isn’t.
Now that we’ve seen some of these ideas in action, it’s FINALLY time to start addressing the root cause of all these emails:
The Coronavirus Pandemic.
We’ll be bringing all these mental models to bear on a tough problem:
How do I decide what to do, when so much is uncertain? And WHY is all of this so hard to understand?
Better Questions Newsletter
Join the newsletter to receive the latest updates in your inbox. | {"url":"https://www.betterquestions.co/monty-hall/","timestamp":"2024-11-09T13:26:25Z","content_type":"text/html","content_length":"120571","record_id":"<urn:uuid:c212a26e-b0fe-4cef-a344-460ddbf7922d>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00027.warc.gz"} |
8.2: Injective and Surjective Functions
We now turn our attention to some important properties that a function may or may not possess. Recall that if \(f\) is a function, then every element in its domain is mapped to a unique element in
the range. However, there are no restrictions on whether more than one element of the domain is mapped to the same element in the range. If each element in the range has a unique element in the
domain mapping to it, then we say that the function is injective. Moreover, the range of a function is not required to be all of the codomain. If every element of the codomain has at least one
element in the domain that maps to it, then we say that the function is surjective. Let’s make these definitions a bit more precise.
Definition 8.26. Let \(f:X\to Y\) be a function.
1. The function \(f\) is said to be injective (or one-to-one) if for all \(y\in \range(f)\), there is a unique \(x\in X\) such that \(y=f(x)\).
2. The function \(f\) is said to be surjective (or onto) if for all \(y\in Y\), there exists \(x\in X\) such that \(y=f(x)\).
3. If \(f\) is both injective and surjective, we say that \(f\) is bijective.
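When \(X\) and \(Y\) are small finite sets, these definitions are easy to experiment with on a computer. The sketch below is an editorial illustration (it is not part of the text's problem sequence); it represents a function as a Python dictionary from domain elements to codomain elements and checks the two properties directly:

```python
def is_injective(f: dict) -> bool:
    """True if no two domain elements map to the same codomain element."""
    return len(set(f.values())) == len(f)

def is_surjective(f: dict, codomain: set) -> bool:
    """True if every codomain element is the image of some domain element."""
    return set(f.values()) == codomain

# Example: X = {1, 2, 3}, Y = {"a", "b"}, with 1 and 3 both mapping to "a".
f = {1: "a", 2: "b", 3: "a"}
print(is_injective(f))               # False
print(is_surjective(f, {"a", "b"}))  # True
```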
Problem 8.27. Compare and contrast the following statements. Do they mean the same thing?
1. For all \(x\in X\), there exists a unique \(y\in Y\) such that \(f(x)=y\).
2. For all \(y\in \range(f)\), there is a unique \(x\in X\) such that \(y=f(x)\).
Problem 8.28. Assume that \(X\) and \(Y\) are finite sets. Provide an example of each of the following. You may draw a function diagram, write down a list of ordered pairs, or write a formula as long
as the domain and codomain are clear.
1. A function \(f:X\to Y\) that is injective but not surjective.
2. A function \(f:X\to Y\) that is surjective but not injective.
3. A function \(f:X\to Y\) that is a bijection.
4. A function \(f:X\to Y\) that is neither injective nor surjective.
Problem 8.29. Provide an example of each of the following. You may either draw a graph or write down a formula. Make sure you have the correct domain.
1. A function \(f:\mathbb{R}\to \mathbb{R}\) that is injective but not surjective.
2. A function \(f:\mathbb{R}\to \mathbb{R}\) that is surjective but not injective.
3. A function \(f:\mathbb{R}\to \mathbb{R}\) that is a bijection.
4. A function \(f:\mathbb{R}\to \mathbb{R}\) that is neither injective nor surjective.
5. A function \(f:\mathbb{N}\times\mathbb{N}\to \mathbb{N}\) that is injective.
Problem 8.30. Suppose \(X\subseteq \mathbb{R}\) and \(f:X\to \mathbb{R}\) is a function. Fill in the blank with the appropriate word.
The function \(f:X\to \mathbb{R}\) is if and only if every horizontal line hits the graph of \(f\) at most once.
This statement is often called the horizontal line test. Explain why the horizontal line test is true.
Problem 8.31. Suppose \(X\subseteq \mathbb{R}\) and \(f:X\to \mathbb{R}\) is a function. Fill in the blank with the appropriate word.
The function \(f:X\to \mathbb{R}\) is if and only if every horizontal line hits the graph of \(f\) at least once.
Explain why this statement is true.
Problem 8.32. Suppose \(X\subseteq \mathbb{R}\) and \(f:X\to \mathbb{R}\) is a function. Fill in the blank with the appropriate word.
The function \(f:X\to \mathbb{R}\) is if and only if every horizontal line hits the graph of \(f\) exactly once.
Explain why this statement is true.
How do we prove that a function \(f\) is injective? We would need to show that every element in the range has a unique element from the domain that maps to it. First, notice that each element in the
range can be written as \(f(x)\) for at least one \(x\) in the domain. To argue that such an element of the domain is unique, we can suppose \(f(x_{1})=f(x_{2})\) for arbitrary \(x_1\) and \(x_2\) in
the domain and then work to show that \(x_{1}=x_{2}\). It is important to point out that when we suppose \(f(x_{1})=f(x_{2})\) for some \(x_1\) and \(x_2\), we are not assuming that \(x_1\) and \(x_2
\) are different. In general, when we write “Let \(x_1,x_2\in X\)…", we are leaving open the possibility that \(x_1\) and \(x_2\) are actually the same element. One could approach proving that a
function is injective by utilizing a proof by contradiction, but this is not usually necessary.
Skeleton Proof 8.33. Here is the general structure for proving that a function is injective.
Assume \(f:X\to Y\) is a function defined by (or satisfying)…[Use the given definition (or describe the given property) of \(f\)]. Let \(x_1,x_2\in X\) and suppose \(f(x_1)=f(x_2)\).
\(\ldots\) [Use the definition (or property) of \(f\) to verify that \(x_1=x_2\)] \(\ldots\)
Therefore, the function \(f\) is injective.
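As a quick illustration of the skeleton (this example is ours and is not one of the numbered problems), consider \(f:\mathbb{R}\to \mathbb{R}\) defined via \(f(x)=5x-7\). Let \(x_1,x_2\in \mathbb{R}\) and suppose \(f(x_1)=f(x_2)\). Then \(5x_1-7=5x_2-7\), so that \(5x_1=5x_2\), and hence \(x_1=x_2\). Therefore, the function \(f\) is injective.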
How do we prove that a function \(f\) is surjective? We would need to argue that every element in the codomain is also in the range. Sometimes, the proof that a particular function is surjective is
extremely short, so do not second guess yourself if you find yourself in this situation.
Skeleton Proof 8.34. Here is the general structure for proving that a function is surjective.
Assume \(f:X\to Y\) is a function defined by (or satisfying)…[Use the given definition (or describe the given property) of \(f\)]. Let \(y\in Y\).
\(\ldots\) [Use the definition (or property) of \(f\) to find some \(x\in X\) such that \(f(x)=y\)] \(\ldots\)
Therefore, the function \(f\) is surjective.
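Continuing the illustration from above, the same function \(f(x)=5x-7\) is also surjective: let \(y\in \mathbb{R}\) and take \(x=\frac{y+7}{5}\), which is an element of \(\mathbb{R}\). Then \(f(x)=5\cdot \frac{y+7}{5}-7=y\). Therefore, the function \(f\) is surjective, and hence \(f\) is a bijection.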
Problem 8.35. Determine whether each of the following functions is injective, surjective, both, or neither. In each case, you should provide a proof or a counterexample as appropriate.
1. Define \(f:\mathbb{R}\to \mathbb{R}\) via \(f(x)=x^{2}\)
2. Define \(g:\mathbb{R}\to [0,\infty)\) via \(g(x)=x^{2}\)
3. Define \(h:\mathbb{R}\to \mathbb{R}\) via \(h(x)=x^{3}\)
4. Define \(k:\mathbb{R}\to \mathbb{R}\) via \(k(x)=x^{3}-x\)
5. Define \(c: \mathbb{R}\times \mathbb{R}\to \mathbb{R}\) via \(c(x,y)=x^{2}+y^{2}\)
6. Define \(f:\mathbb{N}\to \mathbb{N}\times \mathbb{N}\) via \(f(n)=(n,n)\)
7. Define \(g:\mathbb{Z}\to \mathbb{Z}\) via \[g(n)=\begin{cases} \frac{n}{2}, & \text{if }n\text{ is even}\\ \frac{n+1}{2}, & \text{if }n\text{ is odd}\\ \end{cases}\]
8. Define \(\ell:\mathbb{Z}\to \mathbb{N}\) via \[\ell(n)=\begin{cases} 2n+1, & \text{if }n\geq 0\\ -2n, & \text{if }n<0\\ \end{cases}\]
9. The function \(h\) defined in Problem 8.24(4).
10. The function \(k\) defined in Problem 8.24(5).
11. The function \(\ell\) defined in Problem 8.24(6).
Problem 8.36. Suppose \(X\) and \(Y\) are nonempty sets with \(m\) and \(n\) elements, respectively, where \(m\leq n\). How many injections are there from \(X\) to \(Y\)?
Problem 8.37. Compare and contrast the definition of “function" with the definition of “injective function". Consider the vertical line test and horizontal line test in your discussion. Moreover,
attempt to capture what it means for a relation to not be a function and for a function to not be an injection by drawing portions of a digraph.
The next two theorems should not come as a surprise.
Theorem 8.38. The inclusion map \(\iota:X\to Y\) for \(X\subseteq Y\) is an injection.
Theorem 8.39. The identity function \(i_X:X\to X\) is a bijection.
Problem 8.40. Let \(A\) and \(B\) be nonempty sets and let \(S\) be a nonempty subset of \(A\times B\). Define \(\pi_{1}:S\to A\) and \(\pi_{2}:S\to B\) via \(\pi_{1}(a,b)=a\) and \(\pi_{2}(a,b)=b\).
We call \(\pi_{1}\) and \(\pi_{2}\) the projections of \(S\) onto \(A\) and \(B\), respectively.
1. Provide examples to show that \(\pi_{1}\) does not need to be injective nor surjective.
2. Suppose that \(S\) is also a function. Is \(\pi_{1}\) injective? Is \(\pi_{1}\) surjective? How about \(\pi_{2}\)?
The next theorem says that if we have an equivalence relation on a nonempty set, the mapping that assigns each element to its respective equivalence class is a surjective function.
Theorem 8.41. If \(\sim\) is an equivalence relation on a nonempty set \(A\), then the function \(f:A\to A/\mathord\sim\) defined via \(f(x)=[x]\) is a surjection.
The function from the previous theorem is sometimes called the canonical projection map induced by \(\sim\).
Problem 8.42. Under what circumstances would the function from the previous theorem also be injective?
Let’s explore whether we can weaken the hypotheses of Theorem 8.41.
Problem 8.43. Let \(R\) be a relation on a nonempty set \(A\).
1. What conditions on \(R\) must hold in order for \(f:A\to Rel(R)\) defined via \(f(a)=rel(a)\) to be a function?
2. What additional conditions, if any, must hold on \(R\) in order for \(f\) to be a surjective function?
Given any function, we can define an equivalence relation on its domain, where the equivalence classes correspond to the elements that map to the same element of the range.
Theorem 8.44. Let \(f:X\to Y\) be a function and define \(\sim\) on \(X\) via \(a\sim b\) if \(f(a) = f(b)\). Then \(\sim\) is an equivalence relation on \(X\).
It follows immediately from Theorem 7.59 that the equivalence classes induced by the equivalence relation in Theorem 8.44 partition the domain of a function.
Problem 8.45. For each of the following, identify the equivalence classes induced by the relation from Theorem 8.44 for the given function.
1. The function \(f\) defined in Example 8.2.
2. The function \(c\) defined in Problem 8.35(5). Can you describe the equivalence classes geometrically?
If \(f\) is a function, the equivalence relation in Theorem 8.44 allows us to construct a bijective function whose domain is the set of equivalence classes and whose codomain coincides with the range
of \(f\). This is an important idea that manifests itself in many areas of mathematics. One such instance is the First Isomorphism Theorem for Groups, which is a fundamental theorem in a branch of
mathematics called group theory. When proving the following theorem, the first thing you should do is verify that the description for \(\overline{f}\) is well defined.
Theorem 8.46. Let \(f:X\to Y\) be a function and define \(\sim\) on \(X\) as in Theorem 8.44. Then the function \(\overline{f}:X/\mathord\sim\to \range(f)\) defined via \(\overline{f}([a]) = f(a)\)
is a bijection.
Here is an analogy for helping understand the content of Theorem 8.46. Suppose we have a collection of airplanes filled with passengers and a collection of potential destination cities such that at most
one airplane may land at each city. The function \(f\) indicates which city each passenger lands at while the function \(\overline{f}\) indicates which city each airplane lands at. Moreover, the
codomain for the function \(\overline{f}\) consists only of the cities that an airplane lands at.
Example 8.47. Let \(X=\{a,b,c,d,e,f\}\) and \(Y=\{1,2,3,4,5\}\) and define \(\varphi:X\to Y\) via \[\varphi=\{(a,1),(b,1),(c,2),(d,4),(e,4),(f,4)\}.\] The function diagram for \(\varphi\) is given in
Figure 8.2(1), where we have highlighted the elements of the domain that map to the same element in the range by enclosing them in additional boxes. We see that \(\range(\varphi)=\{1,2,4\}\). The
function diagram for the induced map \(\overline{\varphi}\) that is depicted in Figure 8.2(2) makes it clear that \(\overline{\varphi}\) is a bijection. Note that since \(\varphi(a)=\varphi(b)\) and
\(\varphi(d)=\varphi(e)=\varphi(f)\), it must be the case that \([a]=[b]\) and \([d]=[e]=[f]\) according to Theorem 7.42. Thus, the vertices labeled as \([a]\) and \([d]\) in Figure 8.2(2) could have
also been labeled as \([b]\) and \([e]\) (or \([f]\)), respectively. In terms of our passengers and airplanes analogy, \(X=\{a,b,c,d,e,f\}\) is the set of passengers, \(Y=\{1,2,3,4,5\}\) is the set of
potential destination cities, \(X/\mathord\sim=\{[a],[c],[d]\}\) is the set of airplanes, and \(\range(\varphi)=\{1,2,4\}\) is the set of cities that airplanes land at. The equivalence class \([a]\)
is the airplane containing the passenger \(a\), and since \(a\) and \(b\) are on the same plane, \([b]\) is also the plane containing the passenger \(a\).
Figure 8.2: Example of a visual representation of Theorem 8.46.
Problem 8.48. Consider the equivalence classes you identified in Parts (a) and (b) of Problem 8.45.
1. Draw the function diagram for the function \(\overline{f}\) as defined in Theorem 8.46, where \(f\) is the function defined in Example 8.2.
2. Geometrically describe the function \(\overline{c}\) as defined in Theorem 8.46, where \(c\) is the function defined in Problem 8.35(5).
While perhaps not surprising, Problem 8.48(2) tells us that there is a one-to-one correspondence between circles centered at the origin and real numbers.
Problem 8.49. Let \(Y=\{0,1,2,3\}\) and define the function \(f:\mathbb{Z}\to Y\) such that \(f(n)\) equals the unique remainder obtained after dividing \(n\) by 4. For example, \(f(11)=3\) since \
(11=4\cdot 2+3\) according to the Division Algorithm (Theorem 6.7). This function is sometimes written as \(f(n)=n \pmod{4}\), where it is understood that we restrict the output to \(\{0,1,2,3\}\).
It is clear that \(f\) is surjective since 0, 1, 2, and 3 are mapped to 0, 1, 2, and 3, respectively. Figure 8.3 depicts a portion of the function diagram for \(f\), where we have drawn the diagram
from the top down instead of left to right.
1. Describe the equivalence classes induced by the relation given in Theorem 8.44.
2. What familiar set is \(\mathbb{Z}/\mathord\sim\) equal to?
3. Draw the function diagram for the function \(\overline{f}\) as defined in Theorem 8.46.
Figure 8.3: Function diagram for the function described in Problem 8.49.
4. The function diagram in Figure 8.3 is a bit hard to interpret due to the ordering of the elements in the domain. Can you find a better way to lay out the vertices in the domain that makes the
function \(f\) easier to interpret?
Consider the function \(h\) defined in Problem 8.24(d).
1. Draw the function diagram for \(h\).
2. Identify the equivalence classes induced by the relation given in Theorem 8.44.
3. Draw the function diagram for the function \(\overline{h}\) as defined in Theorem 8.46. | {"url":"https://math.libretexts.org/Bookshelves/Mathematical_Logic_and_Proof/An_Introduction_to_Proof_via_Inquiry-Based_Learning_(Ernst)/08%3A_New_Page/8.02%3A_New_Page","timestamp":"2024-11-09T17:03:58Z","content_type":"text/html","content_length":"147118","record_id":"<urn:uuid:303e5e79-8922-4302-8a1f-11f475c933c5>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00887.warc.gz"} |
Contents: 1. Introduction; 2. A Brief Review on the Projective Approach; 3. An Exact Projective Solution to the Nonautonomous NLS System; 4. Summary and Discussion; 5. Acknowledgements; References.
Journal of Modern Physics (JMP), ISSN 2153-1196, Scientific Research Publishing. doi:10.4236/jmp.2012.38095
Exact Projective Excitations of Nonautonomous Nonlinear Schrödinger System in (1 + 1)-Dimensions
ianfeng Ye^1 and Chunlong Zheng^1* — ^1 Department of Physics, Shaoguan University, Shaoguan, Guangdong, China. *E-mail: clzheng@yahoo.cn (CZ)
Received May 8, 2012; revised June 1, 2012; accepted June 28, 2012.
© Copyright 2014 by authors and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY).
With the aid of a direct projective approach, a general transformation solution for the nonautonomous nonlinear Schrödinger (NLS) system is derived. Based on certain known exact solutions of the projective equation, some periodic and localized excitations with novel properties are correspondingly revealed by selecting appropriate system parameters. The integrable constraint conditions for the nonautonomous NLS system derived naturally here are consistent with the compatibility condition obtained via the Painlevé analysis in other literature.
Nonautonomous NLS System; Projective Approach; Exact Solution; Localized Excitation
Nonlinear science is believed, by many outstanding scientists, to be the most deeply important frontier for understanding nature and applications in reality. For example, nonlinear optical solitons
are regarded as the natural data bits and an important alternative for the next generation of ultrahigh speed optical telecommunication systems. It is known that the propagation of electromagnetic
waves in nonlinear optical waveguides and the ground state wave functions of Bose-Einstein condenstates (BECs) can be described by nonlinear Schrödinger (NLS) system [1], which is actually one of the
fundamental dynamical models in nonlinearity [2,3]. The nonautonomous NLS system in (1 + 1)-dimensional form is
where the coefficients are in general functions of time [4]. The subscripts x and t denote the spatial and temporal partial derivatives. These coefficients are often assumed to be real. In the contexts of many physical fields, the BECs and nonlinear
optics provide excellent proving grounds for exploring nonlinear systems with distributed coefficients. It has been reported that specific dependencies of the equation coefficients on time variables
can enhance the stabilities of the solutions [5,6]. Moreover, timemodulated nonlinearities and/or dispersions can facilitate the manipulation of the soliton behaviors. These facts have greatly
enlarged our knowledge on nonlinear excitations and given an origin to some important concepts such as nonautonomous soliton [7], and Feschbach resonance, which has been used to control some
nonlinearities of matter waves by manipulating the scattering length either in time or space, and have led to certain proposals of many novel nonlinear phenomena. Dispersion management (DM) for BECs has also been proposed recently and has induced plenty of consequent studies. In nonlinear optics, nonlinear management (NM) and DM are also both used for experiments and theories with temporal or spatial
optical solitons, soliton lasers, ultrafast soliton switches [7]. Furthermore, some recent progresses on inhomogeneous nonlinear media have generated novel concepts such as the optical similariton
[8]. However, the nonautonomous NLS system (1) and/or its similar versions are very difficult to be solved because of the presence of the time-dependent dispersion, nonlinear interaction managements
and external potential. Up to now, a general exact solution to the nonautonomous NLS system (1) has been rarely found although the knowledge of such exact solutions is very valuable for various
purposes. Certainly, some special exact solutions have been obtained by the Lax par method [7], the similarity transformation [8] and so on. In the short note, we try to give a general exact solution
to the nonautonomous NLS system via a selfsimilarity projective approach (SPA), which can convert all exact solutions of selfsimilar well-known models to corresponding solutions of the nonautonomous
NLS system. In the following section, we first briefly describe the projective approach. In Section 3, a general exact solution to the nonautonomous NLS system will be derived via the SPA. A brief
summary and discussion is given in the last section.
In recent decades, many powerful approaches have been devised, such as inverse scattering theory, Bäcklund transformation, Hirota’s bilinear method, Darboux transformation, the hyperbola function
method, the mixed exponential method, homogeneous balance method, and multilinear variable separation approach et al. [9]. Besides these methods, one can also obtain solutions of a nonlinear partial
differential equations (NLPDEs) by the technical of establishing and making full advantage of a direct projective relation between the given NLPDE and other NLPDE and its known solutions sometimes.
For example, using a deformation projective method first presented by Lou and his coworkers [10], some scholars [11-13] obtained many soliton solutions and periodic solutions of nonlinear models
through finding some relations between the exact solutions of the given models and those of the cubic nonlinear Klein-Gordon (NKG) system which has been studied widely in the previous literature
[10]. The basic idea of the algorithm is as follows: For a general nonlinear physical system
where 14], one may derive more travelling wave solutions of system (2) than previous ones.
Recently, the above projective transformation approach is further extended for finding novel localized excitations of a physical model [15-17]. With the help of projective transformation idea and
based on the general reduction theory, the projective algorithm is extended that: For the system (2) we assume its solution in an extended symmetric form
Motivated by the above ideas, one may assume a selfsimilar family model as the projective equation, which has been extensively studied in other literature [18,19]. For instance, when discussing a
nonautonomous Korteweg-de Vries (KdV) system, we may use the classical (1 + 1)- dimensional KdV equation as a projective equation since the autonomous KdV equation has been widely explored. In the
following part of the paper, the (1 + 1)-dimensional nonautonomous NLS system is selected to illustrate the selfsimilarity projective approach, and a general exact solution to the nonautonomous NLS
system is derived, which can convert all exact solutions of the self-similar, well-known standard nonlinear Schrödinger equation to the corresponding exact solutions of the nonautonomous NLS system.
The standard nonlinear Schrödinger (NLS) equation can be used as a self-similar projective equation of the nonautonomous NLS system assumed, the dimensionless autonomous NLS form is
For more detailed information, one may refer to a review in reference [20]. Actually, in (1 + 1) dimensions, it has been proven that when a physical system can be expressed by a partial differential equation, then under some suitable approximations one can always find a nonlinear Schrödinger type equation [21,22]. This is why the (1 + 1)-dimensional nonlinear Schrödinger system can be
successfully used in almost all the physical branches.
In order to build a direct projective relation between the nonautonomous NLS system (1) and the standard NLS Equation (5), we construct an ansatz to the nonautonomous NLS system as follows
After some careful and direct algebra, one obtains the amplitude, self-similarity variables and phase of the complex wave pulse
and with a constraint condition
It is interesting to note that when the net gain coefficient
which is just the completely integrable compatibility condition via the Painlevé analysis [4,7], i.e., a subtle balance condition to keep the nonautonomous NLS system integrable.
From the management viewpoint of the solitons, Equation (18) provides an effective way to manipulate soliton dynamics. When any three parameters among 7]. However, the general selfsimilarity
projective transformations (13)-(16) were not reported in the previous literature. Such transformations are quite systematic in obtaining the exact solutions of the nonautonomous NLS system. For a
given nonautonomous NLS system or its similar versions, we first check if the coefficients satisfy the constraint condition Equation (18). If it is true, then the nonautonomous NLS system can be
reduced to the standard NLS Equation (5). All allowed exact solutions, including canonical solitons, of the standard NLS Equation (5) can be converted into the corresponding exact solutions of the
nonautonomous NLS system. In the sense, the canonical soliton of the standard NLS equation can be naturally viewed as a seed solution of the corresponding localized solutions of Equation (1) under
the compatibility condition Equation (18). For example, when
and the integrable constants
with the constraint condition (18). As a special case, if
Once the nonlinear parameter
Firstly, as a special situation, if
Secondly, for more general cases, if
then we can rederive fundamental canonical solitons, which distinctly indicates the influence of the dispersion, nonlinear managements and net gain to the localized excitation behaviors.
Finally, it is also interesting to mention that the external trap potential
In summary, the direct self-similarity projective approach is successfully applied to the nonautonomous nonlinear Schrödinger system. In terms of the known exact solutions of the self-similarity
projective equation, i.e., the standard nonlinear Schrödinger equation, some significant types of localized excitations with novel properties are correspondingly revealed by selecting appropriate
system parameters. The present analysis can be applied to all exact solutions of the nonautonomous nonlinear Schrödinger system. The self-similarity projective approach provides an effective and a
systematical way to investigate the nonlinear dynamics of the nonautonomous nonlinear Schrödinger system. By the way, as a comparison it is helpful to mention some techniques to find the localized
excitation solutions of the nonautonomous NLS equation in previous literature. The Lax pair analysis is very useful in discussing integrability conditions. And a widely used approach is the
deformation projective method, which introduces some explicit transformation parameters. These parameters are determined by a set of partial differential equations, which in the general case can seldom be solved analytically, as emphasized in reference [23]. Another similarity transformation reducing the nonautonomous nonlinear Schrödinger equation to a stationary NLS one has also been introduced [24].
Alternatively, by the Lie point symmetry group analysis, the nonautonomous nonlinear Schrödinger system or its similar versions can be classified into different classes and each one can be converted
into the corresponding representative equation by some allowed transformations. As a result, some exact solutions of the representative equation can be transformed into the corresponding solutions of
the equations in the same class. However, it was also pointed out in [25] that in most cases it is still difficult to obtain the exact solutions of these representative equations and the
integrability of certain representative equations is not clear. Quite different from the above mentioned techniques, the present work builds a direct connection between the nonautonomous NLS equation
and its autonomous counterpart, which provides a more systematical way to find exact solutions of the nonautonomous NLS equation. The corresponding transformation formulas are explicit and
straightforward. Furthermore, one can naturally derive the integrable constraint condition (18) via the SPA, rather than imposing the integrability conditions obtained via Painlevé analysis as in previous discussions.
From the control viewpoint, the self-similarity projective approach provides an effective and a powerful way to control the soliton dynamics as mentioned above. In addition, the SPA that we use to
solve the nonautonomous nonlinear Schrödinger system will pave the way to new methods for solving high-dimensional partial differential equations.
The authors are indebted to Professors Y. M. Liu and J. F. Zhang, and Doctors C. Q. Dai and T. T. Jia for their fruitful discussions. The work was supported by the National Natural Science Foundation of
China under Grant No.11172181, the Natural Science Foundation of Guangdong Province of China under Grant No. 101512005- 01000008, the Special Foundation of Talent Engineering of Guangdong Province of
China, and the Scientific Research Foundation of Key Discipline of Guangdong Shaoguan University.
REFERENCES
[1] K. E. Strecker, G. B. Partridge, A. G. Truscott and R. G. Hulet, “Formation and Propagation of Matter-Wave Soliton Trains,” Nature, Vol. 417, 2002, pp. 150-153. doi:10.1038/nature747
[2] L. F. Mollenauer, R. H. Stolen and J. P. Gordon, “Experimental Observation of Picosecond Pulse Narrowing and Solitons in Optical Fibers,” Physics Review Letter, Vol. 45, No. 13, 1980, pp. 1095-1098. doi:10.1103/PhysRevLett.45.1095
[3] C. Q. Dai, X. G. Wang and J. F. Zhang, “Nonautonomous Spatiotemporal Localized Structures in the Inhomogeneous Optical Fibers: Interaction and Control,” Annals of Physics, Vol. 326, No. 3, 2011, pp. 645-656. doi:10.1016/j.aop.2010.11.005
[4] D. Zhao, X. G. He and H. G. Luo, “Transformation from the Nonautonomous to Standard NLS Equations,” Physics and Astronomy, Vol. 53, No. 2, 2009, pp. 213-216. doi:10.1140/epjd/e2009-00051-7
[5] I. Towers and B. A. Malomed, “Stable (2+1)-Dimensional Solitons in a Layered Medium with Sign-Alternating Kerr Nonlinearity,” Journal of the Optical Society of America B, Vol. 19, 2002, pp. 537-543. doi:10.1364/JOSAB.19.000537
[6] Y. Gao and S. Y. Lou, “Analytical Solitary Wave Solutions to a (3+1)-Dimensional Gross-Pitaevskii Equation with Variable Coefficients,” Communications in Theoretical Physics, Vol. 52, 2009, pp. 1030-1035.
[7] V. N. Serkin, A. Hasegawa and T. L. Belyaeva, “Nonautonomous Solitons in External Potentials,” Physics Review Letter, Vol. 98, No. 7, 2007, Article ID: 074102. doi:10.1103/PhysRevLett.98.074102
[8] S. A. Ponomarenko and G. P. Agrawal, “Do Solitonlike Self-Similar Waves Exist in Nonlinear Optical Media?” Physics Review Letter, Vol. 97, No. 1, 2006, Article ID: 013901. doi:10.1103/PhysRevLett.97.013901
[9] S. Y. Lou, H. C. Hu and X. Y. Tang, “Interactions among Periodic Waves and Solitary Waves of the (N+1)-Dimensional Sine-Gordon Field,” Physics Review E, Vol. 71, No. 3, 2005, Article ID: 036604. doi:10.1103/PhysRevE.71.036604
[10] S. Y. Lou and G. J. Ni, “The Relations among a Special Type of Solutions in Some (D+1)-Dimensional Nonlinear Equations,” Journal of Mathematical Physics, Vol. 30, No. 7, 1989, pp. 1614-1620. doi:10.1063/1.528294
[11] H. M. Li, “Searching for (3+1)-Dimensional Painlevé Integrable Model and Its Solitary Wave Solution,” Chinese Physics Letters, Vol. 19, No. 6, 2002, pp. 745-747. doi:10.1088/0256-307X/19/6/301
[12] C. L. Zheng, H. P. Zhu and L. Q. Chen, “Exact Solution and Semifolded Structures of Generalized Broer-Kaup System in (2+1)-Dimensions,” Chaos, Solitons & Fractals, Vol. 26, No. 1, 2004, pp. 181-194. doi:10.1016/j.chaos.2004.12.017
[13] Sirendaoreji and S. Jiong, “Auxiliary Equation Method for Solving Nonlinear Partial Differential Equations,” Physics Letters A, Vol. 309, No. 5-6, 2003, pp. 387-396. doi:10.1016/S0375-9601(03)00196-8
[14] E. G. Fan, “An Algebraic Method for Finding a Series of Exact Solutions to Integrable and Nonintegrable Nonlinear Evolution Equations,” Journal of Physics A: Mathematical and General, Vol. 36, No. 25, 2003, p. 7009. doi:10.1088/0305-4470/36/25/308
[15] C. L. Zheng and L. Q. Chen, “Some Novel Evolutional Behaviors of Localized Excitations in the Boiti-Leon-Martina-Pempinelli System,” International Journal of Modern Physics B, Vol. 22, No. 6, 2008, pp. 671-682. doi:10.1142/S0217979208038879
[16] H. Y. Wu, J. X. Fei and C. L. Zheng, “Self-Similar Solutions of Variable-Coefficient Cubic-Quintic Nonlinear Schrödinger Equation with an External Potential,” Communications in Theoretical Physics, Vol. 54, 2010, pp. 55-59.
[17] J. X. Fei and C. L. Zheng, “Chirped Self-Similar Solutions of a Generalized Nonlinear Schrödinger Equation,” Verlag der Zeitschrift für Naturforschung, Vol. 66, 2011, pp. 1-5.
[18] C. Q. Dai, Y. Y. Wang and X. G. Wang, “Ultrashort Self-Similar Solutions of the Cubic-Quintic Nonlinear Schrödinger Equation with Distributed Coefficients in the Inhomogeneous Fiber,” Journal of Physics A: Mathematical and Theoretical, Vol. 44, No. 15, 2011, Article ID: 155203. doi:10.1088/1751-8113/44/15/155203
[19] C. Q. Dai, R. P. Chen and G. Q. Zhou, “Spatial Solitons with the Odd and Even Symmetries in (2+1)-Dimensional Spatially Inhomogeneous Cubic-Quintic Nonlinear Media with Transverse W-Shaped Modulation,” Journal of Physics B: Atomic, Molecular and Optical Physics, Vol. 44, No. 14, 2011, Article ID: 145401. doi:10.1088/0953-4075/44/14/145401
[20] C. Sulem and P. L. Sulem, “The Nonlinear Schrödinger Equation: Self-focusing and Wave Collapse,” Springer-Verlag, New York, 1991.
[21] F. Calogero, A. Degasperis and J. Xiaoda, “Nonlinear Schrödinger-Type Equations from Multiscale Reduction of PDEs. I. Systematic Derivation,” Journal of Mathematical Physics, Vol. 41, No. 9, 2000, p. 6399. doi:10.1063/1.1287644
[22] F. Calogero and A. Degasperis, “Nonlinear Schrödinger-Type Equations from Multiscale Reduction of PDEs. II. Necessary Conditions of Integrability for Real PDEs,” Journal of Mathematical Physics, Vol. 42, No. 6, 2001, pp. 2635-2652. doi:10.1063/1.1366296
[23] J. He and Y. Li, “Designable Integrability of the Variable Coefficient Nonlinear Schrödinger Equations,” Studies in Applied Mathematics, Vol. 126, No. 1, 2011, pp. 1-15. doi:10.1111/j.1467-9590.2010.00495.x
[24] J. B. Beitia, V. M. Perez-Garcia and V. Brazhnyib, “Solitary Waves in Coupled Nonlinear Schrödinger Equations with Spatially Inhomogeneous Nonlinearities,” Communications in Nonlinear Science and Numerical Simulation, Vol. 16, No. 1, 2011, Article ID: 158172. doi:10.1016/j.cnsns.2010.02.024
[25] L. Gagnon and P. Winternitz, “Symmetry Classes of Variable Coefficient Nonlinear Schrödinger Equations,” Journal of Physics A: Mathematical and General, Vol. 26, No. 23, 1993, p. 7061. doi:10.1088/0305-4470/26/23/043
On a maximal function on compact Lie groups
Suppose that G is a compact Lie group with finite centre. For each positive number s we consider the Ad(G)-invariant probability measure μ_s carried on the conjugacy class of exp(sH_p) in G. This one-parameter family of measures is used to define a maximal function Mf, for each continuous function f on G. Our theorem states that there is an index p_0 in (1, 2), depending on G, such that the maximal operator M is bounded on L^p(G) when p is greater than p_0. When the rank of G is greater than one, this provides an example of a controllable maximal operator coming from averages over a family of submanifolds, each of codimension greater than one.
Unusual Article Uncovers the Deceptive Practices of What Is a Radical in Math
Negative numbers don’t have any square roots in the actual number system. There’s no longer a very clear division between what is foreign and what’s domestic. On the other hand, the second condition
isn’t met since we’ve got a radical in the denominator. This sort of radical is often referred to as the square root. Simplify the subsequent radical expression.
Fractions are done just like normal number Coefficients. Math works just like anything else, if you’d like to become good at it, then you want to practice it. When you learn Algebra, it is not quite
something similar.
This phenomenon appears to plague thousands and thousands of people around the world. If you would like to elevate everybody’s attention to detail, teach them of the visual arts. Otherwise you’re
passing up the fertilizer where the whole area of singularitarian SF, as well as posthuman thought, is rooted.
Whatever They Told You About What Is a Radical in Math Is Dead Wrong…And Here’s Why
However, rhinos are an endangered species and there’s a fair chance that by 2031 they’ll be extinct. There’s no longer a very clear division between what is foreign and what’s domestic. On the other
hand, the second condition isn’t met since we’ve got a radical in the denominator. This sort of radical is often referred to as the square root. Simplify the
subsequent radical expression.
The Do’s and Don’ts of What Is a Radical in Math
Most calculators cannot take care of this dilemma inside this form. In that instance, the range is simply that one and only value. Massive graphs that exist in communication or computation share a
lot of similarities with sparse random graphs, but in addition, there are differences.
Throughout this site, we link to different outside sources. There’s a business that supplies road data for satnav systems. Here’s a hyperlink to the handout difficulties and their solutions.
Choosing Good What Is a Radical in Math
The educational facet of discrete mathematics is just as important and deserves extended coverage alone. The expression finite mathematics can be applied to regions of the area of discrete
mathematics that addresses finite sets, particularly those areas applicable to business. Utilize our service to work out a trigonometry tutor.
So let’s square either side of the equation. It’s the mathematics of computing. Writing the radical this way may arrive in handy when working with an equation with a huge number of exponents. When
expressing a good or quotient, it’s important to state the excluded values. We want to lessen our fractions when we’re likely to have our final answer.
So let’s square either side of the equation. Now, among the things you’re likely to observe whenever we do these radical equations is we wish to isolate a minimum of one of the radicals. In this
we’ll see how to do addition operation on the exponents. In the other direction, it is exceedingly desirable in order to construct random-like graphs. Then regroup the aspects to make fractions
equivalent to one.
Getting the Best What Is a Radical in Math
Throughout this site, we link to different outside sources. Numerical analysis offers an important example. Statistical software is going to be used.
The Upside to What Is a Radical in Math
It’s inevitable, and I am not anticipating it! You may tell by viewing the leftover terms. I will live there too.
The point isn’t to pick on Mx. Fine. When you get an excellent ending, it ought to be protected. So, it’s essential to give your eyes rest for some time by taking breaks after specific time intervals.
Here’s What I Know About What Is a Radical in Math
The last area of the course addresses the tuning of musical scales and describing symmetries that come up in musical composition. While grading can be dependent on the degree of mastery of the
technique the authentic education comes in exploring the deeper meaning of producing something. These things perhaps ensure it is harder to pick out particular examples.
My intention isn’t to confuse you. On the other hand, the crucial concept is there. Our very brains revolt at the notion of randomness.
Byju’s classes unique method of solving the maths problem will force you to learn the method by which the equation was created, which is way superior than memorizing and applying the formula. Now,
among the things you’re likely to observe whenever we do these radical equations is we wish to isolate a minimum of one of the radicals. Trigonometry is about triangles. When expressing a good or
quotient, it’s important to state the excluded values. Then regroup the aspects to make fractions equivalent to one.
The Book Deliver its very best knowledge so you can achieve what u want. The website is organized by chapter. Ok, I think you’re prepared to commence this tutorial. It will be helpful to have a
superior eBook reader to be able to truly have a fantastic reading experience and superior superior eBook display. An excellent eBook reader ought to be installed.
WTAMU and Kim Seward are not accountable for how a student does on any test for virtually any reason including not having the ability to access the website as a result of any technology issues. In
truth, it appears to be a requirement of the contemporary public intellectual to have a very long list of credentials and an equally long collection of statements which are obviously erroneous. As
soon as we reach the goal I will get rid of all advertising from the website. | {"url":"https://iskygroupinc.com/unusual-article-uncovers-the-deceptive-practices-of-what-is-a-radical-in-math/","timestamp":"2024-11-12T03:27:18Z","content_type":"text/html","content_length":"30275","record_id":"<urn:uuid:580bccec-0a1b-49da-8fdb-5a506d8572d5>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00133.warc.gz"} |
Assessment items for CAS usage
Several examples where Sage has been integrated into WeBWorK are available at http://math.mc.edu/webwork2/sage_demos/ . Log in as a guest. I have not looked at these in a while but most should
still work.
There are others but the main idea on many of them is to have Sage provide a nice picture (as WW can't) and to perhaps create the correct answer to check against based upon the WW input. I have a
few which actually incorporate a live Sage cell inside the problem with which the student can actually complete their calculations or do whatever.
Many of these questions are available in github and are in the OPL. Directly from github: https://github.com/openwebwork/webwork-open-problem-library/tree/master/OpenProblemLibrary/MC/Sage . You
can follow the directory search in the library to find them as well so long as your system is up-to-date. You will need the sage.pl macro which is also available at https://github.com/openwebwork/pg/
tree/master/macros . | {"url":"https://webwork.maa.org/moodle/mod/forum/discuss.php?d=3260&parent=8296","timestamp":"2024-11-03T16:51:30Z","content_type":"text/html","content_length":"67599","record_id":"<urn:uuid:7e42a7ad-4fb4-4f96-a394-d98cd89d2e96>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00520.warc.gz"} |
Alternatives to Gini coefficient
While the Gini coefficient is widely used to measure income inequality, there are other methods available. Researchers explore diverse approaches like the Palma ratio, which compares the income share
of the top 10% to the bottom 40%. Another alternative is the 80/20 ratio, focusing on how much of the national income the top 20% earn. The Atkinson index is also meaningful, emphasizing income
distribution among different sections of society. These alternatives offer valuable insights into inequality beyond the limitations of the Gini coefficient. Each method has its strengths and
weaknesses, providing a more comprehensive understanding of economic disparities.
(Gini Coefficient and Lorenz Curve)
The Gini coefficient, widely used to measure income inequality, has its limitations. Luckily, several alternatives offer a more comprehensive understanding of inequality. The Palma ratio, a simple
method, compares the income share of the top 10% to the bottom 40%. Similarly, the 20/20 ratio compares the top 20% to the bottom 20%, emphasizing the middle earners. The Theil index breaks down
inequality into within-group and between-group components, providing a nuanced perspective. Atkinson index integrates societal preferences to weigh income distribution, offering a socially conscious
approach. Additionally, the Hoover index focuses on the extent of income distribution rather than just its equality, giving a multifaceted view. Despite these alternatives, each metric has its
strengths and weaknesses, highlighting the complexity of measuring inequality. Therefore, a combination of different indices may offer a more holistic understanding of wealth distribution. By
utilizing these diverse tools, policymakers and researchers can gain deeper insights into income inequality, facilitating the development of targeted interventions that address societal disparities
Atkinson Index
When delving into measures of income inequality, the Atkinson Index emerges as a compelling alternative to the widely-known Gini coefficient. Picture this: while the Gini paints a broad stroke by
summing up disparities in one value, the Atkinson hones in on distribution nuances that often go overlooked.
The beauty of the Atkinson lies in its sensitivity to changes at different points along the income spectrum. Imagine you’re watching a delicate dance where each move matters – that’s what this index
does. It assigns varying weights to different parts of income distribution, capturing not just how unequal things are but also where those inequalities hit hardest.
Let me paint you a clearer picture: think of society as an intricate tapestry woven with threads of varying thicknesses and colors. The Gini might give you a bird’s eye view, noting some areas are
darker or lighter than others. But switch over to the Atkinson lens, and suddenly you’re zooming in, noticing subtle shifts and gradients invisible before.
Emotions can run high when discussing income inequality – it’s about people’s lives after all! The Atkinson steps onto this stage like a skilled actor ready to embody nuance and depth. It says, “Yes,
there is inequality here” but then adds whispers like “Look closer… see how deeply it cuts for some.”
And let’s talk numbers for a moment without losing our emotional thread – because these figures matter! Unlike the Gini which sums up everything into one value hanging above us like Damocles’ sword –
ominous yet detached – the Atkinson breaks down complexities into digestible bites. Each slice tells us something crucial about who bears the brunt of economic disparities.
Imagine being an economist trying to understand more than just surface-level inequalities; imagine wanting insights that could inform policies addressing real struggles faced by real people every
day. That’s where the magic lies with our friend, Mr.Atkinson!
So next time someone mentions measuring income disparity don’t settle for black-and-white snapshots—dive deeper with an ally like our nuanced hero,the Atkinson Index!
Hoover Index
When delving into alternatives to the Gini coefficient, one crucial measure that stands out is the Hoover Index. Picture this: you’re navigating through a maze of income inequality indicators, and
suddenly, you stumble upon an intriguing concept—the Hoover Index. It’s like finding a hidden gem among statistical measures.
The Hoover Index offers a unique perspective on income distribution by focusing on relative differences at various points along the income spectrum. Imagine looking at society through a different
lens where not just extremes matter but every step within the economic ladder holds significance.
Unlike its counterparts, the Hoover Index doesn’t stop at merely comparing rich versus poor; instead, it investigates how disparities change across all income levels. It’s as if you are zooming in
with precision to uncover nuances that other metrics might overlook—a closer examination revealing patterns and intricacies of wealth distribution dynamics.
As you immerse yourself in understanding this index, emotions may stir within you—perhaps curiosity piqued by its intricate nature or awe inspired by its ability to unveil subtleties in societal
wealth gaps. The allure lies in deciphering these numerical representations into real-world implications for individuals across different income brackets.
What sets the Hoover Index apart is its granularity—it dissects income discrepancies with surgical precision, painting a vivid picture of economic disparity beyond broad strokes and averages. This
detailed approach can evoke empathy as you visualize how each percentile grapples with financial challenges or advantages compared to their neighbors on this socioeconomic scale.
Engulfed in data yet captivated by its implications, exploring the Hoover Index feels akin to unfolding layers of complexity within our social fabric. With each calculation and analysis, there
emerges a deeper appreciation for the multifaceted nature of wealth distribution—a tapestry woven from myriad individual circumstances and choices.
In conclusion, embracing the Hoover Index means venturing into uncharted territory within inequality measurement—an odyssey marked by revelations and reflections on how we perceive economic fairness.
So next time statistics beckon your attention, consider delving into this captivating metric that sheds light on inequalities lurking beneath surface-level assessments.
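For readers who want something concrete to hold onto, the standard formulation is refreshingly simple: the Hoover index is the share of total income that would have to be shifted from above-average earners to below-average earners to make everyone equal. Here is a rough Python sketch (ours, not from any statistics agency), assuming incomes is a plain list of household incomes:

import numpy as np

def hoover_index(incomes):
    y = np.asarray(incomes, dtype=float)
    income_shares = y / y.sum()                    # each household's slice of total income
    equal_shares = np.full(len(y), 1.0 / len(y))   # what perfect equality would look like
    return 0.5 * np.abs(income_shares - equal_shares).sum()

A value of 0 means income is spread perfectly evenly; the closer the value gets to 1, the more income would have to change hands to even things out.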
Lorenz Curve
When diving into the realm of measuring income inequality, there’s a visual tool called the Lorenz Curve that grabs attention. It’s like an artist’s brushstroke across economic landscapes, painting a
picture of distribution disparities. Picture this: a graph where on one axis you have cumulative percentage of households from lowest to highest income, and on the other axis, you see cumulative
percentage of total income those households receive.
The curve starts at the bottom left corner – representing the poorest segment with close to zero percent of total income. As it ascends, outlining various sections along its journey towards the top
right corner – which signifies 100% of households holding 100% wealth – every twist and turn tells a tale of societal riches and gaps.
Imagine peering closely at this line bending ever so gracefully or starkly depending on how equally (or unequally) money flows within society. Sometimes it may hug tight to equality dreams with each
percentile mirroring their fair share; other times it might veer boldly away indicating immense discrepancies in who holds the purse strings.
In moments when economists analyze these curves under academic lenses or policymakers study them for social reforms, emotions stir beneath the surface. There’s intrigue as they trace each dip and
rise seeking answers to questions about poverty traps or wealth hoarding among elite echelons. Frustration simmers when glaring inequalities jump out waving red flags calling for urgent actions
toward just distributions.
It’s not merely lines intersecting grids but profound narratives etched onto charts revealing lives impacted by economic fluxes – stories untold yet screaming silently through mathematical
constructs. Behind every plotted point lies families struggling to make ends meet while others bask in abundance without batting an eye.
The Lorenz Curve isn’t just about numbers; it carries weighty significance reflecting real-world complexities where some thrive luxuriously while others drown in destitution-filled waters. Its gentle
curves speak volumes echoing struggles and triumphs interwoven intricately within societal fabrics awaiting decipherment by compassionate hearts willing to bridge divides through policies rooted in
empathy rather than cold calculations alone.
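For readers who like to see the idea in code, here is a minimal sketch (ours, not from any official source) of how you might compute those cumulative shares yourself, assuming incomes is a simple list of household incomes:

import numpy as np

def lorenz_points(incomes):
    incomes = np.sort(np.asarray(incomes, dtype=float))                   # poorest to richest
    cum_income = np.cumsum(incomes) / incomes.sum()                       # cumulative share of total income
    cum_population = np.arange(1, len(incomes) + 1) / len(incomes)        # cumulative share of households
    return cum_population, cum_income

Plotting cum_income against cum_population traces the Lorenz curve; the closer it hugs the 45-degree diagonal, the more equal the distribution, and the Gini coefficient is simply twice the area between the two.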
(Understanding the Gini Coefficient)
Palma Ratio
When it comes to assessing income inequality, the Palma Ratio emerges as a compelling alternative to the traditional Gini coefficient. Picture this: instead of crunching numbers and getting lost in
complex formulas, you can grasp the essence of economic disparities through a simple yet powerful concept – the Palma Ratio.
Imagine a society where wealth is stacked up like layers on a cake, with the top 10% holding significantly more than everyone else combined. That’s essentially what the Palma Ratio unveils – by
comparing the share of national income held by the top 10% against that of the bottom 40%, we get an immediate sense of how skewed or balanced wealth distribution truly is.
Think about it emotionally for a moment. As you ponder these statistics, there’s an undeniable tug at your heartstrings. You start visualizing families struggling to make ends meet while others swim
in opulence without batting an eye. This emotional response underscores why understanding income inequality isn’t just about cold hard data; it’s about empathy and justice too.
Now, let’s delve deeper into how this ratio operates beyond mere numbers. The beauty of Palma lies in its intuitive nature – anyone can glance at it and instantly gauge whether a society tilts
towards equity or privilege. It cuts through jargon and reveals stark truths with elegant simplicity.
Consider two countries: Country A has a Palma Ratio of 2, indicating extreme inequality favoring the wealthy elite, while Country B boasts a snug ratio near 1 denoting relative parity among its
citizens. These figures aren’t just abstract symbols; they narrate profound stories of societal structures and values shaping people’s lives every day.
As you contemplate these scenarios further, you may find yourself grappling with conflicting emotions – anger at unjust systems perpetuating disparity but also hope sparked by nations striving for
fairness amid challenges. This rollercoaster ride mirrors our collective journey towards building inclusive societies where every individual thrives regardless of their background or circumstances.
In conclusion, embracing the Palma Ratio opens doors to meaningful dialogues on wealth distribution that resonate with both our minds and hearts. It transcends mathematical models to become a potent
tool for igniting social change grounded in compassion and solidarity among all members of our global community.
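If you want to try the calculation yourself, it really is as simple as the description suggests. Here is a rough Python sketch (ours; the function name is made up, and we assume incomes is a list of household incomes):

import numpy as np

def palma_ratio(incomes):
    incomes = np.sort(np.asarray(incomes, dtype=float))
    n = len(incomes)
    top_10 = incomes[int(np.ceil(0.9 * n)):].sum()        # income held by the richest 10%
    bottom_40 = incomes[:int(np.floor(0.4 * n))].sum()    # income held by the poorest 40%
    return top_10 / bottom_40

The larger the value, the more of the national pie sits with the top tenth relative to the bottom forty percent.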
Theil Index
When diving into alternatives to the popular Gini coefficient, one impactful metric that surfaces is the Theil index. Unlike Gini which leans towards overall equality or inequality within a
population, Theil digs deeper by examining how individual units influence the distribution dynamics.
Picture this – you have a group of friends sharing a box of assorted chocolates. Some friends grab handfuls while others take just one piece at a time. The Gini coefficient would give you an overview
of how fair or lopsided the chocolate-sharing scenario is across all your pals. But if you want to pinpoint who’s having more than their fair share compared to others in the group, that’s where the
Theil index comes in handy.
The emotional resonance embedded in understanding such disparities can be quite profound. Imagine being that friend who always finds themselves with merely crumbs left at dessert time while others
revel in lavish treats unabashedly; it stirs up feelings of frustration and envy.
Let’s break down what makes this statistical indicator so intriguing! By splitting the population into segments and comparing each segment with every other, we get insights on not just societal
inequalities but also on how various groups interact within larger frameworks.
Think about it as dissecting a complex puzzle where each piece reveals its unique contribution to the bigger picture. This nuanced approach illuminates nuances often overlooked by broader strokes
like Gini calculations.
While society strives for fairness, these metrics show us areas where cracks may form – hinting at potential rifts that could widen over time if left unaddressed. It becomes more personal when we
realize our own standing amidst these shifting tides of wealth and opportunity allocation.
So next time you’re pondering economic distributions or even divvying up responsibilities among friends, remember — there’s more than meets the eye beyond simple percentages and averages!
The beauty lies in uncovering those hidden stories beneath raw numbers; tales woven from hopes, dreams, struggles, and triumphs intertwined within every data point analyzed through lenses like Theil
Index offers us newfound empathy towards understanding ourselves and our world better.
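If you would like to experiment with it, one common formulation, the Theil T index, can be sketched in a few lines of Python (the function name and the assumption of strictly positive incomes are ours):

import numpy as np

def theil_index(incomes):
    y = np.asarray(incomes, dtype=float)
    y = y[y > 0]                          # the logarithm requires strictly positive incomes
    share = y / y.mean()                  # each income relative to the average
    return float(np.mean(share * np.log(share)))

A result of 0 signals perfect equality; the value grows as income concentrates in fewer hands, and the index can be decomposed into within-group and between-group pieces, which is exactly the nuance described above.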
Cosine Similarity
In this blog post I’m going to answer the question I know has been burning in your mind: What is Cosine Similarity and how does it affect your life?
Yes, I know you’ve all been hearing about Cosine Similarity everywhere! You can hardly go 15 minutes before someone brings it up in casual conversation acting like you should know what it is. You
feel stupid that you don’t know what they are talking about. You are too ashamed to admit your ignorance.
Well, fear not dear reader! After reading this article you’ll be able to end those feelings of foolishness and be able to join in on the conversation!
Wikipedia defines Cosine Similarity as:
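In plain terms, the cosine similarity of two non-zero vectors A and B is the cosine of the angle between them, which works out to their dot product divided by the product of their lengths:

$$\text{similarity} = \cos(\theta) = \frac{\mathbf{A}\cdot\mathbf{B}}{\lVert\mathbf{A}\rVert\,\lVert\mathbf{B}\rVert} = \frac{\sum_{i=1}^{n} A_i B_i}{\sqrt{\sum_{i=1}^{n} A_i^2}\;\sqrt{\sum_{i=1}^{n} B_i^2}}$$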
There you go! Now you’ve been educated! Do you feel better? No?
Okay, let’s use an example to make it easier to make sense of.
Let’s say we want to measure how similar two sentences are. Let’s use the following sentences:
“This blog post sucks”
“This blog post is awesome”
There are various ways we might represent this sentence mathematically so that a Machine Learning model can make sense of it; but let’s stick with something pretty simple. We’ll imagine the following
numbered list:
1. This
2. Blog
3. Post
4. Is
5. Sucks
6. Awesome
So, let’s imagine a ‘vector’ (which is really just an ordered list) where we put a 1 if the word exists in the sentence and a zero if it doesn’t. So:
This blog post sucks = [1, 1, 1, 0, 1, 0]
This blog post is awesome = [1, 1, 1, 1, 0, 1]
Here it should be obvious we’re only comparing words in the sentence and not the order between the words. This should be obvious if we realize this:
Post this blog sucks = [1, 1, 1, 0, 1, 0] just like “This blog post sucks”
If we were doing this for real, perhaps we might want to take order into consideration since clearly the order of words matters to the meaning of a sentence. When we drop the order of the words like
this, we call this a “Bag of Words”. For this simple example, we’re going to ignore order of words because otherwise the math becomes too difficult to keep it simple.
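(If you’d like to see that step in code, here is one way you could build these bag-of-words vectors. This little helper is not part of the original walkthrough; the vocabulary list and function name are just for illustration.)

vocabulary = ["this", "blog", "post", "is", "sucks", "awesome"]

def bag_of_words(sentence, vocabulary):
    # Mark each vocabulary word with 1 if it appears in the sentence, 0 otherwise
    words = set(sentence.lower().split())
    return [1 if word in words else 0 for word in vocabulary]

print(bag_of_words("This blog post sucks", vocabulary))       # [1, 1, 1, 0, 1, 0]
print(bag_of_words("This blog post is awesome", vocabulary))  # [1, 1, 1, 1, 0, 1]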
So, now we have two ‘vectors’:
[1, 1, 1, 0, 1, 0] and [1, 1, 1, 1, 0, 1]
How ‘similar’ are these two vectors? You could probably come up with some sort of measure if you stopped and thought about it, and there is no one ‘True’ way to measure the similarity and difference
of these two vectors.
But here is a clever idea: Let’s treat this vector as if it was a geometrical vector and then measure the angle of difference between them.
To see why this works, let’s imagine two vectors on a 2D plane (since it is hard to imagine six-dimensional space like our bag of words example – though a computer doesn’t care how many dimensions it
is calculating in).
Let’s imagine two unit vectors (basically two line segments of length 1) on a grid. The first is at 45 degrees and the second is at 75 degrees:
How might we measure the ‘similarity’ between these two lines?
One obvious idea is to measure how far they are rotated from each other. That is to say:
75-45 = 30
Now take the Cosine of that to place it as a value between 0 and 1:
Cos(30) = 0.8660254
Or in other words these lines are 86.6% the same. (1)
Just to prove the point, let’s try this again with 45 degrees and 100 degrees or:
Cos(100-45) = 0.57357644
So those are not as similar.
Okay, but how can we use this same idea with comparing sentences?
Well to the computer a vector can be treated as a line in, in this case, 6-dimensional space. So, we just calculate the cosine between those two vectors, and it will effectively be a rating of how
similar the two vectors are.
Let’s now break down that intimidating Wikipedia formula and work this out for our vectors so that we can compare sentences. Here is some python code that does the job:

import numpy as np

def cosine_similarity(x, y):
    # x is a single vector; y is a matrix whose rows are the vectors we compare x against
    assert x.shape[0] == y.shape[1], "Dimension mismatch: x vector size should match the number of columns in y"
    dot_products = np.dot(y, x)                  # one dot product per row of y
    x_magnitude = np.linalg.norm(x)              # length of x
    y_magnitudes = np.linalg.norm(y, axis=1)     # length of each row of y
    cosine_similarities = dot_products / (x_magnitude * y_magnitudes)
    return cosine_similarities
Let’s walk through this one part at a time. This from the Wikipedia formula:
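$$\mathbf{A}\cdot\mathbf{B} = \sum_{i=1}^{n} A_i B_i \qquad \text{(the dot product in the numerator)}$$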
Is equivalent to this from the python code:
dot_products = np.dot(y, x)
We want to take the dot product of the two vectors. You can look up exactly how it works, but it’s a function built-in to NumPy, so don’t worry too much about what it is. It’s a fairly standard matrix operation. Note that this dot operation will only work if the length of x matches the number of columns of y. Thus, our assertion checking for that. Then this from the Wikipedia formula:
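$$\lVert\mathbf{A}\rVert\,\lVert\mathbf{B}\rVert = \sqrt{\sum_{i=1}^{n} A_i^2}\;\sqrt{\sum_{i=1}^{n} B_i^2} \qquad \text{(the product of the two magnitudes in the denominator)}$$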
It is saying take the magnitudes of the vectors and multiply them together. (Do you recall how to find a magnitude from high school geometry? Magnitude is exactly the same as finding the length of a line except you are doing it in any number of dimensions.) Again, this is built-in to NumPy so:
x_magnitude = np.linalg.norm(x)
y_magnitudes = np.linalg.norm(y, axis=1)
Finally, we take that dot product and divide it by the multiplied magnitudes, which is this in the python code:

cosine_similarities = dot_products / (x_magnitude * y_magnitudes)
If you really wanted to not use the built-in functions in NumPy and calculate it out here is what the revised python code would look like:
import math

def cosine_similarity(x, y):
    assert len(x) == len(y), "Dimension mismatch: Vectors must have the same length"
    dot_products = sum(xi * yi for xi, yi in zip(x, y))
    x_magnitude = math.sqrt(sum(xi ** 2 for xi in x))
    y_magnitude = math.sqrt(sum(yi ** 2 for yi in y))
    cosine_similarity = dot_products / (x_magnitude * y_magnitude) if x_magnitude * y_magnitude != 0 else 0
    return cosine_similarity
And there you go- we now have a function by which to calculate the cosine between two vectors. Let’s actually run it on our simple example. Recall our two vectors were:
[1, 1, 1, 0, 1, 0] and [1, 1, 1, 1, 0, 1]
So take the dot product:
1·1 + 1·1 + 1·1 + 0·1 + 1·0 + 0·1 = 3
And take the magnitudes:
Sqrt(1^2 + 1^2 + 1^2 + 0^2 + 1^2 + 0^2) = Sqrt(4) = 2
Sqrt(1^2 + 1^2 + 1^2 + 1^2 + 0^2 + 1^2) = Sqrt(5) = 2.24
The result:
3 / (2 * 2.24) = 3 / (4.48) = 0.6696
And our python code calculates 0.6708; the small difference from the hand calculation comes from rounding Sqrt(5) to 2.24, but it is basically the same.
So, these two sentences (as measured via a bag of words) are 67% the same.
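As a quick check (my own snippet, not from the original post), calling the pure-python version of cosine_similarity above on these two lists returns the same number:

print(cosine_similarity([1, 1, 1, 0, 1, 0], [1, 1, 1, 1, 0, 1]))   # 0.6708203932499369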
The beauty of cosine similarity is that you can use it on anything that you can represent as vectors! This will turn out to be really useful in our next post where we tackle using cosine similarity
with Large Language Models (LLMs).
(1) Note that I’m actually sort of lying here. To be considered 86.6% the same I’m making an additional assumption that we’re specifically talking about Cosine similarity for text. Cosines actually range not from 0 to 1 but from -1 to 1. So, a Cosine of 0.866 can’t really be considered to mean 86.6% ‘the same.’ However, text frequency can’t be negative. So, we’d expect the results to range from 0 to 1. So, within that context we can think of results as a percentage of similarity.
Also be sure to check out my next article addressing how to use Cosine Similarity for Semantic Searches.
To stay in the loop, make sure to follow us on LinkedIn, and also be sure to have a look at our other articles here on the Mindfire Blog. | {"url":"https://www.mindfiretechnology.com/blog/archive/cosine-similarity/","timestamp":"2024-11-09T17:20:13Z","content_type":"text/html","content_length":"60205","record_id":"<urn:uuid:e86e4a42-69e6-4dae-a928-6fb1b0ac8779>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00817.warc.gz"} |
I am an assistant professor in the Computer and Information Science department at the University of Pennsylvania. I do research in machine learning, with a focus on scalable probabilistic machine learning methods and Bayesian machine learning. Recently, I've also been interested in how machine learning techniques can be applied to large-scale, high-dimensional optimization problems in the
natural sciences. In 2022, I received an NSF CAREER award funding work on these kinds of optimization problems.
Before I joined Penn, I was a research scientist at Uber AI Labs. Before this, I was a postdoctoral associate in Operations Research and Information Engineering at Cornell University. I received my
Ph.D. in Computer Science from Cornell University, where I was advised by Kilian Weinberger. | {"url":"https://jacobrgardner.github.io/","timestamp":"2024-11-11T18:16:34Z","content_type":"text/html","content_length":"16464","record_id":"<urn:uuid:91d8518e-3f02-478e-9c9a-539ca89fd3d1>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00132.warc.gz"} |
Optimization in Formula Electric
I spent the first semester in Berkeley Formula Electric Autonomous working on an RRT*-based approach to solving the path planning problem. I soon discovered there is a much faster, more efficient, more accurate, and more customizable way through optimization. This is my explanation of how optimization works, and how it is used in the team.
There are 4 main parts of the autonomous pipeline:
• Perception
• SLAM
• Path planning
• Model Predictive Control (MPC)
Optimization can be used to solve SLAM (graph SLAM), path planning, and MPC.
I will be mostly talking about the path planning solution.
Here are explanations for the math behind convex optimization: the EECS127 course reader and this tutorial.
Here is Reid’s explanation.
The essence of this approach is phrasing the path planning problem in a specific standard way so that we can use a solver to solve the problem.
There are several libraries in python that specialize in solving optimization problems in standard form. So if we can get the problem into such a form, we can plug it into a solver and get an answer.
Let’s look at one of the forms: minimize f(x) subject to a set of constraints on x.
We choose either minimize or maximize (let’s say minimize), and choose the quantity we want to minimize (let’s say f(x) = x^2 in this case).
Then we specify any constraints we want on the variables, such as x >= 0 and x <= 9.
We now have an optimization problem!
Now we can input it into a solver like cvxpy (what I got started with; it only does convex optimization) or casadi (what the team uses; it can solve nonlinear programs) to solve the problem.
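As an illustration, a minimal cvxpy sketch of this toy problem might look like the following (my own example of the idea, not code from the team's repository):

import cvxpy as cp

x = cp.Variable()
objective = cp.Minimize(cp.square(x))        # f(x) = x^2
constraints = [x >= 0, x <= 9]               # 0 <= x <= 9
problem = cp.Problem(objective, constraints)
problem.solve()

print(problem.value)   # optimal objective value, 0.0
print(x.value)         # optimal x, 0.0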
In our example, our solution would be 0 because the smallest x^2 such that 0<= x <= 9 is 0. | {"url":"https://markogata.com/posts/2023/formulaelectricdocs/formulaelectric2/","timestamp":"2024-11-08T07:58:00Z","content_type":"text/html","content_length":"10029","record_id":"<urn:uuid:5f101e80-7451-4e89-8786-430286d8418f>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00666.warc.gz"} |
On March 7, 2003, MSN Gaming Zone hosted the first HALSCRIB vs. the Rest Of The World match. Hal Mueller took one side of the table, using a prototype of HALSCRIB 5 to play the cards dealt him by
the Zone. A team of human players, which included myself, Lois Fosdal, Paul Gregson and many others, took the other side and played by consultation, discussing our discarding and pegging choices on
the chat lines. It was great fun, and an opportunity to play a couple of instructive games while seeing the world's most powerful computer cribbage program in action.
The following game is taken from that session. Grab your cribbage board and a deck of cards, and follow along. To get the full instructional value, I recommend going through the game twice, once
from the perspective of the humans and once from the perspective of HALSCRIB. On each deal, try to decide what you'd have done with the same cards. Then read on to see what the actual players did,
and why.
Deal 1
PONE (0):
^9 [9 (18-2) ]^J (29-1)[ ]^[9][ ]^6^ (15-2) [8 ]^7 (30-5)[ ]^[7][ (7-1)]
crib: 3-Q cut: 6
7-8-9-9 (2-K)
DEALER (0*):
The Rest Of The World (ROTW) deals first, and has an easy discard of 2-K from 2-7-8-9-9-K. When HALSCRIB leads a 9, pairing it is clearly right. The bot would need the fourth 9 to retaliate, and at
any rate there is no good alternative. When HALSCRIB gets a go with a J, the humans have a difficult lead to make. The 9 is the percentage play mathematically, with only three losers (since the bot
is obviously out of 9s). But it leaves us trapped if the bot does play a 6, which led some of us to fear the sort of outcome that actually materialized in the game. The alternative is to lead the
8, which is more likely to give up two immediate points but less likely to get into deeper trouble afterwards. It was a close call, with a slight consensus favoring the 9. Many of us, myself
included, expected that with a 6 cut showing, the bot would have led a 6 from 6-9.
But HALSCRIB surprised us by showing up with 6-7-9-J after all. The 9 is, in fact, the standard lead from this hand. Whenever you have 6-7-x and the fourth card is a 7, 8 or 9, it's usually right
to lead the fourth card, hoping to catch dealer with 5-5-x-x:
^9^ [Q ]^J^ (29-1)[ ][][Q][ ]^6^ [5 ]^7^ (28-4)[ ][][5][ (5-1)]
In this case the bot's stealth 6 let it peg eight points in all, compared to our three. Chalk one up for the bot.
HALSCRIB's keep of 6-7-9-J from 3-6-7-9-J-Q returns both the highest average hand and the highest expected average. Some players might be tempted to keep 3-6-7-9 instead, on the theory that low
cards are "better" than high cards. But this hand actually fetches ¼ point less. Remember, a J is worth roughly ¼ point on the possibility of His Nobs alone, and adding in the potential for a run
on a 10 cut, this more than offsets the two extra points that 3-6-7-9 scores on a 3 or 6 cut.
Another try is 3-6-9-J, safer in the crib and pegging (and therefore best when defense is indicated), but retaining too little offensive potential to appeal to the bot in this position.
Keep   Toss   Avg hand   Opponent's crib: static   delta   dynamic   Expected avg   Pegging net, Schempp (pone/dealer)   Pegging net, Mueller (pone/dealer)
3-6-7-9 J-Q 4.17 5.46 +0.01 5.47 -1.30 -1.3 (1.9/3.3) -1.0 (1.4/2.4)
3-6-9-J 7-Q 3.89 4.32 +0.04 4.36 -0.47 -1.1 (2.1/3.2) -0.8 (1.3/2.1)
6-7-9-J 3-Q 4.41 4.59 +0.08 4.67 -0.26 -1.6 (1.9/3.5) -0.8 (1.5/2.3)
6-9-J-Q 3-7 3.97 5.01 +0.16 5.17 -1.20 -2.0 (2.2/4.2) -1.0 (1.6/2.5)
Deal 2
The humans receive A-2-4-5-7-8. At +15 to the bot's -4, defense is called for, so we toss the relatively safe A-7. Other reasonable plays would be to toss 4-7 or 4-8. No serious consideration was
given to tossing A-4, which would be the optimal play at most scores.
Keep   Toss   Avg hand   Opponent's crib: static   delta   dynamic   Expected avg   Pegging net, Schempp (pone/dealer)   Pegging net, Mueller (pone/dealer)
A-2-5-7 4-8 5.17 4.94 -0.17 4.77 0.40 -1.0 (2.8/3.8) -0.6 (1.7/2.2)
A-2-5-8 4-7 4.74 4.88 -0.25 4.63 0.11 -0.1 (3.5/3.6) -0.3 (1.9/2.2)
2-4-5-8 A-7 5.13 4.89 -0.21 4.68 0.45 -0.6 (2.8/3.3) -0.7 (1.7/2.4)
2-5-7-8 A-4 6.96 5.72 +0.02 5.74 1.22 -2.0 (2.0/4.0) -0.7 (1.7/2.4)
DEALER (14*):
10-10-K-K (4-7)
crib: 4-7 cut: 10
[4][ ]^10^ [2 ]^10^ [5 (31-2) ][]^K [8][ ]^K (28-1)
2-4-5-8 (A-7)
PONE (23):
The 4 is the best defensive lead. If it's paired, we can get the count over 15 with our 8. The 4 also forms a magic eleven with the 2 and 5, allowing us to peg a 31-2 if dealer's first two cards
are ten-cards. Remember that as dealer you can thwart opponent's ten-cards with either a two-card magic eleven like 5-6 or 2-9 or — if you're looking for more offense — a three-card magic eleven
that includes a couple of low cards (A-A-9 or 2-2-7). As pone, however, only a three-card magic eleven is effective against ten-cards, since you're the one who has to make the first play.
HALSCRIB starts with 4-7-10-10-K-K and tosses the 4-7, a choice it would only make playing on. 10-10-K-K is a horrible defensive pegging hand, but HALSCRIB reckons it's only a little worse on
offense than 4-7-10-10 or 4-7-K-K. Since 10-10-K-K returns the most combined points between hand and crib, and since the bot is mainly concerned with offense at this score, it gets the nod. Note
how a pair gains relatively little added value when you send it to your crib (the exception being a pair of 5s).
The bot leads a 10, saving its run-proof Ks for later. On our 2 reply it continues with the second 10, saving its K-K for last in case we started with four low cards or with a stray K, which the
bot could triple on the second play series.
Keep   Toss   Avg hand   Own crib: static   delta   dynamic   Expected avg   Pegging net, Schempp (pone/dealer)   Pegging net, Mueller (pone/dealer)
4-7-10-10 K-K 3.61 4.58 +0.07 4.65 8.26 +1.5 (1.8/3.3) +0.9 (1.6/2.5)
4-7-K-K 10-10 3.43 4.76 +0.16 4.92 8.35 +1.4 (1.9/3.2) +1.0 (1.4/2.4)
10-10-K-K 4-7 5.04 3.72 +0.17 3.89 8.93 -0.3 (2.7/2.4) -0.7 (2.6/2.0)
Deal 3
PONE (29):
4-6-Q-Q (7-10)
^Q [9 ]^Q [2][ (31-2) ][]^6 [Q][ ]^4^ [4 (24-3)]
crib: 7-10 cut: 4
2-4-9-Q (8-8)
DEALER (29*):
As pone HALSCRIB gets dealt 4-6-7-10-Q-Q, and must choose between 4-6-Q-Q, 6-7-Q-Q or the super-aggressive 4-10-Q-Q. Going into the deal the bot is -5 to our +11, a position that normally calls for
playing on. But HALSCRIB's initial six cards aren't terribly promising, so it's loathe to give up on defense entirely. This rules out 4-10-Q-Q, which averages ¼ point more in the hand but pegs
worse than the alternatives (according to Mueller's statistics anyway) and gives up two points more in the crib. The choice is thus narrowed down to 4-6-Q-Q or 6-7-Q-Q. This is pretty much a
tossup, but since the bot has to pick something, it picks 4-6-Q-Q.
HALSCRIB leads a Q, hoping we'll be trapped with four ten-cards and feel compelled to pair it (the 4 would have been a more defensive lead). We play the 9 from our 2-9 magic eleven. Dropping the 2
instead is an interesting ploy when you have a 4 to cover a 3 reply. But that's more of an offensive play, and at this score with a decent (if sub-average) hand and good prospects in the crib, a
little more discretion is called for. Naturally, pairing the Q was not given serious consideration.
The bot makes an interesting play on the second play series. Instead of leading the 4, which is matched by the starter, it leads the 6, wary that our last two cards might be 6-6 or 7-8.
Keep   Toss   Avg hand   Opponent's crib: static   delta   dynamic   Expected avg   Pegging net, Schempp (pone/dealer)   Pegging net, Mueller (pone/dealer)
4-6-Q-Q 7-10 3.74 4.31 +0.13 4.44 -0.70 -2.3 (2.0/4.3) -1.1 (1.2/2.2)
4-10-Q-Q 6-7 4.00 6.42 +0.19 6.61 -2.61 -1.7 (1.6/3.3) -1.7 (1.0/2.7)
6-7-Q-Q 4-10 3.83 4.53 +0.11 4.64 -0.81 -2.1 (1.9/4.0) -1.1 (1.2/2.4)
Deal 4
At the start of this deal we humans were feeling pretty good about our prospects, up +12 to the bot's -11. But cribbage is a fickle game, and this would be the deal where HALSCRIB, aided by a
perfect cut, starts its comeback.
Both side have discarding decisions to make. HALSCRIB starts with 2-2-6-6-8-Q. There are three reasonable alternatives, from which the bot selects 2-2-6-6 for its offensive pegging potential. It
surprises me a little to see the bot commit to offense with such a weak looking hand (6-6-8-Q would have been its choice for balanced play). Lest you harbor any suspicions, keep in mind that the
Zone, not HALSCRIB, is dealing out the cards, so no one can accuse the bot of knowing that a 7 cut is coming up!
Keep   Toss   Avg hand   Own crib: static   delta   dynamic   Expected avg   Pegging net, Schempp (pone/dealer)   Pegging net, Mueller (pone/dealer)
2-2-6-6 8-Q 6.26 3.19 +0.04 3.23 9.49 +2.2 (1.5/3.7) +1.1 (1.8/2.9)
2-2-8-Q 6-6 3.83 5.76 -0.00 5.76 9.59 +1.6 (1.9/3.5) +1.4 (1.4/2.7)
6-6-8-Q 2-2 4.17 5.72 +0.04 5.76 9.93 +1.2 (2.1/3.3) +1.2 (1.3/2.5)
The ROTW starts with 2-4-9-Q-K-K and chooses to keep 2-4-K-K, which saves ½ a point in the crib over 2-Q-K-K and 4-Q-K-K at a cost of only about ⅛ point in the hand. A potential double-run combo like Q-K-K is less powerful
than it looks because it is one-sided and converts only on a J cut. Combos like this are worth about ½ point in added value (whereas a two-sided potential double-run combo like J-Q-Q would
contribute about 1 point in added value). By keeping both the 2 and the 4 in our hand, we improve on any low card cut, and have the option of leading a low card without having to worry about the
bot making the count 11 with its reply.
Keep   Toss   Avg hand   Opponent's crib: static   delta   dynamic   Expected avg   Pegging net, Schempp (pone/dealer)   Pegging net, Mueller (pone/dealer)
2-4-9-K Q-K 3.83 4.49 +0.03 4.52 -0.69 -1.8 (1.7/3.6) -0.9 (1.2/2.1)
2-4-K-K 9-Q 3.87 4.09 +0.16 4.27 -0.40 -1.9 (1.6/3.5) -1.5 (1.2/2.7)
2-Q-K-K 4-9 4.00 4.68 +0.11 4.79 -0.79 -1.9 (1.3/3.1) -1.7 (1.0/2.7)
4-Q-K-K 2-9 4.00 4.70 +0.10 4.80 -0.80 -1.7 (1.5/3.2) -1.7 (1.0/2.7)
DEALER (33*):
2-2-6-6 (8-Q)
crib: 8-Q cut: 7
[4][ ]^2^ [K ]^2 [2][ (20-2) ]^6 (26-1)[ ][K ]^6 (16-1)
2-4-K-K (9-Q)
PONE (46):
If HALSCRIB 4.90 was in our shoes, it would keep 2-4-9-K due to a perceived advantage in pegging. I don't know why Mueller's numbers rate 2-4-9-K so much higher than 2-4-K-K, particularly when
Schempp's pegging numbers show no real difference between them. Perhaps I'm missing something, or perhaps this a glitch in the version 4.90 pegging averages. In general, I think that the current
generation of bots does a more accurate job of estimating the pegging potential of dealer hands, since pone's pegging performance is heavily dependent on making the right opening lead, a skill that
the bots are still learning. One reason I cite two different sources of pegging averages is to show whether there is a convergence of opinion. If so, then the bots are probably pretty close to the
truth. If not, you will have to use your judgment interpreting the results. Either way, what matters is how each bot rates the hands relative to one another, not how high or low they rate in
absolute numbers, since the latter quantity is more subject to statistical and systematic distortions. Mueller's numbers, for example, are consistently lower than Schempp's due to differences in
how the averages are calculated.
The ROTW leads the 4. If it's paired we can then get the count over 15 with a K. HALSCRIB replies with a 2, preferring the risk of a 9-3 to that of a 15-5. We refuse to pair it, but do pair the
bot's second 2, since the chance of HALSCRIB having the fourth 2 is statistically rather small, and was deemed worth the risk when we were starting with such a weak hand ourselves.
Deal 5
PONE (54):
2-2-5-10 (A-K)
^10 [K ]^2 [9][ (31-2) ][]^2^ [Q][ ]^5^ [J][ (27-1)]
crib: A-K cut: 9
9-J-Q-K (2-3)
DEALER (50*):
Now we're +6, and HALSCRIB is -6. The bot keeps 2-2-5-10, eschewing the aggressive alternative of tossing 5-K to keep A-2-2-10 (with a shot at fourteen points). That might have been appropriate
playing on, but the fact that HALSCRIB's positional deficit is the same as the humans' positional surplus indicates that its front-end and back-end winning chances are roughly the same. It doesn't
make sense to give up on the possibility of winning through defense just to grab a 4-in-46 chance of cutting a 3.
After cutting a 9, the bot stumbles a bit in the pegging. Since its hand is worth only four points, HALSCRIB should focus on defense, hoping to erode our positional surplus by limiting our pegging
and perhaps sticking us with a lousy crib (the bot's A-K toss doesn't combine favorably with the starter). It should therefore lead one of the 2s. Instead it leads a 10, presumably trying to entice
a pairable 5. On our K reply, HALSCRIB plays a 2 instead of dumping its 5, even though the 5 ought to be safe (since we failed to score a 15-2) and would give the bot an opportunity to either run
its 2s for a pair and a go or maybe even triple a stray 2 for a big 31-8. The bot is hoping to catch us holding all ten-cards:
^10^ [K ]^2^ ^2^ (24-3)[ ][][K][ ]^5^ (15-2) [Q (25-1) ][][J][ (10-1)]
Deal 6
The humans are +5, HALSCRIB is -12. From A-2-8-9-10-K we toss A-K, which is a little safer in the crib than 2-K while returning the same average hand.
Keep   Toss   Avg hand   Opponent's crib: static   delta   dynamic   Expected avg   Pegging net, Schempp (pone/dealer)   Pegging net, Mueller (pone/dealer)
A-8-9-10 2-K 5.33 4.45 +0.10 4.55 0.78 -1.7 (2.0/3.7) -0.9 (1.2/2.1)
2-8-9-10 A-K 5.33 4.30 +0.07 4.37 0.96 -1.1 (1.9/3.1) -0.9 (1.3/2.2)
DEALER (58*):
5-J-Q-Q (A-4)
crib: A-4 cut: 10
[8][ ]^Q^ [2 ]^J[ ][9 ]^Q [10 (29-1) ][]^5 (5-1)
2-8-9-10 (A-K)
PONE (65):
There was a bit of discussion about whether to lead the 2, 8 or 10. A 2 lead is least likely to give up an immediate score, but it gets us trapped on an 8, 9 or 10 reply. On an 8 lead, we'd be
willing to take a 24-3 after a 7 reply, whereas leading the 9 or 10 we'd have no opportunity to retaliate if dealer scores. So we lead the 8. We play low on HALSCRIB's Q reply, hoping to trap a 10
or J on the second play series. Dumping the 2 also keeps it from getting trapped into a run or triple after a go, and if it's paired we can retaliate with our 9.
HALSCRIB has its own decision to make on card one holding 5-x-x-x. When I hold this hand, I'll usually play the 5 when pone leads an 8 or 9. This keeps it from getting trapped into a 30-4 or 31-5
if pone also has 3-4. If my ten-cards include a pair or touching cards, I'll often get lucky on the second play series: pone gets an early go with a high card, then has one or two more cards
squeezed out leaving me with a three-on-one or three-on-none. However, dumping the 5 early does give up two points to a 2 or 5, so playing desperation defense I'll just drop a ten-card and hope for
the best.
In this position the bot is understandably reluctant to give up any score, even a two-pointer, so it forgoes dumping the 5 early. The question then is which ten-card to play. Clearly a Q would be
the safest choice. Pone is ordinarily more likely to have retained a J than a Q, especially given the 8 lead (a J can form a four-card run with the 8, but a Q cannot). And since HALSCRIB has two
Qs, it's even less likely that we'll have the third one lurking in our hand. So at this score, it's a pretty easy choice. Even playing on, however, it's probably still right to play a Q now, saving
the J-Q combo for last. After pone's go, you can lead the Q and hopefully trap a 10 or K into a 30-4, or a J into a 30-3. This seems a tad more promising than saving the Q-Q and hoping to end up
either running the pair for a 20-3 (if pone has A-A-6-8 and plays the 6 second, for example) or, on a lucky day, tripling a stray Q for a 30-7. In general when you're playing on as dealer and have
three ten-cards, try to keep 10-J or J-Q together even if it means breaking up a pair. J-Q is especially powerful if pone has ten-cards, while 10-J is attractive if pone opened with a mid-card and
may have a 9 to trap. If neither of these combos is an option, try to save a pair for last, particularly a pair of Js. Of course, if your position requires you to specifically peg four points,
always keep your touching cards together. If you need to peg six or seven points, always keep a pair together and hope to triple your opponent.
Deal 7
PONE (86):
2-Q-Q-Q (6-10)
^2[K ]^Q[3 4 2 (31-5) ][]^Q Q (20-3)
crib: 6-10 cut: 10
DEALER (74*):
HALSCRIB's big hand and crib propel it into marginal position, right at par (0). We are technically -22, but effectively +4 if we can hold down the bot's scoring.
Obviously we need defense, and fortunately we're dealt a powerful enough scoring hand that we can afford to play off in the pegging without worry. There's no way we're pairing the bot's 2 lead!
Instead we break with our "out" card, the K, and end up pegging a 31-5. The bot runs two of its Qs at the end for three precious points. When the dust settles, the bot's position has slipped
slightly to -1.
Deal 8
At a critical score we are dealt 2-3-4-5-10-Q. Since there is no combination of four cards available that could possibly get us near the game hole this deal, our discarding strategy will emphasize
defense. An ideal toss in this position would be something like 10-K, 9-Q or 6-10. We lack those, but we can throw 2-Q or 4-Q, either of which gives up less in the crib than the average toss. We prefer 2-Q, since
it leaves us with 3-4-5-10, worth five points, instead of 2-3-5-10, worth only four. The fact that we have a 3 in our hand makes the 2-Q toss a little safer than usual. In fact here it's just as
safe as 4-Q even though 4-Q normally gives up .1 point less.
Keep   Toss   Avg hand   Opponent's crib: static   delta   dynamic   Expected avg   Pegging net, Schempp (pone/dealer)   Pegging net, Mueller (pone/dealer)
2-3-4-5 10-Q 8.57 4.61 -0.13 4.48 4.09 -0.6 (3.9/4.4) -0.5 (2.0/2.5)
2-3-5-10 4-Q 7.13 4.46 -0.11 4.35 2.78 -1.2 (2.5/3.6) -0.6 (2.0/2.6)
3-4-5-10 2-Q 7.96 4.56 -0.18 4.38 3.58 -1.3 (1.9/3.2) -0.7 (1.6/2.3)
DEALER (95*):
crib: 2-10 cut: 8
[4]^6[10]^9[ 3]^5[5 (13-2) ]^7
PONE (98):
Note that in a more offensive position, 2-3-4-5 would be our clear choice. It gets the highest average hand — despite being worth only four points going in — and is an unusually strong offensive
pegging hand, racking up five points against dealer's x-x-x-x. Unfortunately it also gives up a lot of pegging points, since there is no way to disengage if dealer has matching cards. As a result
there was little interest expressed in holding it here.
After the 8 cut we lead the 4 following the defensive principle of playing from the middle of a three-card run. This spaces out our remaining cards as much as possible. Playing on, we'd do the
opposite, leading the 10 to keep our touching cards together (and to entice a 5 that we could pair).
HALSCRIB plops down a 6, provoking a lively discussion over whether to take the 15-5. A few gung-ho humans wanted to go for it, but the majority felt that playing off with the 10 was the percentage
play. To understand why, consider what would happen in a best-case offensive scenario. We score five points with our 5. Dealer plays a 7, and we then score another five points with our 3, making
the count 25. In a perfect world this will also be a go, so we end up pegging eleven points to accompany our seven-point hand:
[4 ]^6[ ][5][ (15-5) ]^7^ (22-4) [3 (25-6) ][]^8[ ][10][ ]^J^ (28-1)
Unfortunately this sequence only gets us to 116*, so assuming the bot hasn't counted out first, we'll still need to peg five points as dealer next hand to win. That's probably a three-to-one shot
at best, if everything has gone according to plan beforehand. These aren't very encouraging odds. The bot is starting this deal -1, and even though the bot's 6 looks pretty ominous in combination
with the 8 cut, we surely have a better chance of winning by playing off. The fact that HALSCRIB is offering us a 15-5 suggests that it is either trapped or else assumes that an exchange of runs
would be to its strategic advantage. We decide to play our 10, and put the pressure on the bot to get out cleanly in three counts.
In the event, HALSCRIB is holding a 5-6-7-9 flush, and is entirely justified in trying to egg us on in the pegging. The play of the 5 on our 3 in the second play series is odd though. The bot seems
to have been fooled by our failure to play our 5 for 15-5, and is assuming we don't have one — an apparent glitch in this version of the program. It was the bot's only blunder of the game.
Deal 9
The humans did their job by holding the bot to two pegs plus two more in the crib. But fate is smiling on the bot, and it follows up its thirteen-point flush by holding a pat twelve points in
3-3-4-5 (improved to twenty by the cut). If HALSCRIB had started this deal within six points of home, it likely would have kept 3-3-5-7 instead for safer pegging.
PONE (112):
3-3-4-5 (6-7)
^3[8 ]^4 (15-2) [4 (19-2) ]^5 [6 (30-4) ][]^3 [2 (5-1)]^
crib: 6-7 cut: 5
2-4-6-8 (10-J)
DEALER (107*):
The humans get dealt 2-4-6-8-10-J and toss 10-J. On HALSCRIB's 3 lead we considered whether to play the 6 or the 8. A measurement I call objective risk often helps in endgame pegging situations. It
is calculated by multiplying the number of losers by the amount of the score given up. Our 6 has three losers (the other three 6s) which each score four points, so its objective risk is 3 · 4 = 12.
The 8 has six losers (4s and 8s) that each score two points, for an objective risk of 6 · 2 = 12. Now when pone is seven, eight or nine points away from home, I like to make a special adjustment
and consider a four-point peg, such as a 15-4, to be no worse than three points in my objective risk calculations (I've found that this better handles the particularities of certain hand types at
these scores). This gives the 6 an adjusted objective risk of 9, which is ostensibly safer than the 8.
But objective risk shouldn't be considered in a vacuum. You must also gauge your opponent's chance of counting out based on what you know of his or her hand. Given the 5 cut and the 3 lead, how
would the bot's prospects look if its second card was one of the three losers we're concerned with? Well, if it's a 6 or 8, it's not at all clear that the bot has enough points to go out, so giving
up two or four extra pegs could well be fatal. But if HALSCRIB's second card is a 4, then it's likely — though not certain — that the game is already over. To fall short, the bot would need to have
started with a disconnected hand like 3-4-6-9 or 3-4-10-Q (we discounted the possibility that the bot was holding hands like A-3-4-x from which it was unlikely to have led a 3). Ultimately we
concluded that a 4 wasn't really much of a loser after all, and that the 8 was thus a slightly safer play. HALSCRIB 4.90, when placed in our position, agrees that the 8 is a trifle better, though
our winning chances either way are only about 35%.
In the event it doesn't matter. HALSCRIB prevails 121-114*.
This was a well-played game, with both sides showing a good understanding of board strategy and how that affects pegging and discarding tactics. The ROTW's only questionable move was leading the 9
on the second play series of Deal One. HALSCRIB seemed a bit confused about its pegging strategy in Deal Five, and committed a clear misstep in Deal Eight, but otherwise played error-free cribbage.
Here are a few points worth taking away from this game:
• When playing off, tend to lead from the middle of a three-card run. When playing on, try to keep the run intact until you can score with it
• Three-card combinations like 3-4-4 or J-Q-Q that improve to a double run on either of two starter ranks add about 1 point to the average value of a hand. Combinations like 2-2-4 or Q-Q-K that
improve to a double run on only one starter rank add about ½ point to the average value of a hand
• Lead your highest mid-card from 6-7-7-x, 6-7-8-x or 6-7-9-x to keep your 5 trap alive
• If you hold 5-x-x-x as dealer and pone leads an 8 or 9, usually play your 5 immediately to keep it from getting trapped into a run later
• If you're playing on as dealer and must choose which of three ten-cards to play first, generally try to keep a 10-J or J-Q combo together for later. If this isn't possible, try to save a pair
for later
• When playing defensively in the endgame, assess your opponent's opening lead in relation to the cut and the number of points she needs to win. For each of your candidate plays, consider the
objective risk as well as how likely it is that your opponent can benefit from scoring on it
Average scoring (excluding last deal)
                      HALSCRIB   ROTW
As pone:   Pegging    3.75       1.75
           Hand       4.00       5.25
As dealer: Pegging    1.75       4.00
           Hand       12.25      9.50
           Crib       6.25       6.25
Total                 28.00      26.75
Future HALSCRIB vs. the Rest Of The World matches will be announced on the Cribbage Forum main page. If you'd like to participate as a member of the World team, just show up at the scheduled time
and place and join the fun!
- April 2003
Pegging averages may have arithmetic discrepancies due to rounding. Mueller pegging averages were obtained from HALSCRIB 4.90. Click here for a guide to cribbage notation and symbols. | {"url":"http://cribbageforum.com/HC-ROTW01.htm","timestamp":"2024-11-08T04:26:34Z","content_type":"text/html","content_length":"123652","record_id":"<urn:uuid:4e776d5a-7ffe-4a90-badd-c026d112f18a>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00061.warc.gz"} |
Flow Through Ducts
Friction is present in all real flow passages. There are many practical flow situations where the effect of wall friction is small compared to the effect produced by other driving potentials such as area change, heat transfer and mass addition. In such situations, an analysis based on the assumption of frictionless flow does not deviate much from the real situation. Nevertheless, there are many practical cases where the effect of friction cannot be neglected; in such cases the assumption of frictionless flow leads to unrealistic results. In high-speed flow through long pipelines in power plants, gas turbines and air compressors, the effect of friction on the working fluid is greater than the effect of heat transfer, so it cannot be neglected. An adiabatic flow with friction through a constant-area duct is called Fanno flow; when shown on an h-s diagram, the curves obtained are Fanno lines. Friction induces irreversibility, resulting in an entropy increase. The flow is adiabatic since no heat transfer is assumed.
Fanno Flow
A steady one-dimensional flow in a constant-area duct with friction, in the absence of work and heat transfer, is known as “Fanno flow”.
| {"url":"https://www.brainkart.com/article/Flow-Through-Ducts_5087/","timestamp":"2024-11-12T19:31:43Z","content_type":"text/html","content_length":"29527","record_id":"<urn:uuid:0a646c05-e788-4115-a0cb-af2ca3571bf7>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00174.warc.gz"} |
3872 -- K-equivalence
Time Limit: 2000MS Memory Limit: 65536K
Total Submissions: 250 Accepted: 39
Consider a set K of positive integers.
Let p and q be two non-zero decimal digits. Call them K-equivalent if the following condition applies:
For every n
For example, when K is the set of integers divisible by 3, the digits 1, 4, and 7 are K-equivalent. Indeed, replacing a 1 with a 4 in the decimal notation of a number never changes its divisibility
by 3.
It can be seen that K-equivalence is an equivalence relation (it is reflexive, symmetric and transitive).
You are given a finite set K in form of a union of disjoint finite intervals of positive integers.
Your task is to find the equivalence classes of digits 1 to 9.
The first line contains n, the number of intervals composing the set K (1 <= n <= 10 000).
Each of the next n lines contains two positive integers ai and bi that describe the interval [ai, bi] (i. e. the set of positive integers between ai and bi, inclusive), where 1 <= ai <= bi <= 10^18.
Also, for i
Represent each equivalence class as a concatenation of its elements, in ascending order.
Output all the equivalence classes of digits 1 to 9, one at a line, sorted lexicographically.
Sample Input
Sample Input #1:
Sample Input #2:
Sample Output
Sample Output #1:
Sample Output #2:
Northeastern Europe 2009 | {"url":"http://poj.org/problem?id=3872","timestamp":"2024-11-05T06:39:51Z","content_type":"text/html","content_length":"6728","record_id":"<urn:uuid:e9d99bb1-98c1-4833-b92c-a12a7b226342>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00583.warc.gz"} |
Unscramble EARTHEN
How Many Words are in EARTHEN Unscramble?
By unscrambling letters earthen, our Word Unscrambler aka Scrabble Word Finder easily found 116 playable words in virtually every word scramble game!
Letter / Tile Values for EARTHEN
Below are the values for each of the letters/tiles in Scrabble. The letters in earthen combine for a total of 10 points (not including bonus squares)
• E [1]
• A [1]
• R [1]
• T [1]
• H [4]
• E [1]
• N [1]
What do the Letters earthen Unscrambled Mean?
The unscrambled words with the most letters from EARTHEN word or letters are below along with the definitions.
• earthen (a.) - Made of earth; made of burnt or baked clay, or other like substances; as, an earthen vessel or pipe.
• hearten (v. t.) - To encourage; to animate; to incite or stimulate the courage of; to embolden. | {"url":"https://www.scrabblewordfind.com/unscramble-earthen","timestamp":"2024-11-07T01:24:49Z","content_type":"text/html","content_length":"55996","record_id":"<urn:uuid:623f3164-5300-4814-8084-74da40738f9e>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00382.warc.gz"} |
March 17th, 2011
by Alex Gurvich
Homemade Absolute Return Strategy
Things are just not getting better: there is the disaster in Japan, and the Middle East turmoil has still not come to any conclusion. So where would a prudent investor invest to safeguard his/her nest egg? There is fear in the market, and safety is the issue.
I have spoken several times about Absolute Return strategies on these pages. This is the “holy grail” of a prudent investor, where you achieve a superior risk adjusted return. Not the
best returns relative to a raging bull market, but steady 1% per month returns no matter what the market is going through: ups, downs or sideways.
So we want to achieve these results; how do we go about it? You can find some ETFs out there that do this, but there are few choices, so I have a suggestion for you. In fact, what I suggest is to create your own absolute return strategy, something that is straightforward to understand and simple to execute.
What I am suggesting is to take a long position in a good quality fund (in this case a mutual fund, because we don’t have enough active ETFs with long enough history) and short an equivalent
dollar amount with an index fund (in this case we can actually use an ETF, because they do have enough history).
Let me illustrate with the following example. I simply went to Yahoo and picked funds from their top performing mutual funds in the Large Blend, Large Growth and Large Value categories. These funds
are NVORX, RBCGX and YAFFX respectively. I took their five year monthly returns and subtracted S&P 500 returns. I actually took the S&P 500 ETF which is SPY. The SPY is a more accurate representation
of the S&P 500 index return, because you can actually own it, you can short it and you have to pay out dividends on holding a security short.
The results are very interesting.
Here are the results for fund total return, annualized return, volatility, Sharpe Ratio and correlation:
Fund Return                 NOVRX    RBCGX    YAFFX    SPY
5 Year Total Return         71.67%   87.71%   69.83%   13.96%
5 Year Annualized Return    11.41%   13.42%   11.17%   2.65%
Volatility                  18.99%   14.47%   19.84%   17.78%
Modified Sharpe Ratio       0.601    0.928    0.563    0.149
Correlation to SPY          0.840    0.655    0.884    1.000
As a reminder, the Sharpe Ratio measures the risk-adjusted return, i.e. the return you earn relative to the risk you take. In the table above I use a “modified” Sharpe Ratio, because in the formal calculation you would need to subtract the risk-free rate from the fund return; to simplify this exercise I simply divided annualized return by volatility.
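To make the calculation concrete, here is a small sketch of how one might compute the modified Sharpe Ratio and the fund-minus-SPY series (my own illustration using synthetic random returns, not the actual fund data):

import numpy as np

rng = np.random.default_rng(0)
fund = rng.normal(0.010, 0.050, 60)   # synthetic stand-in for 5 years of monthly fund returns
spy = rng.normal(0.005, 0.045, 60)    # synthetic stand-in for SPY monthly returns

def modified_sharpe(monthly_returns):
    # annualized return divided by annualized volatility; risk-free rate ignored, as in the text
    annualized_return = (1 + monthly_returns).prod() ** (12 / len(monthly_returns)) - 1
    annualized_vol = monthly_returns.std() * np.sqrt(12)
    return annualized_return / annualized_vol

absolute_return = fund - spy          # long the fund, short an equal dollar amount of SPY
print(modified_sharpe(fund), modified_sharpe(absolute_return))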
Now here are the results of the homemade absolute return strategy, where in my hypothetical portfolio I have a long position in the fund and I have a short position in the SPY:
Fund Return less SPY        NOVRX    RBCGX    YAFFX
5 Year Total Return         48.73%   53.32%   48.43%
5 Year Annualized Return    8.26%    8.92%    8.22%
Volatility                  10.46%   13.74%   9.26%
Modified Sharpe Ratio       0.790    0.649    0.888
Clearly in these hypothetical Absolute Return funds the returns go down, as you would expect, because you are taking away the SPY contribution (i.e. subtracting SPY return from the fund return). But
what is highly important is that volatility decreased significantly for two of the three funds (NOVRX and YAFFX). In those two hypothetical Absolute Return funds, the volatility went down on the
average by about fifty percent, while the returns went down on the average by about a third. This is the result we were looking for — the volatility decreased faster than returns.
One possible explanation for why the RBCGX hypothetical Absolute Return fund did not improve its Sharpe Ratio is that this fund is poorly correlated to the S&P 500 index, which we used as a hedge, so the comparison here is not apples to apples.
So here you go, start investing like a hedge fund with fairly straightforward and simple analysis. Now you can still get good returns, with much lower volatility and the result is that you can sleep
better at night.
| {"url":"http://rockledgeadvisors.com/alex/homemade-absolute-return-strategy/","timestamp":"2024-11-04T17:00:25Z","content_type":"text/html","content_length":"57824","record_id":"<urn:uuid:fab59c77-d21a-479a-9db8-1a21f470afc5>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00463.warc.gz"} |
Physics of Josephson junctions - Critical current, qubit, array and graphene
A Josephson Junction (JJ) is a specialized electronic construct, composed of a pair of superconducting materials divided by a delicate insulating barrier. This unique design is underscored by its
inherent property known as the Josephson effect. In essence, even a minuscule voltage application can induce an oscillatory electric current across the junction, remarkably without any resistive
losses. This phenomenon gives rise to a synchronized quantum state within the junction, commonly recognized as the Josephson supercurrent. The intriguing attributes of the Josephson effect have found
applications in diverse domains. For instance, it has paved the way for the creation of ultra-sensitive magnetic field detectors. Furthermore, it has been instrumental in the conceptualization and
realization of Superconducting Quantum Interference Devices (SQUIDs), which have proven invaluable in advanced sectors like medical imaging and material sciences. On the forefront of technological
evolution, Josephson junctions have also become a pivotal element in the architecture of superconducting quantum computers. These revolutionary machines harbor the promise of vastly surpassing
traditional computers in performing specific computational tasks.
The wondrous behavior of a Josephson junction is deeply rooted in the realm of quantum mechanics. The core players in this phenomenon are Cooper pairs—paired electrons that exist in superconductors
at temperatures close to absolute zero. When these pairs encounter the insulating barrier of the junction, they undergo a process called quantum tunneling. Instead of being blocked by the barrier,
they seamlessly “pass through” it, a counterintuitive action that defies classical physics. This movement of Cooper pairs, without the scattering and energy losses typically associated with electron
movement in conventional conductors, is what gives rise to the supercurrent.
Beyond the immediate realm of fundamental physics and materials science, the implications of the Josephson junction’s properties are vast. One significant application is in voltage standards. By
exploiting the relationship between the frequency of the oscillating current and the applied voltage in a Josephson junction, precise voltage standards can be established, aiding in metrology.
Additionally, because of their sensitivity to magnetic fields, these junctions are pivotal in the design of ultrasensitive magnetometers. With the burgeoning interest in quantum computing, Josephson
junctions are also being eyed as potential building blocks for future quantum circuits, holding promise for the next revolution in computational power.
In 1962 the British scientist Brian Josephson came up with a theory about the tunneling of electrons across a thin insulating layer sandwiched between superconductors (such a structure is now referred to as a Josephson junction). The theory was confirmed experimentally fairly quickly, and Josephson himself was recognized with the Nobel Prize in Physics for it in 1973. Since that time it has been a major factor in the development of science. The Josephson junction is an important technological structure that is used, for instance, in superconducting quantum interference devices (SQUIDs), the most sensitive devices for monitoring magnetic field induction, which, depending on their design, employ either one or two Josephson junctions.
In 2006, Hans Mooji and Yuli Nazarov from Delft University in the Netherlands published a paper presenting a theoretical study of the quantum tunneling of magnetic flux through a superconducting layer sandwiched between two different materials. The phenomenon, known as coherent quantum phase slip, was claimed by Mooji and Nazarov to be comparable in magnitude to the Josephson effect. However, for more than six years after the study was published, nobody was able to prove experimentally the existence of this phenomenon in superconductors.
Physics of Josephson junctions
Superconducting electronic devices are built on Josephson junctions, the mathematical description of which we will now present. The equations describing the tunneling of Cooper pairs through the potential barrier of a junction formed by a thin insulating layer give the changes of the wavefunction Ψα (α = 1 or 2) on the two covers of the junction, according to the formula:
where the subscripts α = 1 when β = 2 and α = 2 when β = 1 refer to the left and right covers of the junction shown in Figure 4, while μ is the chemical potential. The parameter c determines the
mutual coupling of the wave functions in the two covers.
Let’s write the wave function in composite form:
Where nα is, respectively, the concentration of current carriers (and thus Cooper pairs) in both parts of the junction, while Φα is the phase of the wave function on the left and right covers of the junction.
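The equations referred to above are rendered as images on the source page and are not reproduced in this text. As a stand-in, and assuming they follow the standard textbook treatment consistent with the symbols defined here, the coupled equations and the composite form of the wave function read

\[
i\hbar \frac{\partial \Psi_\alpha}{\partial t} = \mu_\alpha \Psi_\alpha + c\, \Psi_\beta ,
\qquad
\Psi_\alpha = \sqrt{n_\alpha}\, e^{i\Phi_\alpha} ,
\]

presumably Eqs. (1) and (2) in the original numbering.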
Let’s write the solution of equation 1 in the form:
Where ΔΦ = Φ 2 – Φ 1 is the time-dependent phase difference on the two connector covers:
ħ is the reduced Planck constant. Since the change in the concentration of current carriers over time implies a flow of charge, the expression for the Josephson current flow I ultimately takes the form:
Where I0 is the maximum Josephson current of the junction, while the change in chemical potentials associated with the voltage V applied to the junction is determined by the relation:
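The original equations are again images; assuming the standard forms consistent with the surrounding text, the phase evolution, the current, and the chemical-potential relation are

\[
\hbar\,\frac{d(\Delta\Phi)}{dt} = \mu_1 - \mu_2 ,
\qquad
I = I_0 \sin(\Delta\Phi) ,
\qquad
\mu_1 - \mu_2 = 2eV ,
\]

presumably Eqs. (4)-(6) in the original numbering.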
It follows from Eq. 4 that with no applied voltage a direct current flows through the junction, while at a non-zero voltage an alternating current flows across the junction (e is the electron charge). There is also an alternating-current Josephson effect that occurs when, in addition to the DC voltage, a microwave field is applied to the junction covers; the phase relation of Eq. 4 then takes the form:
Where V is the DC voltage, while υ is the perturbing microwave field. The solution of equation (7) describing the phase change of the wave function when passing through the potential barrier takes
the form:
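As a stand-in for the missing images, and again assuming the standard textbook forms, the driven phase relation and its solution read

\[
\frac{d(\Delta\Phi)}{dt} = \frac{2e}{\hbar}\left(V + \upsilon\cos\Omega t\right),
\qquad
\Delta\Phi(t) = \Delta\Phi_0 + \frac{2eV}{\hbar}\,t + \frac{2e\upsilon}{\hbar\Omega}\,\sin\Omega t ,
\]

presumably Eqs. (7) and (8) in the original numbering, with Ω the angular frequency of the microwave field.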
The current flowing through the tunnel junction is described in terms of a series:
Where J n are nth-order Bessel functions, while θ and φ1 are fixed parameters.
The most important conclusion from equation (9) is the occurrence of superconducting DC at voltage:
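The missing expression here is, in the standard treatment (my assumption about the original image), the Shapiro-step voltage

\[
V_n = n\,\frac{\hbar\Omega}{2e} = n\,\frac{h f}{2e},
\]

presumably Eq. (10) in the original numbering, where f is the frequency of the applied microwave field.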
The current steps associated with different values of the index n are known as Shapiro steps. We can conclude that when a constant voltage V is applied to the superconducting Josephson junction, the junction emits electromagnetic radiation; for n = 1 this corresponds to 77.03 millivolts. This phenomenon is utilized in a variety of measuring instruments, and the volt standard is derived from it. Let's now apply relation (5) to a system consisting of two parallel Josephson junctions, such as those found in superconducting quantum interferometers (SQUIDs); the maximum Josephson current then depends on the applied magnetic field according to the equation:
Where Imax is the current amplitude, which is a function of the magnetic induction flux φ passing through the surface of the interferometer:
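The SQUID interference relation, given here in its standard textbook form rather than the original image, is

\[
I_{\max}(\Phi) = 2 I_0 \left|\cos\!\left(\frac{\pi\Phi}{\Phi_0}\right)\right| ,
\qquad
\Phi_0 = \frac{h}{2e} ,
\]

presumably Eqs. (11)-(12) in the original numbering, where Φ is the magnetic flux through the interferometer loop and Φ0 is the flux quantum.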
The interference dependence (12) of the Josephson current of a SQUID on the magnetic field is illustrated in Figure 1. The current-field characteristics of a SQUID composed of high-temperature superconductors with d-type symmetry of the wave function are additionally highlighted in this document.
In reality, Josephson junctions are not point-like; they have a finite size. Therefore, let's take a look at the second extreme scenario: a long Josephson junction placed in a parallel magnetic field. The long Josephson junction, in contrast to the 1-10 nm junction discussed previously, is described by the sine-Gordon equation.
In this equation the wave attenuation ratio appears; C is the capacitance per unit cross-sectional area of the junction, g is the electrical conductivity, J is the Josephson current density, and the current I0 is described in equation 5.
A solution of Equation 13 is a singularity, or soliton, carrying a quantum of magnetic flux, which behaves much like a vortex thread. Photolithography also makes it possible to create Josephson junctions of different shapes that meet the requirements of an instrument and affect the parameters of its electromagnetic field, for example junctions with an exponentially narrowed width, commonly known as Eiffel tower junctions. Changes in junction shape result in different propagation conditions for the solitons (vortex threads) that solve Eq. 13, and in different boundary conditions; for an annular junction, for instance, a periodic boundary condition applies. The propagation of vortices emits electromagnetic radiation. Depending on their shape, the junctions emit electromagnetic radiation with frequencies between 100 and 1000 GHz, the frequency range used in radio astronomy, high-speed electronics and satellite communication. Eiffel-type junctions are also utilized in DC/AC converters.
Critical current Josephson junction
A Josephson junction can be described as a common tunnel junction in which layers of superconducting material are separated by an extremely thin (about 10 Å) dielectric layer. When the dielectric layer at the junction is properly polarized, tunneling of Cooper pairs between the superconducting layers may occur. In fact, even without a polarization voltage, a certain current, the Josephson current, flows through the junction; it is connected to the phase difference of the order parameter (wave function) of the two superconductors. When the polarization voltage exceeds the limit Ug, the current passing through the junction increases. For a junction polarized by an external voltage, the tunneling current can be calculated from the following equation:
in which I – the current flowing through the junction, I0 – the critical current (junction constant), V – the junction voltage, e/h – the ratio of the electron charge to Planck’s constant.
Josephson junction array
Josephson junction arrays stand as remarkable feats of quantum engineering, representing a lattice-like arrangement of multiple Josephson junctions in two-dimensional spaces. As their fundamental
constituents, these arrays utilize superconducting materials, with each junction separated by insulating layers to ensure efficient functioning. When current traverses through these junctions, they
synchronize to produce a unified quantum state. This collective phenomenon is emblematic of the Josephson junction and is aptly termed the “Josephson junction supercurrent.” Such arrays, due to their
intricate construction, serve as a testament to the advancements in quantum electronics.
One of the most pivotal attributes of these arrays is the precise control they offer over the quantum states of their constitutive qubits. By adeptly modulating the voltage applied across each
individual junction, it becomes feasible to dictate and modify the quantum state of every qubit housed within the array. This capability not only underlines the potential of Josephson junction arrays
in quantum computations but also signifies their role in the broader spectrum of quantum technologies. With each qubit acting as a quantum bit of information, the ability to control and alter its
state with precision is paramount in the progression of quantum computing.
Why are Josephson Junction Arrays Promising for Quantum Computing?
Josephson junction arrays’ main advantage is their capacity to scale. Since each junction is an individual qubit and increases the amount of junctions can increase the number of qubits they can have,
allowing bigger quantum processors with more qubits to be built. Additionally, their design can be tuned to allow precise control of interactions between qubits and, consequently, reduce the risk of
Josephson junction arrays are characterized by very low rates of decoherence. Decoherence is the term used to describe an absence of coherence in quantum systems caused by interactions with their
surroundings. Josephson junctions are constructed of superconducting materials that have low dissipation levels, which maintain quantum coherence for long durations and are therefore ideal to be used
to process quantum information.
Applications of Josephson Junction Arrays
Josephson junction arrays can be used for a variety of purposes in quantum computation. One of the most promising applications for Josephson junction arrays is building quantum annealing devices, which use quantum mechanics to tackle optimization problems. Additionally, Josephson junction arrays may be utilized to create two-dimensional Ising models that could aid in solving optimization problems.
Josephson junction arrays can also be utilized to serve as quantum simulators. Quantum simulators employ quantum mechanics to model complex systems that would be difficult to analyze on conventional
computers. Josephson junction arrays could be used as quantum simulators, by creating Hubbard models that explain the interactions between quantum particles.
Graphene Josephson junction
Graphene, a two-dimensional material made up of carbon atoms arranged hexagonally, has become one of the most promising materials for future electronic devices. One of its unique properties is its
ability to form Josephson junctions – key components in superconducting electronic devices with great potential applications in quantum computing and spintronics.
What Is a Graphene Josephson Junction?
A graphene Josephson junction is a type of Josephson junction formed from a graphene sheet sandwiched between two superconducting electrodes. It is created by making a narrow constriction in the graphene, which acts as a weak link between the electrodes, allowing electrons to tunnel through and carry a Josephson supercurrent.
How Do Graphene Josephson Junctions Work?
Graphene Josephson junctions work according to the same principles as traditional Josephson junctions: when voltage is applied across them, current can flow freely without resistance, creating what’s
known as Josephson supercurrent – with graphene serving as an effective weak link connecting two superconducting electrodes.
Josephson junctions made of graphene can exhibit spin-polarized transport. Because graphene itself exhibits spin polarity, supercurrent flowing through graphene Josephson junctions can display
spin-dependent behavior. This makes graphene Josephson junctions an appealing platform for spintronics research: exploiting electron spin to develop new electronic devices.
Applications of Graphene Josephson Junctions
Graphene Josephson junctions offer immense promise for use in quantum computing and spintronics applications. Their spin-dependent transport characteristics make them a prime platform for creating
spin-based qubits – an alternative type of quantum bit used in quantum computing that may be less sensitive to environmental noise than other qubit types, making them potentially more robust
Graphene Josephson junctions can serve as building blocks for other types of spintronic devices, including spin valves and magnetic tunnel junctions. Such devices could be utilized for applications
ranging from data storage to magnetic sensors.
Attributes of Josephson Junctions
One of the primary challenges in creating graphene Josephson junctions lies in controlling the size and shape of the constriction in the graphene sheet: it must allow electrons to tunnel through, while not being so narrow that the junction becomes unstable. Another difficulty lies in making sure the graphene stays in contact with its superconducting electrodes, since even small gaps between the sheet and the electrodes could disrupt the supercurrent flow.
Josephson Junction Qubit
Quantum computing is an exciting field with the potential to completely change our world of computing, and one promising avenue is Josephson junction qubits.
What is a Josephson Junction Qubit?
A Josephson junction qubit is a quantum bit that uses the Josephson effect, a quantum mechanical phenomenon that occurs in superconducting materials, to store and manipulate information. The qubit is
made up of two superconducting electrodes separated by a thin insulating layer, which acts as the Josephson junction.
How Do Josephson Junction Qubits Work?
In a Josephson junction qubit, information is stored in the form of the phase difference between the superconducting electrodes. When a small voltage is applied across the Josephson junction, a
supercurrent can flow without any resistance. By manipulating the voltage, the phase difference between the two electrodes can be controlled, allowing for the storage and manipulation of quantum information.
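For readers who want the underlying relations, the standard Josephson equations (textbook results, added here for reference rather than taken from this article) connect the supercurrent and the voltage to the phase difference $\varphi$ across the junction:

$$I = I_c \sin\varphi, \qquad V = \frac{\hbar}{2e}\,\frac{d\varphi}{dt},$$

where $I_c$ is the junction's critical current. Manipulating this phase difference, as described above, is what allows a Josephson junction qubit to store and process quantum information.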
One of the key advantages of Josephson junction qubits is their potential for scalability. Because they can be fabricated using standard microfabrication techniques, they can be easily integrated
into larger quantum computing systems. Additionally, they have been shown to have long coherence times, meaning that they can retain their quantum state for relatively long periods of time. | {"url":"https://911electronic.com/physics-of-josephson-junctions-critical-current-qubit-array-and-graphene/","timestamp":"2024-11-13T06:23:30Z","content_type":"text/html","content_length":"1049137","record_id":"<urn:uuid:36eb3c5b-df0e-4b11-98f9-118f3cc7397f>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00398.warc.gz"} |
45 in Words - Write 45 in Words | 45 Spelling
45 in Words
45 in Words can be written as Forty Five. If you have saved 45 dollars, then you can write, “I have just saved Forty Five dollars.” Forty Five is the cardinal number word of 45 which denotes a quantity.
• 45 in Words = Forty Five
• Forty Five in Numbers = 45
Let us write the given number in the place value chart.
We see that there are 5 ‘ones’, 4 ‘tens’. Now read the number from right to left along with its place value. 45 in words is written as Forty Five.
How to Write 45 in Words?
Using the place value chart we identify the place for each digit in the given number and write the number name. For 45 we see that the digits in units = 5, tens = 4. Therefore 45 in words is written
as Forty Five.
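As an added sketch (not part of the original lesson), the same place-value idea can be written as a small program; the word lists below are assumptions that cover two-digit numbers only:

ones = ["", "One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine"]
teens = ["Ten", "Eleven", "Twelve", "Thirteen", "Fourteen", "Fifteen",
         "Sixteen", "Seventeen", "Eighteen", "Nineteen"]
tens = ["", "", "Twenty", "Thirty", "Forty", "Fifty", "Sixty", "Seventy", "Eighty", "Ninety"]

def two_digit_in_words(n):
    # Uses the place value of each digit, just like the chart above.
    if n < 10:
        return ones[n]
    if n < 20:
        return teens[n - 10]
    t, o = divmod(n, 10)
    return (tens[t] + " " + ones[o]).strip()

print(two_digit_in_words(45))  # Forty Five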
Problem Statements:
│How to Write 45 in Words? │Forty Five │
│Is 45 a Perfect Cube? │No │
│Is 45 a Composite Number? │Yes │
│Is 45 an Even Number? │No │
│What is 45 Decimal to Binary? │(45)₁₀ = (101101)₂ │
│Is 45 a Perfect Square? │No │
│What is the Square Root of 45? │6.708204 │
│Is 45 a Prime Number? │No │
│Is 45 an Odd Number? │Yes │
FAQs on 45 in Words
How do you Write 45 in Words?
Using the place value chart, we can identify the value of each digit in 45 and convert the numerals to words. 45 in words is written as Forty Five.
What are the Rules to Write 45 in Words?
Let us fill all the digits of 45 in the place value chart.
We see that there are 5 ‘ones’, 4 ‘tens’.
• Read the number from right to left along with its place value.
• 45 in words is written as Forty Five.
Find the Value of 40 + 5. Write the Answer in Words.
Simplifying 40 + 5 gives 45. And 45 in words is written as Forty Five.
What is the Value of Forty Five Minus Forty?
Forty Five in numerals is written as 45. Forty in numerals is written as 40. Now Forty Five Minus Forty means subtracting 40 from 45, i.e. 45 - 40 = 5, which is read as Five.
☛ Also Read:
Exact solution of a many-fermion system and its associated boson field for Journal of Mathematical Physics
Journal of Mathematical Physics
Exact solution of a many-fermion system and its associated boson field
Luttinger's exactly soluble model of a one-dimensional many-fermion system is discussed. We show that he did not solve his model properly because of the paradoxical fact that the density operator
commutators [ρ(p), ρ(-p′)], which always vanish for any finite number of particles, no longer vanish in the field-theoretic limit of a filled Dirac sea. In fact the operators ρ(p) define a boson
field which is ipso facto associated with the Fermi-Dirac field. We then use this observation to solve the model, and obtain the exact (and now nontrivial) spectrum, free energy, and dielectric
constant. This we also extend to more realistic interactions in an Appendix. We calculate the Fermi surface parameter n̄_k, and find: ∂n̄_k/∂k|_{k_F} = ∞ (i.e., there exists a sharp Fermi surface) only in
the case of a sufficiently weak interaction. | {"url":"https://research.ibm.com/publications/exact-solution-of-a-many-fermion-system-and-its-associated-boson-field","timestamp":"2024-11-15T01:35:38Z","content_type":"text/html","content_length":"65934","record_id":"<urn:uuid:0f067e75-59ca-4b49-9964-7330b3cdaf3e>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00160.warc.gz"} |
How do you solve the following system using substitution?: y = x − 5, y = 2x
1 Answer
You simply have to substitute $x$ or $y$ from the first equation into $x$ or $y$ in the second equation.
So in the first equation, the value of $y$ is $x - 5$. You just have to substitute this value into $y$ in the second equation.
$[1] \quad y = 2x$
$[2] \quad x - 5 = 2x$
$[3] \quad -5 = 2x - x$
$[4] \quad x = -5$
Now that we have the value of $x$, we can plug it into either of the two equations to solve for $y$.
$[1] \quad y = x - 5$
$[2] \quad y = (-5) - 5$
$[3] \quad y = -10$
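As an added check (assuming the sympy library is available), the same system can be solved symbolically in a couple of lines:

from sympy import symbols, Eq, solve

x, y = symbols("x y")
# The original system: y = x - 5 and y = 2x
solution = solve([Eq(y, x - 5), Eq(y, 2 * x)], [x, y])
print(solution)  # {x: -5, y: -10}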
I am a Morrill Professor and the Barbara J. Janson Professor in the Department of Mathematics at Iowa State University. Prior to coming to Iowa State I did a three year NSF PostDoc under the
supervision of Benny Sudakov at UCLA. Before that I earned my doctorate degree in mathematics at UC San Diego under the supervision of Fan Chung. In addition I worked extensively with Ron Graham and
am the current proprietor of Ron's archival material.
My primary mathematical interests are spectral graph theory, enumerative combinatorics, mathematics of juggling, discrete geometry, and generally mathematics of fun things. Among other things, I can
do eight perfect faro shuffles in a row, and am willing to teach this to anyone who stops by my office.
From Fall of 2018 through Fall of 2020 I was the calculus coordinator for Iowa State University. Many of my lectures during that time were recorded and are available online.
One of my main areas of research is spectral graph theory. A graph looks at the connections (edges) between objects (vertices). One way to understand a graph is by storing it as an array. A linear
algebraist sees an array and says "Hey, let's call it a matrix" (matrix = array with benefits), and then a whole new world of exploration opens up by looking at the eigenvalues of the matrix, and
this is the area of spectral graph theory.
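As a toy illustration of that idea (an added sketch, not taken from this page), a small graph can be stored as an adjacency matrix with numpy and its eigenvalues computed directly:

import numpy as np

# Adjacency matrix of a 4-cycle: vertices 0-1-2-3-0
A = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
])

# The spectrum of the 4-cycle is 2, 0, 0, -2
eigenvalues = np.linalg.eigvalsh(A)
print(np.round(eigenvalues, 6))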
Other math related talks I have given:
I have worked in mathematics of juggling (currently one of only a few mathematicians who have published more papers about the mathematics of juggling than the number of balls that they can juggle).
Coming in the near(?) future from Princeton University Press -- Juggling Counts
I have a lot of fun working with students; and a few of the students have had a lot of fun and been creative with me. You can find a few of the videos online:
A lot of fun and interesting mathematics can be done and explored by using decks of cards and perfect shuffles. I have a few videos that talk about some of these (aimed at young people in STEM):
Fan Chung (my PhD advisor) and me in December 2020. | {"url":"https://www.stevebutler.org/","timestamp":"2024-11-13T09:29:43Z","content_type":"text/html","content_length":"125525","record_id":"<urn:uuid:40b06c91-ecb5-4519-81f3-b057599ae929>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00541.warc.gz"} |
13.1: Math Talk: How Far? (10 minutes)
The purpose of this Math Talk is to elicit strategies and understandings students have for computing the distance between two points on the number line, in particular thinking about distance as the
absolute value of a difference of two numbers. These understandings will be helpful later in this lesson when students learn about the absolute value function.
Display one problem at a time. Give students quiet think time for each problem, and ask them to give a signal when they have an answer and a strategy. Keep all problems displayed throughout the talk.
Follow with a whole-class discussion.
Student Facing
Evaluate mentally: How far away is each house from the school?
Activity Synthesis
Ask students to share their strategies for each problem. Record and display their responses for all to see. To involve more students in the conversation, consider asking:
• “Who can restate ___’s reasoning in a different way?”
• “Did anyone have the same strategy but would explain it differently?”
• “Did anyone solve the problem in a different way?”
• “Does anyone want to add on to _____’s strategy?”
• “Do you agree or disagree? Why?”
13.2: $a$ and $b$ (10 minutes)
In this activity students subtract 2 values and recall that the distance between the values on the number line is given by the lesser value subtracted from the greater value.
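A one-line computational check (an added sketch, not part of the published curriculum) captures the idea that the distance is the absolute value of the difference:

def distance(a, b):
    # Distance on the number line: subtract the lesser from the greater,
    # which is the same as taking the absolute value of the difference.
    return abs(b - a)

print(distance(-35, -19))  # 16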
Student Facing
1. For each pair of values, find \(b - a\). Be prepared to explain your reasoning.
1. \(a = 28, b = 57\)
2. \(a = \frac{4}{5}, b = \frac{1}{2}\)
3. \(a = 27, b = \text{-}17\)
4. \(a = \text{-}35, b = \text{-}19\)
5. \(a = 19, b = 35\)
6. \(a = \text{-}106, b = 43\)
2. For which pairs of values does the subtraction give the distance between the numbers on the number line?
1. What do you notice about these pairs of numbers?
3. Given 2 numbers, how can you find the distance between them on the number line?
Activity Synthesis
The purpose of the discussion is to find an algorithm for finding the distance between two numbers on the number line. Select students to share their solutions and methods. Ask students,
• “When you have two values, \(a\) and \(b\), what is the difference between the solution to \(a - b\) and \(b-a\)?” (The value is the same, but one of them will be negative.)
• “Is there a way to write a formula to find the distance between \(a\) and \(b\) on the number line if you don’t know which is greater?” (There are no ways that I remember right now.)
13.3: It’s That Far Away (20 minutes)
In this activity, students find 2 numbers that are a given distance from a number. In the associated Algebra 1 lesson students find the positive difference between 2 values by figuring out how far
away a guess is from an actual value.
Student Facing
1. Find 2 numbers that are \(d\) away from \(a\) on the number line.
1. \(a = 14, d = 6\)
2. \(a = \text{-}7, d = 16\)
3. \(a = 103, d = 56\)
4. \(a = 4, d = 138\)
2. Use \(d\) and \(a\) to write 2 expressions that find the values that are \(d\) away from \(a\).
3. Kiran is looking at some old work where he did problems like this and found an answer that was marked correct. The answer is -18 and 46. Could Kiran figure out the values of \(a\) and \(d\) from
the problem based on these values? If so, what are the values? If not, what additional information would help? Explain or show your reasoning.
4. In a planned neighborhood along Stepford Street, all of the houses are identical and equally distant from one another. The house at 102 Stepford Street is 2,250 feet from the house at 84 Stepford
Street. Is there enough information to find the address of another house that is that same distance away from 84 Stepford Street? Explain your reasoning.
Activity Synthesis
The purpose of the discussion is for students to think more deeply about distances on the number line. Select students to share their responses. Ask students,
• “How far apart are 85 and 31 on the number line? What is another number that is the same distance away from 31?” (They are 54 apart. -23 is also 54 away from 31.)
• “In most places in the United States, houses on the same side of the street are either all even numbers or odd numbers. Does this affect the answer to the question about the planned neighborhood?
” (No. It would affect how I might draw the picture (for example, there may only be 9 houses between 84 and 102 Stepford Street), but the same numbering would happen on the other side of the
house at 84 Stepford.) | {"url":"https://curriculum.illustrativemathematics.org/HS/teachers/4/4/13/index.html","timestamp":"2024-11-11T07:04:54Z","content_type":"text/html","content_length":"86679","record_id":"<urn:uuid:33e975bc-62b9-458b-8e52-991f7acf3351>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00090.warc.gz"} |
Fibonacci Sequence - Melody Dean Dimick: Writer/Speaker
Fibonacci Sequence
If, like most of us, you consider the Fibonacci sequence a series of numbers in which each number is the sum of the two preceding numbers, you are right. An example of a Fibonacci sequence: the
numbers 0, 1, 1, 2, 3, 5, 8, 13, …
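As a quick illustration (an added sketch, not from the original post), a few lines of Python generate the sequence:

def fibonacci(count):
    # Each term is the sum of the two preceding terms, starting from 0 and 1.
    terms = [0, 1]
    while len(terms) < count:
        terms.append(terms[-1] + terms[-2])
    return terms[:count]

print(fibonacci(8))  # [0, 1, 1, 2, 3, 5, 8, 13]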
The Fibonacci sequence can be used to create poetry. Gregory K. Pincus, an author who will join me for a Florida Writer Association Youth Workshop in October at the Florida Writer Conference, wrote a
book titled The 14 Fibs of Gregory K. The book mixes fibs, a math-loving family, friends, and the Fibonacci sequence.
I’m a math slug, so I’ve only written one Fibonacci sequence poem in my life. The key to writing a Fibonacci (Fib) sequence poem is to remember that the total number of syllables in each line must
equal the total number of syllables in the preceding two lines. My sample Fibonacci poem from my soon-to-be published play Ain't It a Shame, based on my book of poetry titled Backpack Blues: Ignite the
Fire Within appears below:
I inflict each wound
A bracelet of tears mars my wrist
Mother, with each slit I try to steal you from cancer.
Recently, I’ve decided to grow bonsai trees. Imagine my surprise when I learned that this Fibonacci sequence is important to what is known as the golden rule for proportion, and is a good basis for
composing a bonsai. Wow! Math knowledge sure can come in handy in life. | {"url":"https://www.melodydeandimick.com/fibonacci-sequence/","timestamp":"2024-11-11T21:00:50Z","content_type":"text/html","content_length":"32837","record_id":"<urn:uuid:a959b4c6-5500-4861-9492-e260f4885a8b>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00369.warc.gz"} |
Killer Sudoku
As with Sudoku, the objective of Killer Sudoku is to fill the grid with the numbers 1 to 9, such that each row, column and nonet (3x3 group of cells) contains each number only once.
In addition to this, a Killer Sudoku grid is divided into cages, shown with dashed lines. The sum of the numbers in a cage must equal the small number in its top-left corner.
The same number cannot appear in a cage more than once.
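As an added sketch (not affiliated with the site), checking a single cage amounts to verifying three things: the digits are in range, they are distinct, and they add up to the printed target sum.

def cage_is_valid(digits, target_sum):
    # A cage is valid when its digits are distinct, all between 1 and 9,
    # and they add up to the small number printed in its corner.
    return (
        all(1 <= d <= 9 for d in digits)
        and len(set(digits)) == len(digits)
        and sum(digits) == target_sum
    )

print(cage_is_valid([5, 2, 1], 8))   # True
print(cage_is_valid([4, 4], 8))      # False: repeated digit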
Greater-Than Killer Sudoku
Greater-Than Killer Sudoku, or Comparison Killer Sudoku, has exactly the same rules as Killer Sudoku, except that not every cage has a sum in its top-left corner. Instead, some cages are linked
together with symbols:
• < : The left cage's sum is less than the right cage's sum.
• ≤ : The left cage's sum is less than or equal to the right cage's sum.
• = : The left cage's sum is equal to the right cage's sum.
• ≥ : The left cage's sum is greater than or equal to the right cage's sum.
• > : The left cage's sum is greater than the right cage's sum.
The Strategies page describes strategies for solving Killer Sudoku puzzles. | {"url":"https://www.dailykillersudoku.com/rules","timestamp":"2024-11-12T02:42:27Z","content_type":"text/html","content_length":"16723","record_id":"<urn:uuid:ee53134f-fbfb-4ba1-8ad6-268e9f37b6d8>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00667.warc.gz"} |
Mean Reversion
Updated for Python 3.9, January 2023
I'd like to thank Dr. Tom Starke for providing the inspiration for this article series. The code below is a modification of that which used to be found on his website leinenbock.com, which later
became drtomstarke.com.
So far on QuantStart we have discussed algorithmic trading strategy identification, successful backtesting, securities master databases and how to construct a software research environment. It is now
time to turn our attention towards forming actual trading strategies and how to implement them.
One of the key trading concepts in the quantitative toolbox is that of mean reversion. This process refers to a time series that displays a tendency to revert to its historical mean value.
Mathematically, such a (continuous) time series is referred to as an Ornstein-Uhlenbeck process. This is in contrast to a random walk (Brownian motion), which has no "memory" of where it has been at
each particular instance of time. The mean-reverting property of a time series can be exploited in order to produce profitable trading strategies.
In this article we are going to outline the statistical tests necessary to identify mean reversion. In particular, we will study the concept of stationarity and how to test for it.
Testing for Mean Reversion
A continuous mean-reverting time series can be represented by an Ornstein-Uhlenbeck stochastic differential equation:
\begin{eqnarray} d x_t = \theta (\mu - x_t) dt + \sigma dW_t \end{eqnarray}
Where $\theta$ is the rate of reversion to the mean, $\mu$ is the mean value of the process, $\sigma$ is the variance of the process and $W_t$ is a Wiener Process or Brownian Motion.
In a discrete setting the equation states that the change of the price series in the next time period is proportional to the difference between the mean price and the current price, with the addition
of Gaussian noise.
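As an added aside (not from the original article), the discrete-time process described above can be simulated in a few lines of numpy; the parameter values below are arbitrary choices for illustration:

import numpy as np

def simulate_mean_reversion(theta=0.5, mu=100.0, sigma=1.0, n=1000, seed=42):
    # x_{t+1} - x_t = theta * (mu - x_t) + Gaussian noise
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = mu
    for t in range(n - 1):
        x[t + 1] = x[t] + theta * (mu - x[t]) + sigma * rng.standard_normal()
    return x

series = simulate_mean_reversion()
print(series[:5])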
This property motivates the Augmented Dickey-Fuller Test, which we will describe below.
Augmented Dickey-Fuller (ADF) Test
Mathematically, the ADF is based on the idea of testing for the presence of a unit root in an autoregressive time series sample. It makes use of the fact that if a price series possesses mean
reversion, then the next price level will be proportional to the current price level. A linear lag model of order $p$ is used for the time series:
\begin{eqnarray} \Delta y_t = \alpha + \beta t + \gamma y_{t-1} + \delta_1 \Delta y_{t-1} + \cdots + \delta_{p-1} \Delta y_{t-p+1} + \epsilon_t \end{eqnarray}
Where $\alpha$ is a constant, $\beta$ represents the coefficient of a temporal trend and $\Delta y_t = y(t)-y(t-1)$. The role of the ADF hypothesis test is to consider the null hypothesis that $\
gamma=0$, which would indicate (with $\alpha = \beta = 0$) that the process is a random walk and thus non mean reverting.
If the hypothesis that $\gamma=0$ can be rejected then the following movement of the price series is proportional to the current price and thus it is unlikely to be a random walk.
So how is the ADF test carried out? The first task is to calculate the test statistic ($DF_{\tau}$), which is given by the sample proportionality constant $\hat{\gamma}$ divided by the standard error
of the sample proportionality constant:
\begin{eqnarray} DF_{\tau} = \frac{\hat{\gamma}}{SE(\hat{\gamma})} \end{eqnarray}
Dickey and Fuller have previously calculated the distribution of this test statistic, which allows us to determine the rejection of the hypothesis for any chosen percentage critical value. The test
statistic is a negative number and thus in order to be significant beyond the critical values, the number must be more negative than these values, i.e. less than the critical values.
A key practical issue for traders is that any constant long-term drift in a price is of a much smaller magnitude than any short-term fluctuations and so the drift is often assumed to be zero ($\beta=
0$) for the model.
Since we are considering a lag model of order $p$, we need to actually set $p$ to a particular value. It is usually sufficient, for trading research, to set $p=1$ to allow us to reject the null hypothesis.
To calculate the Augmented Dickey-Fuller test you will need to obtain a csv of Google Open-High-Low-Close-Volume (GOOG OHLCV) data from the 1st September 2004 to 31st August 2020. This can be
obtained freely from Yahoo Finance. We can then make use of the pandas and statsmodels libraries. The former provides us with a straightforward method of working with OHLCV data, while the latter
wraps the ADF test in a easy to call function.
We will carry out the ADF test on a sample price series of Google stock, from 1st September 2004 to 31st August 2020.
Google price series from 2004-09-01 to 2020-08-31
Here is the Python code to carry out the test:
import pandas as pd
import statsmodels.tsa.stattools as ts


def create_dataframe(data_csv):
    """
    Read pricing data csv download for Google (GOOG)
    OHLCV data from 01/09/2004-31/08/2020 into a DataFrame.

    Parameters
    ----------
    data_csv : `csv`
        CSV file containing pricing data

    Returns
    -------
    `pd.DataFrame`
        A DataFrame containing Google (GOOG) OHLCV data from
        01/09/2004-31/08/2020. Index is a Datetime object.
    """
    # Create a pandas DataFrame containing the Google OHLCV data
    goog = pd.read_csv(data_csv, index_col="Date")

    # Convert index to a Datetime object
    goog.index = pd.to_datetime(goog.index)
    return goog


def augmented_dickey_fuller(goog):
    """
    Carry out the Augmented Dickey-Fuller test for Google data.

    Parameters
    ----------
    goog : `pd.DataFrame`
        A DataFrame containing Google (GOOG) OHLCV data from
        01/09/2004-31/08/2020. Index is a Datetime object.

    Returns
    -------
    The output tuple of the statsmodels adfuller test.
    """
    # Output the results of the Augmented Dickey-Fuller test for Google
    # with a lag order value of 1
    adf = ts.adfuller(goog['Adj Close'], 1)
    return adf


if __name__ == "__main__":
    data_csv = "/Path/To/Your/GOOG.csv"
    goog_df = create_dataframe(data_csv)
    goog_adf = augmented_dickey_fuller(goog_df)
    print(goog_adf)
Here is the output of the Augmented Dickey-Fuller test for Google over the period. The first value is the calculated test-statistic, while the second value is the p-value. The fourth is the number of
data points in the sample. The fifth value, the dictionary, contains the critical values of the test-statistic at the 1, 5 and 10 percent values respectively. Full documentation for this
implementation of the Augmented Dickey-Fuller test can be found here.
{'1%': -3.4319753040982497,
'5%': -2.8622581704258483,
'10%': -2.56715228954666},
Since the calculated value of the test statistic is larger than any of the critical values at the 1, 5 or 10 percent levels, we cannot reject the null hypothesis of $\gamma=0$ and thus we are
unlikely to have found a mean reverting time series.
An alternative means of identifying a mean reverting time series is provided by the concept of stationarity, which we will now discuss.
Testing for Stationarity
A time series (or stochastic process) is defined to be strongly stationary if its joint probability distribution is invariant under translations in time or space. In particular, and of key importance
for traders, the mean and variance of the process do not change over time or space and they each do not follow a trend.
A critical feature of stationary price series is that the prices within the series diffuse from their initial value at a rate slower than that of a Geometric Brownian Motion. By measuring the rate of
this diffusive behaviour we can identify the nature of the time series.
We will now outline a calculation, namely the Hurst Exponent, which helps us to characterise the stationarity of a time series.
Hurst Exponent
The goal of the Hurst Exponent is to provide us with a scalar value that will help us to identify (within the limits of statistical estimation) whether a series is mean reverting, random walking or trending.
The idea behind the Hurst Exponent calculation is that we can use the variance of a log price series to assess the rate of diffusive behaviour. For an arbitrary time lag $\tau$, the variance is given by
\begin{eqnarray} {\rm Var}(\tau) = \langle |\log(t+\tau)-\log(t)|^2 \rangle \end{eqnarray}
Since we are comparing the rate of diffusion to that of a Geometric Brownian Motion, we can use the fact that at large $\tau$ we have that the variance is proportional to $\tau$ in the case of a GBM:
\begin{eqnarray} \langle |\log(t+\tau)-\log(t)|^2 \rangle \sim \tau \end{eqnarray}
The key insight is that if any autocorrelations exist (i.e. any sequential price movements possess non-zero correlation) then the above relationship is not valid. Instead, it can be modified to
include an exponent value "$2H$", which gives us the Hurst Exponent value $H$:
\begin{eqnarray} \langle |\log(t+\tau)-\log(t)|^2 \rangle \sim \tau^{2H} \end{eqnarray}
A time series can then be characterised in the following manner:
• $H<0.5$ - The time series is mean reverting
• $H=0.5$ - The time series is a Geometric Brownian Motion
• $H>0.5$ - The time series is trending
In addition to characterisation of the time series the Hurst Exponent also describes the extent to which a series behaves in the manner categorised. For instance, a value of $H$ near 0 is a highly
mean reverting series, while for $H$ near 1 the series is strongly trending.
To calculate the Hurst Exponent for the Google price series, as utilised above in the explanation of the ADF, we can use the following Python code:
from numpy import cumsum, log, polyfit, sqrt, std, subtract
from numpy.random import randn


def hurst(ts):
    """
    Returns the Hurst Exponent of the time series vector ts

    Parameters
    ----------
    ts : `numpy.array`
        Time series upon which the Hurst Exponent will be calculated

    Returns
    -------
    The Hurst Exponent from the poly fit output
    """
    # Create the range of lag values
    lags = range(2, 100)

    # Calculate the array of the variances of the lagged differences
    tau = [sqrt(std(subtract(ts[lag:], ts[:-lag]))) for lag in lags]

    # Use a linear fit to estimate the Hurst Exponent
    poly = polyfit(log(lags), log(tau), 1)

    # Return the Hurst exponent from the polyfit output
    return poly[0]*2.0


# Create a Geometric Brownian Motion, Mean-Reverting and Trending Series
gbm = log(cumsum(randn(100000))+1000)
mr = log(randn(100000)+1000)
tr = log(cumsum(randn(100000)+1)+1000)

# Output the Hurst Exponent for each of the above series
# and the price of Google (the Adjusted Close price) for
# the ADF test given above in the article
print("Hurst(GBM): %s" % hurst(gbm))
print("Hurst(MR): %s" % hurst(mr))
print("Hurst(TR): %s" % hurst(tr))

# Assuming you have run the above code to obtain 'goog'!
print("Hurst(GOOG): %s" % hurst(goog['Adj Close'].values))
The output from the Hurst Exponent Python code is given below:
Hurst(GBM): 0.5031756326748011
Hurst(MR): 0.0003405749602341958
Hurst(TR): 0.9610746103704354
Hurst(GOOG): 0.4149039167976803
From this output we can see that the Geometric Brownian Motion possesses a Hurst Exponent, $H$, that is almost exactly 0.5. The mean reverting series has $H$ almost equal to zero, while the trending
series has $H$ close to 1.
Interestingly, Google's $H$ also approaches 0.5, indicating that it is extremely close to a geometric random walk (at least for the sample period we're making use of!).
While we now have a means of characterising the nature of a price time series, we have yet to discuss how statistically significant this value of $H$ is. We need to be able to determine if we can
reject the null hypothesis that $H=0.5$ to ascertain mean reverting or trending behaviour.
In subsequent articles we will describe how to calculate whether $H$ is statistically significant. In addition, we will consider the concept of cointegration, which will allow us to create our own
mean reverting time series from multiple differing price series. Finally, we will tie these statistical techniques together in order to form a basic mean reverting trading strategy.
Related Articles | {"url":"https://www.quantstart.com/articles/Basics-of-Statistical-Mean-Reversion-Testing/?ref=blog.clipper.exchange","timestamp":"2024-11-03T22:01:01Z","content_type":"text/html","content_length":"27359","record_id":"<urn:uuid:bc213ccc-66a1-4e8d-b60f-c0165087670b>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00446.warc.gz"} |
Long Division Method : Definition and Solved Example
Introduction to Long Division Problems
The division is one of the four basic operations of arithmetic, the ways that numbers are combined to make new numbers. The other operations are addition, subtraction, and multiplication.
Long division is one method in which the concept of division, multiplication, and subtraction is used simultaneously.
Not only is this about division, but students must also understand the use of long division, and the meaning of divisors, dividends, and multiples.
In Mathematics, long division problems are the mathematical method for dividing large numbers into smaller groups or parts. It helps to break down a problem into simple and easy steps.
For example, a pizza shared by 4 people can be divided into 8 equal parts, so that each person gets 2 slices.
Dividing pizza among 4 people.
Division can be written using different symbols, for example ÷, / or the long division bracket.
In this article, long division worksheets and long division problems have been given.
Elements of Division
Here are the elements of long division, one by one, with examples.
• Dividend- The dividend is the number you are dividing up with
• Divisor- The divisor is the number you are dividing by
• Quotient- It is the answer
• The Remainder- If the quotient is not a whole number, the part of the dividend left over is called the remainder.
Dividend ÷ Divisor = Quotient
Elements of the Division
Steps to Carry Out in Long Division
We will carry out 5 steps to solve every long division with ease.
• Divide
• Multiply
• Subtract
• Bring down
• Remainder
Steps Involved in Long Division
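The same divide, multiply, subtract, bring down loop can be written as a short program. This is an added sketch for whole-number dividends and divisors, not part of the original lesson:

def long_division(dividend, divisor):
    # Works through the dividend digit by digit:
    # bring down a digit, divide, multiply, subtract, repeat.
    quotient = 0
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)   # bring down
        q_digit = remainder // divisor            # divide
        remainder -= q_digit * divisor            # multiply and subtract
        quotient = quotient * 10 + q_digit
    return quotient, remainder

print(long_division(75, 3))     # (25, 0)
print(long_division(7859, 76))  # (103, 31)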
Properties of Long Division
• Division by 1 Property: If we divide a number by 1 the quotient is the number itself. Or in other words, when any number is divided by 1, we always get the number itself as the answer/quotient.
For Example:
(i) 7592 ÷ 1 = 7592
(ii) 5247 ÷ 1= 5247
• Division by the Same Number Property: If we divide a number by the number itself then the quotient is always 1.
For Example:
(i) 275 ÷ 275 = 1
(ii) 105 ÷ 105 = 1
• Division of Any Number by 0 Property: Dividing any number by 0 is not defined (meaningless).
For Example:
(i) 35 ÷ 0 = no meaning
(ii) 65 ÷ 0 = no meaning
• Division of 0 by Any Number Property: If 0 is divided by a number gives 0 as the quotient or answer. In other words, when 0 is divided by any number, we always get 0 as an answer.
For example:
(i) 0 ÷ 25 = 0
(ii) 0 ÷ 100 = 0
Solved Example of Long Division Worksheet
Example 1: Below is the worksheet to solve the questions by long division methods
Question on Long Division
Solution of long division
Example 2: Find the value of the quotient and remainder when 75 is divided by 3. Verify using the long division method.
Ans: Here, we divided 75 by 3. So, the dividend is 75 and the divisor is 3.
Division of 75 by 3
Hence, we get the quotient as 25 & remainder as 0.
To check division we will put the value.
Dividend = (Divisor × Quotient) + Remainder.
Therefore, 75 = 3 × 25 + 0 = 75
Example 3. Divide 9.24 by 7.
Division with decimal
The quotient is 9.24 ÷ 7 = 1.32.
Practice Problem Related to Long Division
Q1. Divide 852.8 by 6 and give the answer.
Ans: 142.133
Q2. Find the quotient when 7859 is divided by 76.
Ans: 103.4
Q3. Find the quotient when 91 is divided by 9 using the long division method.
Ans: 10.11
Q4. Divide 324 by 2.
Ans. 162
In summary, long division is a method of separating a number into parts. It is a tool that helps to divide numbers into smaller groups and involves the elements
Dividend, Divisor, Quotient, and Remainder. We also dealt with the different properties of division and worked through some solved examples.
FAQs on Long Division Method
1. Why is Division by zero undefined?
Division by zero is not defined because we cannot divide any number by 0. When any number is multiplied by 0 the answer is also 0, so no number multiplied by 0 can give 1.
Trying to compute 1 divided by 0 therefore does not produce a quantifiable value in mathematics.
2. What is the division symbol and what are some strategies for long division?
There are two signs of long division which are ÷ & /
The steps are more or less the same, except for new addition:
• Divide the tens column dividend by divisor.
• Multiply the divisor by the quotient in the tens place column.
• Subtract the product from the divisor.
• Bring down the dividend in one column and repeat.
3. Mention the divisibility rules of 6 and 8.
The rules of divisibility of 6 and 8 are:
• Divisibility by 6: Numbers divisible by 6 can also be divided by both 3 and 2. Students should test the number with both rules for 3 and 2. If the number passes both tests, it can be divided by
6. If it fails just one test it cannot.
For instance:
308 ends in an even digit, so it’s divisible by 2. However, 3 + 0 + 8 = 11, which cannot be divided evenly by 3. As such, 308 is not divisible by 6.
• Divisibility by 8: A number is divisible by 8 if the number formed by its last three digits is divisible by 8. For instance, 5,816 ends in 816, and 816 ÷ 8 = 102, so 5,816 is divisible by 8.
SAT Math Prep: Solving Systems Of Linear Equations
The SAT is one of the most important tests you'll take in your academic career, and folks often dread the math section in particular. If solving systems of linear equations is your idea of a
nightmare and finding a best-fit equation for a scatter plot makes you feel scatter-brained, this is the guide for you. The SAT math sections are a challenge, but they're easy enough to master if you
handle your preparation right.
Get to Grips with the SAT Math Test
The math SAT questions are broken up into a 25-minute section that you can't use a calculator for and a 55 minute section that you can use a calculator for. There are 58 questions in total and 80
minutes to complete them in, and most are multiple choice. The questions are loosely ordered by least difficult to most difficult. It's best to familiarize yourself with the structure and format of
the question paper and the answer sheets (see Resources) before you take the test.
On a larger scale, the SAT Math Test is divided into three separate content areas: Heart of Algebra, Problem Solving and Data Analysis, and Passport to Advanced Math.
Today we'll look at the first component: Heart of Algebra.
Heart of Algebra: Practice Problem
For the Heart of Algebra section, the SAT covers key topics in algebra and generally relate to simple linear functions or inequalities. One of the more challenging aspects of this section is solving
systems of linear equations.
Here is an example system of equations. You need to find values for x and y:
\(3x + y = 6\)
\(4x - 3y = -5\)
And potential answers are:
a) (1, −3)
b) (4, 6)
c) (1, 3)
d) (−2, 5)
Try to solve this problem before reading on for the solution. Remember, you can solve systems of linear equations using the substitution method or the elimination method. You could also test each
potential answer in the equations and see which one works.
The solution can be found using either method, but this example uses elimination. Looking at the equations:
\(3x + y = 6\)
\(4x - 3y = -5\)
Note that y appears in the first and −3y appears in the second. Multiplying the first equation by 3 gives:
\(9x + 3y = 18\)
This can now be added to the second equation to eliminate the 3y terms and leave:
\((4x + 9x) + (3y - 3y) = (-5 + 18)\)
\(13x = 13\)
This is easy to solve. Dividing both sides by 13 leaves:
\(x = 1\)
This value for x can be substituted into either equation to solve. Using the first gives:
\((3 × 1) + y = 6\)
\(3 + y = 6\)
\(y = 6 – 3 = 3\)
So the solution is (1, 3), which is option c).
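Outside exam conditions, you can verify an answer like this in a couple of lines; the snippet below is an added check with numpy, not an SAT technique:

import numpy as np

# Coefficient matrix and constants for 3x + y = 6 and 4x - 3y = -5
A = np.array([[3.0, 1.0], [4.0, -3.0]])
b = np.array([6.0, -5.0])
print(np.linalg.solve(A, b))  # [1. 3.]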
Some Useful Tips
In math, the best way to learn is often by doing. The best advice is to use practice papers, and if you make a mistake on any questions, work out exactly where you went wrong and what you should have
done instead, rather than simply looking up the answer.
It also helps to work out what your main issue is: Do you struggle with the content, or do you know the math but struggle to answer the questions in time? You can do a practice SAT and give yourself
extra time if needed to work this out.
If you get the answers right but only with extra time, focus your revision on practicing solving problems quickly. If you struggle with getting answers right, identify areas where you're struggling
and go over the material again.
Check Out for Part II
Ready to tackle some practice problems for Passport to Advanced Math and Problem Solving and Data Analysis? Check out Part II of our SAT Math Prep series.
The zeta Functions of Moduli Stacks of $G$-Zips and Moduli Stacks of Truncated Barsotti--Tate Groups
Milan Lopuhaä-Zwakenberg
Received: July 18, 2018 Revised: October 10, 2018 Communicated by Takeshi Saito
Abstract. We study stacks of truncated Barsotti–Tate groups and the G-zips defined by Pink, Wedhorn & Ziegler. The latter occur naturally when studying truncated Barsotti–Tate groups of height 1 with
additional structure. By studying objects over finite fields and their automorphisms we determine the zeta functions of these stacks. These zeta functions can be expressed in terms of the Weyl group
of the reductive group G and its action on the root system. The main ingredients are the classification of G-zips over algebraically closed fields and their automorphism groups by Pink, Wedhorn &
Ziegler, and the study of truncated Barsotti-Tate groups and their automor-phism groups by Gabber & Vasiu.
2010 Mathematics Subject Classification: 11G10, 11G18, 14K10, 14L30
Keywords and Phrases: G-zips, Barsotti–Tate groups, moduli stacks, zeta functions
1 Introduction
Throughout this article, let p be a prime number. Over a field k of characteristic p, the truncated Barsotti–Tate groups of level 1 (henceforth BT1) were first
varieties A over k. As such, these results (independently obtained) were used in [12] to define a stratification on the moduli space of polarised abelian varieties. In [9] the first step was made
towards generalising this relation to Shimura varieties of PEL type, by classifying Barsotti–Tate groups of level 1 with the action of a fixed semisimple $\mathbb{F}_p$-algebra and/or a polarisation. The classification of these BT1 with extra structure over an algebraically closed field $\bar{k}$ turned out to be related to the Weyl group of an associated reductive group over $\bar{k}$. These BT1 with extra structure were then generalised in [11] to so-called F-zips, that generalise the linear algebra objects that arise when looking at the Dieudonné modules corresponding to BT1. Over an algebraically closed field the classification of these F-zips is also related to the Weyl group of a certain reductive group that depends on the chosen extra structure. In [14] and [13] this was again generalised to so-called $\hat{G}$-zips, taking the (not necessarily connected) reductive group $\hat{G}$ as the primordial object.¹ For certain choices of $\hat{G}$ these $\hat{G}$-zips correspond to F-zips with some additional structure. Again their classification over an algebraically closed field is expressed in terms of the Weyl group of $\hat{G}$.
These classifications suggest two possible directions for further research. First, one could try to study $\hat{G}$-zips over non-algebraically closed fields; the first step would then be to understand the classification over finite fields. Another direction would be to study $\mathrm{BT}_n$ for general $n$, either over finite fields or over algebraically closed fields. One may approach both these problems by looking at their moduli stacks. For a reductive group $\hat{G}$ over a finite field $k$, a cocharacter $\chi: \mathbb{G}_{m,k'} \to \hat{G}_{k'}$ defined over some finite extension $k'$ of $k$, and a subgroup scheme $\Theta \subset \pi_0(\hat{G}_{k'})$ one can consider the stack $\hat{G}\text{-Zip}^{\chi,\Theta}_{k'}$ of $\hat{G}$-zips of type $(\chi, \Theta)$ (see Section 4); it is an algebraic stack of finite type over $k'$. Similarly, for two nonnegative integers $h \geq d$ one can consider the stack $\mathrm{BT}^{h,d}_n$ of truncated Barsotti–Tate groups of level $n$, height $h$ and dimension $d$; this is an algebraic stack of finite type over $\mathbb{F}_p$ (see [19, Prop. 1.8]). One way to study these stacks is via their zeta function. For an algebraic stack of finite type $X$ over a finite field $\mathbb{F}_q$, and an integer $v \geq 1$, the $\mathbb{F}_{q^v}$-point count of $X$ is defined as
$$\#X(\mathbb{F}_{q^v}) = \sum_{x \in [X(\mathbb{F}_{q^v})]} \frac{1}{\#\mathrm{Aut}(x)},$$
where $[C]$ denotes the set of isomorphism classes of a category $C$. The zeta function of $X$ is defined to be the element of $\mathbb{Q}[[t]]$ given by
$$Z(X, t) = \exp\Big(\sum_{v \geq 1} \frac{t^v}{v} \#X(\mathbb{F}_{q^v})\Big).$$
¹ Here we follow the notation of [14] and [13] in writing $\hat{G}$ for the reductive group, and $G$ for its identity component.
By definition the zeta function encodes information about the point counts of X. Furthermore, the zeta function is related to the cohomology of ℓ-adic sheaves on X (see [1] and [17]). As a power
series in $t$, it defines a meromorphic function that is defined everywhere (as a holomorphic map $\mathbb{C} \to \mathbb{P}^1(\mathbb{C})$), but
it is not necessarily rational; the reason for this is that for stacks, contrary to schemes, the ℓ-adic cohomology algebra is in general not finite dimensional (see [17, 7.1]).
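As a sanity check on the definition (an added aside, not part of the original text): for the simplest case $X = \mathrm{Spec}(\mathbb{F}_q)$ one has $\#X(\mathbb{F}_{q^v}) = 1$ for every $v$, so
$$Z(\mathrm{Spec}(\mathbb{F}_q), t) = \exp\Big(\sum_{v \geq 1} \frac{t^v}{v}\Big) = \frac{1}{1-t},$$
which recovers the familiar zeta function of a point.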
The aim of this article is to calculate the zeta functions of stacks of the form $\hat{G}\text{-Zip}^{\chi,\Theta}_{k'}$ and $\mathrm{BT}^{h,d}_n$. The results are stated below. In the statement of Theorem 1.1, the finite set $\Xi^{\chi,\Theta}$ classifies the set of isomorphism classes in $\hat{G}\text{-Zip}^{\chi,\Theta}(\bar{\mathbb{F}}_q)$; this classification turns out to be related to the Weyl group of $\hat{G}$ (see Proposition 4.5). For $\xi \in \Xi^{\chi,\Theta}$, let $a(\xi)$ be the dimension of the automorphism group of the corresponding object in $\hat{G}\text{-Zip}^{\chi,\Theta}(\bar{\mathbb{F}}_q)$, and let $b(\xi)$ be the minimal integer $b$ such that this object has a model over $\mathbb{F}_{q^b}$. It turns out that $\Xi^{\chi,\Theta}$ has a natural action of $\Gamma := \mathrm{Gal}(\bar{\mathbb{F}}_q/\mathbb{F}_q)$, and that the functions $a$ and $b$ are $\Gamma$-invariant. In the statement of Theorem 1.2 the notation is the same, applied to the group $\hat{G} = \mathrm{GL}_{h,\mathbb{F}_p}$ (with suitable $\chi$; as a subgroup of $\pi_0(\hat{G})_{k'}$ the group $\Theta$ is necessarily trivial for connected $\hat{G}$).
Theorem 1.1. Let $q_0$ be a power of $p$, and let $\hat{G}$ be a reductive group over $\mathbb{F}_{q_0}$. Let $q$ be a power of $q_0$, let $\chi: \mathbb{G}_{m,\mathbb{F}_q} \to \hat{G}_{\mathbb{F}_q}$ be a cocharacter, and let $\Theta$ be a subgroup scheme of the group scheme $\pi_0(\mathrm{Cent}_{\hat{G}_{\mathbb{F}_q}}(\chi))$. Let $\Xi^{\chi,\Theta}$ and $\Gamma$ be as in Section 4 and let $a, b: \Gamma\backslash\Xi^{\chi,\Theta} \to \mathbb{Z}_{\geq 0}$ be as in Notation 5.6. Then
$$Z(\hat{G}\text{-Zip}^{\chi,\Theta}_{\mathbb{F}_q}, t) = \prod_{\bar{\xi} \in \Gamma\backslash\Xi^{\chi,\Theta}} \frac{1}{1 - (q^{-a(\bar{\xi})}t)^{b(\bar{\xi})}}.$$
Theorem 1.2. Let $h, n > 0$ and $0 \leq d \leq h$ be integers. Let $\Xi$ and $a: \Xi \to \mathbb{Z}_{\geq 0}$ be as in Notation 6.1. Then
$$Z(\mathrm{BT}^{h,d}_n, t) = \prod_{\xi \in \Xi} \frac{1}{1 - p^{-a(\xi)}t}.$$
In particular the zeta function of the stack $\mathrm{BT}^{h,d}_n$ does not depend on $n$.
As we will see later on, for split groups we have $b(\bar{\xi}) = 1$ for all $\bar{\xi} \in \Gamma\backslash\Xi^{\chi,\Theta} = \Xi^{\chi,\Theta}$. In particular, the zeta function of $\mathrm{BT}^{h,d}_n$ coincides with that of $\mathrm{GL}_h\text{-Zip}^{\chi}$ as determined in Theorem 1.1 (for a suitable choice of $\chi$).
All the terminology used in the statements above will be introduced in due time. For now let us note that the functions $a$ and $b$ can also be expressed in terms of the action of the Weyl group of $\hat{G}$ on the root system, and are readily calculated for a given $(\hat{G}, \chi, \Theta)$ (see Example 4.7). Furthermore, [13, §8] shows how to construct isomorphisms (on categories of $k$-points for perfect $k$) between moduli stacks of G-zips, and moduli stacks of F-zips and BT1 with additional structure. One can use this and Theorem 1.1 to calculate the zeta functions of the latter.
We will spend some time developing theory about nonconnected algebraic groups, and much of the discussion would be simplified considerably when only considering connected $\hat{G}$. However, we choose to tackle the problem in this generality because the nonconnected case is interesting in its own right: $\hat{G}$-zips for nonconnected $\hat{G}$ appear, for instance, when studying F-zips with symmetric bilinear forms (see [13, §8.5]), which in turn appear when considering reductions of Shimura varieties attached to orthogonal groups.
Acknowledgements: The research of which this paper is a result was carried out as part of a Ph.D. project at Radboud University supervised by Ben Moonen, to whom I am grateful for comments and
guidance. I also thank Johan Commelin, Torsten Wedhorn, and an anonymous reviewer for further comments. All remaining errors are, of course, my own.
2 The zeta function of quotient stacks
Throughout this section we let k be a finite field of characteristic p. In this section we study the point counts and zeta functions of categories associated to quotient stacks. The main results
(Propositions 2.14 and 2.19) are quite technical in nature, but we need them in this form in order to prove Theorems 1.1 and 1.2.
Let $G$ be a smooth algebraic group over $k$. Let $X$ be a variety over $k$, by which we mean a reduced $k$-scheme of finite type. Suppose $X$ has a left action of $G$. Recall that the quotient stack $[G\backslash X]$ is defined as follows: If $S$ is a $k$-scheme, then the objects of the category $[G\backslash X](S)$ are pairs $(T, f)$, where $T$ is a left $G$-torsor over $S$ in the étale topology, and $f: T \to X_S$ is a $G_S$-equivariant morphism of $S$-schemes. A morphism $(T, f) \to (T', f')$ in $[G\backslash X](S)$ is an isomorphism of $G$-torsors $\varphi: T \xrightarrow{\sim} T'$ such that $f = f'\varphi$. In order to calculate the point counts of such quotient stacks we will use the following notation.
Notation 2.1. Suppose $G$ is a smooth algebraic group over $k$, and let $z$ be a cocycle in $Z^1(k, G)$. Recall that this means that $z$ is a continuous map $z: \mathrm{Gal}(\bar{k}/k) \to G(\bar{k})$ (where the right hand side has the discrete topology) for which the following equation is satisfied for all $\gamma, \gamma' \in \mathrm{Gal}(\bar{k}/k)$:
$$z(\gamma\gamma') = z(\gamma) \cdot {}^{\gamma}z(\gamma'). \qquad (2.2)$$
Let $X$ be a $k$-variety with a left action of $G$, and let $z$ be a cocycle in $Z^1(k, G)$. We define the twisted algebraic space $X_z$ as follows: Let $X_{z,\bar{k}}$ be isomorphic to $X_{\bar{k}}$ as $\bar{k}$-algebraic spaces with a $G_{\bar{k}}$-action via an isomorphism $\varphi_z: X_{z,\bar{k}} \xrightarrow{\sim} X_{\bar{k}}$. We define the Galois action on $X_z(\bar{k})$ by taking
$${}^{\gamma}x := \varphi_z^{-1}(z(\gamma) \cdot {}^{\gamma}\varphi_z(x))$$
for all $x \in X_{z,\bar{k}}(\bar{k})$ and all $\gamma \in \mathrm{Gal}(\bar{k}/k)$; this defines an algebraic space $X_z$ over $k$. Its isomorphism class only depends on the class of $z$ in $H^1(k, G)$. Two cases deserve special mention:
• We let $G$ act on itself on the left by defining $g \cdot x := xg^{-1}$. Then $G_z$ is a left $G$-torsor, and $H^1(k, G)$ classifies the left $G$-torsors in this way.
• We let $G$ act on itself on the left by inner automorphisms. The twist is denoted $G_{\mathrm{in}(z)}$, and this is again an algebraic group. If $X$ is a $k$-variety with a left $G$-action, then $X_z$ naturally has a left $G_{\mathrm{in}(z)}$-action.
Remark 2.3. Since the algebraic space Xz is in particular an algebraic stack,
we have a notion of the point count #Xz(k′) for any finite extension k′ of k.
Since the objects of Xz(¯k) have no nontrivial automorphisms, we can regard
Xz(k′) as a set, and its point count as the cardinality of this set.
This terminology enables us to formulate the following proposition.
Proposition 2.4. Let $k'$ be a finite extension of $k$. Let $G$ be a smooth algebraic group over $k$, and let $X$ be a $k$-variety equipped with a left action of $G$. Then
$$\#[G\backslash X](k') = \sum_{z \in H^1(k', G)} \frac{\#X_z(k')}{\#G_{\mathrm{in}(z)}(k')}.$$
Proof. It suffices to show this for $k' = k$. Let $T$ be a left $G$-torsor over $k$, and let $z \in Z^1(k, G)$ be such that $T \cong G_z$. Then the automorphism group scheme of $T$ as a left $G$-torsor is $G_{\mathrm{in}(z)}$, which acts by right multiplication on $G_z$. As such, we can define a variety $T_z$ as in Notation 2.1. This naturally has the structure of a $(G_{\mathrm{in}(z)}, G_{\mathrm{in}(z)})$-bitorsor; in fact, a straightforward calculation using Notation 2.1 shows that it is a trivial bitorsor. If $f: T \to X_k$ is a (left) $G$-equivariant map, then the map $f_{\bar{k}}: T_{\bar{k}} \to X_{\bar{k}}$ is defined over $k$ when considered as a map $T_{z,\bar{k}} \to X_{z,\bar{k}}$, and we denote the resulting map $T_z \to X_z$ by $f_z$; it is (left) $G_{\mathrm{in}(z)}$-equivariant. This gives a one-to-one correspondence between $\mathrm{Hom}_G(T, X)$ and $\mathrm{Hom}_{G_{\mathrm{in}(z)}}(T_z, X_z)$. Let $t_0$ be an element of $T_z(k)$, which exists since $T_z$ is a trivial $G_{\mathrm{in}(z)}$-torsor. We may identify the sets $\mathrm{Hom}_{G_{\mathrm{in}(z)}}(T_z, X_z)$ and $X_z(k)$ by identifying a map with its image of $t_0$, and two maps $f_z, f'_z \in \mathrm{Hom}_{G_{\mathrm{in}(z)}}(T_z, X_z)$ correspond to isomorphic objects $(T, f), (T, f')$ in $[G\backslash X](k)$ if and only if $f_z(t_0)$ and $f'_z(t_0)$ are in the same $G_{\mathrm{in}(z)}(k)$-orbit in $X_z(k)$. On the other hand, the automorphism group of $(T, f)$ is identified with $\mathrm{Stab}_{G_{\mathrm{in}(z)}(k)}(f_z(t_0))$. From the orbit-stabiliser formula we find
$$\sum_{\substack{(T', f') \in [[G\backslash X](k)],\\ T' \cong T}} \frac{1}{\#\mathrm{Aut}(T', f')} = \sum_{x \in G_{\mathrm{in}(z)}(k)\backslash X_z(k)} \frac{1}{\#\mathrm{Stab}_{G_{\mathrm{in}(z)}(k)}(x)} = \frac{\#X_z(k)}{\#G_{\mathrm{in}(z)}(k)}.$$
Summing over all cohomology classes in H1(k, G) now proves the proposition.
While Proposition 2.4 gives a direct formula for the point count of a quotient stack over a given field extension $k'$ of $k$, it is not as useful in a context where $k'$ varies, as it is a priori unclear how $H^1(k', G)$ varies with it. In Propositions 2.14 and 2.19 we give formulas for the point counts $\#[G\backslash X](k')$ that do not involve determining the cohomology set $H^1(k', G)$, under some (quite technical) conditions on $G$ and $X$. We first set up some notation.
Notation 2.5. As before let $G$ be a smooth algebraic group over $k$, and let $\gamma \in \mathrm{Gal}(\bar{k}/k)$ be the $\#k$-th power Frobenius. We let $G(\bar{k})$ act on itself on the left by defining
$$g \cdot x := g\,x\,({}^{\gamma}g)^{-1}. \qquad (2.6)$$
Its set of orbits is denoted $\mathrm{Conj}_k(G)$.
Lemma 2.7. Let $G$ be a smooth algebraic group over $k$. Let $\gamma \in \mathrm{Gal}(\bar{k}/k)$ be the $\#k$-th power Frobenius. Then the map
$$Z^1(k, G) \to G(\bar{k}), \qquad z \mapsto z(\gamma)$$
is a bijection, and it induces a bijection $H^1(k, G) \xrightarrow{\sim} \mathrm{Conj}_k(G)$.
Proof. Let Γ be the Galois group Gal(¯k/k). Since hγi ⊂ Γ is a dense subgroup, the map is certainly injective. To show that it is surjective, fix a g ∈ G(¯k), and define a map z : hγi → G(¯k) by
$$z(\gamma^n) = \begin{cases} g \cdot ({}^{\gamma}g) \cdots ({}^{\gamma^{n-1}}g), & \text{if } n \geq 0; \\ ({}^{\gamma^{-1}}g^{-1}) \cdots ({}^{\gamma^{n}}g^{-1}), & \text{if } n < 0. \end{cases}$$
This satisfies the cocycle condition (2.2) on hγi. Let e be the unit element of G(¯k). To show that we can extend z continuously to Γ, we claim that there is an integer n such that z(γN[) = e for all
N ∈ nZ. To see this, let k]′ [be]
a finite extension of k such that g ∈ G(k′[). Then from the definition of the]
map z we see that z maps hγi to G(k′[). The latter is a finite group, and hence]
there must be two nonnegative integers $m < m'$ such that $z(\gamma^m) = z(\gamma^{m'})$. Set $n = m' - m$. From the definition of $z$ we see that
$$z(\gamma^{m'}) = z(\gamma^m) \cdot ({}^{\gamma^m}g) \cdots ({}^{\gamma^{m'-1}}g),$$
hence $({}^{\gamma^m}g) \cdots ({}^{\gamma^{m'-1}}g) = e$; but the left hand side of this is equal to ${}^{\gamma^m}z(\gamma^n)$, hence $z(\gamma^n) = e$. The cocycle condition (2.2) now tells us that $z(\gamma^N) = e$ for every multiple $N$ of $n$; furthermore, we see that for general $f \in \mathbb{Z}$ the value $z(\gamma^f)$ only depends on $\bar{f} \in \mathbb{Z}/n\mathbb{Z}$. Hence we can extend $z$ to all of $\Gamma$ via the composite map
$$\Gamma \twoheadrightarrow \Gamma/n\Gamma \xrightarrow{\sim} \langle\gamma\rangle/\langle\gamma^n\rangle \xrightarrow{z} G(\bar{k}),$$
and this is an element of $Z^1(k, G)$ that sends $\gamma$ to $g$; hence the map in the lemma is surjective, as was to be shown. This map is also $G(\bar{k})$-equivariant with respect to the actions that give rise to the quotients $H^1(k, G)$ and $\mathrm{Conj}_k(G)$, which proves the second statement of the lemma.
Recall that the classifying stack of an algebraic group G is defined to be B(G) := [G\∗], where ∗ = Spec(k) (with the trivial G-action).
Lemma 2.8. Let $G$ be a finite étale group scheme over $k$. Then for every finite extension $k'$ of $k$ we have $\#B(G)(k') = 1$.
Proof. It suffices to show this for $k = k'$. The category $B(G)(k)$ is the category of $G$-torsors over $k$; its objects are classified by $H^1(k, G)$. Let $\gamma \in \mathrm{Gal}(\bar{k}/k)$ be the $\#k$-th power Frobenius, and let $z \in H^1(k, G)$. Then the automorphism group (as an abstract group) of the torsor $G_z$ is equal to $G_{\mathrm{in}(z)}(k)$, which equals
$$G_{\mathrm{in}(z)}(k) \cong \{g \in G(\bar{k}) : g = z(\gamma) \cdot {}^{\gamma}g \cdot z(\gamma)^{-1}\} = \{g \in G(\bar{k}) : z(\gamma) = g \cdot z(\gamma) \cdot ({}^{\gamma}g)^{-1}\} = \mathrm{Stab}_{G(\bar{k})}(z(\gamma)),$$
where the action of $G(\bar{k})$ on itself in the last line is the one in (2.6). For every orbit $C \in \mathrm{Conj}_k(G)$ choose an element $x_C \in C$; then the orbit-stabiliser formula and Lemma 2.7 yield
$$\sum_{z \in H^1(k,G)} \frac{1}{\#\mathrm{Aut}(G_z)} = \sum_{C \in \mathrm{Conj}_k(G)} \frac{1}{\#\mathrm{Stab}_{G(\bar{k})}(x_C)} = \sum_{C \in \mathrm{Conj}_k(G)} \frac{\#C}{\#G(\bar{k})} = 1.$$
Lemma 2.9. Let 1 → A → B → C → 1 be a short exact sequence of smooth algebraic groups over k. Suppose that A is connected.
1. The natural map H1(k, B) → H1(k, C) is bijective.
2. For z ∈ H1(k, B) = H1(k, C), let Az be the twist of A induced by the
image of z under the natural map H1(k, B) → H1(k, Aut(A¯k)). Then
#Bin(z)(k) = #Az(k) · #Cin(z)(k).
Proof. The short exact sequence of algebraic groups over k 1 → A → B → C → 1
induces an exact sequence of pointed cohomology sets
1 → A(k) → B(k) → C(k) → H1(k, A) → H1(k, B) → H1(k, C). From Lang’s theorem we know that H1(k, A) is trivial. By [16, III.2.4.2 Cor. 2] the last map is surjective, so by exactness it is bijective,
which proves the first statement. Furthermore for a z ∈ H1(k, B) the inclusion map Az(¯k) →
Bin(z)(¯k) is Galois-equivariant, and the quotient of Bin(z)(¯k) by the image of
this map is isomorphic to Cin(z)(¯k). This shows that we get a twisted short
exact sequence
Since Az is connected, we find H1(k, Az) = 1, and then a long exact sequence
analogous to the one above proves the second statement.
Definition 2.10. Let $X$ be an algebraic stack over a field $k$. Let $k' \subset k''$ be two field extensions of $k$, and let $x \in X(k'')$. Then a model of $x$ over $k'$ is an object $y \in X(k')$ such that $y_{k''} \cong x$.
Lemma 2.11. Let $G$ be a smooth algebraic group over $k$, and let $X$ be a variety over $k$. Then there is a bijection $[[G\backslash X](\bar{k})] \xrightarrow{\sim} G(\bar{k})\backslash X(\bar{k})$ with the following property: let $k'$ be a finite extension of $k$, and let $\xi$ be an element of $G(\bar{k})\backslash X(\bar{k})$, corresponding to a $(T, f) \in [G\backslash X](\bar{k})$. Then $(T, f)$ has a model over $k'$ if and only if $\xi$ is fixed under the action of $\mathrm{Gal}(\bar{k}/k')$ on $G(\bar{k})\backslash X(\bar{k})$.
Proof. Over ¯k every torsor is trivial, and a G-equivariant map f : G¯k→ X¯k is
determined by its image of the unit element e ∈ G(¯k). Furthermore, two maps f, f′[: G]
k→ Xk¯ yield isomorphic elements (G¯k, f ), (G¯k, f′) of [G\X](¯k) if and
only if f (e) and f′[(e) lie in the same G(¯][k)-orbit. Since f (G(¯][k)) is a G(¯][k)-orbit]
in X(¯k), we get a bijection:
$$\Phi: [[G\backslash X](\bar{k})] \xrightarrow{\sim} G(\bar{k})\backslash X(\bar{k}), \qquad (G_{\bar{k}}, f) \mapsto f(G(\bar{k})). \qquad (2.12)$$
Now suppose (T, f ) is an element of [G\X](k′[). Then f : T (¯][k) → X(¯][k) is]
Gal(¯k/k′[)-equivariant. Hence ξ := f (T (¯][k)) is an element of G(¯][k)\X(¯][k) that]
is invariant under the action of Gal(¯k/k′[). On the other hand, suppose a]
ξ ∈ G(¯k)\X(¯k) is Gal(¯k/k′[)-invariant. Let γ ∈ Gal(¯][k/k]′[) be the #k]′[-th power]
Frobenius. Let x ∈ ξ; then there exists a g ∈ G(¯k) such that g · γ(x) = x. Let z ∈ Z1(k′[, G) be the unique cocycle such that z(γ) = g as in Lemma 2.7. Then]
the G-equivariant map
G[k]¯→ X[k]¯
g 7→ g · x descends to a G-equivariant map of k′[-varieties f : G]
z→ Xk′(where we identify
Gz,¯k with G¯k via ϕz as in Notation 2.1), and Φ(Gz, f ) = ξ.
Remark 2.13. Let ξ be a G(¯k)-orbit in X(¯k), and let x be an element of ξ. Then the automorphism group of the object of [G\X](¯k) corresponding to ξ by Lemma 2.11 is isomorphic to StabG¯k(x). In
particular its isomorphism class
does not depend on the choice of x in ξ. We write A(ξ) for the algebraic group StabG¯k(x) over ¯k.
While in general the point count #Gin(z)(k) depends on the choice of the cocycle
z ∈ H1(k, G), reduced unipotent groups are always isomorphic (as varieties) to affine space. Under suitable conditions on X and G this allows us to simplify the expression in Proposition 2.4.
Proposition 2.14. Let G be an algebraic group over k. Let X be a k-variety with an action of G, such that for every $\xi \in G(\bar k)\backslash X(\bar k)$ the identity component of the algebraic group $A(\xi)_{\mathrm{red}}$ is unipotent. Define $a(\xi) := \dim(A(\xi))$, and $Y := G(\bar k)\backslash X(\bar k)$.
1. Let $k'$ be a finite field extension of k. Then
$$\#[G\backslash X](k') = \sum_{\xi \in Y^{\mathrm{Gal}(\bar k/k')}} (\#k')^{-a(\xi)}.$$
2. Write $k = \mathbb{F}_q$ and suppose that $Y := G(\bar k)\backslash X(\bar k)$ is finite. Let $\Gamma := \mathrm{Gal}(\bar{\mathbb{F}}_q/\mathbb{F}_q)$, and for $\xi \in Y$, let $b(\xi)$ be the cardinality of the orbit $\Gamma \cdot \xi$ in Y. Then $a, b\colon Y \to \mathbb{Z}_{\geq 0}$ are $\Gamma$-invariant, and
$$Z([G\backslash X], t) = \prod_{\bar\xi \in \Gamma\backslash Y} \bigl(1 - (q^{-a(\bar\xi)} t)^{b(\bar\xi)}\bigr)^{-1}.$$
Proof. 1. As before it suffices to show this for k = k′[. Let Φ be as in]
(2.12). We may then define the full subcategory S(ξ) of [G\X](k), the isomorphism classes of whose objects form the set
$$\{\, x \in [[G\backslash X](k)] : x_{\bar k} = \Phi^{-1}(\xi) \,\}.$$
By Lemma 2.11 this category is nonempty if and only if $\xi \in (G(\bar k)\backslash X(\bar k))^{\mathrm{Gal}(\bar k/k)}$. Suppose this is true for $\xi$, and let $x_0$ be an object of $S(\xi)$. Then the algebraic group $\mathrm{Aut}(x_0)$ is a k-form of $A(\xi)$. By [6, Thm. III.2.5.1] $S(\xi)$ is equivalent to the category $B(\mathrm{Aut}(x_0))(k)$; its elements are classified by $H^1(k, \mathrm{Aut}(x_0)) = H^1(k, \mathrm{Aut}(x_0)_{\mathrm{red}})$. Write $L := \mathrm{Aut}(x_0)_{\mathrm{red}}$; we now find for the point count
$$\#S(\xi) := \sum_{x \in [S(\xi)]} \frac{1}{\#\mathrm{Aut}(x)} = \sum_{z \in H^1(k,L)} \frac{1}{\#L_{\mathrm{in}(z)}(k)}. \tag{2.15}$$
Let $L^0$ be the identity component of L; this is a connected unipotent group of dimension $\dim(A(\xi))$. Let $\pi_0(L)$ be the component group of L. By Lemma 2.9, applied to the short exact sequence $1 \to L^0 \to L \to \pi_0(L) \to 1$, we see that the natural map $H^1(k, L) \to H^1(k, \pi_0(L))$ is a bijection. On the other hand, let $z \in H^1(k, L)$; then the same lemma tells us that
$$\#L_{\mathrm{in}(z)}(k) = (\#L^0_{\mathrm{in}(z)}(k)) \cdot (\#\pi_0(L_{\mathrm{in}(z)})(k)). \tag{2.16}$$
By [15, Thm. 5] we get an equality
$$\#L^0_{\mathrm{in}(z)}(k) = (\#k)^{a(\xi)}, \tag{2.17}$$
which does not depend on the choice of z. Furthermore, if we identify $H^1(k, L)$ and $H^1(k, \pi_0(L))$ as above, we find $\pi_0(L_{\mathrm{in}(z)}) \cong \pi_0(L)_{\mathrm{in}(z)}$. Applying Lemma 2.8 to the finite étale group scheme $\pi_0(L)$ yields
$$\sum_{z \in H^1(k,\pi_0(L))} \frac{1}{\#\pi_0(L)_{\mathrm{in}(z)}(k)} = \#B(\pi_0(L)) = 1. \tag{2.18}$$
Combining (2.15), (2.16), (2.17), and (2.18) now gives us
$$\#S(\xi) = \sum_{z \in H^1(k,L)} \frac{1}{\#L_{\mathrm{in}(z)}(k)} = \sum_{z \in H^1(k,\pi_0(L))} \frac{1}{\#\pi_0(L)_{\mathrm{in}(z)}(k) \cdot (\#k)^{a(\xi)}} = (\#k)^{-a(\xi)}.$$
Summing over all $\xi \in (G(\bar k)\backslash X(\bar k))^{\mathrm{Gal}(\bar k/k)}$ now proves the statement.
2. From the definition it is clear that b is $\Gamma$-invariant. To see that a is $\Gamma$-invariant, note that $a(\xi) = \dim(G) - \dim(\xi)$ (remember that $\xi$ is a G-orbit in X), and note that $\dim(\gamma \cdot \xi) = \dim(\xi)$ for all $\gamma \in \Gamma$. For a $\xi \in Y$ the corresponding object in $[G\backslash X](\bar k)$ has a model over $\mathbb{F}_{q^v}$ if and only if $b(\xi) \mid v$. As such we find
$$Z([G\backslash X], t) = \exp\Bigl(\sum_{v \geq 1} \frac{t^v}{v}\,\#[G\backslash X](\mathbb{F}_{q^v})\Bigr) = \exp\Bigl(\sum_{v \geq 1} \frac{t^v}{v} \sum_{\xi \in Y^{\mathrm{Gal}(\bar{\mathbb{F}}_q/\mathbb{F}_{q^v})}} q^{-a(\xi)v}\Bigr) = \exp\Bigl(\sum_{v \geq 1} \sum_{\xi \in Y:\, b(\xi)\mid v} \frac{(q^{-a(\xi)}t)^v}{v}\Bigr)$$
$$= \exp\Bigl(\sum_{\xi \in Y} \sum_{w \geq 1} \frac{(q^{-a(\xi)}t)^{b(\xi)w}}{b(\xi)w}\Bigr) = \prod_{\xi \in Y} \Bigl(\exp\sum_{w \geq 1} \frac{(q^{-a(\xi)}t)^{b(\xi)w}}{w}\Bigr)^{\frac{1}{b(\xi)}} = \prod_{\xi \in Y} \bigl(1 - (q^{-a(\xi)}t)^{b(\xi)}\bigr)^{-\frac{1}{b(\xi)}} = \prod_{\bar\xi \in \Gamma\backslash Y} \bigl(1 - (q^{-a(\bar\xi)}t)^{b(\bar\xi)}\bigr)^{-1}.$$
Proposition 2.19. Let G be a smooth algebraic group over k with a unipotent identity component. Let X be a variety over k isomorphic to $\mathbb{A}^n_k$ for some nonnegative integer n. Suppose that the action of G on X factors through a connected group $\tilde G$. Let $k'$ be a finite field extension of k. Then
$$\#[G\backslash X](k') = (\#k')^{\dim(X)-\dim(G)}.$$
If $k = \mathbb{F}_q$, then
$$Z([G\backslash X], t) = (1 - q^{\dim(X)-\dim(G)}t)^{-1}.$$
Proof. As for the first statement, it suffices to prove it for $k' = k$. Lang's theorem tells us that $H^1(k, \tilde G) = 1$. Since the action of G on X factors through $\tilde G$, we find that $X_z \cong X$ for all $z \in H^1(k, G)$. If we denote the identity component of G by $G^0$ and its component group by $\pi_0(G)$, and apply Lemma 2.9 to the short exact sequence $1 \to G^0 \to G \to \pi_0(G) \to 1$, we get the following from Proposition 2.4 and Lemma 2.8:
$$\#[G\backslash X](k) = \sum_{z \in H^1(k,G)} \frac{\#X_z(k)}{\#G_{\mathrm{in}(z)}(k)} = \sum_{z \in H^1(k,\pi_0(G))} \frac{\#X(k)}{\#G^0_z(k) \cdot \#\pi_0(G)_{\mathrm{in}(z)}(k)} = (\#k)^{\dim(X)-\dim(G)} \cdot \sum_{z \in H^1(k,\pi_0(G))} \frac{1}{\#\pi_0(G)_{\mathrm{in}(z)}(k)} = (\#k)^{\dim(X)-\dim(G)} \cdot \#B(\pi_0(G))(k) = (\#k)^{\dim(X)-\dim(G)}.$$
The statement on the zeta function is then a straightforward calculation.

3 Weyl groups and Levi decompositions
In this section we briefly review some relevant facts about Weyl groups and Levi decompositions, in particular those of nonconnected reductive groups.

3.1 The Weyl group of a connected reductive group
Let G be a connected reductive algebraic group over a field k. For any pair (T, B) of a Borel subgroup B ⊂ G[k]¯ and a maximal torus T ⊂ B, let
ΦT,B be the based root system of G with respect to (T, B), and let WT,B
be the Weyl group of this based root system, i.e. the Coxeter group gen-erated by the set ST,B of simple reflections. As an abstract group WT,B
is isomorphic to NormG(¯k)(T (¯k))/T (¯k). If (T′, B′) is another choice of a
Borel subgroup and a maximal torus, then there exists a g ∈ G(¯k) such that (T′[, B]′[) = (gT g]−1[, gBg]−1[). Furthermore, such a g is unique up to right ]
multi-plication by T (¯k), which gives us a unique isomorphism ΦT,B −∼→ ΦT′[,B]′. As
such, we can simply talk about the based root system Φ of G, with correspond-ing Coxeter system (W, S). By these canonical identifications Φ, W and S come equipped with an action of Gal(¯k/k).
The set of parabolic subgroups of $G_{\bar k}$ containing B is classified by the power set of S, by associating to $I \subset S$ the parabolic subgroup $P = L \cdot B$, where L is the reductive group with maximal torus T whose root system is $\Phi_I$, the root subsystem of $\Phi$ generated by the roots whose associated reflections lie in I. We call I the type of P. Let $U := R_u P$ be the unipotent radical of P; then $P = L \ltimes U$.
For every subset $I \subset S$, let $W_I$ be the subgroup of W generated by I; it is the Weyl group of the root system $\Phi_I$, with I as its set of simple reflections.
For $w \in W$, define the length $\ell(w)$ of w to be the minimal integer such that there exist $s_1, s_2, \ldots, s_{\ell(w)} \in S$ with $w = s_1 s_2 \cdots s_{\ell(w)}$. Since $\mathrm{Gal}(\bar k/k)$ acts on W by permuting S, the length is Galois invariant. Let $I, J \subset S$; then every (left, double, right) coset $W_I w$, $W_I w W_J$ or $w W_J$ has a unique element of minimal length, and we denote the subsets of W of elements of minimal length in their (left, double, right) cosets by ${}^I W$, ${}^I W^J$, and $W^J$.
Proposition 3.1. (See [3, Prop. 4.18]) Let $I, J \subset S$. Let $x \in {}^I W^J$, and set $I_x = J \cap x^{-1} I x \subset W$. Then for every $w \in W_I x W_J$ there exist unique $w_I \in W_I$, $w_J \in {}^{I_x} W_J$ such that $w = w_I x w_J$. Furthermore $\ell(w) = \ell(w_I) + \ell(x) + \ell(w_J)$.
Lemma 3.2. (See [14, Prop. 2.8]) Let $I, J \subset S$. Every element $w \in {}^I W$ can uniquely be written as $x w_J$ for some $x \in {}^I W^J$ and $w_J \in {}^{I_x} W_J$.
Lemma 3.3. (See [14, Lem. 2.13]) Let $I, J \subset S$. Let $w \in {}^I W$ and write $w = x w_J$ with $x \in {}^I W^J$, $w_J \in W_J$. Then
$$\ell(x) = \#\{\alpha \in \Phi^+ \setminus \Phi_J : w\alpha \in \Phi^- \setminus \Phi_I\}.$$
3.2 The Weyl group of a nonconnected reductive group
Now let us drop the assumption that our group is connected. Let $\hat G$ be a reductive algebraic group and write G for its connected component. Let B be a Borel subgroup of $G_{\bar k}$, and let T be a maximal torus of B. Define the following groups:
$$W = \mathrm{Norm}_{G(\bar k)}(T)/T(\bar k); \qquad \hat W = \mathrm{Norm}_{\hat G(\bar k)}(T)/T(\bar k); \qquad \Omega = (\mathrm{Norm}_{\hat G(\bar k)}(T) \cap \mathrm{Norm}_{\hat G(\bar k)}(B))/T(\bar k).$$
Lemma 3.4. 1. One has $\hat W = W \rtimes \Omega$.
2. The composite map $\Omega \hookrightarrow \hat G(\bar k)/T(\bar k) \twoheadrightarrow \pi_0(\hat G)(\bar k)$ is an isomorphism of groups.
Proof. 1. First note that W is a normal subgroup of ˆW , since it consists of the elements of ˆW that have a representative in G(¯k), and G is a normal subgroup of ˆG. Furthermore, ˆW acts on the set
X of Borel subgroups
of G¯k containing T . The stabiliser of B under this action is Ω, whereas
W acts simply transitively on X; hence Ω ∩ W = 1 and W Ω = ˆW , and together this proves ˆW = W ⋊ Ω.
2. By the previous point, we see that
$$\Omega \cong \hat W/W \cong \mathrm{Norm}_{\hat G(\bar k)}(T)/\mathrm{Norm}_{G(\bar k)}(T),$$
so it is enough to show that every connected component of $\hat G_{\bar k}$ has an element that normalises T. Let $x \in \hat G(\bar k)$; then $xTx^{-1}$ is another maximal torus of $G_{\bar k}$, so there exists a $g \in G(\bar k)$ such that $xTx^{-1} = gTg^{-1}$. From this we find that $T = (g^{-1}x)T(g^{-1}x)^{-1}$, and $g^{-1}x$ is in the same connected component as x.
We call ˆW the Weyl group of ˆG with respect to (T, B). Again, choosing a different (T, B) leads to a canonical isomorphism, so we may as well talk about the Weyl group of ˆG. The two statements of
Lemma 3.4 are then to be un-derstood as isomorphisms of groups with an action of Gal(¯k/k). Note that we can regard W as the Weyl group of the connected reductive group G; as such we can apply the
results of the previous subsection to it. Let S ⊂ W be the generating set of simple reflections.
Now let us define an extension of the length function to a suitable subset of $\hat W$. First, let I and J be subsets of the set S of simple reflections in W, and consider the set ${}^I \hat W := {}^I W\,\Omega$. Define a subset ${}^I \hat W^J$ of ${}^I \hat W$ as follows: every element $w \in {}^I \hat W$ can uniquely be written as $w = w'\omega$, with $w' \in {}^I W$ and $\omega \in \Omega$. We rewrite this as $w = \omega w''$, with $w'' = \omega^{-1} w' \omega \in {}^{\omega^{-1}I\omega} W$; then per definition $w \in {}^I \hat W^J$ if and only if $w'' \in {}^{\omega^{-1}I\omega} W^J$. Note that the set ${}^I W^J$ is a subset of the set ${}^I \hat W^J$.
Now let $w \in {}^I \hat W$; write $w = \omega w''$ with $\omega \in \Omega$ and $w'' \in {}^{\omega^{-1}I\omega} W$ as above. Since $w''$ is an element of ${}^{\omega^{-1}I\omega} W$, we can uniquely write $w'' = y w_J$ by Lemma 3.2, with $y \in {}^{\omega^{-1}I\omega} W^J$ and $w_J \in {}^{I_{\omega y}} W_J$. Then define the extended length function $\ell_{I,J}\colon {}^I \hat W \to \mathbb{Z}_{\geq 0}$ by
$$\ell_{I,J}(w) := \#\{\alpha \in \Phi^+ \setminus \Phi_J : \omega y \alpha \in \Phi^- \setminus \Phi_I\} + \ell(w_J). \tag{3.5}$$
Remark 3.6. 1. By Proposition 3.1 and Lemma 3.3 the map $\ell_{I,J}\colon {}^I \hat W \to \mathbb{Z}_{\geq 0}$ extends the length function $\ell\colon {}^I W \to \mathbb{Z}_{\geq 0}$.
2. Analogously to Proposition 3.1 we see that every $w \in {}^I \hat W$ can be uniquely written as $w = x w_J$ with $x \in {}^I \hat W^J$ and $w_J \in {}^{I_x} W_J$, and then $\ell_{I,J}(w) = \ell_{I,J}(x) + \ell(w_J)$.
3. In general $\ell_{I,J}$ depends on J. It also depends on I, in the sense that if $I, I' \subset S$, then $\ell_{I,J}(w)$ and $\ell_{I',J}(w)$ for $w \in {}^I \hat W \cap {}^{I'} \hat W = {}^{I \cap I'} \hat W$ need not coincide. As an example, consider over any field the group $G = \mathrm{SL}_2$. Let $\Omega = \langle\omega\rangle$ be cyclic of order 2, and let $\hat G = G \rtimes \Omega$ be the extension given by $\omega g \omega^{-1} = (g^{\mathrm T})^{-1}$. Then $\omega$ acts as $-1$ on the root system, and S has only one element. A straightforward calculation shows $\ell_{\emptyset,\emptyset}(\omega) = 1$, whereas $\ell_{\emptyset,S}(\omega) = \ell_{S,S}(\omega) = \ell_{S,\emptyset}(\omega) = 0$.
3.3 Levi decomposition of nonconnected groups
Let P be a connected smooth linear algebraic group over a field k. A Levi subgroup of P is the image of a section of the map $P \twoheadrightarrow P/R_u P$, i.e. a subgroup $L \subset P$ such that $P = L \ltimes R_u P$. In characteristic p, such a Levi subgroup need not always exist, nor need it be unique. However, if P is a parabolic subgroup of a connected reductive algebraic group, then for every maximal torus $T \subset P$ there exists a unique Levi subgroup of P containing T (see [4, Prop. 1.17]). The following proposition generalises this result to the nonconnected case.
Proposition 3.7. Let $\hat G$ be a reductive group over a field k, and let $\hat P$ be a subgroup of $\hat G$ whose identity component P is a parabolic subgroup of G. Let T be a maximal torus of P. Then there exists a unique Levi subgroup of $\hat P$ containing T, i.e. a subgroup $\hat L \subset \hat P$ such that $\hat P = \hat L \ltimes R_u P$.
Proof. Let L be the Levi subgroup of P containing T . Then any ˆL satisfying the conditions of the proposition necessarily has L as its identity component, hence
L ⊂ Norm[P]ˆ(L). On the other hand we know that NormP(L) = L, so the only
possibility is ˆL = NormPˆ(L), and we have to check that π0(NormPˆ(L)) = π0( ˆP ),
i.e. that every connected component in ˆP¯[k] has an element normalising L. Let
x ∈ ˆP (¯k). Then xT x−1 [is another maximal torus of P] ¯
k, so there exists a
y ∈ P (¯k) such that xT x−1[= yT y]−1[. Then y]−1[x is in the same connected ]
com-ponent as x, and (y−1[x)T (y]−1[x)]−1[= T . Since L is the unique Levi subgroup]
of P containing T , and (y−1[x)L(y]−1[x)]−1 [is another Levi subgroup of P , we]
see that y−1[x normalises L, which completes the proof.]
4 G-zips
In this section we give the definition of G-zips from [13] along with their clas-sification and their connection to BT1. We will need the discussion on Weyl
groups from Subsection 3.2. As before, we denote the component group of a nonconnected algebraic group A by π0(A).
Let q0 be a power of p. Let ˆG be a reductive group over Fq0, and write G for
its identity component. Let q be a power of q0, and let χ : Gm,Fq → GFq be a
cocharacter of G[F]q. Let L = CentGFq(χ), and let U+ ⊂ GFq be the unipotent
subgroup defined by the property that Lie(U+) ⊂ Lie(GFq) is the direct sum
of the weight spaces of positive weight; define U− similarly. Note that L is
connected (see [4, Prop. 0.34]). This defines parabolic subgroups P±= L ⋉ U±
of GFq. Now take an Fq-subgroup scheme Θ of π0(CentGˆFq(χ)), and let ˆL be
the inverse image of Θ under the canonical map CentGˆ[Fq](χ) → π0(CentGˆ[Fq](χ));
then ˆL has L as its identity component and π0( ˆL) = Θ. We may regard Θ as
a subgroup of $\pi_0(\hat G)$ via the inclusion
$$\pi_0(\mathrm{Cent}_{\hat G_{\mathbb{F}_q}}(\chi)) = \mathrm{Cent}_{\hat G_{\mathbb{F}_q}}(\chi)/L \hookrightarrow \pi_0(\hat G_{\mathbb{F}_q}).$$
We may then define the algebraic subgroups ˆP± := ˆL ⋉ U± of ˆGFq, whose
identity components P± are equal to L ⋉ U±. Let γ ∈ Gal(¯Fq0/Fq0) be the
q0-th power Frobenius. Then ˆG and ˆGγ are canonically isomorphic; as such
we can regard ˆP±,γ, ˆL±,γ, etc. as subgroups of ˆG. They correspond to the
parabolic and Levi subgroups associated to the cocharacter ϕ ◦ χ of ˆGk and
the subgroup ϕ(Θ) of π0( ˆG), where ϕ : ˆG → ˆG is the relative q0-th Frobenius
Definition 4.1. Let A be an algebraic group over a field k, and let B be a subgroup of A. Let T be an A-torsor over some k-scheme S. A B-subtorsor of T is an S-subscheme Y of T , together with an
action of BS, such that Y is a
B-torsor over S and such that the inclusion map Y ֒→ T is equivariant under the action of BS.
Definition 4.2. Let S be a scheme over Fq. A ˆG-zip of type (χ, Θ) over S is
a tuple Y = (Y, Y+, Y−, υ) consisting of:
• A right- ˆGFq-torsor Y over S;
• A right- ˆP+-subtorsor Y+ of Y ;
• A right- ˆP−,γ-subtorsor Y− of Y ;
• An isomorphism υ : Y+,γ/U+,γ−∼→ Y−/U−,γ of right- ˆLγ-torsors.
Together with the obvious notions of pullbacks and morphisms we get a fibred category ˆG-Zipχ,Θ[F][q] over Fq. If ˆG is connected there is no choice for Θ, and we
Proposition4.3. (See [13, Prop. 3.2 & 3.11]) The fibred category ˆG-Zipχ,Θ[F][q] is a smooth algebraic stack of finite type over Fq.
Now let q0, q, ˆG, χ, Θ, ˆL, U± and ˆP± be as above. As in subsection 3.2 let
W = W ⋊ Ω be the Weyl group of ˆG. Let I ⊂ S be the type of P+ and
let J be the type of P−,γ. If w0 ∈ W is the unique longest word, then J =
γ(w0Iw0−1) = w0γ(I)w−10 . Let w1 ∈JWγ(I) be the element of minimal length
in WJw0Wγ(I), and let w2 = γ−1(w1); then we may write this relation as
J = γ(w2Iw−12 ) = w1γ(I)w1−1.
The group Θ can be considered as a subgroup of Ω ∼= π0( ˆG). Let ˆψ be the
automorphism of ˆW given by ˆψ = inn(w1) ◦ γ = γ ◦ inn(w2), and let Θ act on
ˆ W by
θ · w := θw ˆψ(θ)−1.
Lemma 4.4. The subset ${}^I \hat W \subset \hat W$ is invariant under the $\Theta$-action.
Proof. Since $\hat L$ normalises the parabolic subgroup $P_+$ of $G_{\mathbb{F}_q}$, the subset $I \subset S$ is stable under the action of $\Theta$ by conjugation; hence for each $\theta \in \Theta$ one has $\theta({}^I W)\theta^{-1} = {}^I W$, so
$$\theta({}^I \hat W)\hat\psi(\theta)^{-1} = (\theta({}^I W)\theta^{-1}) \cdot (\theta\,\Omega\,\hat\psi(\theta)^{-1}) = {}^I W\,\Omega = {}^I \hat W.$$
Let us write $\Xi^{\chi,\Theta} := \Theta\backslash{}^I \hat W$.
Proposition 4.5. (See [13, Rem. 3.21]) There is a natural bijection between the sets $\Xi^{\chi,\Theta}$ and $[\hat G\text{-Zip}^{\chi,\Theta}_{\mathbb{F}_q}(\bar{\mathbb{F}}_q)]$.
This bijection can be described as follows. Choose a Borel subgroup B of G¯[F][q] contained in P−,γ, and let T be a maximal torus of B. Let γ ∈ G(¯Fq)
be such that (γBγ−1[)]
γ = B and (γT γ−1)γ = T . For every w ∈ ˆW =
Norm[G(¯]ˆ[F][q][)](T )/T (¯Fq), choose a lift ˙w to Norm[G(¯]ˆ[F][q][)](T ), and set g = γ ˙w2. Then
ξ ∈ Ξχ,Θ [corresponds to the ˆ][G-zip Y]
w = ( ˆG, ˆP+, g ˙w ˆP−,γ, g ˙w·) for any
repre-sentative w ∈I[W of ξ; its isomorphism class does not depend on the choice][ˆ]
of the representatives w and ˙w. Note that this description differs from the one given in [13, Rem. 3.21], as that description seems to be wrong. Since there it is assumed that B ⊂ P−,K rather than
that B ⊂ P−,γ,K, the choice
of (B, T, g) presented there will not be a frame for the connected zip datum (GK, P+,K, P−,γ,K, ϕ : LK → Lγ,K). Also, the choice for g given there needs
to be modified to account for the fact that P+,K and P−,γ,K might not have a
The rest of this subsection is dedicated to the extended length functions ℓI,J
de-fined in Subsection 3.2. We need Lemma 4.6 in order to show a result on the di-mension of the automorphism group of a ˆG-zip that extends [13, Prop. 3.34(a)] to the nonconnected case (see
Proposition 5.7.2).
Lemma 4.6. The length function $\ell_{I,J}\colon {}^I \hat W \to \mathbb{Z}_{\geq 0}$ is invariant under the semilinear conjugation action of $\Theta$.
Proof. Let $w \in {}^I \hat W$, let $\theta \in \Theta$, and let $\tilde w = \theta w\hat\psi(\theta)^{-1}$. Let $w = \omega y w_J$ be the decomposition as in subsection 3.2. A straightforward computation shows $\tilde w = \tilde\omega\tilde w'' = \tilde\omega\tilde y\tilde w_J$ with
$$\tilde\omega = \theta\omega\hat\psi(\theta)^{-1} \in \Omega; \quad \tilde w'' = \hat\psi(\theta)w''\hat\psi(\theta)^{-1} \in {}^{\tilde\omega^{-1}I\tilde\omega}W; \quad \tilde y = \hat\psi(\theta)y\hat\psi(\theta)^{-1} \in {}^{\tilde\omega^{-1}I\tilde\omega}W^J; \quad \tilde w_J = \hat\psi(\theta)w_J\hat\psi(\theta)^{-1} \in {}^{I_{\tilde\omega\tilde y}}W_J,$$
since conjugation by $\hat\psi(\theta)$ fixes J. Furthermore, $\hat\psi(\Theta)$ fixes $\Phi_J$ (as a subset of $\Phi$) and $\Theta$ fixes $\Phi_I$, and $\Omega$ fixes $\Phi^+$ and $\Phi^-$, hence
$$\ell_{I,J}(\tilde w) = \#\{\alpha \in \Phi^+\setminus\Phi_J : \tilde\omega\tilde y\alpha \in \Phi^-\setminus\Phi_I\} + \ell(\tilde w_J) = \#\{\alpha \in \Phi^+\setminus\Phi_J : \theta\omega y\hat\psi(\theta)^{-1}\alpha \in \Phi^-\setminus\Phi_I\} + \ell(\tilde w_J) = \#\{\alpha \in \Phi^+\setminus\Phi_J : \omega y\alpha \in \Phi^-\setminus\Phi_I\} + \ell(w_J) = \ell_{I,J}(w).$$
Example 4.7. Let p be an odd prime, let V be the $\mathbb{F}_p$-vector space $\mathbb{F}_p^4$, and let $\psi$ be the symmetric nondegenerate bilinear form on V given by the matrix
$$\begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix}.$$
Let $\hat G$ be the algebraic group $\mathrm{O}(V, \psi)$ over $\mathbb{F}_p$; it has two connected components. The Weyl group W of its identity component $G = \mathrm{SO}(V, \psi)$ is of the form $W \cong \{\pm 1\}^2$ (with trivial Galois action), and its root system is of the form $\Phi \cong \{e_1, e_2, -e_1, -e_2\}$, where the i-th factor of W acts on $\{e_i, -e_i\}$. The nontrivial element $\sigma$ of $\Omega$ permutes the two factors of W (as well as $e_1$ and $e_2$); hence $\hat W \cong \{\pm 1\}^2 \rtimes S_2$.
Let χ : Gm → G be the cocharacter that sends t to diag(t, t, t−1, t−1). Its
associated Levi factor L is isomorphic to GL2; the isomorphism is given by the
injection GL2 ֒→ ˆG that sends a g ∈ GL2 to diag(g, g−1,T). The associated
parabolic subgroup P+is the product of L with the subgroup B ⊂ ˆG of upper
triangular orthogonal matrices. The type of P+ is a singleton subset of S;
without loss of generality we may choose the isomorphism W ∼= {±1}2in such a way that P+ has type I = {(−1, 1)}. Recall that J denotes the type of the parabolic subgroup P−,γ of G. Since W is abelian
and has trivial Galois action,
the formula J = w0γ(I)w0−1shows us that J = I. Furthermore, since Cent[G]ˆ(χ)
is connected, the group Θ has to be trivial.
An element of $\hat W$ is of the form (a, b, c), with $a, b \in \{\pm 1\}$ and $c \in S_2 = \{1, \sigma\}$; then ${}^I \hat W$ is the subset of $\hat W$ consisting of elements for which $a = 1$. Also, note that $\Phi^+ \setminus \Phi_J = \{e_2\}$ and $\Phi^- \setminus \Phi_I = \{-e_2\}$, so to calculate the length function $\ell_{I,J}$ as in (3.5) we only need to determine $\ell(w_J)$ and whether $\omega y$ sends $e_2$ to $-e_2$ or not. If we use the terminology $\omega, w'', y, w_J$ from subsection 3.2, we get the following results:

w: (1, 1, 1) | (1, −1, 1) | (1, 1, σ) | (1, −1, σ)
ω: (1, 1, 1) | (1, 1, 1) | (1, 1, σ) | (1, 1, σ)
w′′: (1, 1, 1) | (1, −1, 1) | (1, 1, 1) | (−1, 1, 1)
y: (1, 1, 1) | (1, −1, 1) | (1, 1, 1) | (1, 1, 1)
w_J: (1, 1, 1) | (1, 1, 1) | (1, 1, 1) | (−1, 1, 1)
ωy e_2 = −e_2? : no | yes | no | no
ℓ(w_J): 0 | 0 | 0 | 1
ℓ_{I,J}(w): 0 | 1 | 0 | 1
5 Zeta functions of stacks of $\hat G$-zips
We fix $q_0$, $\hat G$, q, $\chi$ and $\Theta$ as in Section 4. The aim of this section is to calculate the point counts and the zeta function of the stack $\hat G\text{-Zip}^{\chi,\Theta}_{\mathbb{F}_q}$. Before proving Theorem 1.1 we first need to introduce some auxiliary results. Let $\varphi$ be as in Section 4, and let $r_\pm\colon \hat P_\pm \to \hat L$ denote the natural projection. Then we define the algebraic group E over $\mathbb{F}_q$ whose set of $\bar{\mathbb{F}}_q$-points is
$$E(\bar{\mathbb{F}}_q) = \{(y_+, y_-) \in \hat P_+(\bar{\mathbb{F}}_q) \times \hat P_-(\bar{\mathbb{F}}_q) : \varphi(r_+(y_+)) = r_-(y_-)\}.$$
Then E acts on $\hat G_{\mathbb{F}_q}$ by $(y_+, y_-) \cdot g' = y_+ g' y_-^{-1}$, and this action allows us to represent stacks of $\hat G$-zips as quotient stacks:
Proposition 5.1. (See [13, Prop. 3.11]) There is an isomorphism $\hat G\text{-Zip}^{\chi,\Theta}_{\mathbb{F}_q} \cong [E\backslash\hat G_{\mathbb{F}_q}]$ of $\mathbb{F}_q$-stacks.
The next step is to connect the quotient stack [E\ ˆGFq] to the Weyl group of
G. To make the discussion more explicit, we define the Weyl group using a maximal torus T and a Borel subgroup B of G satisfying some nice properties.
Lemma 5.2. Let B ⊂ P−,γ be a Borel subgroup
defined over Fq containing
Lγ, and let T ⊂ B be a maximal torus defined over Fq. Then there exists an
element g ∈ G(Fq) such that:
• gBg−1 [is a Borel subgroup of P]
+ containing L;
• ϕ(gT g−1[) = T .]
Proof. Let B′ [⊂ P]
+ be a Borel subgroup of G containing L. Consider the
algebraic subset
X =ng ∈ G(¯Fq) : gBg−1= B′, ϕ(gT g−1) = T
of G(¯Fq). Since NormG(B) ∩ NormG(T ) = T , we see that X forms a T -torsor
over Fq. By Lang’s theorem such a torsor is trivial, hence X has a rational point.
For the rest of this section we fix B, T , g as above, and we use T and B to define the Weyl group of ˆG.
Lemma 5.3. Choose, for every w ∈ ˆW = Norm[G(¯]ˆ[F][q][)](T (¯Fq))/T (¯Fq), a lift ˙w of
w to the group Norm[G(¯]ˆ[F][q][)](T (¯Fq)). Then the map
Ξχ,Θ→ E(¯Fq)\ ˆG(¯Fq)
Θ · w 7→ E(¯Fq) · g ˙w
is well-defined, and it is an isomorphism of $\mathrm{Gal}(\bar{\mathbb{F}}_q/\mathbb{F}_q)$-sets that does not depend on the choices of w and $\dot w$.
Proof. In [14, Thm. 10.10] it is proven that this map is a well-defined bijection independent of the choices of w and ˙w (applied to the zip datum from [13, Def. 3.6] and the frame (B, T, g) from
Lemma 5.2). Furthermore, if τ is an element of Gal(¯Fq/Fq), then the fact that T and g are defined over Fq implies
that $\tau(\dot w)$ is a lift of $\tau(w)$ to $\mathrm{Norm}_{\hat G}(T)$; this shows that the map is Galois-equivariant.
Remark 5.4. Together with the identification [[E\ ˆGFq](¯Fq)] ∼= E(¯Fq)\ ˆG(¯Fq)
from Lemma 2.11.1 the isomorphism above gives the natural bijection in Propo-sition 4.5.
The following proposition gives an explicit formula for the orbits of ˆG under the action by E. It is proven in the case that ˆG is connected in [14, Thm. 7.5c & Thm. 8.1], applied to the zip datum
from [13, Def. 3.6]. While the proof is long (it requires most of sections 3–8 of [14]), a lot of it carries over essentially unchanged to the nonconnected case. The few modifications that are needed
for the proof are discussed in Remark 5.10.
Proposition 5.5. Let $w \in {}^I \hat W$, and let $\dot w$ be a lift of w to $\mathrm{Norm}_{\hat G(\bar{\mathbb{F}}_q)}(T(\bar{\mathbb{F}}_q))$. Then the orbit $E_{\bar{\mathbb{F}}_q} \cdot (g\dot w) \subset \hat G_{\bar{\mathbb{F}}_q}$ has dimension $\dim(P_+) + \ell_{I,J}(w)$. The reduced stabiliser $\mathrm{Stab}_{E_{\bar{\mathbb{F}}_q}}(g\dot w)_{\mathrm{red}}$ has a unipotent identity component.
We are now in a position to define the functions a and b in the statement of Theorem 1.1.
Notation 5.6. Let $\Gamma = \mathrm{Gal}(\bar{\mathbb{F}}_q/\mathbb{F}_q)$. We define functions $a, b\colon {}^I \hat W \to \mathbb{Z}_{\geq 0}$ on ${}^I \hat W$ as follows:
• $a(w) = \dim(G/P_+) - \ell_{I,J}(w)$;
• $b(w)$ is the cardinality of the $\Gamma$-orbit of $\Theta \cdot w$ in $\Xi^{\chi,\Theta}$, i.e. $b(w) = \#\{\xi \in \Xi^{\chi,\Theta} : \xi \in \Gamma \cdot (\Theta \cdot w)\}$.
The fact that a(w) is nonnegative for every w ∈I[W is a consequence of the][ˆ]
following proposition.
Proposition 5.7. For $\xi \in \Xi^{\chi,\Theta}$, let $Y_\xi$ be the $\hat G$-zip over $\bar{\mathbb{F}}_q$ corresponding to $\xi$. Then one has $\dim(\mathrm{Aut}(Y_\xi)) = a(\xi)$ and the identity component of the group $\mathrm{Aut}(Y_\xi)_{\mathrm{red}}$ is unipotent.
Proof. Note that $\dim(E) = \dim(G)$. Let $w \in {}^I \hat W$ be such that $\xi = \Theta \cdot w$. By Remark 2.13 and Proposition 5.5 we have
$$\dim(\mathrm{Aut}(Y_\xi)) = \dim(\mathrm{Stab}_{E_{\bar{\mathbb{F}}_q}}(g\dot w)) = \dim(E) - \dim(E \cdot g\dot w) = \dim(G) - \dim(E \cdot g\dot w) = \dim(G) - \dim(P_+) - \ell_{I,J}(\xi) = a(\xi).$$
By Proposition 5.5 the identity component of $\mathrm{Aut}(Y_\xi)_{\mathrm{red}}$ is unipotent.
Remark 5.8. The formula dim(Aut(Yξ)) = dim(G/P ) − ℓI,J(ξ) from
Propo-sition 5.7 apparently contradicts the proof of [13, Thm. 3.26]. There an ex-tended length function ℓ : ˆW → Z≥0 is defined by ℓ(wω) = ℓ(w) for w ∈ W ,
ω ∈ Ω. It is stated that the codimension of E · (g ˙w) in ˆG is equal to dim(G/P+) − ℓ(w). In other words, if this were correct, dim(Aut(Yξ)) would
be equal to dim(G/P+[) − ℓ(w) rather than dim(G/P]+[) − ℓ]
I,J(w). However,
the proof seems to be incorrect (and the theorem itself as well). The dimension formula is based on [14, Thm. 5.11], but that result only treats the connected case. It fails in the nonconnected case,
as there ℓ(w) and ℓI,J(w) do not
generally coincide. One can construct a counterexample by taking $\hat G$ as in Remark 3.6.3, and taking the cocharacter $\chi\colon \mathbb{G}_m \to G$ given by
$$x \mapsto \begin{pmatrix} x & 0 \\ 0 & x^{-1} \end{pmatrix}.$$
Then a straightforward calculation shows that $\ell(\omega) = 0$ and $\ell_{I,J}(\omega) = 1$ do not coincide.
Remark 5.9. In general Aut(Yξ) will not be reduced; see [10, Rem. 3.1.7] for
the first found instance of this phenomenon, or [13, Rem. 3.35] for the general case.
Proof of Theorem 1.1. By Proposition 5.1 we can consider ˆG-Zipχ,Θ[F][q] as a quo-tient stack, and by Propositions 4.5 and 5.7 the assumptions of Proposition 2.14.2 are satisfied. Furthermore, in the
notation of this proposition, we find Y = Ξχ,Θ[, and a, b : Y → Z]
≥0 are as in Notation 5.6 by Proposition 5.7. The
theorem is now a direct consequence of Proposition 2.14.2.
Remark 5.10. Although the proof of Proposition 5.5 carries over from the connected case without much difficulty, we feel compelled to make some comments about
what exactly changes in the non-connected case, since the proofs of these theo-rems require most of the material of [14]. The key change is that in [14, Section 4] we allow x to be an element ofI[W]
[ˆ]J[, rather than just] I[W]J[; however, one]
can keep working with the connected algebraic zip datum Z, and define from there a connected algebraic zip datum Zx˙ as in [14, Constr. 4.3]. There, one
needs the Levi decomposition for non-connected parabolic groups; but this is handled in our Proposition 3.7. The use of non-connected groups does not give any problems in the proofs of most
propositions and lemmas in [14, §4–8]. In [14, Prop. 4.8], the term ℓ(x) in the formula will now be replaced by ℓI,J(x).
The only property of ℓ(x) that is used in the proof in the connected case is that if x ∈I[W]J[, then ℓ(x) = #{α ∈ Φ]+[\Φ]
J : xα ∈ Φ−\ΦI}. In our case, we have
x ∈I[W][ˆ]J[, and ℓ]
I,J:IWˆJ→ Z≥0 is the extension of ℓ :IWJ → Z≥0 that gives
the correct formula. Furthermore, in the proof of [14, Prop. 4.12] the assump-tion x ∈I[W]J [is used, to conclude that xΦ]+
J ⊂ Φ+. However, the same is true
for x ∈I[W][ˆ]J[: write x = ωx]′ [with ω ∈ Ω and x]′ [∈]ω−1Iω[W]J[; then x]′[Φ]+ J ⊂ Φ+,
and ωΦ+ [= Φ]+[, since Ω acts on the based root system. Finally, the proofs of]
both [14, Thm. 7.5c] and [14, Thm. 8.1] rest on an induction argument, where the authors use that an element w ∈I[W can uniquely be written as w = xw]
with x ∈I[W]J[, w]
J∈IxWJ, and ℓ(w) = ℓ(x) + ℓ(wJ). The analogous statement
that we need to use is that any w ∈I[W can uniquely be written as w = xw][ˆ] J,
with x ∈I[W][ˆ]J[, w]
J ∈IxWJ, and ℓI,J(w) = ℓI,J(x) + ℓ(wJ), see Remark 3.6.2.
The proofs of the other lemmas, propositions and theorems work essentially unchanged.
6 Stacks of truncated Barsotti–Tate groups
The aim of this section is to prove Theorem 1.2. We fix integers h > 0 and 0 ≤ d ≤ h, and we want to determine the zeta function of the stack BTh,dn over
Fp for every integer n ≥ 1. This turns out to be related to the theory of ˆG-zips
and their moduli stacks. Our strategy will be to interpret the results of [18] and [5], which concern the set of BTn+1 over ¯k extending a given BTn, in a
‘stacky’ sense over a finite k. This allows us to invoke the results of Section 2. Notation6.1. For the rest of this section, let G be the reductive group GL[h,F]p.
Let χ : Gm,Fp → G be a cocharacter that induces the weights 0 with multiplicity
d and weight 1 with multiplicity h − d on the standard representation of G. Employing the notation of sections 3 and 4, we see that W is the permutation group on h elements (with trivial Galois
action), S = {(1 2), (2 3), ..., (h−1 h)},
and I = S \ {(d d + 1)}. Note that Θ has to be trivial, as we can regard it as a subgroup of Ω ∼= π0(G), which is trivial. Hence Ξ := Γ\Ξχ,Θ is equal toIW ,
and the map a : Ξ → Z≥0 from Notation 5.6 is given by a(ξ) = dim(G/P+) −
ℓ(ξ) = d(h − d) − ℓ(ξ).
For general n, let Dh,dn be the stack over Fp of truncated Dieudonn´e crystals
D of level n that are locally of rank h, for which the map F : D → D(p) [has]
rank d locally (see [7, Rem. 2.4.10]). Then Dieudonn´e theory (see [2, 3.3.6 & 3.3.10]) tells us that there is a morphism of stacks over Fp
Dn: BTh,dn → Dh,dn
that is an equivalence of categories over perfect fields; hence Z(BTh,dn , t) =
Z(Dh,dn , t). As such, we are interested in the categories Dh,dn (Fq). An object in
this category is a Dieudonn´e module of level n, i.e. a triple (D, F, V ) where: 1. D is a free module of rank h over Wn(Fq), the Witt vectors of length n
over Fq;
2. F is a σ-semilinear endomorphism of D of rank d, where σ is the auto-morphism of Wn(Fq) lifting the automorphism Frp of Fq;
3. V is a σ−1[-semilinear endomorphism of D satisfying F V = V F = p.]
Now fix h and d, and choose a (non-truncated) Barsotti–Tate group G of height h and dimension d over Fp. Let (Dn, Fn, Vn) be the Dieudonn´e module of
G[pn[], and choose a basis for every D]
n in a compatible manner (i.e. the
basis of Dn is the image of the basis of Dn+1 under the natural reduction
map $D_{n+1}/p^n D_{n+1} \xrightarrow{\sim} D_n$). Then for every power q of p, every element in $\mathcal{D}^{h,d}_n(\mathbb{F}_q)$ is isomorphic to $D_{n,g} := (D_n \otimes_{\mathbb{Z}/p^n\mathbb{Z}} W_n(\mathbb{F}_q),\, gF_n,\, V_n g^{-1})$ for some $g \in \mathrm{GL}_h(W_n(\mathbb{F}_q))$ (see [18, 2.2.2]).
For a smooth affine group scheme G over Spec(W(Fp)), let Wn(G) be the group
scheme over Spec(Fp) defined by Wn(G)(R) = G(Wn(R)) (see [18, 2.1.4]); it
is again smooth and affine. For every n there is a natural reduction morphism Wn+1(G) ։ Wn(G).
Proposition 6.2. Let Dn := Wn(GLh). Then there exists a smooth affine
group scheme H over Zp and for every n an action of Hn := Wn(H) on Dn,
compatible with the reduction maps Hn+1 ։ Hn and Dn+1 ։ Dn, such that
for every power q of p, there exists for every g, g′[∈ D]
n(Fq) an isomorphism of
$\mathbb{F}_q$-group varieties
$$\varphi_{g,g'}\colon \mathrm{Transp}_{H_{n,\mathbb{F}_q}}(g, g')_{\mathrm{red}} \to \mathrm{Isom}(D_{n,g}, D_{n,g'})_{\mathrm{red}}$$
that is compatible with compositions in the sense that for every $g, g', g'' \in \mathcal{D}_n(\mathbb{F}_q)$ the following diagram commutes, where the horizontal maps are the natural composition morphisms:
$$\begin{array}{ccc} \mathrm{Transp}_{H_{n,\mathbb{F}_q}}(g', g'')_{\mathrm{red}} \times \mathrm{Transp}_{H_{n,\mathbb{F}_q}}(g, g')_{\mathrm{red}} & \longrightarrow & \mathrm{Transp}_{H_{n,\mathbb{F}_q}}(g, g'')_{\mathrm{red}} \\ \downarrow{\scriptstyle\varphi_{g',g''}\times\varphi_{g,g'}} & & \downarrow{\scriptstyle\varphi_{g,g''}} \\ \mathrm{Isom}(D_{n,g'}, D_{n,g''})_{\mathrm{red}} \times \mathrm{Isom}(D_{n,g}, D_{n,g'})_{\mathrm{red}} & \longrightarrow & \mathrm{Isom}(D_{n,g}, D_{n,g''})_{\mathrm{red}} \end{array}$$
Proof. The group H and the action Hn× Dn → Dn are defined in [18, 2.1.1
& 2.2] over an algebraically closed field k of characteristic p, but the defi-nition still makes sense over Fp. The isomorphism of groups ϕg,g is given
on k-points in [18, 2.4(b)]. The definition of the map there shows that it is algebraic and defined over Fp. Since it is an isomorphism on ¯Fp-points, it
is an isomorphism of reduced group schemes over Fp. Furthermore, a
mor-phism TranspH[n,Fq](g, g′) → Isom(Dn,g, Dn,g′) is given in the proof of [18,
2.2.1]. It is easily seen that this map is compatible with compositions in the sense of the diagram above, and that it is equivariant under the action of StabHn(g)(¯Fp) ∼= Isom(Dn,g)(¯Fp). Since both
varieties are torsors under this
action, this must be an isomorphism as well.
Corollary 6.3. For every power q of p the categories Dh,d
n (Fq) and
[Hn\Dn](Fq) are equivalent.
Proof. For every object D ∈ Dh,d
n (Fq) choose a gD ∈ Dn(Fq) such that D ∼=
Dn,gD. Define a functor
E : Dh,dn (Fq) → [Hn\Dn](Fq)
that sends a D to the pair (Hn, fD), where fD: Hn→ Dn is given by fD(h) =
h · gD. We send an isomorphism from D to D′ to the corresponding element of
Isom((Hn, fD), (Hn, fD′)) = Transp
Hn(Fq)(gD, gD′).
From the description of H in [18] it is clear that each Hn is connected, hence
every Hn-torsor is trivial, and E is essentially surjective. By Proposition 6.2 it
is also fully faithful, hence it is an equivalence of categories.
By [13, 9.18, 8.3 & 3.21] (and before by [8] and [9]) the set of isomorphism classes of Dieudonn´e modules of level 1 over an algebraically closed field of characteristic p are classified by Ξ as in
Notation 6.1. For each ξ ∈ Ξ, let Dh,d,ξ
be the substack of Dh,d
n consisting of truncated Barsotti–Tate groups of level
n, locally of rank h, and with F locally of rank d, whose reduction to a $\mathrm{BT}_1$ is of type $\xi$ at all geometric points. Then over fields k of characteristic p one has $\mathcal{D}^{h,d}_n(k) = \bigsqcup_{\xi\in\Xi} \mathcal{D}^{h,d,\xi}_n(k)$ as categories, hence
$$Z(\mathcal{D}^{h,d}_n, t) = \prod_{\xi\in\Xi} Z(\mathcal{D}^{h,d,\xi}_n, t).$$
From Proposition 2.11.1, or directly from the description in [8, §5], each isomor-phism class over ¯Fphas a model over Fp. For every ξ ∈ Ξ choose a g1,ξ∈ D1(Fp)
such that the isomorphism class of D1,g1,ξ⊗ ¯Fp corresponds to ξ. For every n,
let Dn,ξ be the preimage of g1,ξ under the reduction map Dn ։ D1. Let Hn,ξ
be the preimage of StabH1(g1,ξ) in Hn; then analogous to Corollary 6.3 for
every power q of p we get an equivalence of categories (see [5, 3.2.3 Lem. 2(b)]) Dh,d,ξ[n] (Fq) ∼= [Hn,ξ\Dn,ξ](Fq).
Proof of Theorem 1.2. By the discussion above we see that
$$Z(\mathrm{BT}^{h,d}_n, t) = \prod_{\xi\in\Xi} Z([H_{n,\xi}\backslash \mathcal{D}_{n,\xi}], t).$$
By [13, 9.18 & 8.3] there is an isomorphism of stacks over Fp
Dh,d[1,p] → G-Zip∼ χ,Θ[F][p] ,
where G, χ, Θ are as in Notation 6.1. By Proposition 5.7, or earlier by [10, 2.1.2(i) & 2.2.6], the group scheme StabH1(g1,ξ)
red [∼][= Aut(D] 1,g1,ξ)
red [has an]
identity component that is unipotent of dimension a(w). The reduction mor-phism Hn→ H1is surjective and its kernel is unipotent of dimension h2(n − 1),
see [5, 3.1.1 & 3.1.3]. This implies that Hn,ξ has a unipotent identity
compo-nent of dimension h2[(n − 1) + a(ξ). Now fix a g]
n,ξ ∈ Dn,ξ(Fp); then we can
identify Dn,ξ with the affine group X = Wn−1(Math×h), by sending an x ∈ X
to gn,ξ+ ps(x), where s : Wn−1(Math×h) −→ pW∼ n(Math×h) ⊂ Wn(Math×h) is
the canonical identification. Furthermore, the action of an element h ∈ Hn,ξ
on (gn,ξ+ ps(x)) ∈ Dn,ξ is given by f (h)(gn,ξ+ ps(x))f′(h) for some algebraic
f, f′[: H]
n,ξ → Wn(GLh) (see [18, 2.2.1a]). From this we see that the induced
action of Hn,ξ on the variety X is given by
h · x = f (h)xf′(h) +1
p(f (h)gn,ξf
′[(h) − g] n,ξ),
which makes sense because f (h)gn,ξf′(h) is equal to gn,ξmodulo p. If we regard
X as Wn−1(Gh
Hn,ξ on X factors through the canonical action of Wn−1(Gh
a ) ⋊ Wn−1(GLh2)
on Wn−1(Gh
a ). This algebraic group is connected, so we can apply Proposition
2.19, from which we find
$$Z([H_{n,\xi}\backslash \mathcal{D}_{n,\xi}], t) = \frac{1}{1 - p^{\dim(\mathcal{D}_{n,\xi})-\dim(H_{n,\xi})}\,t} = \frac{1}{1 - p^{h^2(n-1)-(h^2(n-1)+a(\xi))}\,t} = \frac{1}{1 - p^{-a(\xi)}\,t},$$
which completes the proof.
Remark 6.4. Since the zeta function Z(BTh,dn , t) does not depend on n, one
might be tempted to think that the stack BTh,dof non-truncated Barsotti–Tate groups of height h and dimension d has the same zeta function. However, this stack is not of finite type. For instance,
every Barsotti–Tate group G over Fq
has a natural injection Z×
p ֒→ Aut(G), which shows us that its zeta function is
not well-defined. References
[1] Kai A. Behrend. “The Lefschetz trace formula for algebraic stacks”. In: Inventiones mathematicae 112.1 (1993), pp. 127–149.
[2] Pierre Berthelot, Lawrence Breen, and William Messing. Th´eorie de Dieudonn´e cristalline II. Lecture Notes in Mathematics, Vol. 930. Berlin & Heidelberg, Germany: Springer-Verlag, 1982.
[3] Bangming Deng et al. Finite dimensional algebras and quantum groups. Mathematical Surveys and Monographs, Vol. 150. Providence, United States: American Mathematical Society, 2008.
[4] Fran¸cois Digne and Jean Michel. Representations of finite groups of Lie type. London Mathematical Society Student Texts, Vol. 21. Cambridge, United Kingdom: Cambridge University Press, 1991.
[5] Ofer Gabber and Adrian Vasiu. “Dimensions of group schemes of automor-phisms of truncated Barsotti–Tate groups”. In: International Mathematics Research Notices (2013), pp. 4285–4333.
[6] Jean Giraud. Cohomologie non ab´elienne. Grundlehren der mathematis-chen Wissenschaften, Vol. 179. Berlin & Heidelberg, Germany: Springer Verlag, 1971.
[7] Aise Johan de Jong. “Crystalline Dieudonn´e module theory via formal and rigid geometry”. In: Publications Math´ematiques de l’Institut des Hautes
Etudes Scientifiques 82.1 (1995), pp. 5–96.
[8] Hanspeter Kraft. Kommutative algebraische p-Gruppen (mit Anwen-dungen auf p-divisible Gruppen und abelsche Variet¨aten). Unpublished manuscript. Bonn, Germany: Sonderforschungsbereich Bonn, 1975.
[9] Ben Moonen. “Group schemes with additional structures and Weyl group
cosets”. In: Moduli of abelian varieties (Texel Island, 1999). Ed. by Carel Faber, Gerard van der Geer, and Frans Oort. Progress in Mathematics, Vol. 195. Basel, Switzerland: Birkh¨auser, 2001, pp.
[10] Ben Moonen. “A dimension formula for Ekedahl-Oort strata”. In: Annales de l’Institut Fourier 54.3 (2004), pp. 666–698.
[11] Ben Moonen and Torsten Wedhorn. “Discrete invariants of varieties in positive characteristic”. In: International Mathematics Research Notices (2004), pp. 3855–3903.
[12] Frans Oort. “A stratification of a moduli space of abelian varieties.” In: Moduli of abelian varieties (Texel Island, 1999). ed. by Carel Faber, Gerard van der Geer, and Frans Oort. Progress in
Mathematics, Vol. 195. Basel, Switzerland: Birkh¨auser, 2001, pp. 345–416.
[13] Richard Pink, Torsten Wedhorn, and Paul Ziegler. “F -zips with additional structure”. In: Pacific Journal of Mathematics 274.1 (2015), pp. 183–236. [14] Richard Pink, Torsten Wedhorn, and Paul
Ziegler. “Algebraic zip data”.
In: Documenta Mathematica 16 (2011), pp. 253–300.
[15] Maxwell Rosenlicht. “Questions of rationality for solvable algebraic groups over nonperfect fields”. In: Annali di matematica pura ed applicata 62.1 (1963), pp. 97–120.
[16] Jean-Pierre Serre. Galois cohomology. Springer Monographs in Mathemat-ics. Berlin & Heidelberg, Germany: Springer Verlag, 1997.
[17] Shenghao Sun. “L-series of Artin stacks over finite fields”. In: Algebra & Number Theory 6.1 (2012), pp. 47–122.
[18] Adrian Vasiu. “Level m stratifications of versal deformations of p-divisible groups”. In: Journal of Algebraic Geometry 17 (2008), pp. 599–641.
[19] Torsten Wedhorn. “The dimension of Oort strata of Shimura varieties of PEL-type”. In: Moduli of abelian varieties (Texel Island, 1999). Ed. by Carel Faber, Gerard van der Geer, and Frans Oort.
Progress in Mathe-matics, Vol. 195. Basel, Switzerland: Birkh¨auser, 2001, pp. 441–471.
Milan Lopuha¨a-Zwakenberg Security Group
Technische Universiteit Eindhoven Eindhoven
The Netherlands [email protected] | {"url":"https://5dok.net/document/dy48xxkq-functions-moduli-stacks-moduli-stacks-truncated-barsotti-groups.html","timestamp":"2024-11-11T11:29:53Z","content_type":"text/html","content_length":"211045","record_id":"<urn:uuid:a21f1803-0ea2-4330-a619-9792c87b6d48>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00436.warc.gz"} |
Nested Sets vs Adjacency Matrix
by stratosg | Apr 12, 2008 | Thoughts, Tutorials
First of all we need to see how each method works. To do this I will explain what fields each node in the tree needs. Each node in the adjacency matrix needs to keep a “pointer” to the parent node. In other words, on an SQL table it could be an integer keeping the parent id, if any. If ordering within the same level under each node matters, then we also need to keep the place of the node among the children of the parent. This is all we need. A small example would be something like the one below:
ID Name Parent Place
1 Main Category 0 1
2 Subcat 1 1 1
3 Subcat 2 1 2
4 Subsubcat 1 3 1
5 Subcat 3 1 3
The same tree, shown above in the conventional way, is represented in the table below using the nested sets method.
ID Name Left Right Level
1 Main Category 1 10 1
2 Subcat 1 2 3 2
3 Subcat 2 4 7 2
4 Subsubcat 1 5 6 3
5 Subcat 3 8 9 2
I know, it looks confusing. Let’s take a quick look at a graphic. It will make it much much easier.
I hope this makes it clearer. So, as you can see, each node has a left and a right number indicating, in effect, whose child it is and how many children it has. If, for instance, you take the node Subcat 2, its left and right indexes are 4 and 7. So it certainly has a parent. As for children, it has ((7−4)−1)/2 = 1 child. Same for the Main Category: it has ((10−1)−1)/2 = 4 children.
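To make that arithmetic concrete, here is a minimal Python sketch (not from the original post; the node data simply mirrors the example table above) that derives the descendant count from a node's left and right values:

# Minimal sketch: count descendants of a node from its (left, right) pair.
# The example data mirrors the table above; names are illustrative.
nodes = {
    "Main Category": (1, 10),
    "Subcat 1": (2, 3),
    "Subcat 2": (4, 7),
    "Subsubcat 1": (5, 6),
    "Subcat 3": (8, 9),
}

def descendant_count(left: int, right: int) -> int:
    # Each descendant occupies exactly two index slots between left and right.
    return (right - left - 1) // 2

for name, (left, right) in nodes.items():
    print(name, "->", descendant_count(left, right), "descendants")
# Main Category -> 4, Subcat 2 -> 1, the leaves -> 0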
So this is another way of representing a tree. But is it better than the adjacency matrix, programming-wise I mean? Let's take all the possibilities under consideration.
Inserting a node: Let's say we want to insert a node under the main category, on the first level (same level as the other subcats), after Subcat 2, and call it Subcat 4. All we have to do is “open” some space between Subcat 2 and Subcat 3 and put it in there. To do that we have to add 2 (for the left and right index of the new category) to all the nodes that have a left or right index greater than 7 (the right index of the category after which we want to add the new one). So the left of Subcat 3 will be 10, its right will be 11 and, finally, the right index of the main category will be 12. After that, all we have to do is insert our new node with left index 8 and right index 9.
Deleting a node: This is a bit trickier than adding. The difficult case is when the node we want to remove has children, so that is the one we will look at. The simple case, removing a node without children, is easy enough that the reader will figure it out in no time. Let's say we want to remove the Subcat 2 node. Here is what we have to do. All of its children's indexes have to be decreased by 1, and every other node with an index larger than the right index of the removed node has to be decreased by 2. In this example, removing Subcat 2, Subsubcat 1's new indexes should be 4,5 and Subcat 3 should have 6,7; finally the main category's right index should be updated to 8. One more thing: do not forget the level! Subsubcat 1's level was 3; it has to be updated to 2.
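And a similar hedged sketch for the deletion case just described, again on an illustrative in-memory copy of the original example table:

# Sketch of deleting a node whose children are kept (promoted one level up).
tree = [
    {"name": "Main Category", "left": 1, "right": 10, "level": 1},
    {"name": "Subcat 1",      "left": 2, "right": 3,  "level": 2},
    {"name": "Subcat 2",      "left": 4, "right": 7,  "level": 2},
    {"name": "Subsubcat 1",   "left": 5, "right": 6,  "level": 3},
    {"name": "Subcat 3",      "left": 8, "right": 9,  "level": 2},
]

def delete_node(tree, target_name):
    target = next(n for n in tree if n["name"] == target_name)
    left, right = target["left"], target["right"]
    tree.remove(target)
    for node in tree:
        if left < node["left"] and node["right"] < right:
            # Former descendants: close the one-slot gap on each side, promote a level.
            node["left"] -= 1
            node["right"] -= 1
            node["level"] -= 1
        else:
            # Everything to the right of the removed node closes by 2.
            if node["left"] > right:
                node["left"] -= 2
            if node["right"] > right:
                node["right"] -= 2

delete_node(tree, "Subcat 2")
# Subsubcat 1 -> (4, 5) at level 2; Subcat 3 -> (6, 7); Main Category -> (1, 8).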
Moving a node: If we want to move a single node without children, then things are pretty easy. All we have to do is remove the node and then add it to the new place we want to move it to. For instance, if we want to move Subsubcat 1 and place it under Subcat 1, all we have to do is remove it as a child of Subcat 2, as instructed above, and then add it as a node under Subcat 1, also as instructed above. But beware: if we want to move a node that has children, things are not so easy. Let's say we want to move Subcat 2 under Subcat 1. The strategy is to remove the whole subtree under Subcat 2, including this node, and then add it under Subcat 1. Removing the subtree is fairly easy. All we need to do is update all the nodes with a left or right index greater than the right index of the removed subtree's root to index − (right_index − left_index + 1). For this example, all indexes with a value greater than 7 (the right index of the Subcat 2 we are removing) have to become index − (7 − 4 + 1) = index − 4. So Subcat 3, for instance, will go from (8,9) to (4,5) and the main category will have a right index of 6. Then, we have to add the removed subtree under Subcat 1. Here comes the tricky part. What we have to do is renumber the indexes of all the nodes in the moved subtree to reflect the new situation, then update the indexes of all the nodes to the right of the place we put the subtree under. Hey! Do not forget the level indicators 😉
Pretty complex situation, don't you think? Well, I do not want to be negative toward any method but, in my opinion, we add much more complexity than we can handle or even need to have. I guess the
usability should be in favor of the programmer on some case but for me, good ole adjucency matrix! | {"url":"http://www.stratos.me/2008/04/nested-sets-vs-adjucency-matrix/","timestamp":"2024-11-11T21:30:55Z","content_type":"text/html","content_length":"157549","record_id":"<urn:uuid:a6cbed22-06e6-444b-ac06-045306590de0>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00546.warc.gz"} |
When the 30th cast comes, bug will spawn.
Be better than you were yesterday :D
Hello, I was making a spell, but on the 30th cast of the spell a bug spawns. I dunno why. I was not thinking to ask, cuz I was making a spell pack for waaaks' RoS, but since I can't fix this, the bug has forced me to ask.
//Note that this spell is not yet finished so don't say anything, like "hey, lol why no damage?".
//Requires: TimerUtils(RedFlavor, by Vexorian), GTrigger(J4L)
scope LightningTentacle
private struct s
private static constant real DISTANCE = 100. //Distance moved per period.
private static constant real PERIODIC = .05 //Period.
private static constant real WORMS = 25 //Number of worms.
private static constant integer MAX = 10 //Max number of worm segments.
private static constant integer MOVES = 50 //Number of moves.
private static constant integer RAWCODE = 'A002' //The rawcode of the spell.
private static constant real REDCOLOR = 1. //Lightning color red
private static constant real GREENCOLOR = 1. //Lightning color green
private static constant real BLUECOLOR = 1. //Lightning color blue
private static constant real ALPHA = .5 //Lightning Alpha
private static constant string LIGHTNING = "DRAL" //The Lightning Effect.
private static constant string EFFECT1 = ""
private static constant string EFFECT2 = ""
real array x[11]
real array y[11]
lightning array l[10]
//DO NOT TOUCH THIS PART===================================================================================
unit u
unit c
real a
timer t
effect e
integer v
integer d
method Destroy takes nothing returns nothing
call ReleaseTimer(.t)
set .u = null
set .c = null
set .t = null
set .v = 0
set .v = .v+1
set .l[.v] = null
exitwhen .v == MAX
static method create takes unit u returns s
local s this = s.allocate()
set .t = NewTimer()
set .u = u
return this
static method Remove takes nothing returns nothing
local s this = GetTimerData(GetExpiredTimer())
set .v = .v-1
call DestroyLightning(.l[.v])
if .v == 1 then
call .Destroy()
method Fader takes nothing returns nothing
call ReleaseTimer(.t)
set .v = MAX+1
set .t = NewTimer()
call DestroyEffect(.e)
call SetTimerData(.t,this)
call TimerStart(.t,PERIODIC,true,function s.Remove)
static method Mover takes nothing returns nothing
local s this = GetTimerData(GetExpiredTimer())
set .d = .d+1
set .v = MAX+1
set .v = .v-1
if .x[.v] == .x[1] and .y[.v] == .y[1] then
set .x[1] = .x[1]+DISTANCE*Cos(a+GetRandomInt(-90,90)*bj_DEGTORAD)
set .y[1] = .y[1]+DISTANCE*Sin(a+GetRandomInt(-90,90)*bj_DEGTORAD)
set .x[.v] = .x[.v-1]
set .y[.v] = .y[.v-1]
exitwhen .v == 1
set .v = 0
set .v = .v+1
call MoveLightning(.l[.v],true,.x[.v+1],.y[.v+1],.x[.v],.y[.v])
if .l[.v] == .l[1] then
if .e != null then
call DestroyEffect(.e)
set .e = AddSpecialEffect(EFFECT1,.x[.v],.y[.v])
exitwhen .v == MAX-1
if .d >= MOVES then
call .Fader()
static method Act takes nothing returns nothing
local unit u = GetSpellAbilityUnit()
local s this = s.create(u)
set .x[1] = GetUnitX(.u)
set .y[1] = GetUnitY(.u)
set .a = Atan2(.y[1]+DISTANCE*Sin(GetRandomInt(0,360)*bj_DEGTORAD)-.y[1],(.x[1]+DISTANCE*Cos(GetRandomInt(0,360)*bj_DEGTORAD))-.x[1])
set .v = MAX+1
set .v = .v-1
if .x[.v] == .x[1] and .y[.v] == .y[1] then
set .x[1] = .x[1]+DISTANCE*Cos(a)
set .y[1] = .y[1]+DISTANCE*Sin(a)
set .x[.v] = GetUnitX(.u)
set .y[.v] = GetUnitY(.u)
exitwhen .v == 1
set .v = 0
set .v = .v+1
set .l[.v] = AddLightning(LIGHTNING,true,.x[.v+1],.y[.v+1],.x[.v],.y[.v])
call SetLightningColor(.l[.v],REDCOLOR,GREENCOLOR,BLUECOLOR,ALPHA)
exitwhen .v == MAX-1
call SetTimerData(.t,this)
call TimerStart(.t,PERIODIC,true,function s.Mover)
static method Action takes nothing returns nothing
local integer i = 0
set i = i+1
call s.Act()
exitwhen i==WORMS
private static method onInit takes nothing returns nothing
local trigger g = CreateTrigger()
call GT_RegisterStartsEffectEvent(g,RAWCODE)
call TriggerAddAction(g,function s.Action)
MADE in china, woops I mean in 1.24b version. Help plz.
Here's the spell
ok, I tried the spell, also tried tweaking the spell, I noticed that when the WORMS count is too high, it will bug on the 30th cast, I tried lowering the WORMS value from 25 to 15, I don't know if the bug will occur again, because no bug appeared on the 30th cast.
You have lots of calculations and loops, which makes the spell slow in terms of time, which will cause lag.
So I simply remade your code to my version. Please note that this version still causes lag when WORMS is set to a high value (ex. 25), anyways, 10 for WORMS is enough.
here's my version
//! zinc
library LightningTentacle requires TimerUtils{
constant integer SPELL = 'A002';
constant integer MOVES = 15; //amount of movement per segment
constant integer WORMS = 10; //amount of tentacles
constant integer ANGLE = 60; //angle of frizziness, lower = straighter
constant string FORM = "DRAL"; //lightning form
constant real TICK = 0.05; //tick per movement change
constant real DIST = 100.0; //lenght of each segment
constant string sfx = "Abilities\\Spells\\Human\\ManaFlare\\ManaFlareBoltImpact.mdl"; //effect spawned on each cast
struct dest{
lightning l;
static method create(lightning lg)->dest{
dest d = dest.allocate();
timer t = NewTimer();
d.l = lg;
timer t = GetExpiredTimer();
dest d = GetTimerData(t);
t = null;
d.l = null;
t = null;
return d;
struct tentacle{
unit c;
real nx,ny,pa;
lightning light;
integer movesCtr,ctr;
static method create(unit owner)->tentacle{
tentacle d = tentacle.allocate();
real a,x=GetUnitX(owner),y=GetUnitY(owner),px,py;
timer t = NewTimer();
d.c = owner;
a = Atan2(y+DIST*Sin(GetRandomInt(0,360)*bj_DEGTORAD)-y,(x+DIST*Cos(GetRandomInt(0,360)*bj_DEGTORAD))-x);
px = x + DIST * Cos(a);
py = y + DIST * Sin(a);
d.pa = a;
d.nx = px;
d.ny = py;
d.light = AddLightning(FORM,true,x,y,px,py);
d.movesCtr = 0;
d.ctr = 0;
timer t = GetExpiredTimer();
timer ts;
tentacle d = GetTimerData(t);
real x = d.nx, y = d.ny, a;
dest ds;
if(d.movesCtr >= MOVES){
d.movesCtr = 0;
a = d.pa + (bj_DEGTORAD * GetRandomInt(-ANGLE,ANGLE));
d.nx = x + DIST * Cos(a);
d.ny = y + DIST * Sin(a);
d.light = AddLightning(FORM,true,x,y,d.nx,d.ny);
d.movesCtr += 1;
t = null;
ts = null;
t = null;
return d;
method onDestroy(){
dest ds = dest.create(light);
light = null;
struct data{
unit c;
integer ctr;
function act(){
unit c = GetTriggerUnit();
integer n;
timer t = NewTimer();
data d = data.create();
d.c = c;
d.ctr = 0;
timer t = GetExpiredTimer();
data d = GetTimerData(t);
tentacle te;
if(d.ctr >= WORMS){
d.ctr = 0;
te = tentacle.create(d.c);
d.ctr += 1;
t = null;
c = null;
t = null;
function abilityCheck()->boolean{
return GetSpellAbilityId() == SPELL;
function onInit(){
trigger t = CreateTrigger();
TriggerAddCondition(t,Condition(function abilityCheck));
TriggerAddAction(t,function act);
//! endzinc
also please update your TimerUtils | {"url":"https://www.thehelper.net/threads/when-the-30th-cast-comes-bug-will-spawn.129882/#post-1134243","timestamp":"2024-11-11T23:12:44Z","content_type":"text/html","content_length":"164751","record_id":"<urn:uuid:e8bb733e-cbd7-4979-b749-1149668f0387>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00237.warc.gz"} |
Time complexity function | CodeYZ.com
How to compare algorithms?
Suppose, there is a number of algorithms solving a problem. How to compare which one is faster? One of the solutions is to run these algorithms multiple times and compare the average time used for
each of the runs.
However, algorithms can be written in different programming languages or they can run on computers with different architectures. So results may be different. Actually, in this case, we compare the
implementations of the algorithms and not the algorithmsthemselves.
Comparing algorithms requires abstracting out of programming language, operating system, hardware, programming skills and etc. The idea is to determine or estimate the time needed by an algorithm
without referencing the implementation details.
The Time Complexity Function can be used to solve this task. The Time Complexity Function precisely shows how many operations are required to process the input data of an algorithm. It allows
comparing algorithms without a reference to a particular implementation, hardware, coding skills and so on. The only important thing for the Time Complexity Function is a number of input elements
(the size of the input data). The Time Complexity Function is denoted as T(n) where n is the size of the input data.
Counting the number of operations
Imagine that you have 4 pencils of different lengths in your hand. You need to sort them from the longest one to the shortest one. What would you do?
You can find the longest pencils one by one and put them on a table in descending order until there are no more pencils in your hand. But how do you find the longest pencil, and how many operations are needed?
Here is the algorithm for 4 pencils:
1. Choose a random pencil from the pencils in your hand (1 operation). Name the chosen pencil the running candidate.
2. Compare it with the other pencils in your hand. If you find a pencil longer than the running candidate, then use that longer one for further comparisons. That costs 4 − 1 = 3 operations.
Let's name the procedure of finding the longest pencil LongestInHand. Finding the longest pencil among 4 pencils in your hand requires LongestInHand(4) = 1 + 3 = 4 operations. But how many operations are required to find the longest pencil if we have k pencils in hand? Choosing a random pencil always costs 1 operation. The comparison step costs k − 1 operations. Hence LongestInHand(k) = 1 + (k − 1) = k operations, where k is the number of pencils in your hand. We counted the number of operations for LongestInHand and hence we found the Time Complexity Function for that algorithm:
T(k) = T_LongestInHand(k) = 1 + (k − 1) = k
Let’s use LongestInHand to find the number of operations needed to sort pencils.
1. Use LongestInHand(4) for the 4 pencils in hand to find the longest pencil. That costs 4 operations.
2. Put the longest pencil on the table. 1 operation.
3. Use LongestInHand(3) for the 3 pencils in hand (one pencil is on the table now) to find the longest pencil. 3 operations.
4. Put the longest pencil on the table in its place in descending order. 1 operation.
5. Use LongestInHand(2) for the 2 pencils in hand. 2 operations.
6. Put the longest pencil on the table. 1 operation.
7. Use LongestInHand(1) for the 1 pencil in hand. 1 operation.
8. Put the last pencil on the table. 1 operation.
T_NaiveSort(4) = (T_LongestInHand(4) + 1) + (T_LongestInHand(3) + 1) + (T_LongestInHand(2) + 1) + (T_LongestInHand(1) + 1) = (4 + 1) + (3 + 1) + (2 + 1) + (1 + 1) = 14 operations.
We needed 14 operations to sort those pencils. But how many operations are required to sort n pencils?
Let's calculate that!
T_NaiveSort(n) = (n + 1) + (n − 1 + 1) + … + (1 + 1) = (n + 1) + n + … + 2 = (2 + n + 1) · n / 2 = (n + 3) · n / 2
But what if you knew the secret? What if you simply took all the pencils in your hand and tapped their ends on the table? Now you would see that the longest pencil stands higher than the others! The second in length is easy to identify once you remove the longest pencil, and so on. What about the Time Complexity Function now? One iteration of the "secret" sorting algorithm requires only 2 operations now: take the longest pencil (1 operation) and put it on the table in the right place (1 operation). So
T_secret(4) = 2 + 2 + 2 + 2 = 8 and T_secret(n) = 2 · n.
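To make the comparison concrete, here is a small Python sketch (not part of the original article) that evaluates both operation counts for a few input sizes:

# Operation counts for the two pencil-sorting strategies derived above.
def t_naive(n: int) -> int:
    # Sum of (LongestInHand(k) + 1) for k = n, n-1, ..., 1, i.e. (n + 3) * n / 2.
    return sum(k + 1 for k in range(n, 0, -1))

def t_secret(n: int) -> int:
    # Two operations (pick + place) per pencil.
    return 2 * n

for n in (4, 10, 100, 1000):
    print(n, t_naive(n), (n + 3) * n // 2, t_secret(n))
# For n = 4 this prints 14 operations for the naive method and 8 for the secret
# one, matching the calculations above; the gap grows quadratically with n.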
Let’s compare two algorithms now:
This graph shows the relationship between the input size n (x-axis) and the number of operations T(n) the given algorithm performs (y-axis). This kind of graph is often used to represent the time complexities of different algorithms and compare them, because it makes clear which algorithm performs faster.
Here we can see that the "secret" algorithm for sorting the pencils is much faster than the naive one.
The Time Complexity Function T(n) is used to determine how efficient an algorithm really is, independent of its implementations. It depends only on n, the size of the input data.
Time complexity is also useful for comparing different algorithms, since we can determine the difference in their speed on the same data.
Calculating the precise Time Complexity Function can be difficult, so most of the time we use its asymptotic behavior, represented by the Big O notation. You'll get a chance to learn about it in another topic, don't worry! Still, the main idea behind it is the notion of the Time Complexity Function, so keep that in mind!
You must be logged in to post a comment. | {"url":"https://codeyz.com/uncategorized/time-complexity-function/","timestamp":"2024-11-11T03:00:58Z","content_type":"text/html","content_length":"54707","record_id":"<urn:uuid:c62e36f1-c102-421f-a773-27366c13972d>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00449.warc.gz"} |
17. Impact of Single Constant Optimization on the Precision of IOL Power Calculation
One of the primary factors influencing patient satisfaction after cataract surgery is the accurate determination of intraocular lens (IOL) power to achieve the desired postoperative refraction.
For optimal outcomes, these calculations need to be both accurate and precise. An accurate method will result in a mean prediction error—defined as the mean of the differences between the achieved
and predicted refractions—being close to zero. This principle underpins the concept of « zeroization, » where adjustments are made to minimize the mean prediction error as much as possible.
Precision, in contrast, refers to the reliability or consistency of these calculations and is measured by the standard deviation (SD) of the prediction error. A lower SD signifies a more precise
method, indicating that most errors are closely grouped around the mean.
These metrics are frequently utilized in clinical studies and research to evaluate the performance of various IOL power calculation formulas and to assess their accuracy and precision in predicting
the correct lens power for cataract surgery.
This page is dedicated to the consequences of formula optimization performed in the traditional way (zeroing the mean error) on the precision (SD) of the prediction error. In real-life clinical
practice, many variables contribute to the induction of refractive prediction error, which in turn can explain the variance (the square of the SD) of this error.
To give the takeaway message upfront: adjusting a positional constant (such as the A-constant, SF constant, a0 constant, etc.) improves both precision and accuracy when the prediction error is due to
an incorrect estimation of the Effective Lens Position (ELP) or a mismeasurement of axial length—these are known as positional errors. In contrast, optimizing a corneal power estimation error using a
positional constant leads to an increase in the SD of the error: even if the mean error is zeroed, the risk is an increase in the dispersion of the error.
An instructive theoretical scenario
To illustrate this point, we can imagine a theoretical scenario where an IOL calculation formula makes no error for a series of implants with powers ranging from 0.5D to 35D. Now, let’s assume that
the corneal power estimation has been systematically overestimated by one diopter in the spectacle plane due to a miscalibration of the biometer’s keratometer. What would be the resulting error?
The formula would predict the same implant position and calculate the IOL power required to achieve the target refraction. During postoperative refraction measurement, a hyperopic error of one
diopter would be observed, corresponding to the effective corneal power deficit. The mean prediction error would be 1D, and the standard deviation (SD) of the prediction error would be zero.
Next, let’s imagine that optimization is performed by adjusting a positional constant. This constant would need to modify the predicted position of all implants in the same way (an increment is added to the position predicted by the formula; in this example it was calculated at +0.86 mm). This increment would then alter the predicted refraction for each eye so that the recalculated mean error
becomes zero. However, this does not mean that the prediction error is minimized for each eye. Rather, it is the final mean that must be zero. Additionally, the axial displacement of an implant
produces a refractive change proportional to the square of its power.
As a result, the constant adjustment would need to create a negative refractive prediction error (to compensate for the positive prediction error) in some eyes within the dataset (in the examples,
eyes having received an IOL of 18.5 D or more). Some of the positive prediction errors would persist because it is not possible to adjust the refraction for eyes with low-power implants
significantly. In the end, while the mean prediction error is zero and the formula appears accurate, it becomes much less precise as the SD of the error increases from 0.
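The mechanism described above can be sketched numerically. The snippet below is only an illustration, not the actual formula: it assumes, as stated above, that a given axial shift of the IOL changes the refraction roughly in proportion to the square of the IOL power, applies a uniform +1 D corneal-power error to every eye, and then picks the single positional increment that zeroes the mean error. The proportionality constant, the IOL powers, and the shift units are arbitrary choices made only for the illustration.

import statistics

# Hypothetical IOL powers in the series (diopters)
powers = [0.5, 5, 10, 15, 18.5, 21, 25, 30, 35]

# Uniform corneal-power error: every eye starts with a +1.00 D hyperopic surprise
errors = [1.0 for _ in powers]

# Illustrative assumption: an axial shift of d changes refraction by about k * P^2 * d
k = 0.002

def error_after_shift(d):
    return [e - k * p ** 2 * d for e, p in zip(errors, powers)]

# Pick the single shift that zeroes the MEAN error (the usual optimization)
mean_p2 = statistics.mean(p ** 2 for p in powers)
d_opt = 1.0 / (k * mean_p2)

adjusted = error_after_shift(d_opt)
print(round(statistics.mean(adjusted), 3))   # ~0.0: the formula now looks "accurate"
print(round(statistics.stdev(adjusted), 3))  # > 0: precision has been degraded
print(round(statistics.stdev(errors), 3))    # 0.0 before the constant was adjusted

With these made-up numbers, the low-power eyes keep most of their hyperopic error while the high-power eyes swing into myopic error, which is the dispersion of the prediction error described above.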
This example is instructive but should not be overgeneralized regarding the mechanism of optimization. First, the dataset used in this example is not representative of the typical distribution of
implant powers, which is generally centered around 21D. Second, the sources of prediction error in IOL calculation formulas are multifactorial.
This example highlights that adjusting a positional constant cannot compensate for a portion of the prediction error (related to an incorrect estimation of corneal power) without degrading the
precision of the power/post-operative refraction estimation, leading to an increase in the SD of the prediction error.
The following figure shows the result of a similar experiment, conducted this time for a smaller error (0.50D), but with the addition of the prediction error representation for a series with a
typical distribution of implant power (on the right).
These insights may help improve the optimization strategies for IOL power calculation formulas by highlighting the impact of the type of error being corrected and the appropriateness of the constant
being adjusted.
The remainder of this page is dedicated to exploring these concepts in greater depth.
Accuracy vs. Precision
In the context of intraocular lens (IOL) power calculation, accuracy refers to how close the predicted refractive outcome (the target refraction) is to the actual result after surgery. An accurate
prediction means the average prediction error is near zero, indicating that, on average, the calculations are correct, but individual predictions may still vary.
Precision, on the other hand, relates to the consistency of the predictions. A precise IOL power calculation formula will have a small spread or variability in the prediction errors, meaning the
results are clustered closely together, even if they are not centered on the target refraction (i.e., they may consistently miss the mark by a similar amount).
It is not clinically relevant for an implant power prediction formula to introduce a consistent positive or negative average error. A systematic bias, either positive or negative, can arise under
various circumstances, such as a change in the implant model or alterations in the methods used to collect biometric data or estimate corneal power.
It is important to note that biometric calculation formulas have specific methods for predicting the effective lens position and estimating corneal power. For instance, the Holladay, Haigis, and
Hoffer formulas use a specific keratometric index. These formulas partially compensate for discrepancies through their constants.
The Mean Bias Error (MBE) characterizes a formula’s accuracy, as it reflects how close the average prediction is to the actual refractive outcome. Zeroization improves the formula’s accuracy by
reducing this bias. On the other hand, the Standard Deviation (SD) represents the formula’s precision, indicating the spread around the mean of refractive prediction errors. Precision, unlike
accuracy, does not typically undergo any specific reduction procedure in clinical practice (learn more about these metrics).
However, we have proposed to change this paradigm with a new method intended to « optimize optimization« , and what follows justifies this approach.
This page is derived from a recently published article. We aimed to evaluate the impact of zeroization on the standard deviation (SD) value. To the best of our knowledge, no prior research had
specifically examined the effects of reducing prediction errors resulting from systematic inaccuracies in various biometric parameters.
Optimization of IOL Power Formulas: General Concepts
Let’s begin by reviewing some general concepts about the standard optimization procedure.
Optimization: Historical Overview
The SRK formula introduced the concept of an adjustment constant. This formula is a regression-based, empirical formula:
IOL Power = A-constant − 2.5 × AL − 0.9 × Km
Where AL represents the axial length (in mm) and Km is the average keratometry (in diopters). The mix of units highlights the empirical nature of this formula. The A-constant allows for a linear
adjustment of the implant power to optimize the formula: an increase of one unit in the A constant corresponds to a 1 diopter (D) increase in the predicted power.
Thus, any variation in the A-constant has a near-linear effect on postoperative refraction, with a factor of 0.7, meaning that a 1D change in IOL power results in a 0.7D change in refractive outcome.
For example, if an average prediction error of +0.50D is measured, it would be necessary to increase the A constant by approximately 0.50/0.7, or 0.71 units.
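That adjustment rule is simple enough to write down directly. A minimal sketch, using the 0.7 D-per-unit factor quoted above (the function name is just for illustration):

def srk_a_constant_increment(mean_prediction_error_d, refraction_per_a_unit=0.7):
    # A hyperopic (positive) mean error means the predicted IOL powers were too low,
    # so the A-constant must be increased by error / 0.7.
    return mean_prediction_error_d / refraction_per_a_unit

print(round(srk_a_constant_increment(0.50), 2))   # 0.71 units, as in the example above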
Evolution with SRK/T Formula
During the development of their theoretical formula (SRK/T), the authors aimed to retain the concept of the A-constant. Still, they applied it in the context of an optically-based formula. In this
new framework, adjustments were no longer made by varying the implant’s power but rather by its predicted position (Effective Lens Position or ELP). The empirical A-constant values were typically
around 118 for posterior chamber lenses, and a linear adjustment using a new constant (ACD-constant) was needed to convert variations in the empirical A constant into positional increments.
Optimization of Theoretical Formulas
Most theoretical formulas developed alongside the SRK/T formula rely on an optical model in which an implant is placed at a predicted position from which the IOL power is calculated. These formulas
possess a constant designed to optimize their performance by zeroing the mean prediction error, effectively eliminating the Mean Bias Error (MBE). This optimization ensures that the formula provides
an accurate refractive outcome by compensating for any systematic deviations in the prediction.
-Hoffer Q Formula: pACD (personalized Anterior Chamber Depth):
This is the primary constant used in the Hoffer Q formula. It represents an estimate of the effective lens position (ELP) and is typically personalized based on surgeon or lens manufacturer data. The
pACD is specific to the formula and adjusts for the expected postoperative position of the intraocular lens (IOL).
-Holladay 1 Formula: Surgeon Factor (SF):
The primary constant for the Holladay 1 formula. The SF is used to adjust for the predicted effective lens position (ELP) based on the surgeon’s technique, IOL model, and other biometric factors.
-Olsen formula: C constant:
This is the primary constant in the Olsen formula. It is used to estimate the Effective Lens Position (ELP), which is a critical factor in calculating the IOL power.
–Barrett Universal II formula: Lens Factor (LF):
This is the primary constant used in the Barrett formula, analogous to the A constant in other formulas. The Lens Factor takes into account the specific IOL design to estimate the Effective Lens
Position (ELP) more accurately.
-Haigis Formula: a0, a1, a2
• a0: This constant adjusts the overall IOL power prediction, serving as a general offset for the formula.
• a1: This constant is related to the axial length (AL) of the eye and adjusts the prediction based on the eye’s length.
• a2: This constant is related to the anterior chamber depth (ACD) and adjusts the prediction based on the depth of the anterior chamber.
Single-Optimized Haigis:
In this approach, only a0 is optimized, often using a lens constant provided by the manufacturer or based on the surgeon’s experience.
Triple-Optimized Haigis:
In this approach, all three constants—a0, a1, and a2—are optimized. This provides a more customized and potentially more accurate prediction, as it takes into account the specific biometric
characteristics of the patient’s eye. In the triple-optimized version, a1 and a2 are also adjusted, which can be likened to retraining the formula. Other formulas likely use multiplicative constants
for preoperative biometric parameters as well, but they do not provide the user with the ability to adjust these constants.
The PEARL DGS formula is a theoretical thick-lens formula that uses Artificial Intelligence to predict certain variables, such as the effective lens position (ELP). It utilizes the A constant to
infer some data about the design of the considered implant but optimizes by incrementally adjusting the predicted position of the implant.
Altering the value of a constant in a theoretical formula essentially means adjusting the predicted position of the implant rather than directly changing its power. This distinction is important
because, in theoretical formulas, the constant is primarily used to refine the estimated Effective Lens Position (ELP). By modifying the constant, you are influencing where the formula predicts the
IOL will sit within the eye, which in turn affects the final refractive outcome, but not by directly altering the IOL’s calculated power.
Consequences of Optimization on the SD
Holladay et al. provided compelling evidence that the standard deviation (SD) of the prediction error is the most accurate measure of surgical outcomes. This metric offers a reliable evaluation and
can predict other indicators, such as the percentage of cases within a specific range (e.g., ±0.50), the mean absolute deviation, and the median. The SD is calculated after optimizing the constant to
eliminate any systematic bias. The SD of the error is determined by the distance of each data point from the mean rather than the mean itself. As a result, the effect of « zeroing » the mean
prediction error on the SD cannot be anticipated without further details since individual errors, not mean errors, influence the SD.
Our study will now focus on how specific factors can individually influence the changes in SD following the zeroization of predicted errors, providing new insights into the interplay between these
biometric inaccuracies and overall formula precision.
In previous work, we introduced a method that facilitates the straightforward calculation of the optimal lens constant, which corrects for systematic biases in formula predictions.
Building on this work, we developed an analytical approach to identify the relationships and predict the effects of systematic errors in keratometry, axial length (AL) measurement, the estimation of
the keratometric index, and the prediction of the effective lens position.
Impact of Biometric Variable Alteration on Prediction Accuracy and Standard Deviation
In a recent study, we have conducted a series of theoretical simulations to evaluate the predictive accuracy of an IOL power calculation formula (Haigis single-optimized), which has previously shown
a nearly zero prediction error (<0.05 diopters [D]) in a selected group of eyes. Before altering the value of one of the studied parameters, the formula used performs perfectly for all eyes in the dataset.
Our goal is to re-run these predictions after deliberately altering one of the input variables by a consistent positive or negative amount. This is meant to simulate a scenario where a new data
acquisition method for this variable is introduced, such as using a different measurement instrument that might produce slightly different values from the same eyes.
For example, we might uniformly adjust the anterior corneal curvature radius by adding or subtracting a specific value from all radii.
By doing so, the formula will predict a new postoperative refractive outcome, which will deviate from the original prediction. This deviation represents the theoretical prediction error caused by
inaccuracies in the modified biometric variable. Following this, we examine the effect of adjusting the optimization constant that compensates for the mean prediction error and analyze its impact on
the standard deviation (SD) of this error.
Analytical Method
The method used to study the impact of zeroization on the standard deviation is relatively complex for those who are not familiar with certain analytical and statistical concepts. However, beyond
these theoretical developments, examples will be provided to illustrate and support the results of the formulas presented below (The method presented was published here).
Once this equation is generated, we can expand the term on the right to obtain a more interpretable expression:
The generated equations split the expected SD value after zeroing the mean error into three terms (A, B, and C). Terms A and B always have a positive value, with B proportional to the mean prediction
error. Only term C (which can take negative values) has the potential to reduce the SD value.
It is not easy to predict in advance the impact of an error source on the sign of C. Practical investigations are preferable to study the impact of zeroing on compensating for measurement or
estimation errors in biometric variables.
Example 1: Prediction Error caused by Effective Lens Position (ELP)
Let’s begin by examining the impact of zeroing an error caused solely by an incorrect prediction of the Effective Lens Position (ELP). In this theoretical experiment, we start with a formula (Haigis
single optimized) that performs perfectly for all the eyes in a dataset, with a prediction error of less than 0.05 D for each eye. As a result, the mean error is also below 0.05 D.
In this context, we know the power of the IOLs that were placed and the postoperative prediction error, which was measured to be less than 0.05 D in all cases. To calculate the error induced by an
incorrect estimation of the ELP, we increment the a0 constant that helps predict the ELP value (which had previously ensured the desired refraction in these eyes without prediction errors) and
calculate the error induced for each eye by the displacement of the IOL. This is not exactly equivalent to the real-life situation where the formula predicts a power intended to achieve the desired
target refraction. However, the induced error is transferable since the equations are reversible.
So, we can suppose that the formula is applied to calculate the power of an implant whose geometry is slightly different (the optic plane is shifted by a certain amount, the same for all implants,
regardless of their power). This introduces an error that will likely be proportional to the implant’s power. We can explore the impact of zeroing this error on the evolution of the variance (the
square of the standard deviation, SD) in two ways: by using the classic formula that gives the value of variance (the mean of the sum of the squares of deviations from the mean), and by using our
formula, which is based on the relationship between the implant’s displacement and the induced refractive error. This formula breaks the postoperative variance into three distinct terms.
Visual Representation of ELP Prediction Error
We begin by graphically representing the consequences of a prediction error in the ELP (the same error applied to all eyes in the dataset).
When the formula underestimates the ELP (i.e., the predicted position is smaller than the actual observed position), the predicted power will be weaker, leading to a positive mean refractive error.
Conversely, when the formula overestimates the ELP (i.e., the predicted position is larger than the actual observed position), the predicted power will be stronger, resulting in a negative prediction
error (myopic surprise).
Consequences of Zeroing on the SD
What is the impact of zeroing the mean error on the SD? For each value of the ELP error, an adjustment of the constant a0 is made to zero out the formula, and the value of the SD of the prediction
error is calculated. This SD is then estimated using the formula based on the variance calculation through the sum of the terms (A + B + C).
We can calculate this in two ways (predicted vs. measured) and explain it through the analysis of the variations of the A, B, and C terms in our formula:
When the error in the ELP value increases, the values of terms A and B increase exponentially, as expected, while term C decreases exponentially. As a result, the SD is minimized (very close to zero)
when the mean-induced prediction error is zeroed.
Is this surprising? Not really, because a positional adjustment perfectly and appropriately corrects the initial error (the ELP prediction error). Thus, we return to the ideal situation, where no
residual error remains.
Covariance Analysis: Ei and (P² + 2KP)
This conclusion is corroborated by studying the covariance between Ei (the individual refractive error) and (P² + 2KP), where P represents the IOL power and K is the keratometry. If we plot a graph
of the prediction errors induced by an underestimated ELP (which leads to a negative mean prediction error), we notice that the larger the value of (P² + 2KP), the more negative the induced error
(indicating a strong, negative covariance).
The product of the negative mean error and the negative covariance is positive, and thus, the sign of C is negative.
Had we plotted the graph for an overestimated ELP, the mean error would be positive, and the correlation (covariance) between Ei and (P² + 2KP) would also be positive, again ensuring a negative value
for C.
In this example, we demonstrate that zeroing the mean error caused by an incorrect ELP prediction minimizes the standard deviation. This is expected because the zeroing is achieved by adjusting the
effective lens position, which essentially corrects the initial positioning error.
We can address another potential source of error: the estimation of corneal power. Again, we will start with a « perfect » Haigis formula and then simulate the impact of modifying the keratometric
index value used for corneal power measurement. For each variation, we will evaluate the induced mean prediction error (MBE), the SD of this error, and then analyze its evolution during optimization
by adjusting the positional constant a0.
Example 2: Systematic Keratometric Index Value Increment
The value of the keratometric index used for estimating corneal power varies depending on the formula used. A biometer equipped with keratometer measures not the corneal power itself but the average
apical radius of curvature (in mm). This value is then converted into keratometric power (in diopters) using a keratometric index (typically nk=1.3375). When an IOL calculation formula uses a
different value (e.g., nk=1.3315 for the Haigis formula), the corneal power is estimated based on this index and the average curvature radius.
In all cases, these keratometric index values provide an approximation based on an assumed constant ratio between the anterior and posterior corneal surfaces (we have published the formula linking
these parameters). Simulating the error caused by a deviation in the keratometric index from an ideal situation allows us to assess the impact of a systematic average deviation on the accuracy and
precision of a biometric calculation formula.
The value of nk was adjusted in increments that ranged from −0.005 to +0.005, with each step involving a change of ±0.001:
We observe that an overestimation of the keratometric index value results in a positive prediction error (hyperopia), which is expected, as the formula underestimates the IOL power needed to achieve
the target refraction. The standard deviation (SD) of the error appears to increase exponentially; however, when examining the actual SD values, they remain low and relatively constant (unlike what
was observed in the previous example).
This is explained by the induction of a relatively consistent refractive variation across all eyes in the dataset, as the impact of a change in the keratometric index produces a refractive shift that
is relatively insensitive to the value of the average apical curvature radius.
Analysis of the Consequences of Zeroing the Mean Error
Let’s now examine the consequences of zeroing the mean error (this adjustment is made for each increment of the keratometric index, and the SD of the error distribution, now centered on zero, is recalculated).
We observe that zeroing, achieved through positional adjustment, leads to an exponentially increasing SD. This result is striking and warrants further analysis.
Even though the value of term C is negative, the magnitude of C is too small to offset the increase in terms A and B observed during the zeroing process.
The study of the covariance between the induced prediction error and (P² + 2KP) explains these small values of C:
Looking at the graph, we observe that there is no clear relationship or pattern between Ei and P² + 2KP, indicating that the variables are not correlated. This lack of visible correlation suggests
that the covariance between them is likely low, as covariance measures the degree to which two variables move together. In this case, the scattered nature of the data points implies that changes in
P² + 2KP have little to no consistent impact on Ei, supporting the presumption of a low covariance. This low correlation is expected because a corneal power error related to an inappropriate
keratometric index value induces a relatively constant prediction error, regardless of the power of the implant used. This can be easily understood if we imagine the refractive consequences of
placing a +1 D contact lens (which modifies the corneal diopter power): it induces a constant change in refraction, regardless of the eye concerned. In other words, the correlation between the error
in estimating corneal power and the power of the implanted IOL is very low, resulting in a reduced value for term C. To be precise, the effect of the refractive index change is not entirely constant,
as its impact varies slightly with the value of the average corneal curvature radius used for calculating corneal vergence. However, this variation is too small to create a strong enough correlation
between the induced prediction error and the value of (P² + 2KP).
Impact of other variables
Equivalent results would be observed with an error in estimating the corneal curvature radius (see the referenced article for more detailed results). This outcome stems from the low correlation
between the refractive error induced by the discrepancy in estimated corneal power and the value of (P² + 2KP) in the eyes studied.
In the experiment involving axial length (a systematic error leading to a non-zero mean bias error, which is then zeroed by adjusting a positional constant), we would observe that zeroing neither
significantly reduces nor increases the SD of the prediction error. Using a positional constant to adjust the mean error addresses an initial positional error (the estimated position of the retinal
plane), which has a certain degree of correlation with the implant power (the induced refractive error increases with the power of the implant).
This demonstrates that while zeroing is effective—meaning it improves both accuracy (by eliminating the mean prediction error) and precision (by controlling the standard deviation, SD)—for positional
errors where some correlation exists between the prediction error and implant power, it may not be as effective in cases like corneal power estimation errors. In such cases, the induced error remains
largely independent of implant power, making it difficult to simultaneously achieve both improved accuracy and control over SD through zeroing alone.
These results have led to considering an alternative to the traditional optimization technique, which is presented here.
| {"url":"https://www.gatinel.com/17-impact-of-single-constant-optimization-on-the-precision-of-iol-power-calculation/","timestamp":"2024-11-10T20:17:54Z","content_type":"text/html","content_length":"177873","record_id":"<urn:uuid:e94f82e0-a740-4eb8-8bdd-dfec0937b2ec>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00037.warc.gz"} |
Simple IRR question
Hey guys, i’m a little confused about IRR and was figuring some of the geniuses on here could help me out: 1. The IRR is defined as the “rate of return at which a project’s NPV equals zero”, or in
other words, the rate of return at which PV cash inflows equals PV cash outflows. That being said, the text also says the greater the IRR is above the hurdle rate, the greater the NPV and vice
versa". Now, definitionally, the IRR is a SINGLE VALUE, the rate at which NPV = 0. So how can the NPV be higher or lower then 0 based on the IRR? 2. I may be wrong here, but as I understand it, you
can calculate the IRR by making different cost-of-capital (discount rate) assumptions, and seeing which one brings you closest to the NPV being 0, and the highest IRR will mean the highest NPV for
the project. Now again, ignoring my confusion in Question 1, why are we saying that the project with the HIGHEST IRR (or in other words, highest cost of capital assumption) will have the highest NPV.
Shouldn’t the lowest IRR have the highest NPV, as cost of capital is less? Also, the text I’m using says you can think of the IRR as both a rate of return, and the cost of capital. How can it be both
those things, as an increase in a cost of capital would directly reduce the rate of return? I think I’m missing a piece to complete this puzzle, I appreciate any help! I just started off with my
studying, so this is all new to me. Thanks
You kind of answered your own question. The IRR is the point at which NPV equals 0. If this equals the discount rate or cost of capital, any IRR value higher than this would indicate that the PV of
cash flows is > than the PV of the initial outlay. Vice versa if the IRR is lower than the discount rate. So when they say you can think of it as the rate of return and the discount rate, they are
correct. It’s the discount rate that makes the NPV=0 for that specific project. However this isn’t the actual firm’s discount rate used firmwide when evaluating capital expenditures. Since it’s also
the project’s rate of return you compare to the firmwide discount rate and if > you accept the project, if < you reject. The higher it is over the firmwide discount rate, the higher the NPV is for
that specific project. You just need to think about it as a discount rate for a specific project vs the discount rate aka cost of capital for the entire firm. You then compare the discount rate for
the project to the one for the firm. If greater it will add value, if lower it will subtract value. It tells you that the rate of return for the project is either higher/lower than how much it will
cost the firm to fund the project. You then accept/reject accordingly. You can probably use examples from the books to help you understand that have a positive NPV and maybe change one of the cash
inflows to make it larger. Then compute the IRR. You’ll notice the higher NPV value you would get a higher IRR as well. The reason is that the discount rate for the project to make the NPV=0 will now
have to increase since the cash flow just increased(this should just make sense intuitively). The PV of FCF’s is now larger so you need a higher IRR to make the NPV = 0. Try adding a cash outflow to
now lower the PV of FCF’s. You’ll see the IRR decrease accordingly. Hope that helps a little.
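If it helps to see it with numbers, here's a quick Python sketch (the cash flows are made up, and the IRR is found with a simple bisection rather than a finance library):

def npv(rate, cash_flows):
    # cash_flows[0] is the initial outlay (negative), the rest are the inflows
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-7):
    # Bisection: find the rate at which NPV crosses zero
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

project = [-1000, 450, 450, 450]    # made-up cash flows
bigger = [-1000, 450, 450, 650]     # same project with a larger final inflow
firm_rate = 0.10                    # firm-wide cost of capital / hurdle rate

print(round(npv(firm_rate, project), 2), round(irr(project), 4))
print(round(npv(firm_rate, bigger), 2), round(irr(bigger), 4))
# The bigger inflow raises the NPV at the firm's hurdle rate AND raises the IRR.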
brafique Wrote:
> seeing which one brings you closest to the NPV being 0, and the highest IRR will mean the highest NPV for the project.

Highest IRR doesn’t necessarily mean highest NPV. A $100 project with a $30 return vs a $1,000,000 project with a $200,000 return will give conflicting answers: IRR will pick the highest percentage return ($30 on $100, i.e. 30%), but NPV will pick the highest Net Present Value, the $1,000,000 project.
Thanks for the answers guys, especially jlive, I really appreciate that you took the time to write out such a detailed reply! It really helped me out. I had a similar question regarding IRR though,
but in regards to YTM calculations with Bond Valuations - of course, YTM and IRR are basically identical concepts. Anyway, my question was this: The book says: Bond selling at Par: Coupon = Current
Yield = YTM Given that YTM takes into account capital gain/loss and reinvestment of received cash flows, I can understand how a bond bought at par where n = 1 (one coupon payment, then repayment of
principal) will have the Coupon and YTM equal. However, if you purchased a similar bond where n=2 (two coupon payments, then repayment of principal), wouldn’t that reinvestment of the 1st period cash
flow as assumed under the YTM push the YTM higher as compared to the Coupon rate or CY? Also, the text says that the Current Yield does not take capital gain/loss into account. However, it also says
that CY is greater when the bond is selling at a discount, and smaller when the bond is selling at a premium. The bond selling at a discount or a premium adds a capital gain or loss respectively, so
if that causes the CY to increase or decrease respectively, then how can one say that the CY does not consider capital gain or loss at all? I hope my questions are clear, thanks! | {"url":"https://www.analystforum.com/t/simple-irr-question/25285","timestamp":"2024-11-09T10:36:49Z","content_type":"text/html","content_length":"28653","record_id":"<urn:uuid:baa118c8-11a1-4329-9b57-3cbb82128be1>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00825.warc.gz"} |
A List of the Top Machine Learning Algorithms in Python | Board Infinity
Machine learning algorithms help data scientists to work on complex problems and solve them. If you are an aspiring Data Scientist, you need to have good knowledge of these algorithms too.
Let’s start by understanding basic algorithm terminologies.
• Seen Data or Train Data: This is the information that we already have.
• Predicted Variable (Y): Machine Learning algorithms are aimed at predicting this variable.
• Features (X): Features or variable X is usually all the inputs that are fed to the system.
Here is a list of top Machine Learning Algorithms used by Data Scientists which can be applied to any problem with either Python or R code:
Linear Regression
This algorithm is used to estimate real values on the basis of continuous values.
Logistic Regression
This algorithm is used to estimate discrete values based on the given set of independent values.
Decision Tree
This algorithm is mainly used for classification problems. A set of dependent variables are split into two or more homogeneous sets to arrive at a solution.
SVM (Support Vector Machine)
This algorithm is another classification method in which a set of X variables is plotted in an n-dimensional space.
Naive Bayes
This is a classification technique that is easy to build and very useful for large sets of data.
KNN (K-Nearest Neighbors)
This algorithm can be used for both regression and classification problems.
K-Means
This algorithm is used to solve clustering problems.
Random Forest
The Random Forest is a term for an ensemble of trees. In this algorithm, there is a collection of decision trees that are used to classify new objects based on attributes.
Dimensionality Reduction Algorithms
Data scientists collect heterogeneous data from which they need to derive useful information. To do so, they need dimensionality reduction algorithms along with other algorithms to sort out
meaningful data from the rest.
Gradient Boosting algorithms
This is a boosting algorithm used to make predictions with high prediction power. All these algorithms are very important and are frequently used by Data Scientists and Machine Learning Engineers to
solve problems.
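As a quick illustration of how a few of these algorithms are invoked in practice, here is a minimal scikit-learn sketch (it assumes scikit-learn is installed; the data is randomly generated, so the scores themselves mean nothing):

import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                                  # features (X)
y_reg = X @ np.array([1.5, -2.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=200)
y_clf = (y_reg > 0).astype(int)                                # predicted variable (Y)

X_train, X_test, y_train, y_test = train_test_split(X, y_clf, random_state=0)

print(LinearRegression().fit(X, y_reg).score(X, y_reg))        # regression fit (R^2)
for model in (LogisticRegression(), DecisionTreeClassifier(), RandomForestClassifier()):
    print(type(model).__name__, model.fit(X_train, y_train).score(X_test, y_test))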
According to PayScale, Entry level, Machine Learning engineers in Mumbai and Pune get paid an average of Rs 6,00,000 to Rs 7,00,000 a year.
Take up the Data Science Learning Path and prepare for a successful career as a Machine Learning Engineer now! | {"url":"https://www.boardinfinity.com/blog/list-of-machine-learning-algorithms-in-python/","timestamp":"2024-11-09T04:09:54Z","content_type":"text/html","content_length":"70376","record_id":"<urn:uuid:526891d7-8d8b-4e04-8671-a9417aa9def8>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00286.warc.gz"} |
Efficient extension to mixture techniques for prediction and decision trees
We present a method for maintaining mixtures of prunings of a prediction or decision tree that extends the `node-based' prunings of [Bun90, WST95, HS95] to the larger class of edge-based prunings.
The method includes an efficient online weight allocation algorithm that can be used for prediction, compression and classification. Although the set of edge-based prunings of a given tree is much
larger than that of node-based prunings, our algorithm has similar space and time complexity to that of previous mixture algorithms for trees. Using the general on-line framework of Freund and
Schapire [FS95], we prove that our algorithm maintains correctly the mixture weights for edge-based prunings with any bounded loss function. We also give a similar algorithm for the logarithmic loss
function with a corresponding weight allocation algorithm. Finally, we describe experiments comparing node-based and edge-based mixture models for estimating the probability of the next word in
English text, which show the advantages of edge-based models.
Other Proceedings of the 1997 10th Annual Conference on Computational Learning Theory
City Nashville, TN, USA
Period 7/6/97 → 7/9/97
All Science Journal Classification (ASJC) codes
• Computational Mathematics
Dive into the research topics of 'Efficient extension to mixture techniques for prediction and decision trees'. Together they form a unique fingerprint. | {"url":"https://collaborate.princeton.edu/en/publications/efficient-extension-to-mixture-techniques-for-prediction-and-deci","timestamp":"2024-11-14T18:55:37Z","content_type":"text/html","content_length":"49914","record_id":"<urn:uuid:458ddfa8-d070-412f-b9e0-162ae35b28d7>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00499.warc.gz"} |
Series and Parallel Resistors
Resistors are paired together all the time in electronics, usually in either a series or parallel circuit. When resistors are combined in series or parallel, they create a total resistance, which can
be calculated using one of two equations. Knowing how resistor values combine comes in handy if you need to create a specific resistor value.
Series resistors
When connected in series, resistor values simply add up.
N resistors in series. The total resistance is the sum of all series resistors.
So, for example, if you just have to have a 12.33kΩ resistor, seek out some of the more common resistor values of 12kΩ and 330Ω, and butt them up together in series.
Parallel resistors
Finding the resistance of resistors in parallel isn't quite so easy. The total resistance of N resistors in parallel is the inverse of the sum of all inverse resistances. This equation might make
more sense than that last sentence:
N resistors in parallel. To find the total resistance, invert each resistance value, add them up, and then invert that.
(The inverse of resistance is actually called conductance, so put more succinctly: the conductance of parallel resistors is the sum of each of their conductances).
As a special case of this equation: if you have just two resistors in parallel, their total resistance can be calculated with this slightly-less-inverted equation:
As an even more special case of that equation, if you have two parallel resistors of equal value the total resistance is half of their value. For example, if two 10kΩ resistors are in parallel,
their total resistance is 5kΩ.
A shorthand way of saying two resistors are in parallel is by using the parallel operator: ||. For example, if R[1] is in parallel with R[2], the conceptual equation could be written as R[1]||R[2].
Much cleaner, and hides all those nasty fractions!
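In code, both rules are one-liners. A small sketch (values in ohms; the helper names are just for this example):

def series(*resistors):
    # Series resistors simply add up
    return sum(resistors)

def parallel(*resistors):
    # Parallel resistors: sum the conductances (inverses), then invert the result
    return 1 / sum(1 / r for r in resistors)

print(series(12e3, 330))       # 12330.0 -> the 12.33 kOhm trick from above
print(parallel(10e3, 10e3))    # 5000.0  -> two equal resistors give half the value

The same two helpers also make short work of the resistor-network exercise in the next section, since any such network is just nested calls to series() and parallel().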
Resistor networks
As a special introduction to calculating total resistances, electronics teachers just love to subject their students to finding that of crazy, convoluted resistor networks.
A tame resistor network question might be something like: "what's the resistance from terminals A to B in this circuit?"
To solve such a problem, start at the back-end of the circuit and simplify towards the two terminals. In this case R[7], R[8] and R[9] are all in series and can be added together. Those three
resistors are in parallel with R[6], so those four resistors could be turned into one with a resistance of R[6]||(R[7]+R[8]+R[9]). Making our circuit:
Now the four right-most resistors can be simplified even further. R[4], R[5] and our conglomeration of R[6] - R[9] are all in series and can be added. Then those series resistors are all in parallel
with R[3].
And that's just three series resistors between the A and B terminals. Add 'em on up! So the total resistance of that circuit is: R[1]+R[2]+R[3]||(R[4]+R[5]+R[6]||(R[7]+R[8]+R[9])). | {"url":"https://learn.sparkfun.com/tutorials/resistors/series-and-parallel-resistors","timestamp":"2024-11-08T17:16:46Z","content_type":"text/html","content_length":"54617","record_id":"<urn:uuid:470b1acd-6221-48b5-80e4-d67d3c70a69f>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00841.warc.gz"} |
Depth First Search Time Complexity Question
Understanding Depth First Search Time Complexity
Depth First Search (DFS) is a fundamental algorithm in computer science used for traversing or searching tree or graph data structures. Despite its widespread use, many programmers and computer
science enthusiasts often find themselves puzzled over the time complexity of DFS. In this post, we’ll explore the time complexity of the DFS algorithm and clarify a common misconception about its
The DFS Algorithm
Before diving into the complexity analysis, let’s take a look at a simple implementation of DFS. Below is a Python code snippet that demonstrates the DFS algorithm using an adjacency list
representation of a graph.
def add_edge(adj, s, t):
    # Add edge from vertex s to t
    adj[s].append(t)
    # Due to undirected Graph
    adj[t].append(s)

def dfs_rec(adj, visited, s, nv):
    # Mark the current vertex as visited
    visited[s] = True
    nv += 1
    # Print the current vertex
    print(s)
    # Recursively visit all adjacent vertices that are not visited yet
    for i in adj[s]:
        nv += 1
        print('run for loop', i)
        if not visited[i]:
            dfs_rec(adj, visited, i, nv)

def dfs(adj, s, nv):
    visited = [False] * len(adj)
    dfs_rec(adj, visited, s, nv)

if __name__ == "__main__":
    V = 5
    # Create an adjacency list for the graph
    adj = [[] for _ in range(V)]
    # Define the edges of the graph
    edges = [[1, 2], [1, 0], [2, 0], [2, 3], [2, 4], [1, 3], [1, 4], [3, 4], [0, 3], [0, 4]]
    for e in edges:
        add_edge(adj, e[0], e[1])
    source = 1
    print("DFS from source:", source)
    dfs(adj, source, 0)
Analyzing the Time Complexity
When analyzing the time complexity of DFS, we need to consider how the algorithm traverses the graph. The key points to remember are:
1. Vertices and Edges: In a graph with V vertices and E edges, DFS will visit each vertex once and explore each edge exactly once.
2. Time Complexity Formula: Therefore, the overall time complexity of DFS can be expressed as O(V + E). This means that the time taken to complete the DFS traversal is proportional to the sum of the number of vertices and the number of edges in the graph.
The Common Misconception
As illustrated in the original post, there is often confusion regarding the time complexity due to assumptions about the number of iterations in the recursive calls. The statement that DFS runs the “for loop” in dfs_rec a certain number of times (in this case, 20 times) might lead to the misconception that the time complexity should be O(V + E²).
However, this is not the case. The key points to clarify are:
• The for loop in dfs_rec iterates over the adjacent vertices of the current vertex being visited. The total number of such iterations across the entire DFS traversal corresponds to the total number of edges, thus contributing to the O(E) part of the complexity.
• Each edge is considered and traversed once, leading to a linear relationship with the number of edges.
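This is easy to verify directly on the graph from the code above: in an undirected adjacency list every edge is stored twice, so for a connected graph the loop body runs sum(len(adj[v])) = 2E times in total, which is exactly where the 20 print-outs come from (2 × 10 edges). A quick check:

# Rebuild the same graph as above and count how often the loop body would run.
V = 5
edges = [[1, 2], [1, 0], [2, 0], [2, 3], [2, 4], [1, 3], [1, 4], [3, 4], [0, 3], [0, 4]]
adj = [[] for _ in range(V)]
for s, t in edges:
    adj[s].append(t)
    adj[t].append(s)

total_loop_iterations = sum(len(neighbors) for neighbors in adj)
print(total_loop_iterations)   # 20, i.e. 2 * E with E = 10
print(2 * len(edges))          # same number: each edge is seen from both ends
# Total work: V vertex visits + 2E loop iterations -> O(V + E), not O(V + E^2).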
Practical Implications
The practical implications of this understanding are significant. Knowing that DFS runs in O(V + E) allows developers to predict the performance of their algorithms more accurately, especially when
dealing with large graphs.
Moreover, the structure of the graph can influence the actual execution time of DFS. For example, a sparse graph (where E is much less than V²) will generally run faster than a dense graph due
to fewer edges to traverse.
In summary, the time complexity of the Depth First Search algorithm is O(V + E). This reflects the linear relationship between the number of vertices, the number of edges, and the total time taken
for traversal. It’s essential to clarify misconceptions around the complexity to ensure a proper understanding of DFS, especially in practical applications.
If you have any thoughts or questions about the DFS algorithm, feel free to share them in the comments below! Let’s discuss how different graph structures can affect the performance of DFS and other
algorithms. Happy coding!
Unlock your coding potential! Schedule a 1-on-1 coaching session today and master Depth First Search and more! | {"url":"https://www.interviewhelp.io/blog/posts/depth_first_search_time_complexity_question/","timestamp":"2024-11-12T12:48:59Z","content_type":"text/html","content_length":"22564","record_id":"<urn:uuid:d6069240-57a3-46e1-a2b8-5f5bed32f609>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00293.warc.gz"} |
Pwning 0x01: C Typecasting
When I was learning how to tackle pwn challenges in CTFs, I had a tough time finding a single, clear guide that could show me the ropes of actually carrying out these exploits. That's why I decided
to put together a complete guide that covers everything you need to know.
Having some background knowledge in C, stack, and assembly will be super helpful, and eventually a must, as we tackle more advanced topics. I will be using Kali Linux as the main platform for this
series, but most of what we'll learn can also be applied to Windows.
What is Typecasting?
Typecasting takes place when the compiler converts a value into a different data type. Such conversions can often be mishandled and may result in unexpected behaviour that you can abuse to control
the program's control flow. In this article, our focus will primarily be on C, which offers the most attack vectors to explore.
Explicit Type Casting
Explicit type casting is when the programmer explicitly specifies the desired type conversion. For example:
double x = 10.5; // x is a double
int y = (int)x; // Explicitly convert the double x to an int
printf("%d\n", y); // Output: 10
Here, x is a double, and it has been specifically changed into an int using (int)x. This explicit casting clearly tells the system to convert the double value into an int.
Implicit Type Casting
When doing arithmetic operations, the compiler can automatically change the types of data to be compatible with each other. This occurs automatically by the language's rules and type system. For
int x = 10;
double y = 5.2;
double result = x + y; // The integer x is implicitly promoted to a double
printf("%lf\n", result); // Output: 15.200000
The integer x is implicitly promoted to a double so that the addition operation can be performed without data loss. But how does it know what operand to change?
Overview of the Data Types
Here's a quick overview of various data types, their sizes in bytes, and their respective value ranges:
Data Type Size (bytes) Range
short int 2 -32,768 to 32,767
unsigned short int 2 0 to 65,535
unsigned int 4 0 to 4,294,967,295
int 4 -2,147,483,648 to 2,147,483,647
long int 4 -2,147,483,648 to 2,147,483,647
unsigned long int 4 0 to 4,294,967,295
long long int 8 -(2^63) to (2^63)-1
unsigned long long int 8 0 to 18,446,744,073,709,551,615
signed char 1 -128 to 127
unsigned char 1 0 to 255
float 4 1.2E-38 to 3.4E+38
double 8 1.7E-308 to 1.7E+308
long double 16 3.4E-4932 to 1.1E+4932
Generic Arithmetic Conversions
The intricate specifics of how arithmetic conversions determine what to convert fall beyond the scope of this article. You won't need an understanding for this article. It is a very large topic, and
some more detailed information can be found here. But In a nutshell, it adheres to a data type hierarchy and a set of straightforward rules. When the values are of the same type, the conversion
process concludes.
1. Floating Points Take Precedence: If any operand is a floating-point number, convert all operands to the floating-point type with the highest precision. No further conversion is needed.
2. Apply Integer Promotions: When both operands are of integer types, integer promotions are carried out on both operands. This entails converting any integer type narrower than an int into an int,
while leaving unchanged any type that matches the width of an int, is larger than an int, or is not an integer type.
3. Conversion Based on Integer Conversion Rank: If the operands have the same sign (both signed or both unsigned), convert the operand with the lower integer conversion rank to the type of the
operand with the higher integer conversion rank. This step finishes the conversion.
• If the unsigned operand has a higher or equal integer conversion rank compared to the signed operand, convert the signed operand to the type of the unsigned operand.
• If the signed operand has a higher integer conversion rank than the unsigned operand and a value-preserving conversion is possible, convert the unsigned operand to the type of the signed operand,
completing the conversion.
• If the signed operand has a higher integer conversion rank than the unsigned operand, but a value-preserving conversion is not possible, convert both operands to the unsigned type corresponding
to the type of the signed operand. This is the final step in the conversion process.
If you don't know what a "signed" data type is, it will be further discussed below. But how does the compiler convert between different signs, sizes and ranges?
Conversion Type:
Although typecasting works most of the time, it's not perfect. To help learn how C deals with conversion, I wrote a program that will allow you to simply convert to and from different data types.
[0] signed char
[1] unsigned char
[2] short int (or short)
[3] unsigned short int (or unsigned short)
[4] int
[5] unsigned int
[6] long int (or long)
[7] unsigned long int (or unsigned long)
[8] long long int (or long long)
[9] unsigned long long int (or unsigned long long)
[10] float
[11] double
[12] long double
Enter the first corresponding type index: 3
Enter the second corresponding type index: 2
Enter original Value: 65535
Original Unsigned Short Int Value: 65535
Converted to Short Int Value: -1
There are 3 different types of typecasting:
Narrowing Conversion:
This occurs when a value is converted to a data type with a smaller range. For example, converting an int to a short int is a narrowing conversion. It may result in data loss if the int value is larger than what the short int can hold.
Here is an example in C:
Enter the first corresponding type: int
Enter the second corresponding type: short int
Enter original Value: 1011135
Original int Value: 1011135
Converted to short int Value: 28095
Breaking this down into binary:
Original Int (4 bytes): 0000000000001111|0110110110111111 //1011135
Short Int (2 Bytes): |0110110110111111 //28095
As you can see, the operation simply disregards the larger bits.
Signed Conversion:
This occurs when a data type's sign convention is changed. For example, a negative int converted to an unsigned int will not keep its intended value. But to understand how sign conversion works, let's first understand how the sign convention itself works.
Two's Complement representation
To understand signed conversion, let's begin by getting a handle on how a signed data type operates. C relies on what's known as Two's Complement representation. In this case, we'll illustrate it
using just 4 bits, and it essentially boils down to two key aspects:
• The leftmost bit, also known as the most significant bit (MSB), serves as the sign bit. It tells us whether the number is positive (0) or negative (1).
• The rest of the bits follow the standard binary rules to represent the magnitude of the number.
Let's provide examples for both positive and negative numbers:
Example 1: Positive Number
Suppose we need to convert 6 into a 4-bit binary representation. 6 can be represented as (1 × 2²) + (1 × 2¹) + (0 × 2⁰) = (6)₁₀ or 110. As it is not negative, the MSB is 0; thus the 4-bit signed binary
representation of 6 is 0110.
Example 2: Negative Number
Now, let's consider the value -6. We can start by converting 6 into binary, which gives us 0110. Next, we invert all the bits, resulting in 1001. Finally, we add 1 to this inverted value, yielding
1010. So, the final representation of -6 in Two's Complement is 1010.
Base 10: -6
6 in binary: 0110
Inverted: 1001
Add 1: 1010
Final representation: 1010
How Sign Conversion Works
How can we weaponize this in C? When performing sign conversions in C, it involves a direct translation with no bit swapping. As a result, the sign bit is also directly translated, potentially
leading to an unexpected value.
Enter the first corresponding type index: unsigned short int
Enter the second corresponding type index: short int
Enter original Value: 65500
Original Unsigned Short Int Value: 65500
Converted to Short Int Value: -36
Let's look at the conversion in binary:
unsigned short int (2 bytes): 1111111111011100 //65500
short int (2 Bytes): 1111111111011100 //-36
The sign bit was carried straight over into the short int, which is a signed data type, giving us a value of -36.
Widening Conversion (Promotion)
This process occurs when a value is converted to a data type with a larger range. When converting from a smaller type to a larger type and the original type is unsigned, it fills all extra bits with
0. If the original type is signed, it uses the sign's bit value and copies it into the extra bits of the new type.
Enter the first corresponding type index: short int
Enter the second corresponding type index: int
Original Short Int Value: -5 // |1111111111111011
Converted to Int Value: -5 // 1111111111111111|1111111111111011
Enter the first corresponding type: unsigned short int
Enter the second corresponding type: unsigned int
Original Short Int Value: 5 // |0000000000000101
Converted to Int Value: 5 // 0000000000000000|0000000000000101
There are no issues when widening the data type, provided the destination data type maintains the same sign convention (either signed or unsigned) as the source data type. Errors occur when there is
a disparity in the sign representation between the source and destination data types.
Enter the first corresponding type index: short int
Enter the second corresponding type index: unsigned int
Enter original Value: -9
Original Short Int Value: -9 // |1111111111110111
Converted to Unsigned Int Value: 4294967287// 1111111111111111|1111111111110111
Now, here's a CTF challenge that applies these principles. I suggest trying to work out the solution before peeking below.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define USERNAME_LEN 6
#define NUM_USERS 8

char logins[NUM_USERS][USERNAME_LEN] = { "user0", "user1", "user2", "user3", "user4", "user5", "user6", "admin" };

void init() {
    setvbuf(stdout, 0, 2, 0);
    setvbuf(stdin, 0, 2, 0);
}

int read_int_lower_than(int bound) {
    int x;
    scanf("%d", &x);
    if(x >= bound) {
        puts("Invalid input!");
        exit(1);
    }
    return x;
}

int main() {
    printf("Select user to log in as: ");
    unsigned short idx = read_int_lower_than(NUM_USERS - 1);
    printf("Logging in as %s\n", logins[idx]);
    if(strncmp(logins[idx], "admin", 5) == 0) {
        puts("Welcome admin.");
    } else {
The core program flow can be summarised as follows:
1. The program begins by defining an array of usernames (logins) and initialising it with 8 usernames, one of which is "admin."
2. It then prompts the user to input an index corresponding to the user they wish to access.
3. The program utilises the "read_int_lower_than()" function, which performs the following steps:
a) Reads an integer input from the user.
b) Checks if the input is greater than or equal to 7 (NUM_USERS - 1). If it is, an error message is displayed, and the program exits.
c) It will return the index number.
4. If the selected index is "admin," the program spawns a shell.
Additionally, in the provided code, the "read_int_lower_than()" function returns an integer, which is then assigned to the variable "idx," with the type of an unsigned short int.
If we can somehow supply a number that will pass an int through the condition "less than 7" and when converted from an int to an unsigned short int gives us 7, we will be able to get the shell.
To achieve an unsigned short value of 7 from a converted int, we know:
• The upper 16 bits of the int are discarded during the conversion. This includes the MSB (the sign bit).
• The int value must be less than 7.
• The low 16 bits must equal 0000000000000111, i.e. the last 4 bits are 0111 and the bits above them are 0.
Since an int is signed, we can set the MSB to 1, making the value negative and thus fulfilling the condition that the int value must be less than 7. Now we need to construct a number that equals 7 when converted from an int to an unsigned short int. Let's make it in binary:
int (4 bytes): 10000000000000000000000000000111 // -2147483641
In this case, the sign bit is 1, bits 2 through 16 can be junk (they are discarded anyway), and the low 16 bits equal 0000000000000111. This gives the integer -2147483641, which fulfils both conditions.
Select user to log in as: -2147483641
Logging in as admin
Welcome admin.
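
You can verify the conversion without touching the challenge binary; this tiny sketch is my own and is not part of the challenge code:

#include <stdio.h>

int main(void) {
    int x = -2147483641;     /* 0x80000007: negative, so it passes the "less than 7" check */
    unsigned short idx = x;  /* only the low 16 bits survive, giving 7                      */
    printf("%d -> index %u\n", x, idx);  /* -2147483641 -> index 7, i.e. "admin"            */
    return 0;
}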
More Examples:
This website here is full of in-depth examples.
Thanks for reading this article. If you have any questions you can dm me. I will aim to try and get a new article out every 1-2 weeks, with the next article about buffer overflows. | {"url":"https://colej.net/pwning-0x01-c-typecasting","timestamp":"2024-11-07T19:29:33Z","content_type":"text/html","content_length":"316044","record_id":"<urn:uuid:e7ea0c8f-042a-46b4-973e-1fff2d624c1c>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00358.warc.gz"} |
AAS SFMC Manuscript Format Template
Curriculum Vitae: Michael W. Busch
Updated June 12, 2015

Contact Information
Email: [email protected]
Telephone: 1-612-269-9998
Mailing Address: SETI Institute, 189 Bernardo Ave, Suite 100, Mountain View, CA 94043 USA

Academic & Employment History
BS Physics & Astrophysics, University of Minnesota, awarded May 2005.
PhD Planetary Science, Caltech, defended April 5, 2010.
JPL Planetary Science Summer School, July 2006.
Hertz Foundation Graduate Fellow, September 2007 to June 2010.
Postdoctoral Researcher, University of California Los Angeles, August 2010 – August 2011.
Jansky Fellow, National Radio Astronomy Observatory, August 2011 – August 2014.
Visiting Scholar, University of Colorado Boulder, July – August 2012.
Research Scientist, SETI Institute, August 2013 – present.

Current Funding Sources:
NASA Near Earth Object Observations Program (grant NNX13AQ44G)
PI of SETI Institute group for European Commission NEOShield project, 2014 November – 2015 June

Research Interests:
• Shapes, spin states, trajectories, internal structures, and histories of asteroids.
• Identifying and characterizing targets for both robotic and human spacecraft missions.
• Ruling out potential future asteroid-Earth impacts.
• Radio and radar astronomy techniques.

Selected Recent Papers:
Naidu, S.P., Margot, J.L., Taylor, P.A., Nolan, M.C., Busch, M.W., Benner, L.A.M., Brozovic, M., Giorgini, J.D., Jao, J.S., Magri, C., 2015, Radar imaging and characterization of binary near-Earth asteroid (185851) 2000 DP107, AJ, in press.
Benner, L.A.M., Busch, M.W., Giorgini, J.D., Taylor, P.A., Margot, J.L., Nolan, M.C., 2015. Radar observations of near-Earth and main-belt asteroids, review chapter in Asteroids IV, U. Arizona Press, in press.
Harris A.W., and NEOShield Consortium, 2015, NEOShield Final Report: A Global Approach to Near-Earth Object Impact Threat Mitigation, European Commission
FP7 Project No. | {"url":"https://docslib.org/doc/49450/aas-sfmc-manuscript-format-template","timestamp":"2024-11-07T22:42:00Z","content_type":"text/html","content_length":"65989","record_id":"<urn:uuid:fdad67d6-5ce9-4e7c-9771-884372c58c63>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00788.warc.gz"} |
What is Random?
A cool article and a nice discussion in its comments | http://goo.gl/I7CPF
Random is as random does
by John on April 19, 2012
What is randomness? Nobody knows, or at least there’s no consensus. Everybody has some vague ideas what randomness is, but when you dig into it deeply enough you find all kinds of philosophical
quandaries. If you’d like a taste of the subtleties, you could start by reading one of Gregory Chaitin’s books. Or chew on this tome.
What is a random variable? That’s easy. It’s a measurable function on a probability space. What’s a probability space? Easy too. It’s a measure space such that the measure of the entire space is 1.
Probability theory avoids defining randomness by working with abstractions like random variables. This is actually a very sensible approach and not mere legerdemain. Mathematicians can prove theorems
about probability and leave the interpretation of the results to others.
As far as applications are concerned, it often doesn’t matter whether something is random in some metaphysical sense. The right question isn’t “is this system random?” but rather “is it useful to
model this system as random?” Many systems that no one believes are random can still be profitably modeled as if they were random.
Probability models are just another class of mathematical models. Modeling deterministic systems using random variables should be no more shocking than, for example, modeling discrete things as
continuous. For example, cars come in discrete units, and they certainly are not fluids. But sometimes it’s useful to model the flow of traffic as if it were a fluid. (And sometimes it’s not.)
Random phenomena are studied using computer simulations. And these simulations rely on random number generators, deterministic programs whose output is considered random for practical purposes. This
bothers some people who would prefer a “true” source of randomness. Such concerns are usually misplaced. In most cases, replacing a random number generator with some physical source of randomness
would not make a detectable difference. The output of the random number generator might even be higher quality since the measurement of the physical source could introduce a bias.
| {"url":"https://grstats.forumotion.net/t421-what-is-random","timestamp":"2024-11-05T20:21:51Z","content_type":"text/html","content_length":"68987","record_id":"<urn:uuid:de53db54-6739-4d56-89f3-35d3136aaecc>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00079.warc.gz"} |
How to Calculate a Bootstrap Standard Error in R » finnstats
How to Calculate a Bootstrap Standard Error in R
Bootstrap Standard Error in R, Bootstrapping is a technique for calculating the standard error of a mean.
The following is the basic procedure for calculating a bootstrapped standard error.
From a given dataset, take k repeated samples using replacement and calculate the standard error for each sample: s/√n
As a result, there are k distinct standard error estimates. Take the mean of the k standard errors to get the bootstrapped standard error.
The following examples show how to calculate a bootstrapped standard error in R using two distinct methods.
Approach 1: Boot Package
The boot() function from the boot library is one technique to calculate a bootstrap standard error in R.
In R, the following code demonstrates how to compute a bootstrap standard error for a given dataset.
Let’s take the example reproducible
Now load the boot library
We can define the dataset
x <- c(112, 64, 84, 78, 67, 221, 125, 219, 45, 79)
Let’s create a function to calculate mean
meanF <- function(x,i){mean(x[i])}
Okay, now we can calculate standard error using 500 bootstrapped samples
boot(x, meanF, 5000)
boot(data = x, statistic = meanF, R = 5000)
Bootstrap Statistics :
original bias std. error
t1* 109.4 -0.13972 18.41172
The “original” number of 109.4 represents the dataset’s mean. The bootstrap standard error of the mean is represented by the value 18.41 in the “std. error” column.
In this example, we used 5000 bootstrapped samples to estimate the standard error of the mean, but we could have used 1,000, 10,000, or any other number of bootstrapped samples.
Approach 2: Own Formula
We can also construct our own code to calculate a bootstrapped standard error.
The code below demonstrates how to do so:
create a repeatable example
Let’s load the boot library
Now we can use the same dataset
x <- c(112, 64, 84, 78, 67, 221, 125, 219, 45, 79)
mean(replicate(500, sd(sample(x, replace=T))/sqrt(length(x))))
[1] 18.11736
18.11 is the bootstrapped standard error. This standard error looks a lot like the one determined in the previous example.
| {"url":"https://finnstats.com/2022/03/15/how-to-calculate-a-bootstrap-standard-error-in-r/","timestamp":"2024-11-05T07:48:03Z","content_type":"text/html","content_length":"285938","record_id":"<urn:uuid:c34e6db1-4618-46d4-a28a-d4f7c512f060>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00703.warc.gz"} |
What is the equation in point-slope form of the line given (3,7); m=0? | HIX Tutor
What is the equation in point-slope form of the line given (3,7); m=0?
Answer 1
The line passes through the point #(3,7)# and has a slope of #m=0#.
We know that the slope of a line is given by:
#m=(y_2-y_1)/(x_2-x_1)#
Since #m=0#, the difference #y_2-y_1# must be zero for any two points on the line, so every point on the line has the same y-coordinate. The line passes through #(3,7)#, so #y_2=y_1=7#.
In point-slope form the equation is #y-7=0(x-3)#, which simplifies to the horizontal line #y=7#.
Here is a graph of the line:
graph{y=0x+7 [-4.54, 18.89, -0.84, 10.875]}
| {"url":"https://tutor.hix.ai/question/what-is-the-equation-in-point-slope-form-of-the-line-given-3-7-m-0-8f9af92537","timestamp":"2024-11-05T22:17:02Z","content_type":"text/html","content_length":"570459","record_id":"<urn:uuid:d63f7467-5ae6-4721-a3ce-f44593c4ba54>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00196.warc.gz"} |
Learn R Programming
R Introduction
R Flow Control
R Functions
R Data Structure
R Object & Class
R Graphs & Charts
R Advanced Topics
R Application
R is primarily used in the field of data science. Some areas where R is used are:
• Data Analysis: R is a great choice for data science related tasks such as data wrangling, visualization, and analysis, thanks to its extensive libraries like dplyr, tidyr, ggplot2, Shiny, etc.
• Statistical Modeling: R provides extensible statistical and graphical techniques making it easy for researchers and statisticians.
• Machine Learning: R's growing machine learning libraries and packages help data scientists build complex models and algorithms for machine learning.
How to Learn R?
R Programming can be useful for professionals seeking a career in data analysis, statistics, or data visualization. Here's how to get started with R:
Learn R basics: The first step in learning R is to be comfortable with its syntax and fundamentals. It is important to be able to read and write R to get started with it.
Practice coding: After getting acquainted with the fundamentals, it is important to practice them. Start with simple exercises and work your way up to more complex challenges.
Build projects: The best way to learn R is by building projects. Think of projects that interest you and try to build them using R. Start with basic projects and move on to more advanced ones.
Best Resources to Learn R | {"url":"https://www.datamentor.io/r-programming","timestamp":"2024-11-03T13:05:20Z","content_type":"text/html","content_length":"100706","record_id":"<urn:uuid:a3195059-c427-4c95-9dbe-6ec34b2f42c4>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00077.warc.gz"} |
Newly Released Scientific Notation Activity
There are things in life that science teachers and math teachers disagree on. Is 0.1 equal to 0.10? Do all numbers require units? Is a ratio different from a rate?
But there is one thing that math and science teachers can always agree on—teaching scientific notation is a strangely difficult thing to do.
Desmos isn’t here to tell you that we’ve solved this problem, but we are here to tell you that we’ve built a thing that makes this difficult thing a bit more fun.
We have adapted some ideas from the Grade 8 Illustrative Mathematics materials (which you can download for free through Open Up Resources) to build The Solar System, Test Tubes, and Scientific
Notation, which is free for you to use in your own classroom.
Scientific notation is useful for a number of things. Communicating a sense of the size of the numbers involved is one of those things, so our activity begins by asking students to sort two sets of
numbers from least to greatest.
We keep it a secret that the two lists are actually the same numbers until students have had a chance to sort them out.
Next up, we help students distinguish scientific notation from not-scientific notation, and then it’s time to get some practice where we offer two types of feedback. The first is about whether your
numbers are in the correct form.
The second type of feedback is whether the number a student has expressed in scientific notation is in fact the correct number. If it is, we’ll put their planet into orbit; if not, we’ll show them
that their orbit doesn’t match their planet.
As the activity’s title suggests, we ask students to express numbers that are both large (the solar system) and small (test tubes). Plenty of practice and good, useful feedback—perhaps these are
things on which math and science teachers can agree? | {"url":"https://blog.desmos.com/articles/new-scientific-notation-activity/","timestamp":"2024-11-12T05:53:24Z","content_type":"text/html","content_length":"5089","record_id":"<urn:uuid:5c22fedd-005b-4bdd-bd3c-6665bff57c79>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00895.warc.gz"} |
Why Black Hole Interiors Grow (Almost) Forever | Quanta Magazine
Leonard Susskind, a pioneer of string theory, the holographic principle and other big physics ideas spanning the past half-century, has proposed a solution to an important puzzle about black holes.
The problem is that even though these mysterious, invisible spheres appear to stay a constant size as viewed from the outside, their interiors keep growing in volume essentially forever. How is this
In a series of recent papers and talks, the 78-year-old Stanford University professor and his collaborators conjecture that black holes grow in volume because they are steadily increasing in
complexity — an idea that, while unproven, is fueling new thinking about the quantum nature of gravity inside black holes.
Black holes are spherical regions of such extreme gravity that not even light can escape. First discovered a century ago as shocking solutions to the equations of Albert Einstein’s general theory of
relativity, they’ve since been detected throughout the universe. (They typically form from the inward gravitational collapse of dead stars.) Einstein’s theory equates the force of gravity with curves
in space-time, the four-dimensional fabric of the universe, but gravity becomes so strong in black holes that the space-time fabric bends toward its breaking point — the infinitely dense
“singularity” at the black hole’s center.
According to general relativity, the inward gravitational collapse never stops. Even though, from the outside, the black hole appears to stay a constant size, expanding slightly only when new things
fall into it, its interior volume grows bigger and bigger all the time as space stretches toward the center point. For a simplified picture of this eternal growth, imagine a black hole as a funnel
extending downward from a two-dimensional sheet representing the fabric of space-time. The funnel gets deeper and deeper, so that infalling things never quite reach the mysterious singularity at the
bottom. In reality, a black hole is a funnel that stretches inward from all three spatial directions. A spherical boundary surrounds it called the “event horizon,” marking the point of no return.
Since at least the 1970s, physicists have recognized that black holes must really be quantum systems of some kind — just like everything else in the universe. What Einstein’s theory describes as
warped space-time in the interior is presumably really a collective state of vast numbers of gravity particles called “gravitons,” described by the true quantum theory of gravity. In that case, all
the known properties of a black hole should trace to properties of this quantum system.
Indeed, in 1972, the Israeli physicist Jacob Bekenstein figured out that the area of the spherical event horizon of a black hole corresponds to its “entropy.” This is the number of different possible
microscopic arrangements of all the particles inside the black hole, or, as modern theorists would describe it, the black hole’s storage capacity for information.
Bekenstein’s insight led Stephen Hawking to realize two years later that black holes have temperatures, and that they therefore radiate heat. This radiation causes black holes to slowly evaporate
away, giving rise to the much-discussed “black hole information paradox,” which asks what happens to information that falls into black holes. Quantum mechanics says the universe preserves all
information about the past. But how does information about infalling stuff, which seems to slide forever toward the central singularity, also evaporate out?
The relationship between a black hole’s surface area and its information content has kept quantum gravity researchers busy for decades. But one might also ask: What does the growing volume of its
interior correspond to, in quantum terms? “For whatever reason, nobody, including myself for a number of years, really thought very much about what that means,” said Susskind. “What is the thing
which is growing? That should have been one of the leading puzzles of black hole physics.”
In recent years, with the rise of quantum computing, physicists have been gaining new insights about physical systems like black holes by studying their information-processing abilities — as if they
were quantum computers. This angle led Susskind and his collaborators to identify a candidate for the evolving quantum property of black holes that underlies their growing volume. What’s changing,
the theorists say, is the “complexity” of the black hole — roughly a measure of the number of computations that would be needed to recover the black hole’s initial quantum state, at the moment it
formed. After its formation, as particles inside the black hole interact with one another, the information about their initial state becomes ever more scrambled. Consequently, their complexity
continuously grows.
Using toy models that represent black holes as holograms, Susskind and his collaborators have shown that the complexity and volume of black holes both grow at the same rate, supporting the idea that
the one might underlie the other. And, whereas Bekenstein calculated that black holes store the maximum possible amount of information given their surface area, Susskind’s findings suggest that they
also grow in complexity at the fastest possible rate allowed by physical laws.
John Preskill, a theoretical physicist at the California Institute of Technology who also studies black holes using quantum information theory, finds Susskind’s idea very interesting. “That’s really
cool that this notion of computational complexity, which is very much something that a computer scientist might think of and is not part of the usual physicist’s bag of tricks,” Preskill said, “could
correspond to something which is very natural for someone who knows general relativity to think about,” namely the growth of black hole interiors.
Researchers are still puzzling over the implications of Susskind’s thesis. Aron Wall, a theorist at Stanford (soon moving to the University of Cambridge), said, “The proposal, while exciting, is
still rather speculative and may not be correct.” One challenge is defining complexity in the context of black holes, Wall said, in order to clarify how the complexity of quantum interactions might
give rise to spatial volume.
A potential lesson, according to Douglas Stanford, a black hole specialist at the Institute for Advanced Study in Princeton, New Jersey, “is that black holes have a type of internal clock that keeps
time for a very long time. For an ordinary quantum system,” he said, “this is the complexity of the state. For a black hole, it is the size of the region behind the horizon.”
If complexity does underlie spatial volume in black holes, Susskind envisions consequences for our understanding of cosmology in general. “It’s not only black hole interiors that grow with time. The
space of cosmology grows with time,” he said. “I think it’s a very, very interesting question whether the cosmological growth of space is connected to the growth of some kind of complexity. And
whether the cosmic clock, the evolution of the universe, is connected with the evolution of complexity. There, I don’t know the answer.” | {"url":"https://www.quantamagazine.org/why-black-hole-interiors-grow-forever-20181206/","timestamp":"2024-11-05T22:43:28Z","content_type":"text/html","content_length":"202868","record_id":"<urn:uuid:7e024374-c416-4b2f-8e99-f15d65b1957f>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00019.warc.gz"} |
Representativeness Bias in Investment Decisions: Understanding and Overcoming Stereotype-Based Judgments
12.2.5 Representativeness Bias
In the realm of finance and investment, cognitive biases can significantly influence decision-making processes, often leading to suboptimal outcomes. One such cognitive bias is the representativeness
bias, a mental shortcut that involves judging the probability of an event based on how much it resembles existing stereotypes or patterns, rather than relying on objective data. This section delves
into the intricacies of representativeness bias, its implications for investment decisions, and strategies to counteract its influence.
Understanding Representativeness Bias
Representativeness bias is a cognitive heuristic where individuals assess the likelihood of an event by comparing it to an existing prototype or stereotype. This bias can lead to erroneous judgments
because it often ignores base rates and statistical probabilities. Instead of evaluating an event based on factual data, individuals rely on perceived similarities, leading to potentially flawed
Key Characteristics of Representativeness Bias
• Stereotype-Based Judgments: Decisions are influenced by how closely an event matches a preconceived stereotype or pattern.
• Neglect of Base Rates: There is a tendency to overlook statistical prevalence or base rates in favor of specific, anecdotal information.
• Pattern Recognition: Individuals may see patterns or trends where none exist, leading to misguided expectations.
Impact on Investment Decisions
Representativeness bias can significantly affect investment decisions and market expectations. Investors may fall into the trap of making decisions based on perceived patterns rather than objective
Ignoring Base Rates
One of the primary effects of representativeness bias in investment is the neglect of base rates. Investors may focus on specific, vivid information while ignoring the broader statistical context.
For example, an investor might be swayed by a company’s recent success without considering the overall industry performance or historical data.
Assuming Trends
Another common manifestation of representativeness bias is the assumption that recent performance will continue indefinitely. This can lead to the belief that a stock or sector will replicate past
success without considering fundamental factors that may affect future performance.
Common Pitfalls of Representativeness Bias
Representativeness bias can lead to several cognitive pitfalls that distort investment decisions.
Gambler’s Fallacy
The gambler’s fallacy is the erroneous belief that a reversal is due after a series of similar outcomes. In the context of investing, this might manifest as expecting a stock price to fall after
consecutive rises, even if there are no underlying reasons for such a reversal.
Hot Hand Fallacy
Conversely, the hot hand fallacy involves the belief that a pattern will continue indefinitely. For instance, investors might assume that a fund manager’s recent success will persist, ignoring the
possibility of regression to the mean.
Illustrative Examples in Financial Contexts
To better understand representativeness bias, consider the following scenarios:
Sector Investment Scenario
An investor might decide to invest heavily in a sector because a few companies within that sector have performed exceptionally well. The assumption is that the entire sector will replicate this
success, ignoring the unique factors that contributed to the individual companies’ performance.
Startup Investment Example
Another example is the tendency to assume that a startup will become the “next big thing” because it resembles successful companies, such as tech startups modeled after industry giants. This
assumption overlooks the unique challenges and market conditions that the startup may face.
Counteracting Representativeness Bias
Overcoming representativeness bias requires a conscious effort to incorporate objective analysis and critical thinking into investment decisions.
Base Rate Analysis
Incorporating statistical probabilities and base rates into decision-making can help counteract the influence of representativeness bias. By grounding decisions in factual data, investors can avoid
the pitfalls of stereotype-based judgments.
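
To make the idea concrete, here is a small illustrative Python sketch (not part of the course text, and all of the probabilities in it are made-up numbers) showing how a base rate changes the picture via Bayes' rule:

p_success = 0.05                 # base rate: 5% of startups in the sector succeed
p_looks_good_if_success = 0.80   # a successful startup "resembles past winners" 80% of the time
p_looks_good_if_failure = 0.30   # an unsuccessful one still resembles them 30% of the time

p_looks_good = (p_looks_good_if_success * p_success +
                p_looks_good_if_failure * (1 - p_success))

# Bayes' rule: probability of success given that the startup resembles past winners
p_success_given_looks_good = (p_looks_good_if_success * p_success) / p_looks_good
print(round(p_success_given_looks_good, 3))  # about 0.123, far below what the resemblance alone suggests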
Due Diligence
Conducting thorough due diligence is essential for evaluating investments based on comprehensive research rather than superficial similarities. This involves analyzing financial statements, market
conditions, and competitive landscapes to make informed decisions.
Awareness Training
Recognizing and questioning snap judgments based on stereotypes is a crucial step in overcoming representativeness bias. Awareness training can help investors identify when they are relying on
cognitive shortcuts and encourage more deliberate analysis.
Representativeness bias is a pervasive cognitive bias that can significantly impact investment decisions. By understanding its characteristics and effects, investors can take proactive steps to
mitigate its influence. Incorporating base rate analysis, conducting thorough due diligence, and fostering awareness of cognitive biases can lead to more rational investment choices grounded in
factual analysis.
Quiz Time!
📚✨ Quiz Time! ✨📚
### What is representativeness bias?
- [x] A cognitive heuristic where individuals assess the likelihood of an event based on how much it resembles existing stereotypes.
- [ ] A bias that involves making decisions based on past experiences only.
- [ ] A tendency to overestimate the probability of rare events.
- [ ] A bias that leads to ignoring all statistical data.

> **Explanation:** Representativeness bias involves judging the probability of an event based on its resemblance to existing stereotypes rather than objective data.

### How does representativeness bias affect investment decisions?
- [x] It leads to ignoring base rates and assuming trends.
- [ ] It encourages diversification of investments.
- [ ] It promotes reliance on statistical analysis.
- [ ] It reduces the impact of cognitive biases.

> **Explanation:** Representativeness bias can cause investors to ignore statistical prevalence and assume that recent performance will continue without considering fundamental factors.

### What is the gambler's fallacy?
- [x] The belief that a reversal is due after a series of similar outcomes.
- [ ] The assumption that a pattern will continue indefinitely.
- [ ] The tendency to overestimate the likelihood of rare events.
- [ ] The reliance on past performance to predict future outcomes.

> **Explanation:** The gambler's fallacy involves expecting a reversal after a series of similar outcomes, such as expecting a stock price to fall after consecutive rises without underlying reasons.

### What is the hot hand fallacy?
- [x] The belief that a pattern will continue indefinitely.
- [ ] The expectation of a reversal after a series of similar outcomes.
- [ ] The tendency to ignore statistical probabilities.
- [ ] The assumption that all investments will perform equally well.

> **Explanation:** The hot hand fallacy involves believing that a pattern, such as a fund manager's success, will continue indefinitely without considering the possibility of regression to the mean.

### Which strategy can help counteract representativeness bias?
- [x] Base rate analysis
- [ ] Relying solely on recent performance
- [ ] Ignoring statistical data
- [ ] Making decisions based on stereotypes

> **Explanation:** Base rate analysis involves incorporating statistical probabilities into decision-making, helping to counteract the influence of representativeness bias.

### What is a common pitfall of representativeness bias?
- [x] Assuming trends will continue indefinitely
- [ ] Overestimating the likelihood of rare events
- [ ] Diversifying investments too broadly
- [ ] Relying on statistical data exclusively

> **Explanation:** A common pitfall of representativeness bias is assuming that recent performance or trends will continue indefinitely without considering fundamental factors.

### How can due diligence help mitigate representativeness bias?
- [x] By evaluating investments based on comprehensive research
- [ ] By relying on stereotypes and patterns
- [ ] By ignoring statistical probabilities
- [ ] By focusing solely on recent performance

> **Explanation:** Conducting thorough due diligence involves evaluating investments based on comprehensive research, helping to avoid decisions based on superficial similarities.

### What role does awareness training play in overcoming representativeness bias?
- [x] It helps recognize and question snap judgments based on stereotypes.
- [ ] It encourages reliance on cognitive shortcuts.
- [ ] It promotes decision-making based on stereotypes.
- [ ] It reduces the need for statistical analysis.

> **Explanation:** Awareness training helps investors recognize when they are relying on cognitive shortcuts and encourages more deliberate analysis, reducing the impact of representativeness bias.

### What is an example of representativeness bias in investing?
- [x] Investing heavily in a sector because a few companies have performed well.
- [ ] Diversifying investments across multiple sectors.
- [ ] Relying solely on statistical analysis for decision-making.
- [ ] Ignoring recent performance trends.

> **Explanation:** An example of representativeness bias is investing heavily in a sector based on the success of a few companies, assuming the entire sector will replicate that success.

### True or False: Overcoming representativeness bias leads to more rational investment choices.
- [x] True
- [ ] False

> **Explanation:** Overcoming representativeness bias involves making investment choices grounded in factual analysis, leading to more rational and informed decisions. | {"url":"https://csccourse.ca/12/2/5/","timestamp":"2024-11-07T11:07:14Z","content_type":"text/html","content_length":"91667","record_id":"<urn:uuid:c07a0ff0-efac-4a2e-b2d1-4495d67f1fcd>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00688.warc.gz"} |
How to Change Parameter name of SQL function
Renaming Parameters in SQL Functions: A Step-by-Step Guide
Let's say you've created a SQL function, but you've realized that the parameter name you chose doesn't accurately reflect its purpose or is simply confusing. Don't worry, changing parameter names in
SQL functions is a simple process! This article will guide you through the steps involved, using examples to illustrate the concepts.
The Scenario:
Imagine you have a SQL function called calculate_discount that takes a product_price as input and applies a discount based on certain conditions. You realize that product_price isn't the best name
and you want to change it to original_price for better clarity.
Here's the original function definition:
CREATE FUNCTION calculate_discount (product_price DECIMAL(10,2))
RETURNS DECIMAL(10,2)
BEGIN
    DECLARE discount DECIMAL(10,2);
    -- Calculate discount based on product_price
    -- ...
    RETURN product_price - discount;
END;
The Solution:
To rename the parameter, you need to follow these steps:
1. Drop the existing function: This step ensures that you're working with a clean slate.
DROP FUNCTION calculate_discount;
2. Recreate the function with the new parameter name: Now, you can define the function again with the desired parameter name.
CREATE FUNCTION calculate_discount (original_price DECIMAL(10,2))
RETURNS DECIMAL(10,2)
BEGIN
    DECLARE discount DECIMAL(10,2);
    -- Calculate discount based on original_price
    -- ...
    RETURN original_price - discount;
END;
Important Considerations:
• Impact on Existing Code: If you've already used the function in other queries or stored procedures, you'll need to update those references to reflect the new parameter name.
• Data Type: While renaming the parameter, make sure the data type remains consistent with the original definition.
• Function Dependencies: If other functions or procedures depend on this function, be mindful of the changes and ensure compatibility after renaming.
Best Practices:
• Descriptive Names: Choose parameter names that clearly convey their purpose and meaning, making your code more readable and maintainable.
• Consistent Naming Conventions: Follow a consistent naming convention for parameters throughout your codebase to enhance uniformity and comprehension.
Let's see how the function call would look before and after the parameter rename:
-- Before renaming
SELECT calculate_discount(100);
-- After renaming
SELECT calculate_discount(100);
As you can see, the call itself remains the same, but the code is now more descriptive and easier to understand.
Renaming parameters in SQL functions is a simple yet impactful change that can greatly improve the readability and maintainability of your code. By following the steps outlined in this guide, you can
ensure a smooth transition and avoid any potential issues. Remember to carefully consider the impact on your existing code and test thoroughly after making any changes. | {"url":"https://laganvalleydup.co.uk/post/how-to-change-parameter-name-of-sql-function","timestamp":"2024-11-13T03:14:52Z","content_type":"text/html","content_length":"82020","record_id":"<urn:uuid:49bcd9af-2abd-4d87-a686-32a6ae8d0815>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00442.warc.gz"} |
Gabriels Technology Solutions Salaries | How Much Does Gabriels Technology Solutions Pay in the USA | CareerBliss
Oops, we don't have any salaries at Gabriels Technology Solutions right now... Please try another job title.
Senior Project Manager is the highest paying job at Gabriels Technology Solutions at $88,000 annually.
Data Analyst is the lowest paying job at Gabriels Technology Solutions at $46,000 annually.
Gabriels Technology Solutions employees earn $63,000 annually on average, or $30 per hour. | {"url":"https://www.careerbliss.com/gabriels-technology-solutions/salaries/","timestamp":"2024-11-04T07:03:48Z","content_type":"text/html","content_length":"85923","record_id":"<urn:uuid:d6b98674-995e-48df-8d16-82a4df0545ac>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00331.warc.gz"} |
Big O & What You Need To Know
The Basics of Big O Notation
During my journey of becoming a software engineer I would hear about Big O Notation from time to time and would think, what in the world does this have to do with writing code? For reasons beyond me, I had always associated Big O Notation with algorithms, associated algorithms with math, and always associated math with the word YIKES! Seeing the chart below seemed daunting enough, so I decided to stop playing word association and jumped into the scary deep end of Big O Notation.
My first few questions regarding Big O notation were: what is it? What does the chart above have to do with anything? And why is it so important to know when working as a software engineer? To sum up Big O at its most basic level, Big O notation describes how long it takes an algorithm to run; this is also referred to as time complexity. Big O represents how long the runtime of a given algorithm can become as the data grows larger. Someone new to Big O, like myself, may wonder why it is important to measure the speed of an algorithm, but as programs grow in size, milliseconds start to add up and, boom, an algorithm that used to take little to no time at all starts to slow your entire application down. That is why software engineers need to know the "worst case", or the slowest an algorithm will run given a growing list of data. My journey into Big O led me to discover four main classifications used to represent Big O: O(1), O(N), O(logN) and O(N²).
The best time complexity in Big O notation is O(1), as seen on the chart above. This includes algorithms that take pretty much the same amount of time to run no matter how long or short the list is. If we were to write a simple block of code that greets my dog Oliver, you might think this is not an algorithm, but you would be wrong (like I was): any line of code that completes a task is an algorithm, it is just that for this one the time complexity is extremely low. That said, the line print "Hello Oliver!" is what is known, in Big O notation, as O(1), which means very simply that the algorithm will take the same number of steps no matter how much data is given. O(1) is known as constant time.
Let us say we have an array of my furry friends, dogs = ["Oliver", "Charlie", "Malo"]. Now that we understand O(1), let's try to think about searching an array for a specific value. Given our dogs array mentioned above, say we were only looking for an element in that array that was equal to the name "Charlie". Simple enough: we would write a simple loop that iterates through our dogs array and displays the elements that contain the name "Charlie".
const dogs = ['Oliver','Charlie','Malo'];
function helloPuppy(array) {
  for (let i = 0; i < array.length; i++) {
    if (array[i] === 'Charlie') {
      console.log('Hello Charlie!');
    }
  }
}
O(n) gets a little more complicated, the variable "n" being the size of the data you are working on. This usually represents an algorithm that has to go through each item in a list to find the element it is looking for, or one where the algorithm uses each item and alters or combines them in some way. With this in mind, we can now see that this type of algorithm would most likely not be O(1), since the number of steps it takes to complete depends on the size of the data set. That size, in Big O, is referred to as "N", and if we were to classify the loop above in terms of Big O notation it would be O(N): it takes "N" steps to complete the algorithm. O(N) is usually known as linear time. According to our chart above, O(N) time complexity is neither the best nor the worst, but is dependent on data size.
Let's say I build a furry friends application and the dogs array has grown to a tremendous size. There's another time complexity that is better than O(n), referred to as O(logN). The log here stands for logarithm, which you could think of as the opposite of an exponent. O(log n) is also called logarithmic time.
A classic O(log n) algorithm is binary search on a sorted array. A binary search splits the list in half and only searches one half. If the data you're looking for is in that half, it splits that half again and repeats the search until there's nothing left to split. This way of searching becomes much more efficient when you're working with large sets of data, because it discards half of the remaining items each time it searches.
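
Here is a minimal sketch of binary search in JavaScript (my own example, not from the original article; the function name and the sorted list of dog names are made up to match the earlier example):

function binarySearch(sortedArray, target) {
  let low = 0;
  let high = sortedArray.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sortedArray[mid] === target) return mid;  // found it
    if (sortedArray[mid] < target) low = mid + 1; // discard the left half
    else high = mid - 1;                          // discard the right half
  }
  return -1; // not found
}

const sortedDogs = ['Charlie', 'Malo', 'Oliver']; // already in alphabetical order
console.log(binarySearch(sortedDogs, 'Oliver'));  // 2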
The last time complexity I will cover here is O(n²). This one is pretty bad, as it starts out okay and then very quickly becomes slow. This time complexity usually deals with nested data, and the reason it has a bad runtime is that not only do you have to go through every item in a list, but you also have to go through every item within them.
Let's say you had an array with five dogs in it, and each of those dogs was actually another array containing 5 details about the dog. This algorithm will have to iterate a total of 25 times, which is 5². Deeper nesting will result in O(n³), O(n⁴) and so on.
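
A rough sketch of that nested iteration (again my own example; the extra dogs and their details are made up just to have 5 × 5 items):

const dogsWithDetails = [
  ['Oliver', 'brown', 'small', 'age 3', 'likes naps'],
  ['Charlie', 'black', 'medium', 'age 5', 'likes balls'],
  ['Malo', 'white', 'large', 'age 2', 'likes walks'],
  ['Buddy', 'golden', 'medium', 'age 4', 'likes treats'],
  ['Luna', 'grey', 'small', 'age 1', 'likes toys'],
];

let steps = 0;
for (let i = 0; i < dogsWithDetails.length; i++) {       // outer loop: every dog
  for (let j = 0; j < dogsWithDetails[i].length; j++) {  // inner loop: every detail of that dog
    steps++;
  }
}
console.log(steps); // 25, i.e. 5 * 5 -- this is the O(n²) growth pattern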
Big O can get very complex, and it can be hard to conceptualize. However, once you understand the why and how of Big O, the more complex ideas become much easier to follow, and now the
chart above doesn't look so scary! | {"url":"https://daniel-c-reyes30.medium.com/big-o-what-you-need-to-know-a9b09f373cd1","timestamp":"2024-11-04T17:54:15Z","content_type":"text/html","content_length":"103364","record_id":"<urn:uuid:0168dbc0-3fee-4851-b4ea-2dd654145581>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00021.warc.gz"} |
The HASHKEY signature
The HASHKEY signature defines a hashable key type which is totally ordered. Note that anything ascribing to HASHKEY implicitly ascribes to both ORDKEY and EQKEY as well.
type t
val equal : t * t → bool
val compare : t * t → order
val hash : t → int
val toString : t → string
type t
The abstract key type.
val equal : t * t → bool
Determine whether or not the argument pair is considered equal. This operation is reflexive, symmetric, and transitive.
val compare : t * t → order
Return one of LESS, EQUAL, or GREATER as appropriate for the argument pair. This operation is transitive.
val hash : t → int
Hash the argument into an integer.
val toString : t → string
Return a string representation. | {"url":"http://www.cs.cmu.edu/afs/cs/academic/class/15210-s16/www/docs/sig/key/HASHKEY.html","timestamp":"2024-11-07T09:26:17Z","content_type":"text/html","content_length":"3975","record_id":"<urn:uuid:ca6e08a7-b5df-470b-839c-2358d1cf845f>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00192.warc.gz"} |
KNIME Interactive Views
One of my responsibilities at KNIME is the design and implementation of interactive views.
KNIME, pronounced [naim], is a modular data exploration platform that allows the user to visually create data flows (often referred to as pipelines).
This article should shortly introduce the views I developed so far and some which are still under construction. I describe the following views: the Scatterplot, Scatterplot Matrix, Line Plot, Parallel Coordinates, Box Plot, Dendrogram, and Rule 2D View,
each with a screenshot and how the information is displayed in each visualization.
As the dataset for this article I used my marks of my studies, which can be viewed here.
In KNIME there is the possibility to assign visual properties to the data throughout the whole workflow. I used a color coding for the marks (in Germany a 1 is the best, a 4 is the worst that still passes), ranging from dark green for 4 to light green for 1.
Additionally, I used different shapes: a diamond for all courses of my bachelor studies, a circle for the courses of my master studies at the Technical University of Cottbus and a triangle for my master courses at the University of Konstanz.
Another advantage of KNIME is the possibility to highlight some data points. What we call highlighting is also often referred to as Linking&Brushing. The visual properties of highlighted data points are, on the one hand, the orange color and, on the other hand, the rectangle around the data point, clearly identifying it as highlighted. Linking&Brushing enables the user to select some points of interest in one view and highlight them. This information is then propagated to all other related views, such that the points of interest are clearly identifiable in each view. I highlighted all marks achieved at the University of Konstanz.
Scatterplot (with a linear regression line):
This view plots the data with the marks ranging from 1 to 4 at the y axis (vertically) and the years ranging from 2001 to 2006 on the x axis (horizontally).
As one can see at the regression line, the marks definitively improved during my studies. The different visual properties like color and shape are also visible.
Scatterplot Matrix:
There are some problems related to the scatterplot. First, if several data points share the same coordinate it is not visible to the user. This problem is also known as overplotting. Second, a
scatterplot can always use only two dimensions. If the data has more dimensions (attributes) the user must constantly switch the displayed dimensions in order to understand the data.
The use of a scatter matrix overcomes the second problem (but is also restricted to low dimensional data). All dimensions of the dataset are plotted as a matrix, where each attribute is plotted
against each other.
Line Plot:
The line plot is a very well known view. In contrast to a scatterplot or parallel coordinates not the values of one data point (instance or row of the data) are plotted but the values of one
attribute (or column). Thus, the color and shape information are not available, since they describe the values of one data point. Here are all marks plotted as they occur in the data set from left to
right (I ordered the data set in advance so they are also plotted from 2001 to 2006).
Parallel Coordinates:
Parallel Coordinates are an attempt to overcome the problem of displaying data with more than 2 (or 3 if you want) dimensions. Each attribute is displayed as a vertical line, ranging from the lowest
value of that attribute to the highest. As one can see the axis representing the year attribute (which ranges from 2001 – 2006) is as long as the axis of the mark attribute (1- 4). Each data point is
plotted as a line. The line intersects the axis at the values the data point has in this dimension. If one looks, for example, at the two 4 marks at the top, one can follow the line and see that one
was obtained in 2002 and one in 2003.
Since the highlighted lines represent my marks obtained at the University of Konstanz one can easily see, that here the marks range from 1.0 to 2.0 (with the intermediate steps of 1.3 and 1.7).
One problem related to parallel coordinates is that if several lines intersect at one point of one axis it is hard to follow the lines after the intersection (if they are not color coded). One
possibility to overcome this is to plot the lines as curves. While this is not always true one can see when looking at the 2.0 (the highest of the highlighted points), that one can clearly
distinguish the curves coming from 2001, 2002 and 2005.
Box Plot:
The box plot (or box and whisker plot) displays robust statistics (you may also be interested in my article about box plots). Why robust? In contrast to the mean, which is calculated as the sum of
the values divided by the number of values, the median is determined by sorting the data and then choosing the one in the middle (if the number of data points is even, the mean of the two data points around the middle is taken).
A simple example should make the idea clear. The mean of my marks is 1.8825. If the marks are sorted, they look like:
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.3, 1.3, 1.3, 1.3, 1.3, 1.3, 1.3, 1.3, 1.7, 1.7, 1.7, 1.7,
2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.3, 2.3, 2.3, 2.7, 2.7, 2.7, 3.7, 3.7, 3.7, 4.0, 4.0.
Altogether there are 40 marks, and the two values around the middle (the 20th and 21st) are both 1.7, thus the median of my marks is 1.7 in contrast to the mean of 1.8825. For the sake of this example, assume the two 4.0s were two 5.0s. Besides the fact that I would then have failed those courses, the mean of my marks would become 1.9325. The median remains 1.7. This is what is meant by robust statistics: it is less sensitive to extreme values.
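
If you want to check these numbers yourself, here is a small Python sketch (my own, not part of the original post) that recomputes them from the sorted marks above:

import statistics

marks = ([1.0] * 10 + [1.3] * 8 + [1.7] * 4 +
         [2.0] * 7 + [2.3] * 3 + [2.7] * 3 + [3.7] * 3 + [4.0] * 2)

print(round(sum(marks) / len(marks), 4))  # 1.8825 (the mean)
print(statistics.median(marks))           # 1.7    (the median)

# Replace the two 4.0s with 5.0s: the mean moves, the median does not.
marks_with_5s = marks[:-2] + [5.0, 5.0]
print(round(sum(marks_with_5s) / len(marks_with_5s), 4))  # 1.9325
print(statistics.median(marks_with_5s))                   # 1.7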
However, a box plot displays much more information. Here is the list of parameters which are necessary to calculate:
• lower quartile Q1 (which is the value at the lower 1/4 of the data)
• upper quartile Q3 (which is the value at the 3/4 of the data)
• the interquartile range (IQR), which is the range between the two quartiles, thus Q3 – Q1
• the minimum value: the smallest observation which is not smaller than Q1 - 1.5*IQR
• the maximum value: the largest observation which is not larger than Q3 + 1.5*IQR
• the mild outliers: all values between Q1 - 3*IQR and Q1 - 1.5*IQR or between Q3 + 1.5*IQR and Q3 + 3*IQR
• extreme outliers: all values smaller than Q1 - 3*IQR or larger than Q3 + 3*IQR
Now we are ready to draw the box:
The bold line in the middle of the box is the median, the lower border of the box is Q1, the upper border Q3. The whiskers (the horizontal lines outside the box) display the values which are closest to Q1 - 1.5*IQR and Q3 + 1.5*IQR but still lie inside this interval. All values outside this interval are mild outliers and are plotted with a dot. Extreme outliers may be displayed using a different shape. As
one can see: my worst marks are definitively outliers ;0)
Dendrogram:
A dendrogram displays the result of a hierarchical clustering. This starts with every data point as its own cluster. In the next step, the two data points (i.e. clusters) which are closest are combined. This is repeated for the remaining data points. In the next steps those clusters are combined which have the closest distance (of course, there are several methods to measure the distance of two
clusters – but this goes beyond the scope of this article). Eventually, all data points are in one cluster.
A dendrogram displays all results of the process of hierarchical clustering. At the bottom there are all data points and at the top is the super cluster which contains all data points. Each
combination of two clusters is displayed as two vertical lines starting at the clusters, connected by a horizontal line. The height of this horizontal line corresponds to the distance between the
combined clusters.
Since my marks are more or less one-dimensional, I used the Iris data set for the hierarchical clustering.
Rule 2D View
I also implemented a chart for rules. Rules are a common classification technique in Knowledge Discovery. As with most classification tasks the algorithm is trained with already labeled data. When
the performance is good enough it is able to classify unseen instances based on the rules learned from the training data.
Basically, rules consist of an interval for each dimension and the referring label, saying if the data point lies within this interval it belongs to this and that class.
A problem related to this kind of rules are those data points located at the border of the intervals. If they are close to the border but outside – are they really not instances of this class? If
they are close to the border but inside the interval – do they belong as much to the class as those points in the center of the interval.
A well-known technique to model this uncertainty is the use of fuzzy rules, i.e. the data points close to the border belong to that class only to a certain degree. Then a fuzzy rule consists of an area where the degree is equal to 1
(called the core), and an area where the degree lies between 0 and 1 (called the support). A one-dimensional fuzzy rule is typically depicted like this:
Since I wanted to show 2 dimensional rules, the above shown rule is considered to be seen from top, which results in:
You can see the core area as a rectangle and the support area as the regions where the color fades as the degree of membership decreases. In order to better understand the rules, the data points they are covering are also displayed and are still visible under the rules.
A more complex system of fuzzy rules may look like this:
Thu, 11/13/2008 - 13:51 — Anand (not verified)
I am interested in predicting the activity of a set of compounds with a training set and descriptors associated with them. I have 10 descriptors and am interested in developing a model for a test set and predicting activity using KNIME. Kindly suggest.
Fri, 11/14/2008 - 18:18 — fabian
Hi Anand,
it is of course possible to build a predictive model in KNIME. I don't know which kind of model you are interested in: decision tree, neural network, etc.
In KNIME you would read in your training set and connect it to a learner (e.g. decision tree learner), then connect the test set with the referring predictor (e.g. decision tree predictor), which would append an additional column to your test set containing the predicted activity.
By the way: if the activity is not categorical ("yes" or "no") but a number you can't use a decision tree but some other learning models (regression, neural network, etc.)
Hope that helps. For further questions I would suggest to go to the KNIME forum directly and ask there your questions.
Thanks for your interest!
Sun, 04/26/2009 - 22:53 — john Lo (not verified)
I tried to use Knime to connect to my database.
However, you have only one choice for the driver:
and provide the following:
Database URL
User Name
Since I am using DHCP, it does not work on my laptop.
I have sqlDeveloper and TOAD installed in my computer that use either Oracle.jdbc.OracleDriver or oracle.jdbc.driver.OracleDriver. Both work in my computer.
I tried to create a node but it seems that the free version
does not have this option.
Any comment?
Wed, 09/02/2009 - 16:43 — Thomas (not verified)
Hi John,
All database nodes in KNIME allow the registration of additional database drivers from file (jar or zip), see the node dialog's "LOAD" button. As soon as you have selected the driver file, the list
will be updated with all SQL compatible drivers. Note, other KNIME user- or developer-related questions may also be posted into our forum at www.knime.org
Regards, Thomas
| {"url":"http://informationandvisualization.de/blog/knime-interactive-views","timestamp":"2024-11-10T04:16:19Z","content_type":"application/xhtml+xml","content_length":"24810","record_id":"<urn:uuid:40371f39-9778-409e-b8f5-83f09a36cc8c>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00279.warc.gz"} |
Best 10 Second-Hand Tractors Under 6 Lakhs
It has a 50 Hp engine power with 3 cylinders. It has a lifting capacity of 2000 kg.
It has a 50 Hp engine power with 3 cylinders. It has a lifting capacity of 1800 kg.
It has a 50 Hp engine power with 3 cylinders. It has a lifting capacity of 1600 kg.
It has a 37 Hp engine power with 3 cylinders. It has a lifting capacity of 1500 kg.
It has a 45 Hp engine power with 3 cylinders. It has a lifting capacity of 1850 kg.
It has a 46 Hp engine power with 3 cylinders. It has a lifting capacity of 1700 kg.
It has a 45 Hp engine power with 4 cylinders. It has a lifting capacity of 1640 kg.
It has a 57 Hp engine power with 4 cylinders. It has a lifting capacity of 2200 kg. | {"url":"https://tractorgyan.com/web-story-in-india/best-second-hand-tractors-under-6-lakhs-in-india","timestamp":"2024-11-06T20:40:26Z","content_type":"text/html","content_length":"28195","record_id":"<urn:uuid:4f52fccf-2701-4cf7-9d3a-2d30a3754c2f>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00271.warc.gz"} |
Numerical Methods: Solving Complex Problems - Case Studies
Solving Complex Problems with Numerical Methods: Case Studies
June 15, 2024
Steven Hamilton
United States
Numerical Methods
Steven Hamilton, a mathematics expert with a degree from Columbia University, brings a decade of expertise in providing exceptional assistance to students. With a passion for mathematical
problem-solving, Steven's commitment to academic excellence has empowered countless students to navigate complex assignments and achieve success in their mathematical pursuits.
In the ever-evolving landscape of science and engineering, solving complex problems often requires a sophisticated approach that transcends conventional analytical solutions. At the forefront of this
innovative approach is the realm of numerical methods, a dynamic branch of applied mathematics that assumes a pivotal role in addressing challenges deemed too intricate or time-consuming for
traditional methodologies. Within the confines of this comprehensive blog, we embark on an exploration of the multifaceted world of numerical methods, unraveling the intricate algorithms and
computational techniques that underpin their efficacy. Delving deeper, we scrutinize real-world case studies that stand as testaments to the instrumental role these numerical techniques play in
unraveling and solving complex problems across diverse domains. From the intricacies of fluid dynamics in aircraft design to the structural analysis inherent in civil engineering, numerical methods
emerge as indispensable tools, facilitating simulations and computations that pave the way for enhanced problem-solving capabilities. This examination not only sheds light on the versatility of
numerical methods but also underscores their practical significance in pushing the boundaries of what is achievable in the fields of science and engineering. This blog will provide valuable insights
and guidance to help you master numerical methods.
As we navigate through these case studies, it becomes evident that numerical methods are not mere computational tools; they are enablers of innovation, providing engineers and scientists with the
means to analyze, optimize, and design intricate systems with unprecedented precision. The synergy between theoretical understanding and computational prowess becomes apparent as numerical
simulations facilitate a deeper comprehension of complex phenomena, guiding decision-making processes and design iterations. Within the dynamic landscape of numerical methods, the amalgamation of
Finite Element Analysis (FEA), Computational Fluid Dynamics (CFD), linear programming, genetic algorithms, and Fourier Transforms showcases the diversity of techniques employed to surmount specific
challenges. Yet, amidst the triumphs lie challenges and limitations that demand meticulous attention—numerical stability, convergence, and dimensionality complexities necessitate a judicious
approach to ensure the reliability and accuracy of results. As we reflect on the present and gaze into the future, the trajectory of numerical methods seems poised for continued evolution. The
integration of high-performance computing, parallel processing, and machine learning promises to push the boundaries further, enabling the resolution of increasingly intricate and large-scale
problems. Moreover, emerging technologies like meshless methods and quantum computing tantalize with the prospect of transforming the landscape of numerical solutions. In essence, this exploration
serves as a testament to the enduring significance of numerical methods in shaping the trajectory of problem-solving in the realms of science and technology, heralding a future where innovation and
numerical ingenuity converge to unravel the mysteries of the most complex challenges.
Understanding Numerical Methods:
Understanding numerical methods is paramount in addressing complex problems that transcend the realm of analytical solutions. Numerical methods, a branch of applied mathematics, provide a systematic
approach to solving mathematical problems through computational techniques and algorithms. In scenarios where equations lack closed-form solutions, numerical methods offer a practical means of
obtaining approximate answers. This section delves into the foundational aspects of numerical methods, exploring their role in tackling intricate problems across diverse fields. These methods become
particularly crucial when dealing with real-world phenomena characterized by complex equations, such as fluid dynamics, structural analysis, optimization, and data fitting. By breaking down these
intricate problems into manageable components, numerical methods empower scientists and engineers to simulate and analyze systems that would be otherwise intractable. The discussion encompasses the
significance of numerical simulations in fields like aircraft design, where Computational Fluid Dynamics (CFD) revolutionizes aerodynamic analyses, and civil engineering, where the Finite Element
Method (FEM) aids in structural optimization. Through a deeper understanding of numerical methods, professionals can harness these tools to navigate the complexities of their respective fields,
pushing the boundaries of problem-solving capabilities and contributing to advancements in science and engineering.
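As a minimal illustration of the general idea (a generic Python sketch written for this post, not tied to any specific case study below), the Newton-Raphson iteration approximates a root of an equation that has no closed-form solution:

import math

def newton(f, df, x0, tol=1e-10, max_iter=100):
    """Newton-Raphson iteration: successively refine an approximate root of f."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# Example: solve cos(x) = x, which has no closed-form solution.
root = newton(lambda x: math.cos(x) - x, lambda x: -math.sin(x) - 1.0, x0=1.0)
print(f"x = cos(x) at x = {root:.10f}")

Each successful iteration roughly doubles the number of correct digits, which is why such iterative schemes typically reach machine precision in only a handful of steps.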
Case Study 1: Fluid Dynamics Simulation for Aircraft Design:
This case study delves into the realm of Fluid Dynamics Simulation, specifically focusing on its pivotal role in aircraft design. The aerodynamics of aircraft, governed by the complex Navier-Stokes equations, pose
intricate challenges that lack analytical solutions, particularly when dealing with intricate geometries and turbulent flows. Numerical methods, such as Finite Element Analysis (FEA) and
Computational Fluid Dynamics (CFD), have become indispensable tools in this domain. Engineers leverage these methods to simulate the airflow around an aircraft by inputting its geometry, providing
essential data on lift, drag, and stability. This approach revolutionizes the traditional aircraft design process, replacing time-consuming and costly wind tunnel testing with efficient computational
simulations. The ability to iterate designs rapidly enhances the optimization of aerodynamic performance and fuel efficiency, crucial factors in the aerospace industry. Fluid dynamics simulations not
only aid in understanding the complex interactions between air and aircraft surfaces but also contribute to the development of innovative designs that push the boundaries of aerodynamic efficiency.
As a result, numerical methods in Fluid Dynamics Simulation have become integral to the continuous evolution of aircraft design, showcasing the transformative power of computational techniques in
addressing intricate challenges in engineering and pushing the boundaries of what is achievable in the aerospace sector.
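A production CFD solver is far beyond a short snippet, but the core idea of discretizing a partial differential equation and stepping it forward in time can be shown with a toy one-dimensional diffusion equation (an illustrative stand-in written for this post, assuming NumPy is available; it is not an aerodynamics model):

import numpy as np

# Explicit finite-difference scheme for the 1D diffusion equation u_t = D * u_xx.
D = 0.1                      # diffusion coefficient
nx, nt = 101, 2000           # number of grid points and time steps
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / D         # respect the explicit stability limit dt <= dx^2 / (2*D)

u = np.zeros(nx)
u[nx // 2] = 1.0 / dx        # initial condition: a narrow spike in the middle

for _ in range(nt):
    u[1:-1] += D * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])  # update interior points
    u[0] = u[-1] = 0.0       # fixed (Dirichlet) boundary conditions

print(f"peak value after diffusion: {u.max():.4f}")

The stability limit on the time step is exactly the kind of constraint, discussed later under challenges, that practitioners must respect when choosing a discretization.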
Case Study 2: Structural Analysis in Civil Engineering:
In the realm of civil engineering, where the safety and reliability of structures are paramount, numerical methods have proven instrumental in tackling the intricacies of structural analysis. The
complexity of real-world structures often leads to mathematical models that defy analytical solutions. Enter the Finite Element Method (FEM), a numerical technique that has revolutionized the field.
Engineers employ FEM to break down complex structures into smaller, manageable elements, allowing them to simulate and understand the behavior of the entire structure under various loads and
environmental conditions. This approach provides crucial insights into factors such as stress distribution, deformation, and potential failure points. By leveraging numerical simulations, civil
engineers can optimize designs for factors like safety, cost-effectiveness, and structural efficiency. From skyscrapers to bridges, FEM enables engineers to explore diverse scenarios and refine
designs iteratively. The ability to predict how a structure will respond to different forces and conditions is invaluable, reducing the need for costly physical prototypes and ensuring that
structures meet stringent safety standards. As urbanization accelerates and infrastructure demands intensify, the role of numerical methods in structural analysis becomes increasingly vital, offering
a sophisticated means to design and assess structures with a level of detail and accuracy that analytical methods alone cannot provide. Through numerical simulations, civil engineers are not merely
constructing buildings and bridges; they are crafting structures with a profound understanding of their dynamic behavior and resilience in the face of real-world challenges.
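To give a flavor of the Finite Element Method without the complexity of real 2D/3D structural models, here is a deliberately simple one-dimensional sketch of an axially loaded elastic bar (the material and load values are assumed purely for illustration):

import numpy as np

# 1D FEM: elastic bar fixed at x = 0 with a point load P at the free end.
E, A, L, P = 210e9, 1e-4, 2.0, 1000.0     # assumed steel-like properties and load
n_el = 10                                 # number of linear elements
h = L / n_el

K = np.zeros((n_el + 1, n_el + 1))        # global stiffness matrix
ke = (E * A / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness
for e in range(n_el):                     # assemble element contributions
    K[e:e + 2, e:e + 2] += ke

f = np.zeros(n_el + 1)
f[-1] = P                                 # point load at the free end

u = np.zeros(n_el + 1)                    # node 0 is fixed (u = 0)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

print(f"FEM tip displacement : {u[-1]:.6e} m")
print(f"exact P*L/(E*A)      : {P * L / (E * A):.6e} m")

For this simple load case the linear elements reproduce the analytical answer exactly; the value of FEM shows up when geometry and loading become too complicated for such closed-form checks.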
Case Study 3: Optimization in Operations Research:
Case Study 3 focuses on the critical role of numerical methods in optimization problems within the realm of operations research. In the complex landscape of supply chain management, where numerous
variables influence decision-making, numerical methods offer indispensable solutions. Algorithms like the Simplex method for linear programming and genetic algorithms for nonlinear optimization prove
instrumental in streamlining logistics, minimizing costs, and enhancing overall operational efficiency. Consider a scenario where a company aims to optimize its supply chain by balancing
transportation costs, inventory levels, and production rates. Numerical optimization techniques allow for the identification of optimal solutions that minimize expenses while maximizing efficiency.
The Simplex method, a widely used linear programming algorithm, iteratively refines the solution space until an optimal solution is reached. On the other hand, genetic algorithms, inspired by natural
selection, provide robust solutions in nonlinear optimization scenarios by mimicking evolutionary processes. These numerical methods empower businesses to make data-driven decisions, leading to
improved resource allocation, reduced wastage, and heightened competitiveness. As industries become increasingly complex and interconnected, the ability to leverage numerical optimization techniques
becomes paramount in navigating the intricate web of logistical challenges, ensuring sustainable and efficient operations in the dynamic landscape of operations research.
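The following toy linear program hints at how such problems are set up in practice. It uses SciPy's linprog routine (assumed to be installed), and the costs, demand, and capacities are invented purely for illustration:

from scipy.optimize import linprog

# Minimize shipping cost 2*x1 + 3*x2 over two routes,
# subject to meeting total demand and per-route capacity limits.
c = [2.0, 3.0]                    # cost per unit shipped on routes 1 and 2
A_ub = [[-1.0, -1.0]]             # x1 + x2 >= 40 (total demand), rewritten as -x1 - x2 <= -40
b_ub = [-40.0]
bounds = [(0, 30), (0, 25)]       # per-route capacity limits

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("optimal shipment:", res.x, "minimum cost:", res.fun)

The solver ships as much as possible on the cheaper route and covers the remaining demand on the more expensive one, which is the same trade-off real supply-chain models resolve at much larger scale.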
Case Study 4: Image and Signal Processing:
Image and signal processing constitute pivotal domains in the application of numerical methods, where computational techniques are harnessed to extract meaningful information from visual or auditory
data. In image processing, algorithms such as convolution, edge detection, and image segmentation play a crucial role in tasks ranging from facial recognition to medical imaging. Numerical methods,
like Fourier Transforms, are indispensable in signal processing, allowing the analysis and manipulation of signals in the frequency domain. This proves particularly beneficial in fields such as
telecommunications, where noise reduction, compression, and modulation are essential for reliable data transmission. Moreover, the fusion of numerical methods with machine learning techniques has
revolutionized image and signal processing, enabling the development of sophisticated algorithms for pattern recognition and feature extraction. The application of these methods extends to medical
diagnostics, where accurate processing of images and signals is critical for identifying anomalies or diseases. As technology advances, the integration of numerical methods with artificial
intelligence continues to push the boundaries of what can be achieved in image and signal processing. The quest for faster and more accurate algorithms, coupled with the increasing availability of
computational resources, promises a future where these numerical techniques will continue to play a transformative role in extracting valuable information from diverse data sources, ultimately
contributing to advancements in fields as varied as healthcare, communications, and multimedia.
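A compact example of frequency-domain processing is denoising a signal by discarding weak Fourier components. The sketch below (using NumPy, with an arbitrarily chosen test signal and threshold) illustrates the principle:

import numpy as np

# Build a clean two-tone signal, corrupt it with noise, then filter in the frequency domain.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500, endpoint=False)
clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
noisy = clean + 0.8 * rng.standard_normal(t.size)

spectrum = np.fft.rfft(noisy)
spectrum[np.abs(spectrum) < 0.3 * np.abs(spectrum).max()] = 0.0   # drop weak components
denoised = np.fft.irfft(spectrum, n=t.size)

print(f"RMS error before filtering: {np.sqrt(np.mean((noisy - clean) ** 2)):.3f}")
print(f"RMS error after filtering : {np.sqrt(np.mean((denoised - clean) ** 2)):.3f}")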
Challenges and Limitations:
In the realm of numerical methods, despite their remarkable utility, challenges and limitations persist. One of the primary concerns is the issue of numerical stability, where small errors in
calculations can accumulate and lead to significant discrepancies in results. Convergence, the tendency of iterative numerical algorithms to approach a solution, is another challenge, as some methods
may struggle to converge within reasonable time frames or fail to converge altogether. The curse of dimensionality poses a substantial limitation, especially in optimization problems with a large
number of variables, making computations exponentially more complex. Additionally, the choice of algorithms and parameters can significantly impact the accuracy and efficiency of numerical solutions,
necessitating a careful balance between computational resources and precision. Validation and verification become critical, requiring a thorough understanding of both the numerical methods employed
and the underlying physics or mathematics of the problem. Ensuring the reliability of numerical models is a continual challenge, demanding diligence in the face of potential inaccuracies. Despite
these challenges, the field of numerical methods continues to evolve, driven by the pursuit of more robust algorithms and innovative approaches. As researchers strive to overcome these limitations,
the future holds promise for enhanced accuracy and efficiency in solving increasingly intricate and large-scale problems through numerical techniques.
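A tiny numerical experiment makes the stability concern tangible: even adding the same small number repeatedly accumulates floating-point error, which is one reason careful validation of numerical results matters:

import numpy as np

# Accumulated floating-point error: repeatedly adding 0.1 drifts away from the
# mathematically exact total, while NumPy's pairwise summation drifts far less.
n = 1_000_000
naive_sum = 0.0
for _ in range(n):
    naive_sum += 0.1
print(f"naive repeated addition: {naive_sum:.10f}")
print(f"numpy pairwise sum     : {np.sum(np.full(n, 0.1)):.10f}")
print(f"mathematically exact   : {n / 10}")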
Future Trends and Innovations:
Future trends and innovations in numerical methods promise to reshape the landscape of problem-solving across diverse domains. High-performance computing, marked by exponential increases in
processing power, enables simulations of unprecedented complexity. The integration of parallel processing further accelerates computations, paving the way for real-time simulations and analyses.
Machine learning algorithms, with their ability to recognize patterns and learn from data, are becoming integral to numerical methods, offering a paradigm shift in optimization and decision-making
processes. As technology evolves, researchers are exploring unconventional numerical techniques, including meshless methods that eliminate the need for a predefined mesh, providing more flexibility
in handling complex geometries. Quantum computing, with its capacity for parallelism and superposition, holds the potential to revolutionize numerical simulations by solving certain types of problems
exponentially faster than classical computers. These advancements not only enhance the accuracy and efficiency of numerical solutions but also enable the exploration of larger problem spaces
previously deemed impractical. Collaborative efforts between mathematicians, scientists, and engineers are crucial for pushing the boundaries of numerical methods. As we look ahead, the synergy
between numerical methods and emerging technologies promises a future where solving increasingly complex problems becomes not only achievable but also routine, ushering in a new era of innovation and discovery.
Conclusion:
In conclusion, the realm of numerical methods emerges as a transformative force in solving intricate problems across diverse fields. From simulating fluid dynamics for aircraft design to optimizing
logistics in operations research, these methods have proven indispensable in addressing challenges that defy analytical solutions. The case studies presented underscore the versatility and efficacy
of numerical techniques, showcasing their ability to unravel complex phenomena and facilitate informed decision-making. Despite their undeniable advantages, it is crucial to acknowledge the
challenges inherent in numerical methods, such as numerical stability and convergence issues, emphasizing the need for meticulous validation and verification processes. Looking ahead, the integration
of high-performance computing, parallel processing, and machine learning signals a promising future for numerical methods, pushing the boundaries of problem-solving capabilities. As technology
advances, the collaboration between mathematicians, scientists, and engineers will continue to drive innovation, potentially unlocking new frontiers in numerical simulations. In essence, numerical
methods stand as pillars of progress, offering a robust framework for tackling the intricate challenges that define the ever-evolving landscape of science and engineering. Through ongoing
exploration, refinement, and application, the significance of numerical methods in shaping the future of problem-solving remains resolute, promising continued advancements and breakthroughs on the
frontier of scientific inquiry and technological innovation. | {"url":"https://www.mathsassignmenthelp.com/blog/solving-complex-problems-numerical-methods-case-studies/","timestamp":"2024-11-02T04:40:32Z","content_type":"text/html","content_length":"85525","record_id":"<urn:uuid:17e506ec-2239-4f84-b86f-aa7618059754>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00237.warc.gz"} |
Asymptotically exact probability distribution for the Sinai model with finite drift
We obtain the exact asymptotic result for the disorder-averaged probability distribution function for a random walk in a biased Sinai model and show that it is characterized by a creeping behavior of the displacement moments with time, $\langle x^{n}\rangle \sim v^{\mu n}$, where $\mu < 1$ is the dimensionless mean drift. We employ a method originated in quantum diffusion which is based on the exact mapping of the problem to an imaginary-time Schrödinger equation. For nonzero drift such an equation has an isolated lowest eigenvalue separated by a gap from quasicontinuous excited states, and the eigenstate corresponding to the former governs the long-time asymptotic behavior.
Bibliographical note
© 2010 The American Physical Society
Get started with educineq!
What is educineq?
educineq is an R package to compute education inequality measures for any group of countries using the dataset developed by Vanesa Jordá and José Manuel Alonso, which covers 142 countries over the
period 1970 to 2010. Our estimates rely on a number of assumptions, so we suggest you read this paper carefully before you start using the package.
The package offers the possibility to easily compute not only the Gini index, which has been the main indicator used to measure education inequality, but also generalized entropy measures (GE(α)) for
different values of the α parameter. This family of inequality measures presents two main advantages:
1. They allow the user to change the sensitivity of the measure to differences in specific parts of the distribution. In particular, we provide functions to compute the mean log deviation (MLD), which is more sensitive to the bottom part of the distribution; Theil's entropy measure, equally sensitive to all parts of the distribution; and finally, the GE measure with the sensitivity parameter set equal to 2, which gives more weight to differences in higher education.
2. These measures are additively decomposable, with overall inequality being the sum of the following two components:
• Between-country inequality: The amount of inequality that would exist in an imaginary world where all the citizens of a country had the same number of years of schooling, so the only differences would be observed across countries.
• Within-country inequality: Constructed as a weighted average of the inequality measures of the individual countries, so the disparities derive exclusively from the differences in education among the citizens of the same country.
How to use educineq?
The first step is to download R and install it on your computer. You can just work with R, but I recommend RStudio for new users, since it provides a friendlier environment for working with the R programming language. Once you verify that R is installed and working on your computer, download RStudio and install it.
The next step is to install the package using the following command:
install.packages("educineq", dependencies = TRUE)
You only have to install the package once, but it has to be loaded into R every time you start RStudio, using the command:
library(educineq)
That's it! The following functions, included in educineq, are already available:
• Mean years of schooling: emean
• Gini index: egini
• Theil index: etheil
• MLD: emld
• GE(2): ege2
• Probability distribution function of education: epdf
• Cumulative distribution function of education: ecdf
All these functions take the same list of arguments, in parentheses:
(countries, init.y, final.y, database, plot)
1. countries: The countries to be used, which have to be specified using the country codes that can be found in the object country_data.
2. init.y: The first year in which the function is calculated. All these years are available: 1970,1975,1980,1985,1990,1995,2000,2005,2010.
3. final.y: The last year in which the function is calculated. The same years as for init.y are available, but it obviously has to be later than the former one.
4. database: The functions can be computed for different population subgroups:
· Total population aged over-15: total15
· Total population aged over-25: total25
· Male population aged over-15: male15
· Male population aged over-25: male25
· Female population aged over-15: female15
· Female population aged over-25: female25
5. plot: if TRUE (default) shows a graph of the results.
Let’s say we are interested in educational attainments of the Nordic countries (Denmark, Finland, Iceland, Norway and Sweden) since 1990. We can obtain mean years of schooling for the population
aged over-15 easily as follows:
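The original page shows only the output, not the call itself; a call along the following lines should reproduce it (the country codes below are assumed ISO3-style codes and should be checked against the country_data object before running):

library(educineq)

# Mean years of schooling for the Nordic countries, population aged over-15,
# from 1990 up to the last available year (2010).
# NOTE: the country codes are assumptions; verify them in country_data.
emean(countries = c("DNK", "FIN", "ISL", "NOR", "SWE"),
      init.y = 1990, final.y = 2010,
      database = "total15", plot = TRUE)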
The function first shows mean years of schooling for the Nordic countries, and the list of countries used to construct the previous figures. It is important to check the list in order to verify that all the countries have been included.
The option plot = TRUE displays the following graph with the evolution of this indicator:
To measure inequality in education during this period using the Theil index, we have to use etheil, which displays this inequality measure decomposed into the differences between countries and the disparities within the countries of this region.
As for the mean years of schooling, the option plot = TRUE displays a graph of the previous results:
In the dataset country_data, a classification of the countries by macro-regions is provided, which can be used to specify the countries in all the functions of educineq. As an example, we compute
below the level of educational inequality of Latin America and the Caribbean from 1980 to 2000 using the MLD:
If we set countries = “all”, the whole set of countries included in the dataset will be used to calculate the results. We illustrate how to compute the Gini index for all the countries from 1990 to
2000 for female population aged over-25 in the following figure:
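For reference, a call along these lines should produce that figure (a sketch based on the arguments described above, not copied from the original page):

egini(countries = "all", init.y = 1990, final.y = 2000,
      database = "female25", plot = TRUE)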
We can also use this package to plot the distribution of educational attainment using epdf for the probability density function and ecdf for the cumulative distribution function.
Error messages
There are a number of ways to specify each of the arguments of the functions incorrectly, in which case the package will report an error or a warning message. Some of the most common errors are presented below, along with some recommendations to avoid them.
Regarding the first argument, related to the countries, the program will stop if more than two regions are chosen simultaneously (Case A) or if none of the countries are correctly specified with their country codes (Case B); if only some of them are incorrectly specified, the program will compute the results for the countries that have been properly included (Case C). Therefore, it is highly recommended to check the list of countries displayed below the results to avoid possible mistakes due to misspecification of country names.
It is also very easy to get confused with the initial and the final year of the period for which the user wants to compute the results. The first year (init.y) has to be earlier than the last year (final.y), otherwise the function yields an error (Case D). Both have to be within the range 1970 to 2010, although if an initial year earlier than 1970 is specified, the function will take 1970 as the starting year, and the same logic applies to the final year, which will be 2010 whenever a later year is chosen (Case E). If the first and the last years are both within the available range, but some (or both) of them are not equal to any of the following values: 1970, 1975, 1980, 1985, 1990, 1995, 2000, 2005, 2010, an error will be displayed (Case F).
Finally, if the database is not correctly specified, an error will come up, as is illustrated below:
Congruence in triangles – SSS Rule
Investigating the possibility of congruence if three sides of two triangles are congruent.
Compare sides in triangles to check for congruence
Estimated Time
30 minutes
Prerequisites/Instructions, prior preparations, if any
Prior knowledge of point, lines, angles, closed figures
Materials/ Resources needed
1. Digital : Computer, geogebra application, projector.
2. Non digital : Worksheet and pencil, triangles of same and different shapes
3. Geogebra files : "SSS congruence.ggb"
Download this geogebra file from this link.
Process (How to do the activity)
Prior hands on activity
• Three triangles are distributed to groups of students.
• Children should identify the triangles that are congruent.
• They can name the vertices in the given triangles.
• Write down the sides and angles that are coinciding in the two triangles.
Use the geogebra file
• How many triangles you observe?
• Are all the triangles same, point out the triangles that are same.
• How can you say they are same? What can you do to check if the two triangles are congruent?
• What parameters of triangles are required to know if they are congruent?
• What about the third triangle? Is it the same as the other two? What should you do to show that this triangle is the same as the others? The concept of reflection can be discussed here.
• Make two triangles of same sizes. Cut it and verify they are congruent.
• Construct one triangle with base = 3 and the other sides 4 and 5. Construct another triangle with base = 5 and the other two sides 3 and 4, and a third triangle with base = 4 and the other two sides 3 and 5. Does the order of the sides matter in a triangle?
Evaluation at the end of the activity
• Students should be able to understand that if the 3 corresponding sides of two triangles are equal, then the triangles are congruent.
• Students should also understand that the sequence in which the sides are examined need not be the same for the triangles to be congruent.
C++ program to convert binary to decimal using functions
In this chapter of the C++ program tutorial, our task is to write a simple:
• c++ program to convert binary number to decimal
C++ program to convert binary to decimal using functions
Below is the c++ program to convert binary to decimal using functions:
/* Program Name: To convert a binary number to its decimal equivalent
   Program Author: Sayantan Bose
*/
#include <iostream>
#include <math.h>
using namespace std;

//Function to make the conversion
int solve(long long num) {
    int decimal = 0, i = 0, r;
    while (num != 0) {
        r = num % 10;               // take the rightmost binary digit
        num /= 10;                  // remove that digit
        decimal += r * pow(2, i);   // add its place value (2 to the power i)
        i++;                        // move to the next digit position
    }
    return decimal;
}

//Main function
int main() {
    long long num;
    cout << "Enter a binary number: ";
    cin >> num;                     // read the binary number (digits 0 and 1 only)
    int decimal_num = solve(num);
    cout << num << " in binary = " << decimal_num << " in decimal";
    return 0;
}
Enter a binary number:
101 in binary = 5 in decimal
Enter a binary number:
110011 in binary = 51 in decimal
In the above C++ program to convert binary to decimal using a while loop:
• A binary number can be converted to its decimal equivalent by multiplying each digit of the binary number by 2 raised to the position of the digit in the number.
• In the above program, a given binary number is passed to the function solve, where it is converted to its decimal equivalent.
The position of the digits, as processed here, always starts from the right side (the least significant digit), with 0.
You can see the above code execution and output in codeblocks IDE:
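As a side note (not part of the original tutorial), the same conversion can also be done by reading the binary number as a string and letting the standard library parse it in base 2. A small sketch:

#include <iostream>
#include <string>
using namespace std;

int main() {
    string bits;
    cout << "Enter a binary number: ";
    cin >> bits;
    // stoi with base 2 parses the string of 0s and 1s directly into a decimal int
    int decimal = stoi(bits, nullptr, 2);
    cout << bits << " in binary = " << decimal << " in decimal";
    return 0;
}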
About the Author
Sayantan Bose
- I'm currently working on DS Algo skills
- I'm currently learning web development
- I'm looking to collaborate with Oppia
- Reach me at https://www.linkedin.com/in/sayantan-bose-14134a1a6/
- Pronouns: his/him
Published Date : Jan 20, 2021
Signal denoising through topographic modularity of neural circuits
Information from the sensory periphery is conveyed to the cortex via structured projection pathways that spatially segregate stimulus features, providing a robust and efficient encoding strategy.
Beyond sensory encoding, this prominent anatomical feature extends throughout the neocortex. However, the extent to which it influences cortical processing is unclear. In this study, we combine
cortical circuit modeling with network theory to demonstrate that the sharpness of topographic projections acts as a bifurcation parameter, controlling the macroscopic dynamics and representational
precision across a modular network. By shifting the balance of excitation and inhibition, topographic modularity gradually increases task performance and improves the signal-to-noise ratio across the
system. We demonstrate that in biologically constrained networks, such a denoising behavior is contingent on recurrent inhibition. We show that this is a robust and generic structural feature that
enables a broad range of behaviorally relevant operating regimes, and provide an in-depth theoretical analysis unraveling the dynamical principles underlying the mechanism.
This manuscript puts forward a new idea that topography in neural networks helps to remove noise from inputs. The authors show that there is a critical level of topography that is needed for the network to denoise inputs.
Sensory inputs are often ambiguous, noisy, and imprecise. Due to volatility in the environment and inaccurate peripheral representations, the sensory signals that arrive at the neocortical circuitry
are often incomplete or corrupt (Faisal et al., 2008; Renart and Machens, 2014). However, from these noisy input streams, the system is able to acquire reliable internal representations and extract
relevant computable features at various degrees of abstraction (Friston, 2005; Okada et al., 2010; DiCarlo et al., 2012). Sensory perception in the mammalian neocortex thus relies on efficiently
detecting the relevant input signals while minimizing the impact of noise.
Making sense of the environment also requires the estimation of features not explicitly represented by low-level sensory inputs. These inferential processes (Młynarski and Hermundstad, 2018; Parr et
al., 2019) rely on the propagation of internal signals such as expectations and predictions, the accuracy of which must be evaluated against the ground truth, that is the sensory input stream. In a
highly dynamic environment, this translates to a continuous process whose precision hinges on the fidelity with which external stimuli are encoded in the neural substrate. Additionally, as the system
is modular and hierarchical (strikingly so in the sensory and motor components; Meunier et al., 2010; Park and Friston, 2013), it is critical that the external signal permeates the different
processing modules despite the increasing distance from the sensory periphery (the input source) and the various transformations it is exposed to along the way, which degrade the signal via the
interference of task-irrelevant and intrinsic, ongoing activity.
Accurate signal propagation can be achieved in a number of ways. One obvious solution is the direct routing and distribution of the signal, such that direct sensory input can be fed to different
processing modules, which may be partially achieved through thalamocortical projections (Sherman and Guillery, 2002; Nakajima and Halassa, 2017). Another possibility, which we explore in this study,
is to propagate the input signal through tailored pathways that route the information throughout the system, allowing different processing stages to retrieve it without incurring much
representational loss. Throughout the mammalian neocortex, the existence and characteristics of structured projections (topographic maps) present a possible substrate for such signal routing. By
preserving the relative organization of tuned neuronal populations, such maps imprint spatiotemporal features of (noisy) sensory inputs onto the cortex (Kaas, 1997; Bednar and Wilson, 2016; Wandell
and Winawer, 2011). In a previous study (Zajzon et al., 2019), we discovered that structured projections can create feature-specific pathways that allow the external inputs to be faithfully
represented and propagated throughout the system, but it remains unclear which connectivity properties are critical and what the underlying mechanism is. Moreover, beyond mere sensory representation,
there is evidence that such structure-preserving mappings are also involved in more complex cognitive processes in associative and frontal areas (Hagler and Sereno, 2006; Silver and Kastner, 2009;
Patel et al., 2014), suggesting that topographic maps are a prominent structural feature of cortical organization.
In this study, we hypothesize that structured projection pathways allow sensory stimuli to be accurately reconstructed as they permeate multiple processing modules. We demonstrate that, by modulating
effective connectivity and regional E/I balance, topographic projections additionally serve a denoising function, not merely allowing the faithful propagation of input signals, but systematically
improving the system’s internal representations and increasing signal-to-noise ratio. We identify a critical threshold in the degree of modularity in topographic projections, beyond which the system
behaves effectively as a denoising autoencoder (note that the parallel is established here on conceptual, not formal, grounds as the system is capable of retrieving the original, uncorrupted input
from a noisy source, but bears no formal similarity to denoising autoencoder algorithms). Additionally, we demonstrate that this phenomenon is robust, with the qualitative behavior persisting across
very different models. Theoretical considerations and network simulations show that it hinges solely on the modularity of topographic projections and the presence of recurrent inhibition, with the
external input and single-neuron properties influencing where/when, but not if, denoising occurs. Our results suggest that modular structure in feedforward projection pathways can have a significant
effect on the system’s qualitative behavior, enabling a wide range of behaviorally relevant and empirically supported dynamic regimes. This allows the system to: (1) maintain stable representations
of multiple stimulus features (Andersen et al., 2008); (2) amplify features of interest while suppressing others through winner-takes-all (WTA) mechanisms (Douglas and Martin, 2004; Carandini and
Heeger, 2011); and (3) dynamically represent different stimulus features as stable and metastable states and stochastically switch among active representations through a winnerless competition (WLC)
effect (McCormick, 2005; Rabinovich et al., 2008; Rost et al., 2018).
Our key finding, that the modulation of information processing dynamics and the fidelity of stimulus/feature representations results from the structure of topographic feedforward projections,
provides new meaning and functional relevance to the pervasiveness of these projection maps throughout the mammalian neocortex. Beyond routing feature-specific information from sensory transducers
through brainstem, thalamus, and into primary sensory cortices (notably tonotopic, retinotopic, and somatotopic maps), their maintenance within the neocortex (Patel et al., 2014) ensures that even
cortical regions that are not directly engaged with the sensory input (higher-order cortex), can receive faithful representations of it, and that these internal signals, emanating from lower-order
cortical areas, can dramatically skew and modulate the circuit’s E/I balance and local functional connectivity, resulting in fundamental differences in the systems’ responsiveness.
To investigate the role of structured pathways between processing modules in modulating the fidelity of stimulus representations, we study a network comprising up to six sequentially connected
sub-networks (SSNs, see Materials and methods and Figure 1a). Each SSN is a balanced random network (see e.g. Brunel, 2000) of 10,000 sparsely and randomly coupled leaky integrate-and-fire (LIF)
neurons (80% excitatory and 20% inhibitory). In each SSN, neurons are assigned to sub-populations associated with a particular stimulus. Excitatory neurons belonging to such stimulus-specific
sub-populations then project to the subsequent SSN with a varying degree of specificity. We refer to a set of stimulus-specific sub-populations across the network and the structured feedforward
projections among them as a topographic map. The specificity of the map is determined by the degree of modularity of the corresponding projections matrices (see e.g. Figure 1a). Modularity is thus
defined as the relative density of connections within a stimulus-specific pathway (i.e., connecting sub-populations associated to the same stimulus; see Materials and methods and Figure 1a). In the
following, we study the role of topographic specificity in modulating the system’s functional and representational dynamics and its ability to cope with noise-corrupted input signals.
Sequential denoising spiking architecture.
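To make the modularity parameter more concrete, the following illustrative sketch (our own construction, not code from the study) builds a feedforward projection matrix in which a fraction m of each source neuron's outgoing connections targets the sub-population tuned to the same stimulus, with the remainder spread over the other sub-populations:

import numpy as np

# Illustrative sketch (not the authors' implementation): feedforward projections
# between two sub-networks where a fraction m of each source neuron's outgoing
# connections stays within its own stimulus-specific sub-population.
def modular_projections(n_src, n_tgt, n_stimuli, m, p=0.1, seed=0):
    rng = np.random.default_rng(seed)
    src_labels = rng.integers(n_stimuli, size=n_src)   # stimulus tuning of source neurons
    tgt_labels = rng.integers(n_stimuli, size=n_tgt)   # stimulus tuning of target neurons
    W = np.zeros((n_tgt, n_src))
    k = int(p * n_tgt)                                  # out-degree of each source neuron
    for j in range(n_src):
        same = np.flatnonzero(tgt_labels == src_labels[j])
        other = np.flatnonzero(tgt_labels != src_labels[j])
        k_same = min(int(round(m * k)), same.size)      # connections kept within the pathway
        chosen = np.concatenate([
            rng.choice(same, size=k_same, replace=False),
            rng.choice(other, size=k - k_same, replace=False),
        ])
        W[chosen, j] = 1.0
    return W, src_labels, tgt_labels

W, src, tgt = modular_projections(n_src=400, n_tgt=400, n_stimuli=4, m=0.9)
within = W[np.equal.outer(tgt, src)].sum() / W.sum()
print(f"fraction of connections within stimulus-specific pathways: {within:.2f}")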
Sequential denoising through structured projections
By systematically varying the degree of modular specialization in the feedforward projections (modularity parameter, $m$, see Materials and methods and Figure 1), we can control the segregation of
stimulus-specific pathways across the network and investigate how it influences the characteristics of neural representations as the signal propagates. If the feedforward projections are unstructured
or moderately structured ($m≲0.8$), information about the input fails to permeate the network, resulting in a chance-level reconstruction accuracy in the last sub-network, SSN[5], even in the absence
of noise (see Figure 1b, c). However, as $m$ approaches a switching value $m_{\mathrm{switch}} \approx 0.83$, there is a qualitative transition in the system's behavior, leading to a consistently higher reconstruction
accuracy across the sub-networks (Figure 1b–e), regardless of the amount of noise added to the signal (Figure 1f, g).
Beyond this transition point, reconstruction accuracy improves with depth, that is the signal is more accurately represented in SSN[5] than in the initial sub-network, SSN[0], with an effective
accuracy gain of over 40% (Figure 1d, g). While the addition of noise does impair the absolute reconstruction accuracy in all cases (see Figure 1—figure supplement 1), the denoising effect persists
even if the input is severely corrupted ($σξ=3$, see Figure 1f, g). This is a counter-intuitive result, suggesting that topographic modularity is not only necessary for reliable communication across
multiple populations (see Zajzon et al., 2019), but also supports an effective denoising effect, whereby representational precision increases with depth, even if the signal is profoundly distorted by
Noise suppression and response amplification
The sequential denoising effect observed beyond the transition point $mswitch≈0.83$ results in an increasingly accurate input encoding through progressively more precise internal representations. In
general, such a phenomenon could be achieved either through noise suppression, stimulus-specific response amplification or both. In this section, we examine these possibilities by analyzing and
comparing the input-driven dynamics of the different sub-networks. The strict segregation of stimulus-specific sub-populations in SSN[0] is only fully preserved across the system if $m=1$, in which
case signal encoding and transmission primarily rely on this spatial segregation. Spiking activity across the different SSNs (Figure 2a) demonstrates that the system gradually sharpens the
segregation of stimulus-specific sub-populations; indeed, in systems with fully modular feedforward projections, activity in the last sub-network is concentrated predominantly in the stimulated
sub-populations. This effect can be observed in both excitatory (E) and inhibitory (I) populations, as both are equally targeted by the feedforward excitatory projections. The sharpening effect
consists of both noise suppression and response amplification (Figure 2b), measured as the relative firing rates of the non-stimulated $ν5NS/ν0NS$ and stimulated sub-populations $ν5S/ν0S$,
respectively. For ,$m<mswitch$. noise suppression is only marginal and responses within the stimulated pathways are not amplified ($ν5S/ν0S<1$).
Activity modulation and representational precision.
Mean-field analysis of the stationary network activity (see Materials and methods and Appendix B) predicts that the firing rates of the stimulus-specific sub-populations increase systematically with
modularity, whereas the untuned neurons are gradually silenced (Figure 2c, left). At the transition point $mswitch≈0.83$, mean firing rates across the different sub-networks converge, which
translates into a globally uniform signal encoding capacity, corresponding to the zero-gain convergence point in Figure 1d, g. As the degree of modularity increases beyond this point, the
self-consistent state is lost again as the functional dynamics across the network shifts toward a gradual response sharpening, whereby the activity of stimulus-tuned neurons become increasingly
dominant (Figure 2a–c). The effect is more pronounced for the deeper sub-networks. Note that the analytical results match well with those obtained by numerical simulation (Figure 2c, right).
In the limit of very deep networks (up to 50 SSNs, Figure 2d) the system becomes bistable, with rates converging to either a high-activity state associated with signal amplification or a low-activity
state driven by the background input. The transition point is observed at a modularity value of $m=0.83$, matching the results reported so far. Below this value, elevated activity in the stimulated
sub-populations can be maintained across the initial sub-networks (<10), but eventually dies out; the rate of all neurons decays and information about the input cannot reach the deeper populations.
Importantly, for $m=0.83$, the transition toward the high-activity state is slower. This allows the input signal to faithfully propagate across a large number of sub-networks ($≈15$), without being
driven into implausible activity states.
E/I balance and asymmetric effective couplings
The departure from the balanced activity in the initial sub-networks can be better understood by zooming in at the synaptic level and analyzing how topography influences the synaptic input currents.
The segregation of feedforward projections into stimulus-specific pathways breaks the symmetry between excitation and inhibition (see Figure 3a) that characterizes the balanced state (Haider et al.,
2006; Shadlen and Newsome, 1994), for which the first two sub-networks were tuned (see Materials and methods). E/I balance is thus systematically shifted toward excitation in the stimulated
populations and inhibition in the non-stimulated ones. Neurons belonging to sub-populations associated with the active stimulus receive significantly more net overall excitation, whereas the other
neurons become gradually more inhibited. This disparity grows not only with modularity but also with network depth. Overall, across the whole system, increasing modularity results in an increasingly
inhibition-dominated dynamical regime (inset in Figure 3a), whereby stronger effective inhibition silences non-stimulated populations, thus sharpening stimulus/feature representations by
concentrating activity in the stimulus-driven sub-populations.
Asymmetric effective couplings modulate the E/I balance and support sequential denoising.
To gain an intuitive understanding of these effects from a dynamical systems perspective, we linearize the network dynamics around the stationary working points of the individual populations (
Tetzlaff et al., 2012) in order to obtain the effective connectivity $W$ of the system (see Materials and methods and Appendix B). The effective impact of a single spike from a presynaptic neuron $j$
on the firing rate of a postsynaptic neuron $i$ (the effective weight $w_{ij} \in W$) is determined not only by the synaptic efficacies $J_{ij}$, but also by the statistics of the synaptic input fluctuations
to the target cell $i$ that determine its excitability (see Materials and methods, Equation 6). This analysis reveals that there is an increase in the effective synaptic input onto neurons in the
stimulated sub-populations as a function of modularity (Figure 3b). Conversely, non-stimulated neurons effectively receive weaker excitatory (and stronger inhibitory) drive and become increasingly
less responsive (see Figure 3a, b). The role of topographic modularity in denoising can thus be understood as a transient, stimulus-specific change in effective connectivity.
For low and moderate topographic precision ($m≲0.83$), denoising does not occur as the effective weights are sufficiently similar to maintain a stable E/I balance across all populations and
sub-networks (Figure 3a, b), resulting in a relatively uniform global dynamical state (indicated in Figure 3c by a constant spectral radius for $m≲0.83$, see also Materials and methods) and stable
linearized dynamics ($ρ(W)<1$).
However, as the feedforward projections become more structured, the system undergoes qualitative changes: after a weak transient ($0.83≲m≲0.85$) the spectral radius $ρ$ in the deep SSNs expands due
to the increased effective coupling to the stimulated sub-population (Figure 3b); the spectral radius eventually ($m≳0.85$) contracts with increasing modularity (Figure 3c, d). Given that $ρ$ is
determined by the variance of $W$, that is heterogeneity across connections (Rajan and Abbott, 2006), this behavior is expected: most weights are in the non-stimulated pathways, which decrease with
larger $m$ and network depth (Figure 3b). Strong inhibitory currents (Figure 3a) suppress the majority of neurons, thereby reducing noise, as demonstrated by the collapse of the bulk of the
eigenvalues toward the center for larger $m$ (Figure 3d). Indicative of a more constrained state space, this contractive effect suggests that population activity becomes gradually entrained by the
spatially encoded input along the stimulated pathway, whereas the responses of the non-stimulated neurons have a diminishing influence on the overall behavior.
By biasing the effective connectivity of the system, precise topography can thus modulate the balance of excitation and inhibition in the different sub-networks, concentrating the activity along
specific pathways. This results in both a systematic amplification of stimulus-specific responses and a systematic suppression of noise (Figure 2b). The sharpness/precision of topographic specificity
along these pathways thus acts as a critical control parameter that largely determines the qualitative behavior of the system and can dramatically alter its responsiveness to external inputs.
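As a rough illustration of the kind of quantity involved (a generic sketch, not the paper's analysis, whose effective weights follow its Equation 6), the spectral radius of an effective connectivity matrix can be computed directly:

import numpy as np

# Generic sketch: spectral radius of a random excitatory/inhibitory effective
# connectivity matrix. Weights and connection probability are assumed values.
rng = np.random.default_rng(1)
N, f_exc = 1000, 0.8
n_exc = int(f_exc * N)
w, g = 0.03, 5.0                              # excitatory weight and inhibition ratio (assumed)

conn = rng.random((N, N)) < 0.1               # 10% connection probability
W = np.zeros((N, N))
W[:, :n_exc] = w * conn[:, :n_exc]            # excitatory columns
W[:, n_exc:] = -g * w * conn[:, n_exc:]       # inhibitory columns

rho = np.max(np.abs(np.linalg.eigvals(W)))
print(f"spectral radius rho(W) = {rho:.2f} (values below 1 indicate stable linearized dynamics)")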
How can the system generate and maintain the elevated inhibition underlying such a noise-suppressing regime? On the one hand, feedforward excitatory input may increase the activity of certain
excitatory neurons in $E_{i}$ of sub-network $\mathrm{SSN}_{i}$, which, in turn, can lead to increased mean inhibition through local recurrent connections. On the other hand, denoising could depend strongly on the concerted topographic projections onto $I_{i}$. Such structured feedforward inhibition is known to play important functional roles in, for example, sharpening the spatial contrast of somatosensory
stimuli (Mountcastle and Powell, 1959) or enhancing coding precision throughout the ascending auditory pathways (Roberts et al., 2013).
To investigate whether recurrent activity alone can generate sufficiently strong inhibition for signal transmission and denoising, we maintained the modular structure between the excitatory
populations and randomized the feedforward projections onto the inhibitory ones ($m=0$ for $E_{i} \rightarrow I_{i+1}$, compare top panels of Figure 4a, b). This leads to unstable firing patterns in the downstream
sub-networks, characterized by significant accumulation of synchrony and increased firing rates (see bottom panels of Figure 4a, b and Figure 4—figure supplement 1a, b). These effects, known to
result from shared pre-synaptic excitatory inputs (see e.g. Shadlen and Newsome, 1998; Tetzlaff et al., 2003; Kumar et al., 2008a), are more pronounced for larger $m$ and network depth (see Figure
4—figure supplement 1). Compared with the baseline network, whose activity shows clear spatially encoded stimuli (sequential activation of stimulus-specific sub-populations [Figure 4a, bottom]),
removing structure from the projections onto inhibitory neurons abolishes the effect and prevents accurate signal transmission.
Modular projections to inhibitory populations stabilize network dynamics.
These effects of unstructured inhibitory projections are so marked that they can be observed even if a single set of projections is modified: this can be seen in Figure 4c, where only the $E_{4} \rightarrow I_{5}$ connections are randomized. It is worth noting, however, that the excessive synchronization that results from unstructured inhibitory projections (Figure 4c, bottom left, no additional input condition) can be easily counteracted by driving $I_{5}$ (the inhibitory population that receives only unstructured projections) with additional uncorrelated external input. If strong enough ($\nu_{X}^{+} \approx 10\,\mathrm{spk/sec}$), this additional external drive pushes the inhibitory population into an asynchronous regime that restores the sharp, stimulus-specific responses in the excitatory population of
the corresponding sub-network (see Figure 4c, bottom right, and Figure 4—figure supplement 1c).
These results emphasize the control of inhibitory neurons’ responsiveness as the main causal mechanism behind the effects reported. Elevated local inhibition is strictly required, but whether this is
achieved by tailored, stimulus-specific activation of inhibitory sub-populations, or by uncorrelated excitatory drive onto all inhibitory neurons appears to be irrelevant and both conditions result
in sharp, stimulus-tuned responses in the excitatory populations.
A generalizable structural effect
We have demonstrated that, by controlling the different sub-networks’ operating point, the sharpness of feedforward projections allows the architecture to systematically improve the quality of
internal representations and retrieve the input structure, even if profoundly corrupted by noise. In this section, we investigate the robustness of the phenomenon in order to determine whether it can
be entirely ascribed to the topographic projections (a structural/architectural feature) or if the particular choices of models and model parameters for neuronal and synaptic dynamics contribute to
the effect.
To do so, we study two alternative model systems on the signal denoising task. These are structured similar to the baseline system explored so far, comprising separate sequential sub-networks with
modular feedforward projections among them (see Figure 1 and Materials and methods), but vary in total size, neuronal and synaptic dynamics. In the first test case, only the models of synaptic
transmission and corresponding parameters are altered. To increase biological verisimilitude and following Zajzon et al., 2019, synaptic transmission is modeled as a conductance-based process, with
different kinetics for excitatory and inhibitory transmission, corresponding to the responses of AMPA and GABA$_{A}$ receptors, respectively, see Materials and methods and Supplementary file 3 for
details. The results, illustrated in Figure 5a, demonstrate that task performance and population activity across the network follow a similar trend to the baseline model (Figures 1 and 2a, b).
Despite severe noise corruption, the system is able to generate a clear, discernible representation of the input as early as SSN[2] and can accurately reconstruct the signal. Importantly, the
relative improvement with increasing modularity and network depth is retained. In comparison to the baseline model, the transition occurs for a slightly different topographic configuration,
$m_{\mathrm{switch}} \approx 0.85$, at which point the network dynamics converges toward a low-rate, stable asynchronous irregular regime across all populations, facilitating a linear firing rate propagation along the
topographic maps (Figure 5—figure supplement 1).
Denoising through modular topography is a robust structural effect.
The second test case is a smaller and simpler network of nonlinear rate neuron models (see Figure 5b and Materials and methods) which interact via continuous signals (rates) rather than
discontinuities (spikes). Despite these profound differences in the neuronal and synaptic dynamics, the same behavior is observed, demonstrating that sequential denoising is a structural effect,
dependent on the population firing rates and thus less sensitive to fluctuations in the precise spike times. Moreover, the robustness with respect to the network size suggests that denoising could
also be performed in smaller, localized circuits, possibly operating in parallel on different features of the input stimuli.
Despite their ubiquity throughout the neocortex, the characteristics of structured projection pathways is far from uniform (Bednar and Wilson, 2016), exhibiting marked differences in spatial
precision and specificity, aligned with macroscopic gradients of cortical organization. This non-uniformity may play an important functional role supporting feature aggregation (Hagler and Sereno,
2006) and the development of mixed representations (Patel et al., 2014) in higher (more anterior) cortical areas. Here, we consider two scenarios in the baseline (current-based) model to examine the
robustness of our findings to more complex topographic configurations.
First, we varied the size of stimulus-tuned sub-populations (parametrized by $d_{i}$, see Materials and methods) but kept them fixed across the network. For small sub-populations and intermediate
degrees of topographic modularity, the activity along the stimulated pathway decays with network depth, suggesting that input information does not reach the deeper SSNs (see Figure 6a and Figure
6—figure supplement 1). These results place a lower bound on the size of stimulus-tuned sub-populations below which no signal propagation can occur, as reflected by the negative gain in performance
for $d=0.01$ (Figure 6b). Whereas denoising is robust to variation around the baseline value of $d=0.1$ that yielded perfect partitioning of the feedforward projections (see Supplementary Materials),
an upper bound may emerge due to increasing overlap between the maps ($d=0.2$ in Figure 6b). In this case, the activity may ‘spill over’ to other pathways than the stimulated one, corrupting the
input representations and hindering accurate transmission and decoding. This can be alleviated by reduced or no overlap (as in Figure 6a), in which case signal propagation and denoising are successful
for larger map sizes ($ν5S/ν0S>1$ also for $d>0.1$). We thus observe a trade-off between map size, overlap and the degree of topographic precision that is required to accurately propagate stimulus
representations (see Discussion).
Figure 6: Variation in the map sizes.
Second, we took into account the fact that these structural features are known to vary with hierarchical depth resulting in increasingly larger sub-populations and, consequently, increasingly
overlapping stimulus selectivity (Smith et al., 2001; Patel et al., 2014; Bednar and Wilson, 2016). To capture this effect, we introduce a linear scaling of map size with depth ($di+1=δ+di$ for $i≥1$
, see Materials and methods). The ability of the circuit to gradually clean the signal’s representation is fully preserved, as illustrated in Figure 6c. In fact, for intermediate modularity ($m<0.9$)
broadening the projections can further sharpen the reconstruction precision (compare curves for $δ=0.02$ and $δ=0$).
Taken together, these observations demonstrate that a gradual denoising of stimulus inputs can occur entirely as a consequence of the modular wiring between the subsequent processing circuits.
Importantly, this effect generalizes well across diverse neuron and synapse models, as well as key system properties, making modular topography a potentially universal circuit feature for handling
noisy data streams.
Modularity as a bifurcation parameter
The results so far indicate that the modular topographic projections, more so than the individual characteristics of neurons and synapses, lead to a sequential denoising effect through a joint
process of signal amplification and noise suppression. To better understand how the system transitions to such an operating regime, it is helpful to examine its macroscopic dynamics in the limit of
many sub-networks (Toyoizumi, 2012; Cayco-Gajic and Shea-Brown, 2013; Kadmon and Sompolinsky, 2016). We apply standard mean-field techniques (Fourcaud and Brunel, 2002; Helias et al., 2013; Schuecker
et al., 2015) to find the asymptotic firing rates (fixed points across sub-networks) of the stimulated and non-stimulated sub-populations as a function of topography (Figure 2d). For this, we can
approximate the input μ to a group of neurons as a linear function of its firing rate $ν$ with a slope $κ$ that is determined by the coupling within the group and an offset given by inputs from other
groups of neurons (orange line in Figure 7a). With an approximately sigmoidal rate transfer function, the self-consistent solutions are at the intersections marked in Figure 7a.
Figure 7: Modularity changes the fixed point structure of the system.
Formally, all neurons in the deep sub-networks of one topographic map form such a group as they share the same firing rate (asymptotic value). The coupling $κ$ within this group comprises not only
recurrent connections of one sub-network but also modular feedforward projections across sub-networks. For small modularity, the group is in an inhibition-dominated regime ($κ<0$) and we obtain only
one fixed point at low activity (Figure 7a, left). Importantly, the firing rate of this fixed point is the same for stimulated and non-stimulated topographic maps. Any influence of input signals
applied to SSN[0] therefore vanishes in the deeper sub-networks and the signal cannot be reconstructed (fading regime). As topographic projections become more concentrated (larger $m$), $κ$ changes
sign and gradually leads to two additional fixed points (as conceptually illustrated in Figure 7a and quantified in Figure 7b by numerically solving the self-consistent mean-field equations, see also
Appendix B): an unstable one (red) that eventually vanishes with increasing $m$ and a stable high-activity fixed point (black). The bistability opens the possibility to distinguish between stimulated
and non-stimulated topographic maps and thereby reconstruct the signal in deep sub-networks: in the active regime beyond the critical modularity threshold (here $m≥mcrit=0.76$), a sufficiently strong
input signal can drive the activity along the stimulated map to the high-activity fixed point, such that it can permeate the system, while the non-stimulated sub-populations still converge to the
low-activity fixed point. Note that this critical modularity represents the minimum modularity value for which bistability emerges. It typically differs from the actual switching point $mswitch$,
which additionally depends on the input intensity.
In the potential energy landscape $U$ (see Materials and methods), where stable fixed points correspond to minima, the bistability that emerges for more structured topography $m≥mcrit=0.76$ can be
understood as a transition from a single minimum at low rates (Figure 7c, inset) to a second minimum associated with the high-activity state (Figure 7c). Even though the full dynamics of the spiking
network away from the fixed point cannot be entirely understood in this simplified potential picture (see Appendix B), qualitatively, more strongly modular networks cause deeper potential wells,
corresponding to more attractive dynamical states and higher firing rates (see Figure 9—figure supplement 2).
Because the intensity of the input signal dictates the rate of different populations in the initial sub-network SSN[0] (Figure 7d), it also determines, for any given modularity, whether the rate of
the stimulated sub-population is in the basin of attraction of the high-activity (see Figure 7e, solid markers and arrows) or low-activity (dashed, blue marker and arrow) fixed point. Denoising, and
therefore increasing signal reconstruction, is thus achieved by successively (across sub-networks) pushing the population states toward the self-consistent firing rates.
As reported above, for the baseline network and (standard) input ($λ=0.05$) used in Figures 1 and 2, the switching point between low and high activity is at $m=0.83$ (blue markers in Figure 7d, f).
Stronger input signals move the switching point toward the minimal modularity $m=0.76$ of the active regime (black markers in Figure 7d, f), while weaker inputs only induce a switch at larger
modularities (gray markers in Figure 7d, f).
Noise in the input simply shifts the transition point to the high-activity state in a similar manner, with more modular connectivity required to compensate for stronger jitter (Figure 7g). However,
as long as the mean firing rate of the stimulated sub-population in SSN[0] is slightly higher than that of the non-stimulated ones (up to 0.5 spks/sec), it is sufficient to position the system in the
attracting basin of the high-rate fixed point and the system is able to clean the signal representation. This indicates a remarkably robust denoising mechanism.
Critical modularity for denoising
In addition to properties of the input, the critical modularity marking the onset of the active regime is also influenced by neuronal and connectivity features. To build some intuition, it is helpful
to consider the sigmoidal activation function of spiking neurons (Figure 8a). The nonlinearity of this function prohibits us from obtaining quantitative, closed-form analytical expressions for the
critical modularity and requires a numerical solution of the self-consistency equations (Figure 7b). However, since the continuous rate model shows a qualitatively similar behavior to the spiking
baseline model (see Section ‘A generalizable structural effect’), we can study a fully analytically tractable model with piecewise linear activation function (Figure 8a, b) to expose the dependence
of the critical modularity on both neuron and network properties (see detailed derivations in Appendix B).
Figure 8 (with 2 supplements): Dependence of critical modularity on neuron and connectivity features.
In this simple model, the output is zero for inputs below $μmin=15$ and at maximum rate $νmax=150$ for inputs above $μmax=400$. In between these two bounds, the output is linearly interpolated $ν(μ)
=νmax(μ-μmin)/(μmax-μmin)$. As discussed before, successful denoising is achieved if the non-stimulated sub-populations are silent, $νNS=0$, and the stimulated sub-populations are active, $νS>0$.
Note that in the following we focus on this ideal scenario representing perfect denoising, but, in principle, intermediate solutions with $νS≫νNS>0$ may also occur and could still be considered as
successful denoising. Analyzing for which neuron, network and input properties this scenario is achieved, we obtain multiple conditions for the modularity that need to be fulfilled.
The first condition illustrates the dependence of the critical modularity on the neuron model (Figure 8c, purple horizontal line)
(1) $m \ge \dfrac{\left(\mu_{\mathrm{max}}-\mu_{\mathrm{min}}\right)N_{\mathrm{C}}}{\left(1-\alpha\right)\mathcal{J}\nu_{\mathrm{max}}+\left(\mu_{\mathrm{max}}-\mu_{\mathrm{min}}\right)\left(N_{\mathrm{C}}-1\right)},$
where $NC$ is the number of stimulus-specific sub-populations and $α≤1$ (typically with a value of 0.25) represents the (reduced) noise ratio in the deeper sub-networks, with $α$ scaling the noise
and $1-α$ scaling the feedforward connections (see Materials and methods). This is necessary to ensure that the total excitatory input to each neuron is consistent across the network. In particular,
the critical modularity depends on the dynamic range of input $μmax-μmin$ and output $νmax$. The condition represents a lower bound on the modularity required for denoising. Importantly, while it
depends on the effective coupling strength $J$, the noise ratio $α$ and the number of maps $NC$ (see Materials and methods), it does not depend on the nature of the recurrent interactions (E/I ratio)
and the strength of the external background input. We also find two further critical values of the modularity (cyan and green curves in Figure 8c–e), both of which do depend on the
strength of the external background input $νX$ and the recurrent connectivity (E/I ratio $γg$):
(2) $m=\dfrac{N_{\mathrm{C}}}{N_{\mathrm{C}}-1}-\dfrac{1}{N_{\mathrm{C}}-1}\,\dfrac{\left(1-\alpha\right)\mathcal{J}\nu_{\mathrm{max}}}{\mu_{\mathrm{max}}-\alpha\mathcal{J}\nu_{\mathrm{X}}-\frac{\mathcal{J}}{N_{\mathrm{C}}}\left(1+\gamma g\right)\nu_{\mathrm{max}}}$

(3) $m=1-\dfrac{\mu_{\mathrm{min}}-\alpha\mathcal{J}\nu_{\mathrm{X}}-\frac{\mathcal{J}}{N_{\mathrm{C}}}\left(1+\gamma g\right)\nu_{\mathrm{max}}}{\mathcal{J}\left(1-\alpha\right)\nu_{\mathrm{max}}-\left(N_{\mathrm{C}}-1\right)\left(\mu_{\mathrm{min}}-\alpha\mathcal{J}\nu_{\mathrm{X}}-\frac{\mathcal{J}}{N_{\mathrm{C}}}\left(1+\gamma g\right)\nu_{\mathrm{max}}\right)}$
Depending on the external input strength $νX$, these are either upper or lower bounds. In the denominator of these expressions, the total input (recurrent and external) is compared to the limits of
the dynamic range of the neuron model. The cancellation between recurrent and external inputs in the inhibition-dominated baseline model typically yields a total input within the dynamic range of the
neuron, such that modularity in feedforward connections can decrease the input of the non-stimulated sub-populations to silence them, and increase the input of the stimulated sub-populations to
support their activity. The competition between the excitatory and inhibitory contributions ensures that the total input does not lead to a saturating output activity. Thus, for inhibitory
recurrence, denoising can be achieved at a moderate level of modularity over a large range of external background inputs (shaded black and hatched regions in Figure 8c), which demonstrates a robust
denoising mechanism even in the presence of changes in the input environment.
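To make these conditions concrete, the sketch below evaluates the piecewise linear transfer function and the three modularity expressions numerically. Only $\mu_{min}$, $\mu_{max}$, $\nu_{max}$, $N_C$ and $\alpha$ take the values stated in the text; the coupling $\mathcal{J}$, background rate $\nu_X$ and E/I ratio $\gamma g$ used in the example call are placeholder values chosen purely for illustration, not the parameters used in the study.

```python
import numpy as np

def nu_piecewise(mu, mu_min=15.0, mu_max=400.0, nu_max=150.0):
    """Piecewise linear rate transfer function of the analytical model."""
    return nu_max * np.clip((mu - mu_min) / (mu_max - mu_min), 0.0, 1.0)

def modularity_bounds(J, nu_x, gamma_g, N_C=10, alpha=0.25,
                      mu_min=15.0, mu_max=400.0, nu_max=150.0):
    """Evaluate the three critical-modularity expressions (Eqs. 1-3)."""
    # Eq. (1): lower bound set by the neuron's dynamic range
    m1 = ((mu_max - mu_min) * N_C) / ((1 - alpha) * J * nu_max
                                      + (mu_max - mu_min) * (N_C - 1))
    # recurrent drive term entering Eqs. (2) and (3)
    rec = J / N_C * (1 + gamma_g) * nu_max
    m2 = N_C / (N_C - 1) - (1.0 / (N_C - 1)) * (
        (1 - alpha) * J * nu_max / (mu_max - alpha * J * nu_x - rec))
    num = mu_min - alpha * J * nu_x - rec
    m3 = 1 - num / (J * (1 - alpha) * nu_max - (N_C - 1) * num)
    return m1, m2, m3

# hypothetical coupling, background rate and E/I ratio (illustration only)
print(modularity_bounds(J=10.0, nu_x=12.0, gamma_g=-3.0))
```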
In contrast, if recurrent connections are absent, strong inhibitory external background input is required to counteract the excitatory feedforward input and achieve a denoising scenario (Figure 8d).
Fixed points at non-saturated activity $νS>0$ are also present for low excitatory external input, but unstable due to the positive recurrent feedback. This is because in networks without recurrence,
there is no competition between the recurrent input and the external and feedforward inputs. As a result, the input to both the stimulated and non-stimulated sub-populations is typically high, such
that modulation of the feedforward input via topography cannot lead to a strong distinction between the pathways as required for denoising. In these networks, one typically observes high activity in
all populations. A similar behavior can be observed in excitation-dominated networks (Figure 8e), where the inhibitory external background input must be even stronger to compensate the excitatory
feedforward and recurrent connectivity and reach a stable denoising regime.
Note that inhibitory external input is not in line with the excitatory nature of external inputs to local circuits in the brain and is therefore biologically implausible. One way to achieve denoising
in excitation-dominated networks for excitatory background inputs would be to shift the dynamic range of the activation function (see Figure 8—figure supplement 1), which is, however, not consistent
with the biophysical properties of real neurons (distance between threshold and rest as compared to typical strengths of postsynaptic potentials). In summary, we find that recurrent inhibition is
crucial to achieve denoising in biologically plausible settings.
These results on the role of recurrence and external input can be transferred to the behavior of the spiking model. While details of the fixed point behavior depend on the specific choice of the
activation function, Figure 8f, h shows that there is also no denoising regime for the spiking model in case of no or excitation-dominated recurrence and a biologically plausible level of external
input. Instead, one finds high activity in both stimulated and non-stimulated sub-populations, as confirmed by network simulations (Figure 8g, i). Figure 8—figure supplement 2 further confirms that
even reducing the external input to zero does not avoid this high-activity state in both stimulated and non-stimulated sub-populations for $m<1$.
Input integration and multi-stability
The analysis considered in the sections above is restricted to a system driven with a single external stimulus. However, to adequately understand the system’s dynamics, we need to account for the
fact that it can be concurrently driven by multiple input streams. If two simultaneously active stimuli drive the system (see illustration in Figure 9a), the qualitative behavior where the responses
along the stimulated (non-stimulated) maps are enhanced (silenced) is retained if the strength of the two input channels is sufficiently different (Figure 9b, top panel). In this case, the weaker
stimulus is not strong enough to drive the sub-population it stimulates toward the basin of attraction of the high-activity fixed point. Consequently, the sub-population driven by this second
stimulus behaves as a non-stimulated sub-population and the system remains responsive to only one of the two inputs, acting as a WTA circuit. If, however, the ratio of stimulus intensities varies,
two active sub-populations may co-exist (Figure 9b, center) and/or compete (bottom panel), depending also on the degree of topographic modularity.
Figure 9 (with 2 supplements): For multiple input streams, topography may elicit a wide range of dynamical regimes.
To quantify these variations in macroscopic behavior, we focus on the dynamics of SSN[5] and measure the similarity (correlation coefficient) between the firing rates of the two stimulus-specific
sub-populations as a function of modularity and ratio of input intensities $λ2/λ1$ (see Materials and methods and Figure 9c). In the case that both inputs have similar intensities but the feedforward
projections are not sufficiently modular, both sub-populations are activated simultaneously (Co-Ex, red area in Figure 9c). This is the dynamical regime that dominates the earlier sub-networks.
However, this is a transient state, and the Co-Ex region gradually shrinks with network depth until it vanishes completely after approximately 9–10 SSNs (see Figure 9d).
For low modularity, the system settles in the single stable state associated with near-zero firing rates, as illustrated schematically in the energy landscape in Figure 9e, (1) (see Materials and
methods, Appendix B, and Supplementary Materials for derivations and numerical simulations). Above the critical modularity value, the system enters one of two different regimes. For $m>0.84$ and an
input ratio below 0.7 (Figure 9c, gray area), one stimulus dominates (WTA) and the responses in the two populations are uncorrelated (Figure 9b, top panel). Although the potential landscape contains
two minima corresponding to either population being active, the system always settles in the high-activity attractor state corresponding to the dominating input (Figure 9e, (2)).
If, however, the two inputs have comparable intensities and the topographic projections are sharp enough ($m>0.84$), the system transitions into a different dynamical state where neither
stimulus-specific sub-population can maintain an elevated firing rate for extended periods of time. In the extreme case of nearly identical intensities ($λ2/λ1≥0.9)$ and high modularity, the
responses become anti-correlated (Figure 9b, bottom panel), that is the activation of the two stimulus-specific sub-populations switches, as they engage in a dynamic behavior reminiscent of WLC
between multiple neuronal groups (Lagzi and Rotter, 2015; Rost et al., 2018). The switching between the two states is driven by stochastic fluctuations (Figure 9e, (3)). The depth of the wells and
width of barrier (distance between fixed points) increase with modularity (see Figure 9e, (4) and Figure 9—figure supplement 2), suggesting a greater difficulty in moving between the two attractors
and consequently fewer state changes. Numerical simulations confirm this slowdown in switching (Figure 9f).
We wish to emphasize that the different dynamical states arise primarily from the feedforward connectivity profile. Nevertheless, even though the synaptic weights are not directly modified, varying
the topographic modularity does translate to a modification of the effective connectivity weights (Figure 3b). The ratio of stimulus intensities also plays a role in determining the dynamics, but
there is a (narrow) range (approximately between 0.75 and 0.8) for which all 3 regions can be reached through sole modification of the modularity. Together, these results demonstrate that topography
can not only lead to spatial denoising but also enable various, functionally important network operating points.
Reconstruction and denoising of dynamical inputs
Until now, we have considered continuous but piecewise constant, step signals, with each step lasting for a relatively long and fixed period of $200ms$. This may give the impression that the
denoising effect we report only works for static or slowly changing inputs, whereas naturalistic stimuli are continuously varying. Nevertheless, sensory perception across modalities relies on
varying degrees of temporal and spatial discretization (VanRullen and Koch, 2003), with individual (sub-)features of the input encoded by specific (sub-)populations of neurons in the early stages of
the sensory hierarchy. In this section, we will demonstrate that denoising is robust to the temporal properties of the input and its encoding, as we relax many of the assumptions made in previous sections.
We consider a sinusoidal input signal, which we discretize and map onto the network according to the depiction in Figure 10a. This approach is similar to previous works, for instance it can mimic the
movement of a light spot across the retina (Klos et al., 2018). By varying the sampling interval $dt$ and number of channels $k$, we can change the coarseness of the discretization from step-like
signals to more continuous approximations of the input. If we choose a high sampling rate ($dt=1ms$) and sufficient channels ($k=40$), we can accurately encode even fast changing signals (Figure
10b). Given that each input-driven SSN is inhibition-dominated and therefore close to the balanced state, the network exhibits a fast tracking property (van Vreeswijk and Sompolinsky, 1996) and can
accurately represent and denoise the underlying continuous signal in the spiking activity (Figure 10c, top). This is also captured by the readout, with the tracking precision increasing with network
depth (Figure 10c, bottom). In this condition, there is a performance gain of up to 50% in the noiseless case (Figure 10d, top) and similar values for varying levels of noise (Figure 10d, bottom).
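The discretization itself can be sketched in a few lines (amplitude binning onto $k$ channels with sample-and-hold at interval $dt$; the exact encoding used in the study may differ in detail, so this is illustrative only):

```python
import numpy as np

def discretize_signal(t, signal, dt_sample=1.0, k=40):
    """Map a continuous 1-D signal onto k channels by amplitude binning.

    Returns a (k, len(t)) binary array: at each time step, the channel whose
    amplitude bin contains the sampled-and-held signal value is active.
    """
    dt = t[1] - t[0]
    step = max(int(round(dt_sample / dt)), 1)
    # sample-and-hold: keep the value taken at every `step`-th point
    sampled = np.repeat(signal[::step], step)[:len(signal)]
    # bin the amplitude range into k channels
    edges = np.linspace(signal.min(), signal.max(), k + 1)
    idx = np.clip(np.digitize(sampled, edges) - 1, 0, k - 1)
    u = np.zeros((k, len(signal)))
    u[idx, np.arange(len(signal))] = 1.0
    return u

t = np.arange(0.0, 1000.0, 1.0)            # ms
x = np.sin(0.02 * t) + np.cos(0.005 * t)   # example values for a and b
u = discretize_signal(t, x, dt_sample=1.0, k=40)
```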
Figure 10 (with 1 supplement): Reconstruction of a dynamic, continuous input signal.
Note that due to the increased number of input channels (40 compared to 10) projecting to the same number of neurons in SSN[0] as before $(800)$, for the same $σξ$ the effective amount of noise each
neuron receives is, on average, four times larger than in the baseline network. Moreover, the task was made more difficult by the significant overlap between the maps ($NC=20$) as well as the
resulting decrease in neuronal input selectivity. Nevertheless, similar results were obtained for slower and more coarsely sampled signals (Figure 10e–g).
We found comparable denoising dynamics for a large range of parameter combinations involving the map size, number of maps, number of channels, and signal complexity. Although there are limits with
respect to the frequencies (and noise intensity) the network can track (see Figure 10—figure supplement 1), these findings indicate a very robust and flexible phenomenon for denoising spatially
encoded sensory stimuli.
The presence of stimulus- or feature-tuned sub-populations of neurons in primary sensory cortices (as well as in downstream areas) provides an efficient spatial encoding strategy (Pouget et al., 1999
; Seriès et al., 2004; Tkacik et al., 2010) that ensures the relevant computable features are accurately represented. Here, we propose that beyond primary sensory areas, modular topographic
projections play a key role in preserving accurate representations of sensory inputs across many processing modules. Acting as a structural scaffold for a sequential denoising mechanism, we show how
they simultaneously enhance relevant stimulus features and remove noisy interference. We demonstrate this phenomenon in a variety of network models and provide a theoretical analysis that indicates
its robustness and generality.
When reconstructing a spatially encoded input signal corrupted by noise in a network of sequentially connected populations, we find that a convergent structure in the feedforward projections is not
only critical for successfully solving the task, but that the performance increases significantly with network depth beyond a certain modularity (Figure 1). Through this mechanism, the response
selectivity of the stimulated sub-populations is sharpened within each subsequent sub-network, while others are silenced (Figure 2). Such wiring may support efficient and robust information
transmission from the thalamus to deeper cortical centers, retaining faithful representations even in the presence of strong noise. We demonstrate that this holds for a variety of signals, from
approximately static (stepwise) to smoothly and rapidly changing dynamic inputs (Figure 10). Thanks to the balance of excitation and inhibition, the network is able to track spatially encoded signals
on very short timescales, and is flexible with respect to the level of spatial and temporal discretization. Accurate tracking and denoising requires that the encoding is locally static/
semi-stationary for only a few tens of milliseconds, which is roughly in line with psychophysics studies on the limits of sensory perception (Borghuis et al., 2019).
More generally, topographic modularity, in conjunction with other top-down processes (Kok et al., 2012), could provide the anatomical substrate for the implementation of a number of behaviorally
relevant processes. For example, feedforward topographic projections on the visual pathway could contribute, together with various attentional control processes, to the widely observed pop-out effect
in the later stages of the visual hierarchy (Brefczynski-Lewis et al., 2009; Itti et al., 1998). The pop-out effect, at its core, assumes that in a given context some neurons exhibit sharper
selectivity to their preferred stimulus feature than the neighboring regions, which can be achieved through a winner-take-all (WTA) mechanism (see Figure 9 and Himberger et al., 2018).
The WTA behavior underlying the denoising is caused by a re-shaping of the E/I balance across the network (see Figure 3). As the excitatory feedforward projections become more focused, they modulate
the system’s effective connectivity and thereby the gain on the stimulus-specific pathways, gating or allowing (and even enhancing) signal propagation. This change renders the stimulated pathway
excitatory in the active regime (see Figure 7), leading to multiple fixed points such as those observed in networks with local recurrent excitation (Renart et al., 2007; Litwin-Kumar and Doiron, 2012
). While the high-activity fixed point of such clustered networks is reached over time, in our model it unfolds progressively in space, across multiple populations. Importantly, in the range of
biologically plausible numbers of cortical areas relevant for signal transmission (up to 10 for some visual stimuli, see Felleman and Van Essen, 1991; Hegdé and Felleman, 2007) and intermediate
modularity, the firing rates remain within experimentally observed limits and do not saturate. The basic principle is similar to other approaches that alter the gain on specific pathways to
facilitate stimulus propagation, for example through stronger synaptic weights (Vogels and Abbott, 2005), stronger nonlinearity (Toyoizumi, 2012), tuning of connectivity strength, and neuronal
thresholds (Cayco-Gajic and Shea-Brown, 2013), via detailed balance of local excitation and inhibition (amplitude gating; Vogels and Abbott, 2009) or with additional subcortical structures (Cortes
and van Vreeswijk, 2015). Additionally, our model also displays some activity characteristics reported previously, such as the response sharpening observed for synfire chains (Diesmann et al., 1999)
or (almost) linear firing rate propagation (Kumar et al., 2010) (for intermediate modularity).
However, due to the reliance on increasing inhibitory activity at every stage, we speculate that denoising, as studied here, would not occur in such a system containing a single, shared inhibitory
pool with homogeneous connectivity. In this case, inhibition would affect all excitatory populations uniformly, with stronger activity potentially preventing accurate stimulus transmission from the
initial sub-networks. Nevertheless, this problem could be alleviated using a more realistic, localized spatial connectivity profile as in Kumar et al., 2008a, or by adding shadow pools (groups of
inhibitory neurons) for each layer of the network, carefully wired in a recurrent or feedforward manner (Aviel et al., 2003; Aviel et al., 2005; Vogels and Abbott, 2009). In such networks with
non-random or spatially dependent connectivity, structured (modular) topographic projections onto the inhibitory populations will likely be necessary to maintain stable dynamics and attain the
appropriate inhibition-dominated regimes (Figure 3). Alternatively, these could be achieved through additional, targeted inputs from other areas (Figure 4), with feedforward inhibition known to
provide a possible mechanism for context-dependent gating or selective enhancement of certain stimulus features (Ferrante et al., 2009; Roberts et al., 2013).
While our findings build on the above results, we here show that the experimentally observed topographic maps may serve as a structural denoising mechanism for sensory stimuli. In contrast to most
works on signal propagation where noise mainly serves to stabilize the dynamics and is typically avoided in the input, here the system is driven by a continuous signal severely corrupted by noise.
Taking a more functional approach, this input is reconstructed using linear combinations of the full network responses, rather than evaluating the correlation structure of the activity or relying on
precise firing rates. Focusing on the modularity of such maps in recurrent spiking networks, our model also differs from previous studies exploring optimal connectivity profiles for minimizing
information loss in purely feedforward networks (Renart and van Rossum, 2012; Zylberberg et al., 2017), also in the context of sequential denoising autoencoders (Kadmon and Sompolinsky, 2016) and
stimulus classification (Babadi and Sompolinsky, 2014), which used simplified neuron models or shallow networks, made no distinction between excitatory and inhibitory connections, or relied on
specific, trained connection patterns (e.g., chosen by the pseudo-inverse model). Although the bistability underlying denoising can, in principle, also be achieved in such feedforward networks or networks
without inhibition, our theoretical predictions and network simulations indicate that for biologically constrained circuits (i.e., where the background and long-range feedforward input is
excitatory), inhibitory recurrence is indispensable for the spatial denoising studied here (see Section ‘Critical modularity for denoising’). Recurrent inhibition compensates for the feedforward and
external excitation, generating competition between the topographic pathways and allowing the populations to rapidly track their input.
Moreover, our findings provide an explanation for how low-intensity stimuli (1–2 spks/sec above background activity, see Figure 2 and Supplementary Materials) could be amplified across the cortex
despite significant noise corruption, and relies on a generic principle that persists across different network models (Figure 5) while also being robust to variations in the map size (Figure 6). We
demonstrated both the existence of a lower and upper (due to increased overlap) bound on their spatial extent for signal transmission, as well as an optimal region for which denoising was most
pronounced. These results indicate a trade-off between modularity and map size, with larger maps sustaining stimulus propagation at lower modularity values, whereas smaller maps must compensate
through increased topographic density (see Figure 6a and Supplementary Materials). In the case of smaller maps, progressively enlarging the receptive fields enhanced the denoising effect and improved
task performance (Figure 6c), suggesting a functional benefit for the anatomically observed decrease in topographic specificity with hierarchical depth (Bednar and Wilson, 2016; Smith et al., 2001).
One advantage of such a wiring could be spatial efficiency in the initial stages of the sensory hierarchy due to anatomical constraints, for instance the retina or the lateral geniculate nucleus.
While we get a good qualitative description of how the spatial variation of topographic maps influences the system’s computational properties, the numerical values in general are not necessarily
representative. Cortical maps are highly dynamic and exhibit more complex patterning, making (currently scarce) precise anatomical data a prerequisite for more detailed investigations. For instance,
despite abundant information on the size of receptive fields (Smith et al., 2001; Liu et al., 2016; Keliris et al., 2019), there is relatively little data on the connectivity between neurons tuned to
related or different stimulus features across distinct cortical circuits. Should such experiments become feasible in the future, our model provides a testable prediction: the projections must be
denser (or stronger) between smaller maps to allow robust communication whereas for larger maps fewer connections may be sufficient.
Finally, our model relates topographic connectivity to competition-based network dynamics. For two input signals of comparable intensities, moderately structured projections allow both
representations to coexist in a decodable manner up to a certain network depth, whereas strongly modular connections elicit WLC like behavior characterized by stochastic switching between the two
stimuli (see Figure 9). Computation by switching is a functionally relevant principle (McCormick, 2005; Schittler Neves and Timme, 2012), which relies on fluctuation- or input-driven competition
between different metastable (unstable) or stable attractor states. In the model studied here, modular topography induced multi-stability (uncertainty) in representations, alternating between two
stable fixed points corresponding to the two input signals. Structured projections may thus partially explain the experimentally observed competition between multiple stimulus representations across
the visual pathway (Li et al., 2016), and is conceptually similar to an attractor-based model of perceptual bistability (Moreno-Bote et al., 2007). Moreover, this multi-stability across sub-networks
can be ‘exploited’ at any stage by control signals, that is, additional (inhibitory) modulation could suppress one representation and amplify (bias) the other.
Importantly, all these different dynamical regimes emerge progressively through the hierarchy and are not discernible in the initial modules. Previous studies reporting on similar dynamical states
have usually considered either the synaptic weights as the main control parameter (Lagzi and Rotter, 2015; Lagzi et al., 2019; Vogels and Abbott, 2005) or studied specific architectures with
clustered connectivity (Schaub et al., 2015; Litwin-Kumar and Doiron, 2012; Rost et al., 2018). Our findings suggest that in a hierarchical circuit a similar palette of behaviors can also be obtained
given appropriate effective connectivity patterns modulated exclusively through modular topography. Although we used fixed projections throughout this study, these could also be learned and shaped
continuously through various forms of synaptic plasticity (see e.g. Tomasello et al., 2018). To achieve such a variety of dynamics, cortical circuits most likely rely on a combination of all these
mechanisms, that is, pre-wired modular connections (within and between distant modules) and heterogeneous gain adaptation through plasticity, along with more complex processes such as targeted
inhibitory gating.
Overall, our results highlight a novel functional role for topographically structured projection pathways in constructing reliable representations from noisy sensory signals, and accurately routing
them across the cortical circuitry despite the plethora of noise sources along each processing stage.
We consider a feedforward network architecture where each sub-network (SSN) is a balanced random network (Brunel, 2000) composed of $N=10000$ homogeneous LIF neurons, grouped into a population of $NE
=0.8N$ excitatory and $NI=0.2N$ inhibitory units. Within each sub-network, neurons are connected randomly and sparsely, with a fixed number of $KE=ϵNE$ local excitatory and $KI=ϵNI$ local
inhibitory inputs per neuron. The sub-networks are arranged sequentially, that is the excitatory neurons $Ei$ in $SSNi$ project to both $Ei+1$ and $Ii+1$ populations in the subsequent sub-network
$SSNi+1$ (for an illustrative example, see Figure 1a). There are no inhibitory feedforward projections. Although projections between sub-networks have a specific, non-uniform structure (see next
section), each neuron in $SSNi+1$ receives the same total number of synapses from the previous SSN, $KFF$.
In addition, all neurons receive $KX$ inputs from an external source representing stochastic background noise. For the first sub-network, we set $KX=KE$, as it is commonly assumed that the number of
background input synapses modeling local and distant cortical input is in the same range as the number of recurrent excitatory connections (see e.g. Brunel, 2000; Kumar et al., 2008b; Duarte and
Morrison, 2014). To ensure that the total excitatory input to each neuron is consistent across the network, we scale $KX$ by a factor of $α=0.25$ for the deeper SSNs and set $KFF=(1-α)KE$, resulting
in a ratio of 3:1 between the number of feedforward and background synapses.
Modular feedforward projections
Within each SSN, each neuron is assigned to one or more of $NC$ sub-populations (SPs), each associated with a specific stimulus ($NC=10$ unless otherwise stated). This is illustrated in Figure 1a for $NC=2$.
We choose these sub-populations so as to minimize their overlap within each $SSNi$, and control their effective size $Ciβ=diNβ,β∈[E,I]$, through the scaling parameter $di∈[0,1]$. Depending on the
size and number of sub-populations, it is possible that some neurons are not part of any or that some neurons belong to multiple such sub-populations (overlap).
In what follows, a topographic map refers to the sequence of sub-populations in the different sub-networks associated with the same stimulus. To enable a flexible manipulation of the map sizes, we
constrain the scaling factor $di$ by introducing a step-wise linear increment $δ$, such that $di=d0+iδ,i≥1$. Unless otherwise stated, we set $d0=0.1$ and $δ=0$. Note that all SPs within a given SSN
have the same size. In this study, we will only explore values in the range $0≤δ≤0.02$ to ensure consistent map sizes across the system, that is, $0≤di≤1$ for all $SSNi$ (see constraints in Appendix A).
To systematically modify the degree of modular segregation in the topographic projections, we define a modularity parameter that determines the relative probability for feedforward connections from a
given SP in $SSNi$ to target the corresponding SP in $SSNi+1$. Specifically, we follow (Newman, 2009; Pradhan et al., 2011) and define $m=1-p_0/p_c\in[0,1]$, where $p_0$ is the feedforward projection
probability between neurons belonging to different SPs and $p_c$ that between neurons on the same topographic map. According to the above definition, the feedforward connectivity matrix is
random and homogeneous (Erdős-Rényi graph) if $m=0$ or $di=1$ (see Figure 1a). For $m=1$ it is a block-diagonal matrix, where the individual SPs overlap only when $di>1/NC$. In order to isolate the
effects on the network dynamics and computational performance attributable exclusively to the topographic structure, the overall density of the feedforward connectivity matrix is kept constant at
$(1-\alpha)\,\epsilon=0.075$ (see also previous section). We note that, while providing the flexibility to implement the variations studied in this manuscript, this formalism has limitations (see Appendix A).
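As a minimal sketch of how the two projection probabilities follow from the modularity $m$ and the fixed overall density (non-overlapping case only; illustrative rather than the NMSAT implementation used in the study):

```python
import numpy as np

def ff_projection_probs(m, d_pre, d_post, N=10000, N_C=10, density=0.075):
    """Within-map (p_c) and across-map (p_0) connection probabilities.

    Assumes non-overlapping sub-populations (d <= 1/N_C) and a fixed total
    feedforward density, with m = 1 - p_0 / p_c as defined in the text.
    """
    C_pre, C_post = d_pre * N, d_post * N
    U_c = N_C * C_pre * C_post          # neuron pairs on the same topographic map
    U_0 = N**2 - U_c                    # neuron pairs on different maps
    p_c = density * N**2 / (U_c + (1.0 - m) * U_0)
    p_0 = (1.0 - m) * p_c
    return p_c, p_0

# baseline configuration: d = 0.1, density (1 - alpha) * epsilon = 0.075
for m in (0.0, 0.8, 1.0):
    print(m, ff_projection_probs(m, 0.1, 0.1))
```

For $m=0$ both probabilities equal the overall density, while for $m=1$ all feedforward synapses fall within the topographic maps.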
We study networks composed of LIF neurons with fixed voltage threshold and static synapses with exponentially decaying postsynaptic currents or conductances. The sub-threshold membrane potential
dynamics of such a neuron evolves according to:
(4) $\tau_{\mathrm{m}}\dfrac{dV\left(t\right)}{dt}=\left(V_{\mathrm{rest}}-V\left(t\right)\right)+R\left(I^{\mathrm{E}}\left(t\right)+I^{\mathrm{I}}\left(t\right)+I^{\mathrm{X}}\left(t\right)\right),$
where $τm$ is the membrane time constant, and $RIβ$ is the total synaptic input from population $β∈[E,I]$. The background input $IX$ is assumed to be excitatory and stochastic, modeled as a
homogeneous Poisson process with constant rate $νX$. Synaptic weights $Jij$, representing the efficacy of interaction from presynaptic neuron $j$ to postsynaptic neuron $i$, are equal for all
realized connections of a given type, that is, $JEE=JIE=J$ for excitatory and $JEI=JII=gJ$ for inhibitory synapses. All synaptic delays and time constants are equal in this setup. For a complete,
tabular description of the models and model parameters used throughout this study, see Supplementary files 1–5.
Following previous works (Zajzon et al., 2019; Duarte and Morrison, 2014), we choose the intensity of the stochastic input $νX$ and the E–I ratio $g$ such that the first two sub-networks operate in a
balanced, asynchronous irregular regime when driven solely by background input. This is achieved with $νX=12spikes/s$ and $g=-12$, resulting in average firing rates of $∼3spikes/s$, coefficient of
variation ($CVISI$) in the interval $[1.0,1.5]$ and Pearson cross-correlation (CC) ≤0.01 in $SSN0$ and $SSN1$.
In Section ‘A generalizable structural effect’ we consider two additional systems, a network of LIF neurons with conductance-based synapses and a continuous firing rate model. The LIF network is
described in detail in Zajzon et al., 2019. Spike-triggered synaptic conductances are modeled as exponential functions, with fixed and equal conduction delays for all synapses. Key differences to the
current-based model include, in addition to the biologically more plausible synapse model, longer synaptic time constants and stronger input (see also Zajzon et al., 2019 and Supplementary file 3 for
the numerical values of all parameters).
The continuous rate model contains $N=3000$ nonlinear units, the dynamics of which are governed by:
(5) $\begin{array}{rl}\tau_{x}\dfrac{\mathrm{d}\mathbf{x}}{\mathrm{d}t}&=-\mathbf{x}+J\mathbf{r}+J^{\mathrm{in}}\mathbf{u}-\mathbf{b}^{\mathrm{rec}}+\sqrt{2\tau_{x}}\,\sigma_{\mathrm{X}}\,\boldsymbol{\xi}\\ \mathbf{r}&=0.5\left(1+\tanh\left(\mathbf{x}\right)\right)\end{array}$
where $x$ represents the activation and $r$ the output of all units, commonly interpreted as the synaptic current variable and the firing rate estimate, respectively. The rates $ri$ are obtained by
applying the nonlinear transfer function $tanh(xi)$, modified here to constrain the rates to the interval $[0,1]$. $τx$ is the neuronal time constant, $brec$ is a vector of individual neuronal bias terms
(i.e., a baseline activation), and $J$ and $Jin$ are the recurrent (including feedforward) and input weight matrices, respectively. These are constructed in the same manner as for the spiking
networks, such that the overall connectivity, including the input mapping onto $SSN0$, is identical for all three models. Input weights are drawn from a uniform distribution, while the rest follow a
normal distribution. Finally, $ξ$ is a vector of $N$ independent realizations of Gaussian white noise with zero mean and variance scaled by $σX$. The differential equations are integrated
numerically, using the Euler–Maruyama method with step $δt=1ms$, with specific parameter values given in Supplementary file 5.
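A minimal sketch of this integration scheme is shown below; the network size, weight statistics and input used in the usage example are placeholders rather than the modular topographic configuration described above.

```python
import numpy as np

def simulate_rate_network(J, J_in, u, b_rec, tau_x=20.0, sigma_x=0.1,
                          dt=1.0, rng=None):
    """Euler-Maruyama integration of Equation 5 (rates constrained to [0, 1])."""
    if rng is None:
        rng = np.random.default_rng(0)
    N, T = J.shape[0], u.shape[1]
    x = np.zeros(N)
    rates = np.zeros((N, T))
    for step in range(T):
        r = 0.5 * (1.0 + np.tanh(x))
        drift = (-x + J @ r + J_in @ u[:, step] - b_rec) / tau_x
        diffusion = np.sqrt(2.0 * dt / tau_x) * sigma_x * rng.standard_normal(N)
        x = x + dt * drift + diffusion
        rates[:, step] = r
    return rates

# toy example with placeholder sizes and random weights (not the study's J)
N, N_in, T = 300, 10, 500
rng = np.random.default_rng(1)
J = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
J_in = rng.uniform(0.0, 1.0, (N, N_in))
u = np.zeros((N_in, T)); u[3, :] = 1.0          # one active input channel
rates = simulate_rate_network(J, J_in, u, b_rec=np.zeros(N), rng=rng)
```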
Signal reconstruction task
We evaluate the system’s ability to recover a simple, continuous step signal from a noisy variation using linear combinations of the population responses in the different SSNs (Maass et al., 2002).
This is equivalent to probing the network’s ability to function as a denoising autoencoder (Bengio et al., 2013).
To generate the $NC$-dimensional input signal $u(t)$, we randomly draw stimuli from a predefined set $S={S1,S2,…,SNC}$ and set the corresponding channel to active for a fixed duration of 200 ms (
Figure 1a, left). This binary step signal $u(t)$ is also the target signal to be reconstructed. The effective input is obtained by adding a Gaussian white noise process with zero mean and variance
$σξ2$ to $u(t)$, and scaling the sum with the input rate $νin$. Rectifying the resulting signal leads to the final form of the continuous input signal $z(t)=[νin(u(t)+ξ(t))]+$. This allows us to
control the amount of noise in the input, and thus the task difficulty, through a single parameter $σξ$.
To deliver the input to the circuit, the analog signal $z(t)$ is converted into spike trains, with its amplitude serving as the rate of an inhomogeneous Poisson process generating independent spike
trains. We set the scaling amplitude to $νin=KEλνX$, modeling stochastic input with fixed rate $λνX$ from $KE=800$ neurons. If not otherwise specified, $λ=0.05$ holds, resulting in a mean firing
rate below 8 spks/sec in $SSN0$ (see Figure 2c).
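The input construction can be sketched in a few lines. The value $\nu_{in}=K_E\,\lambda\,\nu_X=800\times 0.05\times 12=480$ spks/sec follows from the parameters above; the stimulus sequence and noise level used in the example are otherwise illustrative.

```python
import numpy as np

def make_noisy_input(n_channels=10, n_steps=100, step_ms=200, dt=1.0,
                     sigma_xi=0.5, nu_in=480.0, rng=None):
    """Step signal u(t), noisy rectified drive z(t) = [nu_in (u + xi)]_+, and
    independent inhomogeneous-Poisson spike counts per time bin."""
    if rng is None:
        rng = np.random.default_rng(0)
    steps_per_stim = int(step_ms / dt)
    T = n_steps * steps_per_stim
    u = np.zeros((n_channels, T))
    stimuli = rng.integers(0, n_channels, n_steps)    # random stimulus sequence
    for i, s in enumerate(stimuli):
        u[s, i * steps_per_stim:(i + 1) * steps_per_stim] = 1.0
    xi = sigma_xi * rng.standard_normal(u.shape)      # Gaussian white noise
    z = np.clip(nu_in * (u + xi), 0.0, None)          # rectification [.]_+
    # inhomogeneous Poisson: expected spikes per bin = rate (spk/s) * dt (s)
    spikes = rng.poisson(z * dt * 1e-3)
    return u, z, spikes

u, z, spikes = make_noisy_input()
```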
Each input channel $k$ is mapped onto one of the $NC$ stimulus-specific sub-populations of excitatory and inhibitory neurons in the first (input) sub-network $SSN0$, chosen according to the procedure
described above (see also Figure 1a). This way, each stimulus $Sk$ is mapped onto a specific set of sub-populations in the different sub-networks, that is, the topographic map associated with $Sk$.
For each stimulus in the sequence, we sample the responses of the excitatory population in each $SSNi$ at fixed time points (once every ms) relative to stimulus onset. We record from the membrane
potentials $Vm$ as they represent a parameter-free and direct measure of the population state (Duarte et al., 2018; Uhlmann et al., 2017). The activity vectors are then gathered in a state matrix
$XSSNi∈RNE×T$, which is then used to train a linear readout to approximate the target output of the task (Lukoševičius and Jaeger, 2009). We divide the input data, containing a total of 100 stimulus
presentations (yielding $T=20,000$ samples), into a training and a testing set (80/20%), and perform the training using ridge regression (L2 regularization), with the regularization parameter chosen
by leave-one-out cross-validation on the training dataset.
Reconstruction performance is measured using the normalized root mean squared error (NRMSE). For this particular task, the effective delay in the build-up of optimal stimulus representations varies
greatly across the sub-networks. In order to close in on the optimal delay for each $SSNi$, we train the state matrix $XSSNi$ on a larger interval of delays and choose the one that minimizes the
error, averaged across multiple trials.
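A compact sketch of the readout training and error measure follows; plain ridge regression with a fixed regularization constant stands in here for the leave-one-out cross-validated procedure, and the random matrices in the usage example merely take the place of recorded membrane potentials and target signals.

```python
import numpy as np

def train_ridge_readout(X, Y, reg=1.0):
    """Linear readout W with W X ~ Y, L2-regularized (ridge regression).

    X: (n_neurons, n_samples) state matrix, Y: (n_outputs, n_samples) target.
    """
    n = X.shape[0]
    return Y @ X.T @ np.linalg.inv(X @ X.T + reg * np.eye(n))

def nrmse(y_true, y_pred):
    """Root mean squared error normalized by the target's standard deviation."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / np.std(y_true)

# toy usage with random data in place of recorded membrane potentials
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 2000))
Y = rng.standard_normal((10, 2000))
W = train_ridge_readout(X, Y, reg=1.0)
print(nrmse(Y, W @ X))
```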
In Section ‘Reconstruction and denoising of dynamical inputs’, we generalize the input to a sinusoidal signal $x(t)=sin(a⋅t)+cos(b⋅t)$, with parameters $a$ and $b$. From this, we obtain $u(t)$
through the sampling and discretization process described in the respective section, and compute the final input $z(t)=[νin(u(t)+ξ(t))]+$ as above.
Effective connectivity and stability analysis
To better understand the role of structural variations on the network’s dynamics, we determine the network’s effective connectivity matrix $W$ analytically by linear stability analysis around the
system’s stationary working points (see Appendix B for the complete derivations). The elements $wij∈W$ represent the integrated linear response of a target neuron $i$, with stationary rate $νi$, to
a small perturbation in the input rate $νj$ caused by a spike from presynaptic neuron $j$. In other words, $wij$ measures the average number of additional spikes emitted by a target neuron $i$ in
response to a spike from the presynaptic neuron $j$, and its relation to the synaptic weights is defined by Tetzlaff et al., 2012; Helias et al., 2013:
(6) $\begin{array}{rl}w_{ij}&=\dfrac{\partial\nu_{i}}{\partial\nu_{j}}=\tilde{\alpha}J_{ij}+\tilde{\beta}J_{ij}^{2}\\ \text{with}\quad\tilde{\alpha}&=\sqrt{\pi}\left(\tau_{\mathrm{m}}\nu_{i}\right)^{2}\dfrac{1}{\sigma_{i}}\left(f\left(y_{\theta}\right)-f\left(y_{\mathrm{r}}\right)\right)\\ \text{and}\quad\tilde{\beta}&=\sqrt{\pi}\left(\tau_{\mathrm{m}}\nu_{i}\right)^{2}\dfrac{1}{2\sigma_{i}^{2}}\left(f\left(y_{\theta}\right)y_{\theta}-f\left(y_{\mathrm{r}}\right)y_{\mathrm{r}}\right).\end{array}$
Note that in Figure 3 we ignore the contribution $β~$ resulting from the modulation in the input variance $σj2$ which is significantly smaller due to the additional factor $1/σi∼O(1/N)$. Importantly,
the effective connectivity matrix $W$ allows us to gain insights into the stability of the system by eigenvalue decomposition. For large random coupling matrices, the effective weight matrix has a
spectral radius $ρ=maxk(Re{λk})$ which is determined by the variances of $W$ (Rajan and Abbott, 2006). For inhibition-dominated systems, such as those we consider, there is a single negative
outlier representing the mean effective weight, given by the eigenvalue $λk*$ associated with the unit vector. The stability of the system is thus uniquely determined by the spectral radius $ρ$: values
smaller than unity indicate stable dynamics, whereas $ρ>1$ leads to unstable linearized dynamics.
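As a brief illustrative sketch (not the authors' analysis code), this stability criterion amounts to an eigenvalue computation on the effective connectivity matrix; the matrix below is a random placeholder standing in for the $W$ obtained from Equation 6.

```python
import numpy as np

def spectral_radius(W):
    """rho = max_k Re{lambda_k}; rho < 1 indicates stable linearized dynamics."""
    return np.max(np.linalg.eigvals(W).real)

# placeholder effective connectivity: weak, inhibition-dominated random matrix
rng = np.random.default_rng(0)
N = 500
W = rng.normal(loc=-0.001, scale=0.03 / np.sqrt(N), size=(N, N))
rho = spectral_radius(W)
print("stable" if rho < 1.0 else "unstable", round(rho, 3))
```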
For the mean-field analysis, the $NC$ sub-populations in each sub-network can be reduced to only two groups of neurons, the first one comprising all neurons of the stimulated SPs and the second one
comprising all neurons in all non-stimulated SPs. This is possible because (1) the firing rates of the excitatory and inhibitory neurons within one SP are identical, owing to homogeneous neuron
parameters and matching incoming connection statistics, and (2) all neurons in non-stimulated SPs have the same rate $νNS$ that is in general different from the rate of the stimulated SP $νS$. Here
we only sketch the main steps, with a detailed derivation given in Appendix B.
The mean inputs to the first sub-network can be obtained via
(7) $\begin{array}{rl}\mu^{\mathrm{S}}&=\left(1+\lambda\right)\mathcal{J}\nu_{\mathrm{X}}+\dfrac{1}{N_{\mathrm{C}}}\mathcal{J}\left(1+\gamma g\right)\nu^{\mathrm{S}}+\dfrac{N_{\mathrm{C}}-1}{N_{\mathrm{C}}}\mathcal{J}\left(1+\gamma g\right)\nu^{\mathrm{NS}},\\ \mu^{\mathrm{NS}}&=\mathcal{J}\nu_{\mathrm{X}}+\dfrac{1}{N_{\mathrm{C}}}\mathcal{J}\left(1+\gamma g\right)\nu^{\mathrm{S}}+\dfrac{N_{\mathrm{C}}-1}{N_{\mathrm{C}}}\mathcal{J}\left(1+\gamma g\right)\nu^{\mathrm{NS}},\end{array}$
where $γ=KI/KE$ and $\mathcal{J}=\tau_m K_E J$. Both equations are of the form
(8) $\kappa\nu=\mu-I,$
where $κ$ is the effective self-coupling of a group of neurons with rate $ν$ and input μ, and $I$ denotes the external inputs from other groups. Equation 8 describes a linear relationship between the
rate $ν$ and the input μ. To find a self-consistent solution for the rates $νS$ and $νNS$, the above equations need to be solved numerically, taking into account in addition the f–I curve $ν(μ)$ of
the neurons that in the case of LIF model neurons also depends on the variance $σ2$ of inputs. The latter can be obtained analogous to the mean input μ (see Appendix B). Note that for general
nonlinearity $ν(μ)$ there is no analytical closed-form solution for the fixed points.
Starting from $SSN1$, networks are connected in a fixed pattern such that the rate $νi$ in $SSNi$ also depends on the excitatory input from the previous sub-network $SSNi-1$ with rate $νi-1$. For a
fixed point, we have $νi=νi-1$ (Toyoizumi, 2012). In this case, we can effectively group together stimulated/non-stimulated neurons in successive sub-networks and re-group equations for the mean
input in the limit of many sub-networks, obtaining the simplified description (details see Appendix B)
(9) $\mu^{\mathrm{S}}=\alpha\mathcal{J}\nu_{\mathrm{X}}+\kappa_{\mathrm{S},\mathrm{S}}\,\nu^{\mathrm{S}}+\kappa_{\mathrm{S},\mathrm{NS}}\,\nu^{\mathrm{NS}}$

(10) $\mu^{\mathrm{NS}}=\alpha\mathcal{J}\nu_{\mathrm{X}}+\kappa_{\mathrm{NS},\mathrm{S}}\,\nu^{\mathrm{S}}+\kappa_{\mathrm{NS},\mathrm{NS}}\,\nu^{\mathrm{NS}}$
The scaling terms of the firing rates incorporate the recurrent and feedforward contributions from the stimulated and non-stimulated groups of neurons. They depend solely on some fixed parameters of
the system, including modularity $m$ (see Appendix B). Importantly, Equations 9 and 10 have the same linear form as Equation 8 and can be solved numerically as described above.
Again, for general nonlinear $ν(μ)$ there is no closed-form analytical solution, but see below for a piecewise linear activation function $ν(μ)$. The numerical solutions for fixed points are
obtained using the root finding algorithm root of the scipy.optimize package (Virtanen et al., 2020). The stability of the fixed points is obtained by inserting the corresponding firing rates into
the effective connectivity Equation 6. On the level of stimulated and non-stimulated sub-populations, the effective connectivity matrix reads
(11) $\dfrac{1}{\tau_{\mathrm{m}}}\left(\begin{array}{cc}\kappa_{\mathrm{S},\mathrm{S}}\left(m\right)\tilde{\alpha}\left(\nu^{\mathrm{S}}\right)&\kappa_{\mathrm{S},\mathrm{NS}}\left(m\right)\tilde{\alpha}\left(\nu^{\mathrm{NS}}\right)\\ \kappa_{\mathrm{NS},\mathrm{S}}\left(m\right)\tilde{\alpha}\left(\nu^{\mathrm{S}}\right)&\kappa_{\mathrm{NS},\mathrm{NS}}\left(m\right)\tilde{\alpha}\left(\nu^{\mathrm{NS}}\right)\end{array}\right),$
from which we obtain the maximum eigenvalue $ρ$, which for stable fixed points must be smaller than 1.
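The numerical procedure can be sketched compactly. The snippet below solves a two-population self-consistency problem of the form of Equations 9 and 10 with scipy.optimize.root; note that the sigmoidal transfer function and the coupling values $\kappa$ are generic placeholders, not the LIF f–I curve or the expressions derived in Appendix B.

```python
import numpy as np
from scipy.optimize import root

def transfer(mu, nu_max=150.0, mu_half=200.0, slope=0.02):
    """Generic sigmoidal stand-in for the rate transfer function nu(mu)."""
    return nu_max / (1.0 + np.exp(-slope * (mu - mu_half)))

def self_consistency(nu, kappa, mu_x):
    """Residuals of nu = transfer(mu(nu)) for the stimulated and
    non-stimulated groups, with mu(nu) linear as in Eqs. (9)-(10)."""
    mu = mu_x + kappa @ nu
    return nu - transfer(mu)

# placeholder couplings: kappa[i, j] couples group j's rate into group i's input
kappa = np.array([[1.5, -0.5],
                  [-0.5, -1.0]])
mu_x = np.array([30.0, 30.0])      # background drive (illustrative value)

# different initial guesses can converge to different fixed points
for guess in ([0.0, 0.0], [150.0, 0.0]):
    sol = root(self_consistency, x0=guess, args=(kappa, mu_x))
    if sol.success:
        print(np.round(sol.x, 2))
```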
The structure of fixed points for the stimulated sub-population (see discussion in ‘Modularity as a bifurcation parameter’) can furthermore be intuitively understood by studying the potential
landscape of the system. The potential $U$ is thereby defined via the conservative force $F=-\mathrm{d}U/\mathrm{d}\nu^{\mathrm{S}}=-\nu^{\mathrm{S}}+\nu(\mu,\sigma^{2})$ that drives the system toward its fixed points via the equation of motion $\mathrm{d}\nu^{\mathrm{S}}/\mathrm{d}t=F$ (Wong and Wang, 2006; Litwin-Kumar and Doiron, 2012; Schuecker et al., 2017). Note that μ and $σ2$ are again functions of $νS$ and $νNS$, where the latter is the self-consistent rate of the
non-stimulated sub-populations for given rate $νS$ of the stimulated sub-population, $νNS=νNS(νS)$ (details see Appendix B).
Multiple inputs and correlation-based similarity score
In Figure 9, we consider two stimuli $S1$ and $S2$ to be active simultaneously for 10 s. Let $SP1$ and $SP2$ be the two corresponding SPs in each sub-network. The firing rate of each SP is
estimated from spike counts in time bins of 10 ms and smoothed with a Savitzky-Golay filter (length 21 and polynomial order 4). We compute a similarity score based on the correlation between these
rates, scaled by the ratio of the input intensities $λ2/λ1$ (with $λ1$ fixed). This scaling is meant to introduce a gradient in the similarity score based on the firing rate differences, ensuring
that high (absolute) scores require comparable activity levels in addition to strong correlations. To ensure that both stimuli are decodable where appropriate, we set the score to 0 when the
difference between the rate of $SP2$ and the non-stimulated SPs was <1 spks/sec ($SP1$ had significantly higher rates). The curves in Figure 9c mark the regime boundaries: coexistence (Co-Ex) where the
score is >0.1 (red curve); WLC where the score is <−0.1 (blue); and WTA (gray) where the score lies in the interval (−0.1, 0.1) and either $λ2/λ1<0.5$ holds or the score is 0. While the Co-Ex region is a
dynamical regime that only occurs in the initial sub-networks (Figure 9d), the WTA and WLC regimes persist and can be understood again with the help of a potential $U$, which is in this case a
function of the rates of the two SPs (details see Appendix B).
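A minimal sketch of this score is given below (bin width, filter settings and thresholds follow the values stated above; the synthetic spike counts in the usage example are purely illustrative).

```python
import numpy as np
from scipy.signal import savgol_filter

def similarity_score(counts_sp1, counts_sp2, rate_ns, ratio, bin_s=0.01):
    """Correlation between the smoothed rates of the two stimulus-specific
    sub-populations, scaled by the input-intensity ratio lambda2/lambda1.

    counts_sp1/2 : per-neuron-averaged spike counts per 10 ms bin for SP1, SP2
    rate_ns      : mean rate (spks/sec) of the non-stimulated sub-populations
    """
    r1 = savgol_filter(counts_sp1 / bin_s, window_length=21, polyorder=4)
    r2 = savgol_filter(counts_sp2 / bin_s, window_length=21, polyorder=4)
    score = ratio * np.corrcoef(r1, r2)[0, 1]
    # if SP2 is indistinguishable from the non-stimulated pools, set score to 0
    if np.mean(r2) - rate_ns < 1.0:
        score = 0.0
    return score

# toy usage with synthetic spike counts (illustrative only)
rng = np.random.default_rng(0)
c1 = rng.poisson(0.3, 1000)
c2 = rng.poisson(0.3, 1000)
print(similarity_score(c1, c2, rate_ns=2.0, ratio=0.9))
```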
Numerical simulations and analysis
All numerical simulations were conducted using the Neural Microcircuit Simulation and Analysis Toolkit (NMSAT) v0.2 (Duarte et al., 2017), a high-level Python framework for creating, simulating and
evaluating complex, spiking neural microcircuits in a modular fashion. It builds on the PyNEST interface for NEST (Gewaltig and Diesmann, 2007), which provides the core simulation engine. To ensure
the reproduction of all the numerical experiments and figures presented in this study, and abide by the recommendations proposed in Pauli et al., 2018, we provide a complete code package that
implements project-specific functionality within NMSAT (see Data availability) using NEST 2.18.0 (Jordan et al., 2019).
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Constraints on feedforward connectivity
This section expands on the limitations arising from the definitions of topographic modularity and map sizes used in this study. By imposing a fixed connection density on the feedforward connection
matrices, the projection probabilities between neurons tuned to the same $(p_{\mathrm{c}})$ and different $(p_{0})$ stimuli are uniquely determined by the modularity $m$ and the parameters $d_{0}$ and $δ$, which
control the size of stimulus-specific sub-populations (see Materials and methods). For notational simplicity, here we consider the merged excitatory and inhibitory sub-populations tuned to a
particular stimulus in a given sub-network $SSNi$, with a total size $Ci=CiE+CiI$.
Under the constraints applied in this work, the total density of a feedforward adjacency matrix between $SSNi$ and $SSNi+1$ can be computed as:
(12) ${\sigma }_{\mathrm{i}}=\frac{{p}_{\mathrm{c}}{U}_{\mathrm{c}}^{\mathrm{i}}+{p}_{0}{U}_{0}^{\mathrm{i}}}{{N}^{2}}$
where $U_{\mathrm{c}}^{i}$ and $U_{0}^{i}$ are the numbers of realizable connections between similarly and differently tuned sub-populations, respectively. Since $U_{\mathrm{c}}^{i}=N^{2}-U_{0}^{i}$, we can simplify the notation and focus only on $U_{0}^{i}$. We distinguish between the cases of non-overlapping and overlapping stimulus-specific sub-populations:
$U_{0}^{i}=\begin{cases}N^{2}-N_{\mathrm{C}}\,C_{i}C_{i+1} & \text{if } d_{i}<\frac{1}{N_{\mathrm{C}}}\\ \frac{N_{\mathrm{C}}}{N_{\mathrm{C}}-1}\left(N-C_{i}\right)\left(N-C_{i+1}\right) & \text{if } d_{i}\ge \frac{1}{N_{\mathrm{C}}}\end{cases}$
where each potential synapse is counted only once, regardless of whether the involved neurons belong to any or multiple overlapping sub-populations. This ensures consistency with the definitions of
the probabilities $pc$ and $p0$. Alternatively, we can express $U0i$ as:
$\begin{array}{cc}\hfill {U}_{0}^{\mathrm{i}}=\frac{{N}^{2}{N}_{\mathrm{stim}}}{{N}_{\mathrm{stim}}-1}\left(1-i\delta -{d}_{0}\right)\left(1-\left(i-1\right)\delta -{d}_{0}\right)& \end{array}$
For the case with no overlap, we can derive an additional constraint on the minimum sub-population size $C_{i}$ for the required density $σ_{i}$ to be satisfied, which we define in relation to the total
number of sub-populations $NC$:
(13) ${d}_{\mathrm{i}}\ge \sqrt{\frac{{\sigma }_{\mathrm{i}}}{{N}_{\mathrm{C}}}}$
The equality holds in the case of $m=1$ and all-to-all feedforward connectivity between similarly tuned sub-populations, that is, $pc=1$.
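For concreteness, these bookkeeping formulas (Equations 12 and 13, no-overlap case) can be transcribed directly into a small Python sketch; the parameter values one would pass in are arbitrary placeholders rather than the values used in the study.

import numpy as np

def feedforward_density(N, N_C, C_i, C_ip1, p_c, p_0):
    """Total density of the feedforward adjacency matrix (Eq. 12), no-overlap case."""
    U_0 = N**2 - N_C * C_i * C_ip1   # connections between differently tuned sub-populations
    U_c = N**2 - U_0                  # connections between similarly tuned sub-populations
    return (p_c * U_c + p_0 * U_0) / N**2

def min_subpopulation_fraction(sigma_i, N_C):
    """Lower bound on d_i = C_i / N required to realize density sigma_i (Eq. 13)."""
    return np.sqrt(sigma_i / N_C)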
Mean-field analysis of network dynamics
For an analytical investigation of the role of topographic modularity on the network dynamics, we used mean-field theory (Fourcaud and Brunel, 2002; Helias et al., 2013; Schuecker et al., 2015).
Under the assumptions that each neuron receives a large number of small amplitude inputs at every time step, the synaptic time constants $τs$ are small compared to the membrane time constant $τm$,
and that the network activity is sufficiently asynchronous and irregular, we can make use of theoretical results obtained from the diffusion approximation of the LIF neuron model to determine the
stationary population dynamics. The equations in this section were partially solved using a modified version of the LIF Meanfield Tools library (Layer et al., 2020).
Stationary firing rates and fixed points
In the circumstances described above, the total synaptic input to each neuron can be replaced by a Gaussian white noise process (independent across neurons) with mean $μ(t)$ and variance $σ2(t)$.
In the stationary state, these quantities, along with the firing rates of each afferent, can be well approximated by their constant time average. The stationary firing rate of the LIF neuron in
response to such input is:
(14) $\nu=\left(\tau_{\text{ref}}+\sqrt{\pi}\,\tau_{\mathrm{eff}}\int_{y_{\mathrm{r}}}^{y_{\theta}}\exp\left(u^{2}\right)\left[1+\operatorname{erf}(u)\right]du\right)^{-1}$
where erf is the error function and the integration limits are defined as $y_{\mathrm{r}}=(V_{\mathrm{reset}}-\mu)/\sigma+\frac{q}{2}\sqrt{\tau_{\mathrm{s}}/\tau_{\mathrm{eff}}}$ and $y_{\theta}=(\theta-\mu)/\sigma+\frac{q}{2}\sqrt{\tau_{\mathrm{s}}/\tau_{\mathrm{eff}}}$, with $q=\sqrt{2}\,|\zeta(1/2)|$ and Riemann zeta function $\zeta$ (see Fourcaud and Brunel, 2002, Eq. 4.33). As we will see below, the mean $\mu$ and variance $\sigma^{2}$ of the input also depend on the stationary firing rate $\nu$, rendering Equation 14 an implicit equation that needs to be
solved self-consistently using fixed-point iteration.
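A minimal Python sketch of such a self-consistency loop is given below. It is a rough illustration only: the parameter values and helper names are placeholders, the closures mu_of_nu and sigma_of_nu stand for the input statistics of Equation 15, and $\tau_{\mathrm{eff}}$ is simply taken to be $\tau_{\mathrm{m}}$ here.

import numpy as np
from scipy.integrate import quad
from scipy.special import erf

ZETA_HALF = 1.4603545  # |zeta(1/2)|, hard-coded for convenience

def siegert_rate(mu, sigma, tau_m=0.02, tau_ref=0.002, tau_s=0.002,
                 V_reset=0.0, theta=15.0):
    """Stationary LIF rate of Eq. 14 for a given input mean and standard deviation."""
    shift = 0.5 * np.sqrt(2.0) * ZETA_HALF * np.sqrt(tau_s / tau_m)
    y_r = (V_reset - mu) / sigma + shift
    y_th = (theta - mu) / sigma + shift
    integral, _ = quad(lambda u: np.exp(u**2) * (1.0 + erf(u)), y_r, y_th)
    return 1.0 / (tau_ref + np.sqrt(np.pi) * tau_m * integral)

def self_consistent_rate(mu_of_nu, sigma_of_nu, nu_init=1.0, n_iter=200, mix=0.3):
    """Damped fixed-point iteration nu <- Phi(mu(nu), sigma(nu))."""
    nu = nu_init
    for _ in range(n_iter):
        nu_target = siegert_rate(mu_of_nu(nu), sigma_of_nu(nu))
        nu = (1.0 - mix) * nu + mix * nu_target
    return nu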
For simplicity, throughout the mean-field analyses we consider perfectly partitioned networks where each neuron belongs to exactly one topographic map, that is, to one of the $NC$ stimulus-specific,
identically sized sub-populations SP (no overlap condition). We denote the firing rate of a neuron in the currently stimulated SP (receiving stimulus input in $SSN0$) in sub-network $SSNi$ by $νiS$,
and by $νiNS$ that of neurons not associated with the stimulated pathway. Since the firing rates of excitatory and inhibitory neurons are equal (due to identical synaptic time constants and input
statistics), we can write the constant mean synaptic input to neurons in the input sub-network as
(15) $\begin{aligned}\mu_{0}^{\mathrm{S}} &= \Bigl(\overbrace{K_{\mathrm{X}}J_{\mathrm{X}}\nu_{\mathrm{X}}}^{\text{noise}}+\overbrace{\left(\tfrac{1}{N_{\mathrm{C}}}K_{\mathrm{E}}J_{\mathrm{E}}+\tfrac{1}{N_{\mathrm{C}}}K_{\mathrm{I}}J_{\mathrm{I}}\right)\nu_{0}^{\mathrm{S}}}^{\text{rec. stimulated}}+\overbrace{\left(N_{\mathrm{C}}-1\right)\left(\tfrac{1}{N_{\mathrm{C}}}K_{\mathrm{E}}J_{\mathrm{E}}+\tfrac{1}{N_{\mathrm{C}}}K_{\mathrm{I}}J_{\mathrm{I}}\right)\nu_{0}^{\mathrm{NS}}}^{\text{rec. non-stimulated}}+\overbrace{J_{\mathrm{X}}\nu_{\mathrm{in}}}^{\text{stimulus}}\Bigr)\tau_{\mathrm{m}}\\ \mu_{0}^{\mathrm{NS}} &= \Bigl(\overbrace{K_{\mathrm{X}}J_{\mathrm{X}}\nu_{\mathrm{X}}}^{\text{noise}}+\overbrace{\left(\tfrac{1}{N_{\mathrm{C}}}K_{\mathrm{E}}J_{\mathrm{E}}+\tfrac{1}{N_{\mathrm{C}}}K_{\mathrm{I}}J_{\mathrm{I}}\right)\nu_{0}^{\mathrm{S}}}^{\text{rec. stimulated}}+\overbrace{\left(N_{\mathrm{C}}-1\right)\left(\tfrac{1}{N_{\mathrm{C}}}K_{\mathrm{E}}J_{\mathrm{E}}+\tfrac{1}{N_{\mathrm{C}}}K_{\mathrm{I}}J_{\mathrm{I}}\right)\nu_{0}^{\mathrm{NS}}}^{\text{rec. non-stimulated}}\Bigr)\tau_{\mathrm{m}}\end{aligned}$
The variances $(σ0S)2$ and $(σ0NS)2$ can be obtained by squaring each weight $J$ in the above equation. To derive these equations for the deeper sub-networks $SSNi>0$, it is helpful to include
auxiliary variables $KS$ and $KNS$, representing the number of feedforward inputs to a neuron in $SSNi$ from its own SP in $SSNi-1$, and from one different SP (there are $NC-1$ such sub-populations),
respectively. Both $KS$ and $KNS$ are uniquely defined by the modularity $m$ and projection density $d$, and $KNS=(1-m)KS=(1-m)(1-α)KE$ holds as well. The mean synaptic inputs to the neurons in
the deeper sub-networks can thus be written as:
(16) $\begin{aligned}\mu_{i}^{\mathrm{S}} &= \Bigl(\overbrace{\alpha K_{\mathrm{X}}J_{\mathrm{X}}\nu_{\mathrm{X}}}^{\text{noise}}+\overbrace{\left(\tfrac{1}{N_{\mathrm{C}}}K_{\mathrm{E}}J_{\mathrm{E}}+\tfrac{1}{N_{\mathrm{C}}}K_{\mathrm{I}}J_{\mathrm{I}}\right)\nu_{i}^{\mathrm{S}}}^{\text{rec. stimulated}}+\overbrace{\left(N_{\mathrm{C}}-1\right)\left(\tfrac{1}{N_{\mathrm{C}}}K_{\mathrm{E}}J_{\mathrm{E}}+\tfrac{1}{N_{\mathrm{C}}}K_{\mathrm{I}}J_{\mathrm{I}}\right)\nu_{i}^{\mathrm{NS}}}^{\text{rec. non-stimulated}}\\ &\qquad+\overbrace{K_{\mathrm{S}}J_{\mathrm{E}}\nu_{i-1}^{\mathrm{S}}}^{\text{stimulated FF}}+\overbrace{\left(N_{\mathrm{C}}-1\right)K_{\mathrm{NS}}J_{\mathrm{E}}\nu_{i-1}^{\mathrm{NS}}}^{\text{non-stimulated FF}}\Bigr)\tau_{\mathrm{m}}\\ \mu_{i}^{\mathrm{NS}} &= \Bigl(\overbrace{\alpha K_{\mathrm{X}}J_{\mathrm{X}}\nu_{\mathrm{X}}}^{\text{noise}}+\overbrace{\left(\tfrac{1}{N_{\mathrm{C}}}K_{\mathrm{E}}J_{\mathrm{E}}+\tfrac{1}{N_{\mathrm{C}}}K_{\mathrm{I}}J_{\mathrm{I}}\right)\nu_{i}^{\mathrm{S}}}^{\text{rec. stimulated}}+\overbrace{\left(N_{\mathrm{C}}-1\right)\left(\tfrac{1}{N_{\mathrm{C}}}K_{\mathrm{E}}J_{\mathrm{E}}+\tfrac{1}{N_{\mathrm{C}}}K_{\mathrm{I}}J_{\mathrm{I}}\right)\nu_{i}^{\mathrm{NS}}}^{\text{rec. non-stimulated}}\\ &\qquad+K_{\mathrm{NS}}J_{\mathrm{E}}\nu_{i-1}^{\mathrm{S}}+\left(\left(N_{\mathrm{C}}-2\right)K_{\mathrm{NS}}+K_{\mathrm{S}}\right)J_{\mathrm{E}}\nu_{i-1}^{\mathrm{NS}}\Bigr)\tau_{\mathrm{m}}\end{aligned}$
Again, one can obtain the variances by squaring each weight $J$. The stationary firing rates for the stimulated and non-stimulated sub-populations in all sub-networks are then found by first solving
Equations 14 and 15 for the first sub-network and then Equations 14 and 16 successively for the deeper sub-networks.
For very deep networks, one can ask whether the firing rates approach fixed points across sub-networks. If there are multiple fixed points, the initial condition, that is, the externally stimulated activity of the sub-populations in the first sub-network, determines which of the fixed points the rates evolve toward, in a similar spirit to recurrent networks after a start-up transient. At a fixed point we have $\nu_{i-1}=\nu_{i}$. In effect, we can re-group terms in Equation 16 that have the same rates, such that formally we obtain an effective new group of neurons from the excitatory and
inhibitory SPs of the current sub-network and the corresponding excitatory SPs of the previous sub-network, as indicated by the square brackets in the following formulas:
(17) $\begin{aligned}\mu^{\mathrm{S}} &= \alpha\beta\mathcal{J}\nu_{\mathrm{X}}+\underbrace{\mathcal{J}\left[\frac{1}{N_{\mathrm{C}}}\left(1+\gamma g\right)+\left(1-\alpha\right)\frac{1}{\left(N_{\mathrm{C}}-1\right)\left(1-m\right)+1}\right]}_{\kappa_{\mathrm{S,S}}}\nu^{\mathrm{S}}\\ &\quad+\underbrace{\mathcal{J}\left[\frac{N_{\mathrm{C}}-1}{N_{\mathrm{C}}}\left(1+\gamma g\right)+\left(1-\alpha\right)\frac{\left(N_{\mathrm{C}}-1\right)\left(1-m\right)}{\left(N_{\mathrm{C}}-1\right)\left(1-m\right)+1}\right]}_{\kappa_{\mathrm{S,NS}}}\nu^{\mathrm{NS}}\end{aligned}$
(18) $\begin{aligned}\mu^{\mathrm{NS}} &= \alpha\beta\mathcal{J}\nu_{\mathrm{X}}+\underbrace{\mathcal{J}\left[\frac{1}{N_{\mathrm{C}}}\left(1+\gamma g\right)+\left(1-\alpha\right)\frac{1-m}{\left(N_{\mathrm{C}}-1\right)\left(1-m\right)+1}\right]}_{\kappa_{\mathrm{NS,S}}}\nu^{\mathrm{S}}\\ &\quad+\underbrace{\mathcal{J}\left[\frac{N_{\mathrm{C}}-1}{N_{\mathrm{C}}}\left(1+\gamma g\right)+\left(1-\alpha\right)\frac{1+\left(N_{\mathrm{C}}-2\right)\left(1-m\right)}{\left(N_{\mathrm{C}}-1\right)\left(1-m\right)+1}\right]}_{\kappa_{\mathrm{NS,NS}}}\nu^{\mathrm{NS}}\end{aligned}$
with $\beta=K_{\mathrm{X}}/K_{\mathrm{E}}$, $\gamma=K_{\mathrm{I}}/K_{\mathrm{E}}$, and $\mathcal{J}=\tau_{\mathrm{m}}K_{\mathrm{E}}J_{\mathrm{E}}$.
For the parameters $g$ and $γ$ chosen here, $κS,NS$, $κNS,S$, and $κNS,NS$ in Equations 17 and 18 are always negative for any modularity $m$ due to the large recurrent inhibition. Therefore, for the
non-stimulated group, $κ<0$ in Equation 8 (see main text), such that one always finds a single fixed point, which, as desired, is at a low rate. Interestingly, the excitatory feedforward connections
can switch the sign of $κS,S$ from negative to positive for large values of $m$, thereby rendering the active group effectively excitatory, leading to a saddle-node bifurcation and the emergence of a
stable high-activity fixed point (see Figure 7b in the main text).
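The sign switch of $\kappa_{\mathrm{S,S}}$ can be illustrated by tabulating the coupling coefficients of Equations 17 and 18 as functions of the modularity. The Python sketch below uses arbitrary placeholder parameters (J, alpha, gamma, g, N_C), not the values used in the study, and is only meant to show how the sign change can be located numerically.

import numpy as np

def kappas(m, J=1.0, alpha=0.25, gamma=0.25, g=-8.0, N_C=10):
    """Effective couplings of Eqs. 17 and 18 as functions of the modularity m."""
    ff = (1.0 - alpha) / ((N_C - 1) * (1.0 - m) + 1.0)   # feedforward share
    rec = (1.0 + gamma * g) / N_C                         # recurrent share per sub-population
    k_ss = J * (rec + ff)
    k_sns = J * ((N_C - 1) * rec + (N_C - 1) * (1.0 - m) * ff)
    k_nss = J * (rec + (1.0 - m) * ff)
    k_nsns = J * ((N_C - 1) * rec + (1.0 + (N_C - 2) * (1.0 - m)) * ff)
    return k_ss, k_sns, k_nss, k_nsns

# locate where kappa_SS changes sign as m increases
ms = np.linspace(0.0, 1.0, 1001)
k_ss = np.array([kappas(m)[0] for m in ms])
m_switch = ms[np.argmax(k_ss > 0)] if np.any(k_ss > 0) else None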
The structure of fixed points can also be understood by studying the potential landscape of the system: Equation 14 can be regarded as the fixed-point solution of the following evolution equations
for the stimulated and non-stimulated sub-populations (Wong and Wang, 2006; Schuecker et al., 2017)
(19) $\tau_{\mathrm{S}}\frac{d\nu^{\mathrm{S}}}{dt}=-\nu^{\mathrm{S}}+\Phi_{\mathrm{S}}\left(\nu^{\mathrm{S}},\nu^{\mathrm{NS}}\right),$
(20) $\tau_{\mathrm{NS}}\frac{d\nu^{\mathrm{NS}}}{dt}=-\nu^{\mathrm{NS}}+\Phi_{\mathrm{NS}}\left(\nu^{\mathrm{S}},\nu^{\mathrm{NS}}\right),$
where $ΦS$ and $ΦNS$ are defined via the right-hand side of Equation 14 with $μS$ and $μNS$ inserted as defined in Equations 17 and 18 (and likewise for $σS$ and $σNS$). Due to the asymmetry in
connections between stimulated and non-stimulated sub-populations, the right-hand side of Equations 19 and 20 cannot be interpreted as a conservative force. Following the idea of effective response
functions (Mascaro and Amit, 1999), a potential $U(νS)$ for the stimulated sub-population alone can, however, be defined by inserting the solution $νNS=f(νS)$ of Equation 20 into Equation 19
(21) $\tau_{\mathrm{S}}\frac{d\nu^{\mathrm{S}}}{dt}=-\nu^{\mathrm{S}}+\Phi_{\mathrm{S}}\left(\nu^{\mathrm{S}},f\left(\nu^{\mathrm{S}}\right)\right)$
and interpreting the right-hand side as a conservative force $F=-\frac{dU}{d\nu^{\mathrm{S}}}$ (Litwin-Kumar and Doiron, 2012). The potential then follows from integration as
(22) $U\left(\nu^{\mathrm{S}}\right)-U\left(0\right)=\frac{1}{2}\left(\nu^{\mathrm{S}}\right)^{2}-\int_{0}^{\nu^{\mathrm{S}}}\Phi_{\mathrm{S}}\left(\nu,f\left(\nu\right)\right)\,d\nu,$
where $U(0)$ is an inconsequential constant. We solved the latter integral numerically using the scipy.integrate.trapz function of SciPy (Virtanen et al., 2020). The minima and maxima of the
resulting potential correspond to locally stable and unstable fixed points, respectively. Note that while this single-population potential is useful to study the structure of fixed points, the full
dynamics of all populations and their global stability cannot be straightforwardly inferred from this reduced picture (Mascaro and Amit, 1999; Rost et al., 2018), here for two reasons: (1) for spiking networks, Equations 19 and 20 do not describe the true dynamics of the mean activity; their right-hand sides only define the stationary-state solution; (2) the global stability of fixed points also depends on the time constants of all sub-populations' mean activities (here $τ_{\mathrm{S}}$ and $τ_{\mathrm{NS}}$), but the temporal dynamics of the non-stimulated sub-populations are neglected here.
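A compact way to evaluate Equation 22 numerically is a cumulative trapezoidal integration over a rate grid. The sketch below assumes a user-supplied callable phi_s(nu) returning $\Phi_{\mathrm{S}}(\nu, f(\nu))$; the names are placeholders and the integration is a plain trapezoidal rule rather than the exact routine used in the paper. Local minima of the returned array correspond to stable fixed points, local maxima to unstable ones.

import numpy as np

def potential_1d(phi_s, nu_grid):
    """U(nu^S) - U(0) from Eq. 22 on a grid of rates nu_grid (starting at 0)."""
    phi_vals = np.array([phi_s(nu) for nu in nu_grid])
    # cumulative trapezoidal integral of Phi_S from 0 up to each grid point
    increments = 0.5 * (phi_vals[1:] + phi_vals[:-1]) * np.diff(nu_grid)
    integral = np.concatenate(([0.0], np.cumsum(increments)))
    return 0.5 * nu_grid**2 - integral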
Mean-field analysis for two input streams
In the case of two simultaneously active stimuli (see Section ‘Input integration and multi-stability’), if the stimulated group 1 is in the high-activity state with rate $νS1$, the second stimulated
group 2 will receive an additional non-vanishing input of the form
(23) $\left[\frac{1}{N_{\mathrm{C}}}\left(1+\gamma g\right)+\left(1-\alpha\right)\frac{1-m}{\left(N_{\mathrm{C}}-1\right)\left(1-m\right)+1}\right]\nu^{\mathrm{S}1}<0,$
which is negative for all values of $m$ and can therefore lead to the silencing of group 2. If the stimuli are similarly strong, network fluctuations can dynamically switch the roles of the
stimulated groups 1 and 2.
The dynamics and fixed-point structure in deep sub-networks can be studied using a two-dimensional potential landscape that is defined via the following evolution equations
(24) $\frac{d\nu^{\mathrm{S}1}}{dt}=-\nu^{\mathrm{S}1}+\Phi_{\mathrm{S}1}\left(\nu^{\mathrm{S}1},\nu^{\mathrm{S}2},f\left(\nu^{\mathrm{S}1},\nu^{\mathrm{S}2}\right)\right)$
(25) $\frac{d\nu^{\mathrm{S}2}}{dt}=-\nu^{\mathrm{S}2}+\Phi_{\mathrm{S}2}\left(\nu^{\mathrm{S}1},\nu^{\mathrm{S}2},f\left(\nu^{\mathrm{S}1},\nu^{\mathrm{S}2}\right)\right)$
where $f(νS1,νS2)=νNS$ is the fixed point of the non-stimulated sub-populations for given rates $νS1,νS2$ of the two stimulated sub-populations, respectively. The functions $ΦS1$ and $ΦS2$ are again
defined via the right-hand side of Equation 14 with $μ^{\mathrm{S}1}$, $μ^{\mathrm{S}2}$, and $μ^{\mathrm{NS}}$ inserted, which are defined as follows (the derivation is analogous to the single-input case):
(26) $\begin{aligned}\mu^{\mathrm{S}1} &= \alpha\mathcal{J}\nu_{\mathrm{X}}+\underbrace{\mathcal{J}\left[\frac{1}{N_{\mathrm{C}}}\left(1+\gamma g\right)+\left(1-\alpha\right)\frac{1}{\left(N_{\mathrm{C}}-1\right)\left(1-m\right)+1}\right]}_{\kappa_{\mathrm{S1,S1}}}\nu^{\mathrm{S}1}\\ &\quad+\underbrace{\mathcal{J}\left[\frac{1}{N_{\mathrm{C}}}\left(1+\gamma g\right)+\left(1-\alpha\right)\frac{1-m}{\left(N_{\mathrm{C}}-1\right)\left(1-m\right)+1}\right]}_{\kappa_{\mathrm{S1,S2}}}\nu^{\mathrm{S}2}\\ &\quad+\underbrace{\mathcal{J}\left[\frac{N_{\mathrm{C}}-2}{N_{\mathrm{C}}}\left(1+\gamma g\right)+\left(1-\alpha\right)\frac{\left(N_{\mathrm{C}}-2\right)\left(1-m\right)}{\left(N_{\mathrm{C}}-1\right)\left(1-m\right)+1}\right]}_{\kappa_{\mathrm{S1,NS}}}\nu^{\mathrm{NS}}\end{aligned}$
(27) $\begin{aligned}\mu^{\mathrm{S}2} &= \alpha\mathcal{J}\nu_{\mathrm{X}}+\underbrace{\mathcal{J}\left[\frac{1}{N_{\mathrm{C}}}\left(1+\gamma g\right)+\left(1-\alpha\right)\frac{1-m}{\left(N_{\mathrm{C}}-1\right)\left(1-m\right)+1}\right]}_{\kappa_{\mathrm{S2,S1}}}\nu^{\mathrm{S}1}\\ &\quad+\underbrace{\mathcal{J}\left[\frac{1}{N_{\mathrm{C}}}\left(1+\gamma g\right)+\left(1-\alpha\right)\frac{1}{\left(N_{\mathrm{C}}-1\right)\left(1-m\right)+1}\right]}_{\kappa_{\mathrm{S2,S2}}}\nu^{\mathrm{S}2}\\ &\quad+\underbrace{\mathcal{J}\left[\frac{N_{\mathrm{C}}-2}{N_{\mathrm{C}}}\left(1+\gamma g\right)+\left(1-\alpha\right)\frac{\left(N_{\mathrm{C}}-2\right)\left(1-m\right)}{\left(N_{\mathrm{C}}-1\right)\left(1-m\right)+1}\right]}_{\kappa_{\mathrm{S1,NS}}}\nu^{\mathrm{NS}}\end{aligned}$
(28) $\mu^{\mathrm{NS}}=\alpha\mathcal{J}\nu_{\mathrm{X}}+\underbrace{\mathcal{J}\left[\frac{1}{N_{\mathrm{C}}}\left(1+\gamma g\right)+\left(1-\alpha\right)\frac{1-m}{\left(N_{\mathrm{C}}-1\right)\left(1-m\right)+1}\right]}_{\kappa_{\mathrm{NS,S1}}}\nu^{\mathrm{S}1}$
(29) $\begin{aligned}&\quad+\underbrace{\mathcal{J}\left[\frac{1}{N_{\mathrm{C}}}\left(1+\gamma g\right)+\left(1-\alpha\right)\frac{1-m}{\left(N_{\mathrm{C}}-1\right)\left(1-m\right)+1}\right]}_{\kappa_{\mathrm{NS,S2}}}\nu^{\mathrm{S}2}\\ &\quad+\underbrace{\mathcal{J}\left[\frac{N_{\mathrm{C}}-2}{N_{\mathrm{C}}}\left(1+\gamma g\right)+\left(1-\alpha\right)\frac{1+\left(N_{\mathrm{C}}-3\right)\left(1-m\right)}{\left(N_{\mathrm{C}}-1\right)\left(1-m\right)+1}\right]}_{\kappa_{\mathrm{NS,NS}}}\nu^{\mathrm{NS}}\end{aligned}$
Due to the symmetry between the two stimulated sub-populations, the right-hand side of Equations 24 and 25 can be viewed as a conservative force $F$ of the potential $U(\nu^{\mathrm{S}1},\nu^{\mathrm{S}2})=-\int_{C}F\,ds$, where we parameterized the line integral along the path $\nu:[0,1]\to C,\ t\mapsto t\cdot(\nu^{\mathrm{S}1},\nu^{\mathrm{S}2})$, which yields
(30) $U\left(\nu^{\mathrm{S1}},\nu^{\mathrm{S2}}\right)=\frac{1}{2}\left(\nu^{\mathrm{S1}}\right)^{2}+\frac{1}{2}\left(\nu^{\mathrm{S2}}\right)^{2}-\int_{0}^{\nu^{\mathrm{S1}}}\Phi_{\mathrm{S1}}\left(\nu,\nu\frac{\nu^{\mathrm{S2}}}{\nu^{\mathrm{S1}}},f\left(\nu,\nu\frac{\nu^{\mathrm{S2}}}{\nu^{\mathrm{S1}}}\right)\right)d\nu-\int_{0}^{\nu^{\mathrm{S2}}}\Phi_{\mathrm{S2}}\left(\nu\frac{\nu^{\mathrm{S1}}}{\nu^{\mathrm{S2}}},\nu,f\left(\nu\frac{\nu^{\mathrm{S1}}}{\nu^{\mathrm{S2}}},\nu\right)\right)d\nu.$
The numerical evaluation of this two-dimensional potential is shown in Figure 9—figure supplement 2, whereas the sketches in Figure 9e show a one-dimensional section (gray lines in Figure 9—figure supplement 2) that runs anti-diagonally through the two minima corresponding to one population being in the high-activity state and the other being in the low-activity state.
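The line integral in Equation 30 can likewise be approximated numerically. The sketch below evaluates the potential at a single point, assuming user-supplied callables phi_s1(nu1, nu2) and phi_s2(nu1, nu2) that already include the non-stimulated fixed point $f$; the names and the grid resolution are placeholders.

import numpy as np

def potential_2d(phi_s1, phi_s2, nu_s1, nu_s2, n_steps=200):
    """U(nu^S1, nu^S2) from Eq. 30 via a radial line integral from the origin."""
    t = np.linspace(0.0, 1.0, n_steps)
    ray1, ray2 = t * nu_s1, t * nu_s2
    f1 = [phi_s1(a, b) for a, b in zip(ray1, ray2)]
    f2 = [phi_s2(a, b) for a, b in zip(ray1, ray2)]
    integral = np.trapz(f1, ray1) + np.trapz(f2, ray2)
    return 0.5 * nu_s1**2 + 0.5 * nu_s2**2 - integral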
Critical modularity for piecewise linear activation function
To obtain a closed-form analytic solution for the critical modularity, in the following we consider a neuron model with piecewise linear activation function
(31) $\nu\left(\mu\right)=\nu_{\mathrm{max}}\frac{\mu-\mu_{\mathrm{min}}}{\mu_{\mathrm{max}}-\mu_{\mathrm{min}}}$
for $μ∈[μmin,μmax]$, $ν(μ)=0$ for $μ<μmin$ and $ν(μ)=νmax$ for $μ>μmax$ (Figure 8a). Successful denoising requires the non-stimulated sub-populations to be silent, $νNS=0$, and the stimulated
sub-populations to be active, $νS>0$. We first study solutions where $0<νS<νmax$ and afterwards the case where $νS=νmax$. Inserting Equation 31 into Equations 9 and 10, we obtain
$\begin{aligned}\mu^{\mathrm{S}} &= \alpha\mathcal{J}\nu_{\mathrm{X}}+\kappa_{\mathrm{S,S}}(m)\,\nu_{\mathrm{max}}\frac{\mu^{\mathrm{S}}-\mu_{\mathrm{min}}}{\mu_{\mathrm{max}}-\mu_{\mathrm{min}}}\,,\\ \mu^{\mathrm{NS}} &= \alpha\mathcal{J}\nu_{\mathrm{X}}+\kappa_{\mathrm{NS,S}}(m)\,\nu_{\mathrm{max}}\frac{\mu^{\mathrm{S}}-\mu_{\mathrm{min}}}{\mu_{\mathrm{max}}-\mu_{\mathrm{min}}}\,.\end{aligned}$
The first equation can be solved for $μS$
(32) $\frac{\mu^{\mathrm{S}}}{\mu_{\mathrm{min}}}=1+\frac{\alpha\mathcal{J}\nu_{\mathrm{X}}-\mu_{\mathrm{min}}}{\mu_{\mathrm{min}}-\kappa_{\mathrm{S,S}}(m)\,\nu_{\mathrm{max}}\frac{\mu_{\mathrm{min}}}{\mu_{\mathrm{max}}-\mu_{\mathrm{min}}}}\,,$
which holds for
(33) $\begin{array}{l}{\mu }_{\mathrm{m}\mathrm{i}\mathrm{n}}\le {\mu }^{\mathrm{S}}\le {\mu }_{\mathrm{m}\mathrm{a}\mathrm{x}}\phantom{\rule{thinmathspace}{0ex}},\end{array}$
(34) $\begin{array}{l}{\mu }^{\mathrm{N}\mathrm{S}}\le {\mu }_{\mathrm{m}\mathrm{i}\mathrm{n}}\phantom{\rule{thinmathspace}{0ex}}.\end{array}$
Requirement (Equation 33) is equivalent to an inequality for $m$
$0\le\frac{\alpha\mathcal{J}\nu_{\mathrm{X}}-\mu_{\mathrm{min}}}{\mu_{\mathrm{max}}-\frac{\mathcal{J}}{N_{\mathrm{C}}}\left(1+\gamma g\right)\nu_{\mathrm{max}}-\frac{\left(1-\alpha\right)\mathcal{J}\nu_{\mathrm{max}}}{\left(N_{\mathrm{C}}-1\right)\left(1-m\right)+1}-\mu_{\mathrm{min}}}\le 1$
that, depending on the dynamic range of the neuron, the strength of the external background input and the recurrence, yields
(35) $m=\frac{N_{\mathrm{C}}}{N_{\mathrm{C}}-1}-\frac{1}{N_{\mathrm{C}}-1}\,\frac{\left(1-\alpha\right)\mathcal{J}\nu_{\mathrm{max}}}{\mu_{\mathrm{max}}-\alpha\mathcal{J}\nu_{\mathrm{X}}-\frac{\mathcal{J}}{N_{\mathrm{C}}}\left(1+\gamma g\right)\nu_{\mathrm{max}}}$
as an upper or lower bound for the modularity (Figure 8). Requirement (Equation 34) with the solution (Equation 32) for $μS$ inserted yields a further lower bound
(36) $m\ge\frac{\left(\mu_{\mathrm{max}}-\mu_{\mathrm{min}}\right)N_{\mathrm{C}}}{\left(1-\alpha\right)\mathcal{J}\nu_{\mathrm{max}}+\left(\mu_{\mathrm{max}}-\mu_{\mathrm{min}}\right)\left(N_{\mathrm{C}}-1\right)}$
for the modularity that is required for denoising. This criterion is independent of the external background input and the recurrence of the SSN.
Now we turn to the saturated scenario $νS=νmax$ and $νNS=0$ and obtain
$\begin{aligned}\mu^{\mathrm{S}} &= \alpha\mathcal{J}\nu_{\mathrm{X}}+\kappa_{\mathrm{S,S}}(m)\,\nu_{\mathrm{max}}\,,\\ \mu^{\mathrm{NS}} &= \alpha\mathcal{J}\nu_{\mathrm{X}}+\kappa_{\mathrm{NS,S}}(m)\,\nu_{\mathrm{max}}\,,\end{aligned}$
with the criteria
(37) $\begin{array}{r}{\mu }^{\mathrm{S}}\ge {\mu }_{\mathrm{m}\mathrm{a}\mathrm{x}}\phantom{\rule{thinmathspace}{0ex}},\end{array}$
(38) $\begin{array}{l}{\mu }^{\mathrm{N}\mathrm{S}}\le {\mu }_{\mathrm{m}\mathrm{i}\mathrm{n}}\phantom{\rule{thinmathspace}{0ex}}.\end{array}$
The first criterion (Equation 37) yields the same critical value (Equation 35), which for $\mu_{\mathrm{max}}-\alpha\mathcal{J}\nu_{\mathrm{X}}-\frac{\mathcal{J}}{N_{\mathrm{C}}}\left(1+\gamma g\right)\nu_{\mathrm{max}}\ge 0$ is a lower bound and otherwise an upper bound. The second criterion (Equation 38) yields an additional lower bound for $\mathcal{J}\left(1-\alpha\right)\nu_{\mathrm{max}}-\left(N_{\mathrm{C}}-1\right)\left(\mu_{\mathrm{min}}-\alpha\mathcal{J}\nu_{\mathrm{X}}-\frac{\mathcal{J}}{N_{\mathrm{C}}}\left(1+\gamma g\right)\nu_{\mathrm{max}}\right)\ge 0$ (Figure 8):
(39) $m\ge 1-\frac{\mu_{\mathrm{min}}-\alpha\mathcal{J}\nu_{\mathrm{X}}-\frac{\mathcal{J}}{N_{\mathrm{C}}}\left(1+\gamma g\right)\nu_{\mathrm{max}}}{\mathcal{J}\left(1-\alpha\right)\nu_{\mathrm{max}}-\left(N_{\mathrm{C}}-1\right)\left(\mu_{\mathrm{min}}-\alpha\mathcal{J}\nu_{\mathrm{X}}-\frac{\mathcal{J}}{N_{\mathrm{C}}}\left(1+\gamma g\right)\nu_{\mathrm{max}}\right)}\,.$
The above criteria yield necessary conditions for the existence of a fixed point with $ν^{\mathrm{S}}>0$ and $ν^{\mathrm{NS}}=0$. Next we study the stability of such solutions. This works analogously to the stability analysis of the
spiking models discussed in Section ‘Effective connectivity and stability analysis’ by studying the spectrum of the effective connectivity matrix. For the model Equation 31, the effective
connectivity is given by
(40) $w_{ij}=\frac{\partial\nu_{i}}{\partial\nu_{j}}=\nu'\left(\mu_{i}\right)\frac{\partial\mu_{i}}{\partial\nu_{j}}=\nu'\left(\mu_{i}\right)\mathcal{J}_{ij}$
with $\nu'(\mu)=\frac{d\nu}{d\mu}(\mu)$ and $\mathcal{J}_{ij}=\tau_{\mathrm{m}}J_{ij}$. On the level of stimulated and non-stimulated sub-populations across layers, the effective connectivity becomes
(41) $W=\begin{pmatrix}\kappa_{\mathrm{S,S}}(m)\,\nu'\left(\mu^{\mathrm{S}}\right) & \kappa_{\mathrm{S,NS}}(m)\,\nu'\left(\mu^{\mathrm{NS}}\right)\\ \kappa_{\mathrm{NS,S}}(m)\,\nu'\left(\mu^{\mathrm{S}}\right) & \kappa_{\mathrm{NS,NS}}(m)\,\nu'\left(\mu^{\mathrm{NS}}\right)\end{pmatrix}$
with eigenvalues
(42) $\begin{aligned}\lambda_{\pm} &= \frac{\kappa_{\mathrm{S,S}}(m)\,\nu'\left(\mu^{\mathrm{S}}\right)+\kappa_{\mathrm{NS,NS}}(m)\,\nu'\left(\mu^{\mathrm{NS}}\right)}{2}\\ &\quad\pm\sqrt{\left(\frac{\kappa_{\mathrm{S,S}}(m)\,\nu'\left(\mu^{\mathrm{S}}\right)+\kappa_{\mathrm{NS,NS}}(m)\,\nu'\left(\mu^{\mathrm{NS}}\right)}{2}\right)^{2}-\left(\kappa_{\mathrm{S,S}}(m)\,\nu'\left(\mu^{\mathrm{S}}\right)\kappa_{\mathrm{NS,NS}}(m)\,\nu'\left(\mu^{\mathrm{NS}}\right)-\kappa_{\mathrm{S,NS}}(m)\,\nu'\left(\mu^{\mathrm{NS}}\right)\kappa_{\mathrm{NS,S}}(m)\,\nu'\left(\mu^{\mathrm{S}}\right)\right)}\,.\end{aligned}$
The saturated fixed point $νS=νmax$ and $νNS=0$ has $ν′(μS)=ν′(μNS)=0$, leading to $λ±=0$. This fixed point is always stable. The non-saturated fixed point also has $ν′(μNS)=0$. Consequently,
Equation 42 simplifies to $λ-=0$ and
(43) $\lambda_{+}=\frac{\nu_{\mathrm{max}}}{\mu_{\mathrm{max}}-\mu_{\mathrm{min}}}\,\kappa_{\mathrm{S,S}}(m).$
For $\lambda_{+}>1$, fluctuations in the stimulated sub-population are amplified. These fluctuations also drive fluctuations of the non-stimulated sub-population via the recurrent coupling. The fixed point thus becomes unstable and the necessary distinction between the stimulated and non-stimulated sub-populations vanishes. For inhibition-dominated recurrence, $κ_{\mathrm{S,S}}(m)$ is small enough to obtain stable fixed points at non-saturated rates (Figure 8c). In the case of no recurrence or excitation-dominated recurrence, $κ_{\mathrm{S,S}}(m)$ is much larger, typically driving $\lambda_{+}$ across the line of instability and preventing non-saturated fixed points from being stable. In such networks, only the saturated fixed point at $ν^{\mathrm{S}}=ν_{\mathrm{max}}$ is stable and reachable (Figure 8d and e).
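As a numerical companion to these criteria, the closed-form expressions of Equations 35, 36, 39, and 43 can be transcribed directly. The Python sketch below is only a transcription of those formulas; the parameter values one would plug in are placeholders that must come from the actual network.

def lambda_plus(kappa_ss, nu_max, mu_min, mu_max):
    """Eq. 43: amplification factor at the non-saturated fixed point (stable if < 1)."""
    return nu_max / (mu_max - mu_min) * kappa_ss

def modularity_bounds(J, alpha, gamma, g, N_C, nu_max, nu_X, mu_min, mu_max):
    """Critical-modularity expressions of Eqs. 35, 36 and 39."""
    base = alpha * J * nu_X + J / N_C * (1.0 + gamma * g) * nu_max
    m_35 = N_C / (N_C - 1) - (1.0 - alpha) * J * nu_max / ((N_C - 1) * (mu_max - base))
    m_36 = (mu_max - mu_min) * N_C / ((1.0 - alpha) * J * nu_max
                                      + (mu_max - mu_min) * (N_C - 1))
    num_39 = mu_min - base
    m_39 = 1.0 - num_39 / (J * (1.0 - alpha) * nu_max - (N_C - 1) * num_39)
    return m_35, m_36, m_39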
The current manuscript is a computational study, so no data have been generated for this manuscript. Modelling code can be found at https://doi.org/10.5281/zenodo.6326496 (see also Supplementary
Files). Source data and code files are also attached as zip folders to the individual main figures of this manuscript.
Optimal architectures in a solvable model of deep networks. Advances in Neural Information Processing Systems, Curran Associates, Inc.
Effective neural response function for collective population states. Network 10:351–373.
Neural mechanisms subserving cutaneous sensibility, with special reference to the role of afferent inhibition in sensory perception and discrimination. Bulletin of the Johns Hopkins Hospital 105:201–232.
The Best Spike Filter Kernel Is a Neuron. Conference on Cognitive Computational Neuroscience.
Article and author information
Author details
Initiative and Networking Fund of the Helmholtz Association
• Barna Zajzon
• Abigail Morrison
• Renato Duarte
• David Dahmen
Helmholtz Portfolio theme Supercomputing and Modeling for the Human Brain
• Barna Zajzon
• Abigail Morrison
• Renato Duarte
Excellence Initiative of the German federal and state governments (G:(DE-82)EXS-SF-neuroIC002)
• Barna Zajzon
• Abigail Morrison
• Renato Duarte
Helmholtz Association (VH-NG-1028)
European Commission HBP (945539)
The funders had no role in study design, data collection, and interpretation, or the decision to submit the work for publication.
The authors gratefully acknowledge the computing time granted by the JARA-HPC Vergabegremium on the supercomputer JURECA at Forschungszentrum Jülich.
© 2023, Zajzon et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
Barna Zajzon, David Dahmen, Abigail Morrison, Renato Duarte (2023) Signal denoising through topographic modularity of neural circuits. eLife 12:e77009.
Further reading
1. Brain water homeostasis not only provides a physical protection, but also determines the diffusion of chemical molecules key for information processing and metabolic stability. As a major type of
glia in brain parenchyma, astrocytes are the dominant cell type expressing aquaporin water channel. How astrocyte aquaporin contributes to brain water homeostasis in basal physiology remains to
be understood. We report that astrocyte aquaporin 4 (AQP4) mediates a tonic water efflux in basal conditions. Acute inhibition of astrocyte AQP4 leads to intracellular water accumulation as
optically resolved by fluorescence-translated imaging in acute brain slices, and in vivo by fiber photometry in mobile mice. We then show that aquaporin-mediated constant water efflux maintains
astrocyte volume and osmotic equilibrium, astrocyte and neuron Ca^2+ signaling, and extracellular space remodeling during optogenetically induced cortical spreading depression. Using
diffusion-weighted magnetic resonance imaging (DW-MRI), we observed that in vivo inhibition of AQP4 water efflux heterogeneously disturbs brain water homeostasis in a region-dependent manner. Our
data suggest that astrocyte aquaporin, though bidirectional in nature, mediates a tonic water outflow to sustain cellular and environmental equilibrium in brain parenchyma.
2. Neural implants have the potential to restore lost sensory function by electrically evoking the complex naturalistic activity patterns of neural populations. However, it can be difficult to
predict and control evoked neural responses to simultaneous multi-electrode stimulation due to nonlinearity of the responses. We present a solution to this problem and demonstrate its utility in
the context of a bidirectional retinal implant for restoring vision. A dynamically optimized stimulation approach encodes incoming visual stimuli into a rapid, greedily chosen, temporally
dithered and spatially multiplexed sequence of simple stimulation patterns. Stimuli are selected to optimize the reconstruction of the visual stimulus from the evoked responses. Temporal
dithering exploits the slow time scales of downstream neural processing, and spatial multiplexing exploits the independence of responses generated by distant electrodes. The approach was
evaluated using an experimental laboratory prototype of a retinal implant: large-scale, high-resolution multi-electrode stimulation and recording of macaque and rat retinal ganglion cells ex
vivo. The dynamically optimized stimulation approach substantially enhanced performance compared to existing approaches based on static mapping between visual stimulus intensity and current
amplitude. The modular framework enabled parallel extensions to naturalistic viewing conditions, incorporation of perceptual similarity measures, and efficient implementation for an implantable
device. A direct closed-loop test of the approach supported its potential use in vision restoration.
3. Neuromodulatory inputs to the hippocampus play pivotal roles in modulating synaptic plasticity, shaping neuronal activity, and influencing learning and memory. Recently, it has been shown that
the main sources of catecholamines to the hippocampus, ventral tegmental area (VTA) and locus coeruleus (LC), may have overlapping release of neurotransmitters and effects on the hippocampus.
Therefore, to dissect the impacts of both VTA and LC circuits on hippocampal function, a thorough examination of how these pathways might differentially operate during behavior and learning is
necessary. We therefore utilized two-photon microscopy to functionally image the activity of VTA and LC axons within the CA1 region of the dorsal hippocampus in head-fixed male mice navigating
linear paths within virtual reality (VR) environments. We found that within familiar environments some VTA axons and the vast majority of LC axons showed a correlation with the animals’ running
speed. However, as mice approached previously learned rewarded locations, a large majority of VTA axons exhibited a gradual ramping-up of activity, peaking at the reward location. In contrast, LC
axons displayed a pre-movement signal predictive of the animal’s transition from immobility to movement. Interestingly, a marked divergence emerged following a switch from the familiar to novel
VR environments. Many LC axons showed large increases in activity that remained elevated for over a minute, while the previously observed VTA axon ramping-to-reward dynamics disappeared during
the same period. In conclusion, these findings highlight distinct roles of VTA and LC catecholaminergic inputs in the dorsal CA1 hippocampal region. These inputs encode unique information, with
reward information in VTA inputs and novelty and kinematic information in LC inputs, likely contributing to differential modulation of hippocampal activity during behavior and learning. | {"url":"https://elifesciences.org/articles/77009","timestamp":"2024-11-10T19:03:06Z","content_type":"text/html","content_length":"735627","record_id":"<urn:uuid:fa578288-5c22-485f-8066-28f1fde2634c>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00452.warc.gz"} |
The “Tail Modulo Constructor” program transformation
Chapter 26 The “Tail Modulo Constructor” program transformation
(Introduced in OCaml 4.14)
Note: this feature is considered experimental, and its interface may evolve, with user feedback, in the next few releases of the language.
Consider this natural implementation of the List.map function:
let rec map f l =
  match l with
  | [] -> []
  | x :: xs -> let y = f x in y :: map f xs
A well-known limitation of this implementation is that the recursive call, map f xs, is not in tail position. The runtime needs to remember to continue with y :: r after the call returns a value r,
therefore this function consumes some amount of call-stack space on each recursive call. The stack usage of map f li is proportional to the length of li. This is a correctness issue for large lists
on systems configured with limited stack space – the dreaded Stack_overflow exception.
# let with_stack_limit stack_limit f =
    let old_gc_settings = Gc.get () in
    Gc.set { old_gc_settings with stack_limit };
    Fun.protect ~finally:(fun () -> Gc.set old_gc_settings) f
  ;;
val with_stack_limit : int -> (unit -> 'a) -> 'a = <fun>
# with_stack_limit 20_000 (fun () -> List.length (map Fun.id (List.init 1_000_000 Fun.id)) );;
Stack overflow during evaluation (looping recursion?).
In this implementation of map, the recursive call happens in a position that is not a tail position in the program, but within a datatype constructor application that is itself in tail position. We
say that those positions, which are composed of tail positions and constructor applications, are tail modulo constructor (TMC) positions – we sometimes write tail modulo cons for brevity.
It is possible to rewrite programs such that tail modulo cons positions become tail positions; after this transformation, the implementation of map above becomes tail-recursive, in the sense that it
only consumes a constant amount of stack space. The OCaml compiler implements this transformation on demand, using the [@tail_mod_cons] or [@ocaml.tail_mod_cons] attribute on the function to transform:
let[@tail_mod_cons] rec map f l =
  match l with
  | [] -> []
  | x :: xs -> let y = f x in y :: map f xs
# List.length (map Fun.id (List.init 1_000_000 Fun.id));;
- : int = 1000000
This transformation only improves calls in tail-modulo-cons position, it does not improve recursive calls that do not fit in this fragment:
(* does *not* work: addition is not a data constructor *) let[@tail_mod_cons] rec length l = match l with | [] -> 0 | _ :: xs -> 1 + length xs
Warning 71 [unused-tmc-attribute]: This function is marked @tail_mod_cons but is never applied in TMC position.
It is of course possible to use the [@tail_mod_cons] transformation on functions that contain some recursive calls in tail-modulo-cons position, and some calls in other, arbitrary positions. Only the
tail calls and tail-modulo-cons calls will happen in constant stack space.
General design
This feature is provided as an explicit program transformation, not an implicit optimization. It is annotation-driven: the user is expected to express their intent by adding annotations in the
program using attributes, and will be asked to do so in any ambiguous situation.
We expect it to be used mostly by advanced OCaml users needing to get some guarantees on the stack-consumption behavior of their programs. Our recommendation is to use the [@tailcall] annotation on
all callsites that should not consume any stack space. [@tail_mod_cons] extends the set of functions on which calls can be annotated to be tail calls, helping establish stack-consumption guarantees
in more cases.
A standard approach to get a tail-recursive version of List.map is to use an accumulator to collect output elements, and reverse it at the end of the traversal.
let rec map f l = map_aux f [] l
and map_aux f acc l =
  match l with
  | [] -> List.rev acc
  | x :: xs -> let y = f x in map_aux f (y :: acc) xs
This version is tail-recursive, but it is measurably slower than the simple, non-tail-recursive version. In contrast, the tail-mod-cons transformation provides an implementation that has comparable
performance to the original version, even on small inputs.
Evaluation order
Beware that the tail-modulo-cons transformation has an effect on evaluation order: the constructor argument that is transformed into tail-position will always be evaluated last. Consider the
following example:
type 'a two_headed_list =
  | Nil
  | Consnoc of 'a * 'a two_headed_list * 'a

let[@tail_mod_cons] rec map f = function
  | Nil -> Nil
  | Consnoc (front, body, rear) -> Consnoc (f front, map f body, f rear)
Due to the [@tail_mod_cons] transformation, the calls to f front and f rear will be evaluated before map f body. In particular, this is likely to be different from the evaluation order of the
unannotated version. (The evaluation order of constructor arguments is unspecified in OCaml, but many implementations typically use left-to-right or right-to-left.)
This effect on evaluation order is one of the reasons why the tail-modulo-cons transformation has to be explicitly requested by the user, instead of being applied as an automatic optimization.
Why tail-modulo-cons?
Other program transformations, in particular a transformation to continuation-passing style (CPS), can make all functions tail-recursive, instead of targeting only a small fragment. Some reasons to
provide builtin support for the less-general tail-mod-cons are as follows:
• The tail-mod-cons transformation preserves the performance of the original, non-tail-recursive version, while a continuation-passing-style transformation incurs a measurable constant-factor
• The tail-mod-cons transformation cannot be expressed as a source-to-source transformation of OCaml programs, as it relies on mutable state in type-unsafe ways. In contrast,
continuation-passing-style versions can be written by hand, possibly using a convenient monadic notation.
Note: OCaml call stack size
In OCaml 4.x and earlier, bytecode programs respect the stack_limit runtime parameter configuration (as set using Gc.set in the example above), or the l setting of the OCAMLRUNPARAM variable. Native
programs ignore these settings and only respect the operating system native stack limit, as set by ulimit on Unix systems. Most operating systems run with a relatively low stack size limit by
default, so stack overflows on non-tail-recursive functions are a common programming bug.
Starting from OCaml 5.0, native code does not use the native system stack for OCaml function calls anymore, so it is not affected by the operating system native stack size; both native and bytecode
programs respect the OCaml runtime’s own limit. The runtime limit is set to a much higher default than most operating system native stacks, with a limit of at least 512MiB, so stack overflow should
be much less common in practice. There is still a stack limit by default, as it remains useful to quickly catch bugs with looping non-tail-recursive functions. Without a stack limit, one has to wait
for the whole memory to be consumed by the stack for the program to crash, which can take a long time and make the system unresponsive.
This means that the tail modulo constructor transformation is less important on OCaml 5: it does improve performance noticeably in some cases, but it is not necessary for basic correctness for most programs.
1 Disambiguation
It may happen that several arguments of a constructor are recursive calls to a tail-modulo-cons function. The transformation can only turn one of these calls into a tail call. The compiler will not
make an implicit choice, but ask the user to provide an explicit disambiguation.
Consider this type of syntactic expressions (assuming some pre-existing type var of expression variables):
type var (* some pre-existing type of variables *)
type exp =
  | Var of var
  | Let of binding * exp
and binding = var * exp
Consider a map function on variables. The direct definition has two recursive calls inside arguments of the Let constructor, so it gets rejected as ambiguous.
let[@tail_mod_cons] rec map_vars f exp =
  match exp with
  | Var v -> Var (f v)
  | Let ((v, def), body) -> Let ((f v, map_vars f def), map_vars f body)
Error: [@tail_mod_cons]: this constructor application may be TMC-transformed in several different ways. Please disambiguate by adding an explicit [@tailcall] attribute to the call that should be made
tail-recursive, or a [@tailcall false] attribute on calls that should not be transformed. This call could be annotated. This call could be annotated.
To disambiguate, the user should add a [@tailcall] attribute to the recursive call that should be transformed to tail position:
let[@tail_mod_cons] rec map_vars f exp =
  match exp with
  | Var v -> Var (f v)
  | Let ((v, def), body) -> Let ((f v, map_vars f def), (map_vars[@tailcall]) f body)
Be aware that the resulting function is not tail-recursive: the recursive call on def will consume stack space. However, expression trees tend to be right-leaning (lots of Let in sequence, rather
than nested inside each other), so putting the call on body in tail position is an interesting improvement over the naive definition: it gives bounded stack space consumption if we assume a bound on
the nesting depth of Let constructs.
One would also get an error when using conflicting annotations, asking for two of the constructor arguments to be put in tail position:
let[@tail_mod_cons] rec map_vars f exp =
  match exp with
  | Var v -> Var (f v)
  | Let ((v, def), body) -> Let ((f v, (map_vars[@tailcall]) f def), (map_vars[@tailcall]) f body)
Error: [@tail_mod_cons]: this constructor application may be TMC-transformed in several different ways. Only one of the arguments may become a TMC call, but several arguments contain calls that are
explicitly marked as tail-recursive. Please fix the conflict by reviewing and fixing the conflicting annotations. This call is explicitly annotated. This call is explicitly annotated.
2 Danger: getting out of tail-mod-cons
Due to the nature of the tail-mod-cons transformation (see Section 26.3 for a presentation of the transformation):
• Calls from a tail-mod-cons function to another tail-mod-cons function declared in the same recursive-binding group are transformed into tail calls, as soon as they occur in tail position or
tail-modulo-cons position in the source function.
• Calls from a function not annotated tail-mod-cons to a tail-mod-cons function or, conversely, from a tail-mod-cons function to a non-tail-mod-cons function are transformed into non-tail calls,
even if they syntactically appear in tail position in the source program.
The fact that calls in tail position in the source program may become non-tail calls if they go from a tail-mod-cons to a non-tail-mod-cons function is surprising, and the transformation will warn
about them.
For example:
let[@tail_mod_cons] rec flatten = function
  | [] -> []
  | xs :: xss ->
    let rec append_flatten xs xss =
      match xs with
      | [] -> flatten xss
      | x :: xs -> x :: append_flatten xs xss
    in
    append_flatten xs xss
Warning 71 [unused-tmc-attribute]: This function is marked @tail_mod_cons but is never applied in TMC position. Warning 72 [tmc-breaks-tailcall]: This call is in tail-modulo-cons position in a TMC
function, but the function called is not itself specialized for TMC, so the call will not be transformed into a tail call. Please either mark the called function with the [@tail_mod_cons] attribute,
or mark this call with the [@tailcall false] attribute to make its non-tailness explicit.
Here the append_flatten helper is not annotated with [@tail_mod_cons], so the calls append_flatten xs xss and flatten xss will not be tail calls. The correct fix here is to annotate append_flatten to
be tail-mod-cons.
let[@tail_mod_cons] rec flatten = function
  | [] -> []
  | xs :: xss ->
    let[@tail_mod_cons] rec append_flatten xs xss =
      match xs with
      | [] -> flatten xss
      | x :: xs -> x :: append_flatten xs xss
    in
    append_flatten xs xss
The same warning occurs when append_flatten is a non-tail-mod-cons function of the same recursive group; using the tail-mod-cons transformation is a property of individual functions, not whole
recursive groups.
let[@tail_mod_cons] rec flatten = function
  | [] -> []
  | xs :: xss -> append_flatten xs xss
and append_flatten xs xss =
  match xs with
  | [] -> flatten xss
  | x :: xs -> x :: append_flatten xs xss
Warning 71 [unused-tmc-attribute]: This function is marked @tail_mod_cons but is never applied in TMC position. Warning 72 [tmc-breaks-tailcall]: This call is in tail-modulo-cons position in a TMC
function, but the function called is not itself specialized for TMC, so the call will not be transformed into a tail call. Please either mark the called function with the [@tail_mod_cons] attribute,
or mark this call with the [@tailcall false] attribute to make its non-tailness explicit.
Again, the fix is to specialize append_flatten as well:
let[@tail_mod_cons] rec flatten = function
  | [] -> []
  | xs :: xss -> append_flatten xs xss
and[@tail_mod_cons] append_flatten xs xss =
  match xs with
  | [] -> flatten xss
  | x :: xs -> x :: append_flatten xs xss
Non-recursive functions can also be annotated [@tail_mod_cons]; this is typically useful for local bindings to recursive functions.
Incorrect version:
let[@tail_mod_cons] rec map_vars f exp =
  let self exp = map_vars f exp in
  match exp with
  | Var v -> Var (f v)
  | Let ((v, def), body) -> Let ((f v, self def), (self[@tailcall]) body)
Warning 51 [wrong-tailcall-expectation]: expected tailcall
Warning 51 [wrong-tailcall-expectation]: expected tailcall
Warning 71 [unused-tmc-attribute]: This function is marked @tail_mod_cons but is never applied in TMC position.
Recommended fix:
let[@tail_mod_cons] rec map_vars f exp =
  let[@tail_mod_cons] self exp = map_vars f exp in
  match exp with
  | Var v -> Var (f v)
  | Let ((v, def), body) -> Let ((f v, self def), (self[@tailcall]) body)
In other cases, there is either no benefit in making the called function tail-mod-cons, or it is not possible: for example, it is a function parameter (the transformation only works with direct calls
to known functions).
For example, consider a substitution function on binary trees:
type 'a tree = Leaf of 'a | Node of 'a tree * 'a tree

let[@tail_mod_cons] rec bind (f : 'a -> 'a tree) (t : 'a tree) : 'a tree =
  match t with
  | Leaf v -> f v
  | Node (left, right) -> Node (bind f left, (bind[@tailcall]) f right)
Warning 72 [tmc-breaks-tailcall]: This call is in tail-modulo-cons position in a TMC function, but the function called is not itself specialized for TMC, so the call will not be transformed into a
tail call. Please either mark the called function with the [@tail_mod_cons] attribute, or mark this call with the [@tailcall false] attribute to make its non-tailness explicit.
Here f is a function parameter, not a direct call, and the current implementation is strictly first-order, it does not support tail-mod-cons arguments. In this case, the user should indicate that
they realize this call to f v is not, in fact, in tail position, by using (f[@tailcall false]) v.
type 'a tree = Leaf of 'a | Node of 'a tree * 'a tree

let[@tail_mod_cons] rec bind (f : 'a -> 'a tree) (t : 'a tree) : 'a tree =
  match t with
  | Leaf v -> (f[@tailcall false]) v
  | Node (left, right) -> Node (bind f left, (bind[@tailcall]) f right)
3 Details on the transformation
To use this advanced feature, it helps to be aware that the function transformation produces a specialized function in destination-passing-style.
Recall our map example:
let rec map f l = match l with | [] -> [] | x :: xs -> let y = f x in y :: map f xs
Below is a description of the transformed program in pseudo-OCaml notation: some operations are not expressible in OCaml source code. (The transformation in fact happens on the Lambda intermediate
representation of the OCaml compiler.)
let rec map f l =
match l with
| [] -> []
| x :: xs ->
let y = f x in
let dst = y ::{mutable} Hole in
map_dps f xs dst 1;
dst
and map_dps f l dst idx =
match l with
| [] -> dst.idx <- []
| x :: xs ->
let y = f x in
let dst' = y ::{mutable} Hole in
dst.idx <- dst';
map_dps f xs dst' 1
The source version of map gets transformed into two functions, a direct-style version that is also called map, and a destination-passing-style version (DPS) called map_dps. The
destination-passing-style version does not return a result directly, instead it writes it into a memory location specified by two additional function parameters, dst (a memory block) and i (a
position within the memory block).
The source call y :: map f xs gets transformed into the creation of a mutable block y ::{mutable} Hole, whose second parameter is an un-initialized hole. The block is then passed to map_dps as a
destination parameter (with offset 1).
Notice that map does not call itself recursively, it calls map_dps. Then, map_dps calls itself recursively, in a tail-recursive way.
The call from map to map_dps is not a tail call (this is something that we could improve in the future); but this call happens only once when invoking map f l, with all list elements after the first
one processed in constant stack by map_dps.
This explains the “getting out of tail-mod-cons” subtleties. Consider our previous example involving mutual recursion between flatten and append_flatten.
let[@tail_mod_cons] rec flatten l =
match l with
| [] -> []
| xs :: xss ->
append_flatten xs xss
The call to append_flatten, which syntactically appears in tail position, gets transformed differently depending on whether the function has a destination-passing-style version available, that is,
whether it is itself annotated [@tail_mod_cons]:
(* if append_flatten_dps exists *)
and flatten_dps l dst i =
match l with
| [] -> dst.i <- []
| xs :: xss ->
append_flatten_dps xs xss dst i
(* if append_flatten_dps does not exist *)
and flatten_dps l dst i =
match l with
| [] -> dst.i <- []
| xs :: xss ->
dst.i <- append_flatten xs xss
If append_flatten does not have a destination-passing-style version, the call gets transformed to a non-tail call.
4 Current limitations
Purely syntactic criterion
Just like tail calls in general, the notion of tail-modulo-constructor position is purely syntactic; some simple refactoring will move calls out of tail-modulo-constructor position.
(* works as expected *)
let[@tail_mod_cons] rec map f li =
  match li with
  | [] -> []
  | x :: xs ->
    let y = f x in
    y :: (* this call is in TMC position *) map f xs
(* not optimizable anymore *)
let[@tail_mod_cons] rec map f li =
  match li with
  | [] -> []
  | x :: xs ->
    let y = f x in
    let ys = (* this call is not in TMC position anymore *) map f xs in
    y :: ys
Warning 71 [unused-tmc-attribute]: This function is marked @tail_mod_cons but is never applied in TMC position.
Local, first-order transformation
When a function gets transformed with tail-mod-cons, two definitions are generated, one providing a direct-style interface and one providing the destination-passing-style version. However, not all
calls to this function in tail-modulo-cons position will use the destination-passing-style version and become tail calls:
• The transformation is local: only tail-mod-cons calls to foo within the same compilation unit as foo become tail calls.
• The transformation is first-order: only direct calls to known tail-mod-cons functions become tail calls when in tail-mod-cons position, never calls to function parameters.
Consider the call Option.map foo x for example: even if foo is called in tail-mod-cons position within the definition of Option.map, that call will never become a tail call. (This would be the case
even if the call to Option.map was inside the Option module.)
In general this limitation is not a problem for recursive functions: the first call from an outside module or a higher-order function will consume stack space, but further recursive calls in
tail-mod-cons position will get optimized. For example, if List.map is defined as a tail-mod-cons function, calls from outside the List module will not become tail calls when in tail positions, but
the recursive calls within the definition of List.map are in tail-modulo-cons positions and do become tail calls: processing the first element of the list will consume stack space, but all further
elements are handled in constant space.
These limitations may be an issue in more complex situations where mutual recursion happens between functions, with some functions not annotated tail-mod-cons, or defined across different modules, or
called indirectly, for example through function parameters.
Non-exact calls to tupled functions
OCaml performs an implicit optimization for “tupled” functions, which take a single parameter that is a tuple: let f (x, y, z) = .... Direct calls to these functions with a tuple literal argument
(like f (a, b, c)) will call the “tupled” function by passing the parameters directly, instead of building a tuple of them. Other calls, either indirect calls or calls passing a more complex tuple
value (like let t = (a, b, c) in f t) are compiled as “inexact” calls that go through a wrapper.
The [@tail_mod_cons] transformation supports tupled functions, but will only optimize “exact” calls in tail position; direct calls to something other than a tuple literal will not become tail calls.
The user can manually unpack a tuple to force a call to be “exact”: let (x, y, z) = t in f (x, y, z). If there is any doubt as to whether a call can be tail-mod-cons-optimized or not, one can use the
[@tailcall] attribute on the called function, which will warn if the transformation is not possible.
let rec map (f, l) =
  match l with
  | [] -> []
  | x :: xs ->
    let y = f x in
    let args = (f, xs) in
    (* this inexact call cannot be tail-optimized, so a warning will be raised *)
    y :: (map[@tailcall]) args
Warning 51 [wrong-tailcall-expectation]: expected tailcall | {"url":"https://staging.ocaml.org/manual/5.2/tail_mod_cons.html","timestamp":"2024-11-02T04:40:45Z","content_type":"text/html","content_length":"40995","record_id":"<urn:uuid:4aa5d419-a567-4ce8-b4d3-c1a11011eb48>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00763.warc.gz"} |
Tetralemma: Beyond the Binary Logic
Tetralemma is a philosophical concept that originated in Indian logic and metaphysics. It refers to a logical analysis that categorizes a proposition into four possible truths or negations. The four
categories are:
1. The proposition is true and is established (Sāstvānumāna)
2. The proposition is false and is established (Asāstvānumāna)
3. The proposition is true but is not established (Sāstvānāstika)
4. The proposition is false but is not established (Asāstvānāstika)
An example of the tetralemma applied to the proposition “All men are mortal” would be:
Category 1 would refer to the established truth that all men are indeed mortal;
Category 2 would refer to a hypothetical situation in which all men were not mortal;
Category 3 would refer to a situation in which the truth of the proposition is not established, for example, in a culture where belief in immortality is prevalent;
Category 4 would refer to a situation in which the falsity of the proposition is not established, such as in a culture where belief in reincarnation is widespread.
This is different from Aristotelian logic.
Aristotelian logic, also known as syllogistic logic, was developed by the ancient Greek philosopher Aristotle. It is based on the idea of syllogisms, which are arguments that consist of three parts:
a major premise, a minor premise, and a conclusion. The premises and conclusion are related to each other in a specific way, and the conclusion is deduced from the premises. Aristotelian logic is
based on the principle of non-contradiction, which states that something cannot both be and not be at the same time.
The tetralemma differs from the binary truth table in that it provides a more comprehensive analysis of a proposition, taking into account both the truth value of the proposition and whether it is established. This richer set of truth values allows the tetralemma to address certain issues and philosophical questions not easily handled by Aristotelian logic, such as the relationship between appearance and reality and the nature of paradoxical statements.
Consider the relationship between appearance and reality:
In many philosophical traditions, there is a debate about the relationship between how things appear and how they truly are. In Buddhist and Vedantic philosophy, this debate is framed by the “two truths doctrine.” In this context, the tetralemma can provide a framework for exploring the relationship between the way things appear to us and the way they truly are.
Suppose we are considering the appearance of a mirage in the desert. To an onlooker, the mirage appears to be a pool of water, but in reality, it is an optical illusion caused by the bending of
light. Using the tetralemma, we can analyze the relationship between appearance and reality by recognizing four possible truth values for the proposition “the mirage is a pool of water”:
1. True: The mirage appears to be a pool of water, and it truly is a pool of water.
2. False: The mirage appears to be a pool of water, but it is not truly a pool of water.
3. Both true and false: The mirage appears to be a pool of water, and it is both a pool of water and not a pool of water.
4. Neither true nor false: The mirage appears to be a pool of water, but it is neither a pool of water nor not a pool of water in some ultimate sense.
In this example, truth value 2 is the most commonly accepted in Aristotelian logic, but truth values 3 and 4 allow for a more nuanced understanding of the relationship between appearance and reality.
They recognize that appearances can be deceiving and that there may be a deeper truth beyond appearances that are not easily grasped through our senses.
Consider another example where the tetralemma provides a more nuanced understanding: paradoxical statements.
A paradoxical statement is a statement that contradicts itself, or that appears to be self-contradictory. For example, the statement “This statement is false” is paradoxical because if the statement
is true, it is false, and if it is false, it is true.
Tetralemma can be useful in resolving paradoxical statements by providing four possible truth values for such statements:
1. True: The statement is true and does not contradict itself.
2. False: The statement is false and does not contradict itself.
3. Both true and false: The statement is true and false, which means it contradicts itself.
4. Neither true nor false: The statement is neither true nor false, meaning it cannot be evaluated using traditional binary truth values.
In the case of the paradoxical statement “This statement is false,” truth value 3 (both true and false) captures the self-contradictory nature of the statement. In contrast, truth value 4 (neither
true nor false) acknowledges that the statement cannot be evaluated using traditional binary truth values.
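As a loose illustration (not part of the original article), the four positions can be written down as a small datatype, with a toy classification rule based on whether a proposition can be affirmed and whether it can be denied:

(* The four positions of the tetralemma, as a variant type. *)
type tetralemma = True | False | Both | Neither

(* Classify a proposition from two independent judgements:
   can it be affirmed, and can it be denied? *)
let classify ~affirmable ~deniable =
  match affirmable, deniable with
  | true, false -> True
  | false, true -> False
  | true, true -> Both       (* e.g. the liar sentence, reading 3 above *)
  | false, false -> Neither  (* reading 4 above *)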
Consider the statement, “X is a bachelor.” On the surface, this statement seems to be either true or false (Aristotelian Logic), but it can be neither true nor false in certain situations.
For instance, if X is unmarried, the statement “X is a bachelor” is true. However, if X is married, the statement “X is a bachelor” is false. But what if X is a widower or has been divorced? In these cases, the statement “X is a bachelor” is neither true nor false, as it does not accurately reflect X’s marital status.
In the case of the statement, “X is a bachelor,” truth value 3 (both true and false) acknowledges the ambiguity and complexity of the statement. In contrast, truth value 4 (neither true nor false)
captures the idea that the statement cannot be evaluated as true or false under certain circumstances.
One of the limitations of tetralemma is its lack of falsifiability. The tetralemma does not provide a clear criterion for falsifiability, an important aspect of scientific investigation.
Falsifiability refers to the ability of a theory or hypothesis to be tested and potentially proven false.
Advances in mathematics have allowed us to go beyond ‘fixed value’ logic systems. Fuzzy and probabilistic logic systems are mathematical frameworks for representing and processing uncertain or vague information. Fuzzy logic allows propositions to be assigned values that lie between true and false, providing a more nuanced evaluation of truth than either binary or four-valued truth tables.
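To make the contrast concrete (an illustrative sketch, not from the original article): a fuzzy truth value can be modelled as a degree between 0 and 1, with the standard Zadeh operators for negation, conjunction, and disjunction:

(* A fuzzy truth value is a degree between 0.0 (false) and 1.0 (true). *)
type fuzzy = float

(* The classical Zadeh operators. *)
let f_not (a : fuzzy) : fuzzy = 1.0 -. a
let f_and (a : fuzzy) (b : fuzzy) : fuzzy = min a b
let f_or (a : fuzzy) (b : fuzzy) : fuzzy = max a b

(* Example: if "the mirage is a pool of water" is assigned degree 0.2,
   its negation has degree 0.8, and their conjunction has degree 0.2 —
   unlike classical logic, p and (not p) need not be 0. *)
let () =
  let p = 0.2 in
  Printf.printf "not p = %.1f, p and not p = %.1f\n" (f_not p) (f_and p (f_not p))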
All logic systems devised by humans have limitations. Reality is not only complex, but we often have to deal with limited information. Being aware of these limitations fills us with awe over the
contributions of our ancient saints.
Ekum Sat | {"url":"https://aghora.medium.com/tetralemma-beyond-the-binary-logic-98595d9b451d?source=author_recirc-----3db06b549212----0---------------------49fd9c48_760d_4db4_926a_49e7dd616de2-------","timestamp":"2024-11-13T21:59:44Z","content_type":"text/html","content_length":"112495","record_id":"<urn:uuid:b1fbb908-da25-4a22-8ea9-233a46968df7>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00803.warc.gz"} |
Historical Funding Rates: A Study, with Implications for UXD’s Insurance Fund
At its heart, the UXD stablecoin represents the “principal” position of a basis trade. By the definition of the underlying collateral position being “delta-neutral”, the $1 of principal collateral
underlying the position is always worth $1, regardless of market conditions. However, this $1 in underlying principal is not the whole story: the delta-neutral position inherently generates a yield
known as the “funding rate” due to the perpetual futures underlying the stablecoin.
One of the most frequent questions UXD Protocol is asked: “What have funding rates for perpetual futures looked like historically? How would the insurance fund have performed in different market
regimes?”. Understanding funding rates is critical to understanding the long-term health profile of UXD as a stablecoin, as funding rates determine the probability that UXD may ever become undercollateralized.
To better understand funding rate dynamics in the context of UXD specifically, UXD wrote an academic-esque report analysing funding rates for both BTC and SOL perpetual futures. BTC has the longest
historical data for any set of funding rates, with data from BitMEX going back to 2017. SOL’s rise to prominence is quite recent, and therefore historical data is more limited. That being said, UXD
Protocol has included both to give a sense for how UXD’s insurance fund would have performed in either case.
See below for a summarized version of the full report with corresponding Github repo. Note that the below report assumes that 100% of the funding rate goes to the insurance fund. In actuality, it
will be split between UXP holders and the insurance fund. A 50/50 split scenario can be seen in the full report.
Historical Data
In order to answer this question, UXD has collected historical data on both XBTUSD funding rates on BitMEX, as well as SOLPERP funding rates on FTX. One thing to note in the below charts is that
early funding rates are incredibly volatile, and likely aren’t representative of future funding rate conditions.
It’s quite interesting to note the relative volatility of the two funding rates, with SOLPERP funding rates fluctuating within a tight range starting in the summer of 2021.
Taking a quick step back, it’s worth remembering that every stablecoin has its “point of instability”: for DAI, this is liquidation-related as well as having sufficient stableswap liquidity; for FEI,
this is related to its PCV asset reserve; for UST, this is related to the reflexivity of its mint/burn of LUNA. For UXD stablecoin, the potential point of instability is the funding rate. Sustained
negative funding rates that cause the insurance fund to deplete could cause UXD to become undercollateralized over time (though, users are allowed to redeem $1 of crypto collateral at any time before
this happens).
What’s notable here is that UXD’s “point of instability” has the characteristics of a mean-reverting, drift-less process (see: Stationarity). Just a quick glance at the above charts makes it clear
that funding rates don’t seem to “persist” from time step to time step (look at the sharp slope of the funding rate curves; a spike up or down in funding rates is almost immediately brought back
towards zero). These properties are strong points for UXD’s stability. In a world with funding rates equal to zero always, UXD would achieve perfect stability. A strongly mean-reverting, drift-less
process is the next best thing.
Fixed Start Date
Returning to the analysis, the first set of results comes from the question “What would the balance of the insurance fund look like if steady-state funding rates looked like 2021 funding rates?”.
Assuming $500mm of UXD outstanding and adjusting for price appreciation of SOL and BTC over the same time period (it is assumed that the SOL and BTC price is constant at the day 1 price*), the results are as follows:
• A 100% SOLPERP backed UXD receiving 100% of the funding rate would have resulted in the insurance fund growing from $57mm to over $180mm in one year’s time.
• A 100% XBTUSD backed UXD receiving 100% of the funding rate would have resulted in the insurance fund growing from $57mm to over $115mm in one year’s time.
• Note this results in a very conservative estimate: since funding payments are computed as the funding rate times the price of the underlying asset, and because positive (negative) funding rates
are generally associated with rising (falling) prices, a constant price assumption understates the size of positive funding payments, and overstates the size of negative funding payments.
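As a minimal sketch of the accumulation being described (the function and parameter names are hypothetical, not UXD Protocol code; the 100%-to-insurance-fund split and the constant-price assumption from above are baked in), the fund balance can be obtained by folding the per-period funding payments over the historical rate series:

(* Hypothetical sketch: accumulate per-period funding payments into the
   insurance fund balance. [funding_rates] is the historical series of
   per-period rates, [notional] the dollar size of the delta-neutral
   position; with the price held constant, each payment is rate *. notional. *)
let simulate_insurance_fund ~initial_balance ~notional funding_rates =
  List.fold_left
    (fun balance rate ->
      (* positive rates accrue to the fund; negative rates are paid out of it *)
      balance +. (rate *. notional))
    initial_balance funding_rates

(* Example: $57mm starting balance, $500mm notional, three funding periods. *)
let _final_balance =
  simulate_insurance_fund ~initial_balance:57e6 ~notional:500e6
    [ 0.0001; (-0.00005); 0.0002 ]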
Varied Start Dates
Of course, assuming that longer-run funding rates look like they did during 2021 is a very bullish assumption, and so the performance over the entire period (2017-present) is evaluated for which
funding rate data is available as follows:
Assume that UXD Protocol has $500 million of UXD stablecoin outstanding on a date “X”: what would the performance of the insurance fund have been to date? In particular, what would the maximum value of the insurance fund have been since inception? The average value? The minimum value?
So for each date between 2017 and the present, three numbers are plotted: max, mean, min, which give an excellent sense of the insurance fund’s performance in different market regimes (bear and bull markets).
In the above, although the average performance is quite good, note that some dates show a “bankruptcy” of the insurance fund, though starting dates related to the “bankruptcy” of the insurance fund
occur before Jan 2021 (i.e. before SOL had a market cap above $1bn and was considered a blue-chip crypto asset), due to the extremely high volatility of funding rates. As is clear from the SOLPERP
funding chart at the beginning of this article, SOLPERP funding rate volatility has decreased significantly after achieving a $1bn+ market cap. So, UXD Protocol views these outcomes as quite unlikely
moving forward but wanted to include them for completeness and transparency.
Also note that the “time to bankruptcy” in the few poor-performance cases ranges from several months to a year before the insurance fund is fully depleted. This implies that a UXD holder would have a window of several months to redeem $1 of crypto assets before UXD became undercollateralized.
Likewise, the average performance of the BTC perp positions is quite good, with bankruptcy scenarios mostly related to the extremely volatile funding rates of 2018.
Effects of UXD Supply Caps
One thing to note about the above analysis is that it assumes $500mm of UXD stablecoin is outstanding from day 1, so it is closer to a “steady state” analysis. One of the reasons UXD Protocol decided to implement initial supply caps (beyond confirming security) was to help lessen the effects of different market regimes early on. This makes the initial launch of the protocol much more robust, as the probability of depleting the insurance fund decreases significantly. In particular, if it is instead assumed that the outstanding UXD stablecoin supply grows in line with the proposed supply cap schedule, the insurance fund is much less likely to ever reach a “bankruptcy” scenario.
UXD Supply Cap schedule: assumes linear growth to $1bn after lifting final $200mm supply cap
Once again, the max, min, and average balance of the insurance fund according to different starting dates are shown:
Note that the balance of the insurance fund never goes below $40mm due to the supply cap schedules, though it achieves a lower maximum value. This implies that a gradual release approach may help
de-risk the initial rollout of UXD stablecoin.
Due to the volatility of XBTUSD rates in 2018 and earlier, this chart looks much the same as the $500mm constant UXD outstanding chart, so there is less of an effect to the gradual release schedule
in this case.
In actuality, UXD will be a multi-collateral stablecoin with various blue-chip assets backing it, such as SOL, BTC, and ETH. The above results are meant to be demonstrations of the behaviour of
funding rates historically, but do not fully reflect reality due to their simplifications. In any case, UXD Protocol believes that the above results show the robustness of the insurance fund.
Although it is not guaranteed that UXD stablecoin will generate self-sustaining yield, it certainly has in most cases historically.
Effects of Insurance Fund Asset Management Returns
Finally, UXD investigated the effects of insurance fund asset management strategies on the insurance fund’s overall performance. These will include investments such as stablecoin liquidity provision,
stablecoin lending, etc.
For example, a 100% SOLPERP backed UXD receiving 100% of the funding rate (with funding rates similar to 2021 regime and $500mm UXD stablecoin outstanding) with different asset management returns
would have performed as follows:
Note: 0.1 = 10% constant annualized return on the assets in the insurance fund, -0.1 = -10% constant annualized return on assets in the insurance fund, with intermediate values shown.
What’s interesting about the above chart is the relative closeness of the lines, even for very different asset management performance. This implies that the primary factor in determining the
insurance fund performance is funding rate, rather than asset management performance. This is not surprising, since the insurance fund has a levered exposure to funding rates. A corollary is that the
asset management strategies of the insurance fund should be low-risk investments, as most of the variance in performance will come from funding rates.
If you’ve made it this far, kudos. Hopefully the above has been instructive in helping understand (i) what funding rates have been historically for two primary crypto assets (ii) why this is
important for UXD Protocol (iii) how UXD Protocol’s insurance fund would have performed in different market regimes (iv) how UXD Protocol should think about managing its insurance fund. | {"url":"https://uxdprotocol.medium.com/historical-funding-rates-a-study-with-implications-for-uxds-insurance-fund-d4875ddf3ce5","timestamp":"2024-11-10T07:41:51Z","content_type":"text/html","content_length":"151071","record_id":"<urn:uuid:39a51a9e-d0d7-4d9e-8b78-b35290976487>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00263.warc.gz"} |
Database of Original & Non-Theoretical Uses of Topology
Topological Feature Extraction for Comparison of Terascale Combustion Simulation Data (2011)
Ajith Mascarenhas, Ray W. Grout, Peer-Timo Bremer, Evatt R. Hawkes, Valerio Pascucci, Jacqueline H. Chen Abstract We describe a combinatorial streaming algorithm to extract features which identify
regions of local intense rates of mixing in twoterascale turbulent combustion simulations. Our algorithm allows simulation data comprised of scalar fields represented on 728x896x512 or 2025x1600x400
grids to be processed on a single relatively lightweight machine. The turbulence-induced mixing governs the rate of reaction and hence is of principal interest in these combustion simulations. We use
our feature extraction algorithm to compare two very different simulations and find that in both the thickness of the extracted features grows with decreasing turbulence intensity. Simultaneous
consideration of results of applying the algorithm to the HO2 mass fraction field indicates that autoignition kernels near the base of a lifted flame tend not to overlap with the high mixing rate | {"url":"https://donut.topology.rocks/?q=tag%3A%22Mesh+reduction%22","timestamp":"2024-11-03T14:00:24Z","content_type":"text/html","content_length":"7972","record_id":"<urn:uuid:1aee7a99-c421-40e8-b404-ef39443ff8a9>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00223.warc.gz"} |
6 Key Math Calculations That Help Students Excel in Competitive Exams
Competitive exams math calculations play a critical role in competitive exams.
Strong math skills can define a student’s success or failure.
Regardless of the exam’s nature, mathematical concepts frequently appear.
From problem-solving to logical reasoning, math enhances overall performance significantly.
Consequently, students must hone their mathematical abilities to excel.
Mastering key calculations can greatly enhance a student’s efficiency in exams.
Quick calculations allow students to save precious time, enabling them to attempt more questions.
Additionally, accuracy leads to higher scores, improving the chances of success.
Ultimately, a solid understanding of mathematics prepares students to face diverse exam challenges.
This post aims to outline essential math calculations every student should know.
We will explore six crucial calculations necessary for competitive exams.
By familiarizing yourself with these calculations, you can boost your confidence and performance.
Whether you’re preparing for entrance tests, competitive assessments, or certification exams, mastering these skills is vital.
In the following sections, we will delve deeper into each key math calculation.
We will discuss their significance, provide practical examples, and suggest effective strategies for improvement.
Embrace the challenge, and remember that practice makes perfect.
The journey to mathematical mastery may be demanding, but the rewards are worthwhile.
By understanding and applying these essential calculations, students can position themselves for success.
Understanding Basic Arithmetic Operations
Basic arithmetic operations form the backbone of mathematical proficiency.
These operations include addition, subtraction, multiplication, and division.
Mastering these skills enhances students’ confidence and performance in competitive exams.
Addition combines two or more numbers into a larger total.
It is often the first arithmetic operation students learn.
• For example, adding 5 and 3 gives 8.
• Addition can be done sequentially, such as adding 1, 2, 3, and 4 to get 10.
• Understanding the properties of addition helps reinforce this skill.
Students can use strategies like grouping numbers or the number line to simplify addition.
Practicing with larger numbers through worksheets can also improve speed and accuracy.
Subtraction involves taking one number away from another.
It is the inverse operation of addition.
• For instance, subtracting 4 from 10 results in 6.
• Students may find subtraction challenging because it requires understanding quantities.
• Borrowing is a technique often used in more complex subtraction.
To practice subtraction, students can use flashcards or online quizzes.
They should also solve real-life problems that require subtracting, such as calculating change after shopping.
Multiplication is an efficient way to perform repeated addition.
It helps students understand larger sets of numbers quickly.
• For example, multiplying 4 by 3 is the same as adding 4 three times.
• Students learn multiplication tables to facilitate quick recall.
• Recognizing patterns in numbers greatly helps in mastering multiplication.
Group study sessions can make learning multiplication enjoyable.
Additionally, employing games or apps that focus on multiplication can engage students and reinforce their skills.
Division splits a number into equal parts or groups.
It is often seen as the reverse of multiplication.
• For example, dividing 12 by 4 results in 3.
• Students can visualize division with objects to better understand the concept.
• Long division is a traditional method that many students learn in school.
To enhance division skills, students should practice with various types of problems.
Working on word problems can provide real-life applications of division.
Importance of Mental Math Skills
Developing mental math skills is crucial for quick calculations.
In competitive exams, time is often limited.
Being able to perform arithmetic operations in one’s head saves valuable seconds.
• Mental math enhances overall mathematical comprehension.
• It reduces reliance on calculators, especially during exams.
• Quick recall of basic arithmetic aids in more complex mathematics.
Practicing mental math can improve overall math skills.
Students should implement daily exercises that challenge their mental calculation abilities.
Simple activities like estimating sums or differences can promote this practice.
Tips for Practicing Arithmetic Operations Efficiently
Improving arithmetic operations requires consistent practice.
Here are some effective tips to enhance those skills:
• Daily Practice: Set aside 15-30 minutes daily for arithmetic exercises.
• Use Flashcards: Create flashcards for addition, subtraction, multiplication, and division facts.
• Incorporate Games: Engage in math games that challenge arithmetic skills.
• Online Resources: Utilize websites and apps offering math practice exercises.
• Group Study: Join a study group to keep motivation high and share tips.
• Real-World Applications: Apply math to daily activities, such as budgeting or cooking.
Tracking progress can also enhance learning.
Students should keep a journal to note improvements and areas for further practice.
Understanding and mastering basic arithmetic operations is essential for students aiming for success in competitive exams.
Mastery of these four operations—addition, subtraction, multiplication, and division—builds a strong mathematical foundation.
By developing mental math skills, students can enhance their problem-solving efficiency under timed conditions.
The suggestions provided in this section can help students practice arithmetic efficiently, laying the groundwork for future success.
Encourage students to remain positive and persistent.
With regular practice and dedication, they can significantly improve their arithmetic skills and excel in their exams.
1. Mastering Fractions and Decimals
Understanding Fractions and Decimals
Fractions and decimals represent numbers in different forms.
Their mastery is essential for success in competitive examinations.
Both features frequently appear in math problems, ranging from simple arithmetic to complex applications.
Competitors must grasp these concepts to solve problems effectively and efficiently.
Understanding fractions and decimals improves accuracy in calculations.
It also enhances overall mathematical understanding and problem-solving abilities.
Importance in Competitive Exams
Fractions and decimals are vital in various math areas, including:
• Basic Arithmetic: Competitors often need to perform calculations involving fractions and decimals.
• Algebra: Equations and inequalities frequently integrate fractions and decimals, underscoring their importance.
• Word Problems: Real-life scenarios often use these representations, making them essential to understand.
• Data Interpretation: Candidates analyze data presented in fractions and decimals, making quick conversions necessary.
Success in exams hinges on accurately translating and manipulating these forms.
A solid foundation in both equips students to tackle diverse problems across various topics.
Techniques for Converting Between Fractions and Decimals
Students frequently encounter situations requiring quick conversions between fractions and decimals.
Mastering these techniques can save time and improve accuracy.
Here are effective methods to achieve this:
Converting Fractions to Decimals
1. Division Method: To convert a fraction to a decimal, divide the numerator by the denominator.
For example, 1/4 converts to 1 ÷ 4 = 0.25.
2. Long Division: For more complex fractions, utilize long division.
For example, converting 3/8 involves dividing 3 by 8. The answer is 0.375.
3. Recognizing Common Fractions: Memorizing common fractions and their decimal equivalents aids in quick conversions.
Examples include: 1/2 = 0.5, 1/3 ≈ 0.33, and 3/4 = 0.75.
Converting Decimals to Fractions
1. Place Value Method: Write the decimal as a fraction using the place value.
For example, 0.75 is 75/100. Simplifying this gives 3/4.
2. Identifying Patterns: Recognize repeated decimals, such as 0.333…, which converts to 1/3.
3. Using a Calculator: For complex decimals, utilize a calculator to convert to fractions.
This is reliable and efficient.
Practicing these techniques enables students to convert between forms swiftly.
Frequent practice builds intuition and confidence.
Common Problems Involving Fractions and Decimals
Understanding common problems involving fractions and decimals prepares students for competitive exams.
Here are several types, along with examples:
Addition and Subtraction
Adding or subtracting fractions and decimals requires a clear process:
• Fractions: Ensure the denominators match.
For 1/4 + 1/2, convert 1/2 to 2/4. The result is 3/4.
• Decimals: Align the decimal points.
For 0.75 + 0.25, the result is 1.00 or 1.
Multiplication and Division
These operations follow straightforward rules:
• Fractions: Multiply the numerators and denominators.
For 2/3 × 4/5, the answer is 8/15.
• Decimals: Ignore the decimal points while multiplying.
Adjust the decimal places in the answer.
For example, 0.6 × 0.4 = 0.24.
Mixed Operations
Mixed operations require care:
• When adding and subtracting fractions, remember to convert when necessary.
• With decimals, perform calculations in steps to maintain accuracy.
Word Problems
Real-world scenarios often involve complex fractions and decimals.
For instance:
• A recipe requires 1/3 of a cup of sugar.
If you want to double it, you’ll use 2/3 of a cup.
• A store offers a 25% discount.
To find the sale price, convert 25% to 0.25 and multiply by the original price.
Mastering fractions and decimals is crucial for students aiming to excel in competitive exams.
Students gain confidence by practicing conversion techniques regularly.
Additionally, understanding common problems equips them to tackle diverse scenarios.
Emphasizing the importance of both forms enhances overall mathematical skills, contributing to student success.
With focused practice, students can master these critical concepts.
They can approach their exams with confidence, ready to address any question involving fractions and decimals.
Read: Mathematical Calculations Simplified: Tips for Acing Your Exams
2. Exploring Percentages and Ratios
Understanding Percentages and Their Significance
Percentages play a crucial role in data interpretation across various fields such as finance, academics, and social sciences.
A percentage represents a portion of a whole, expressed out of 100.
For example, if a student scores 45 out of 50 on a test, their percentage score would be (45/50) * 100, which equals 90%.
Understanding percentages is essential for solving problems that involve comparison, growth rates, and statistical data.
Here are some key points on why percentages matter:
• Ease of Comparison: Percentages allow easy comparison between two or more sets of data.
For example, it is easier to say one candidate received 40% of the votes versus stating they received 200 votes.
• Trends Analysis: They help in analyzing trends over time, such as growth rates in industries or changes in population.
• Financial Literacy: Essential for understanding financial concepts like interest rates, discounts, and investment returns.
Exploring Ratios: Definitions and Types
Ratios compare two or more quantities.
They express the relative size of one value against another.
Ratios can be written in various forms including fractional, decimal, or with a colon.
Here are key aspects of ratios:
A ratio of ‘a’ to ‘b’ simplifies the relationship between the two quantities.
There are several types of ratios including:
• Part-to-Part Ratio: Compares individual parts of a whole, such as boys to girls in a class.
• Part-to-Whole Ratio: Compares a part to the total, like the ratio of students passing to total students.
• Rate: A specific type of ratio comparing different units, e.g., distance per time.
Ratios have practical applications in various fields, such as:
• Finance: Ratios like price-to-earnings determine the value of a stock.
• Chemistry: Ratios help in determining concentrations and mixtures.
• Sports: Comparing player statistics like goals scored in games played.
Solving Percentage Problems Effectively
To tackle percentage problems effectively, follow these steps:
• Finding the Percentage: Use the formula \( \text{Percentage} = \frac{\text{Part}}{\text{Whole}} \times 100 \).
• Calculating the Part: Rearrange the formula to find a part by using \( \text{Part} = \frac{\text{Percentage} \times \text{Whole}}{100} \).
• Finding the Whole: Use \( \text{Whole} = \frac{\text{Part}}{\text{Percentage}} \times 100 \) to find the total.
• Percent Increase/Decrease: To find the increase, use \( \text{New Value} = \text{Original Value} \times (1 + \text{Percentage Increase})\).
For decrease, use \( \text{New Value} = \text{Original Value} \times (1 – \text{Percentage Decrease})\).
Examples of Percentage Problems
Let’s consider a few practical examples of percentage calculations:
Example 1: If a smartphone costs $600 and is on sale for 25% off, how much do you pay?
• Calculate 25% of $600: \( 600 \times 0.25 = 150 \).
• Subtract the discount: \( 600 – 150 = 450 \).
• The final price is $450.
Example 2: A student scores 78 out of 120 on an exam.
What is the percentage score?
Use the percentage formula: \( \frac{78}{120} \times 100 = 65\% \).
Solving Ratio Problems Effectively
Ratios can also be solved using specific techniques:
• Identifying Ratios: Clearly define the quantities involved.
• Simplifying Ratios: Express the ratio in its simplest form, such as dividing each term by the greatest common divisor.
• Finding Missing Values: Set up an equation based on the ratio to find unknown quantities.
Examples of Ratio Problems
Here are some examples to illustrate how to work with ratios:
Example 1: If the ratio of cats to dogs in a shelter is 3:2, and there are 12 cats, how many dogs are there?
• Set up the equation: \( \frac{3}{2} = \frac{12}{x} \).
• Cross-multiply: \( 3x = 24 \).
• Solving gives \( x = 8 \) dogs.
Example 2: The ratio of boys to girls in a classroom is 5:7.
If there are 35 boys, how many girls are there?
• Set up the ratio: \( \frac{5}{7} = \frac{35}{y} \).
• Cross-multiply: \( 5y = 245 \).
• Solving gives \( y = 49 \) girls.
Mastering percentages and ratios provides essential skills for excelling in competitive exams.
These concepts foster a strong understanding of how to interpret data and solve real-world problems.
Practice ensures that students become adept at applying these calculations in various contexts.
Thus, students who invest time in honing their skills will undoubtedly find themselves better prepared for future challenges.
Read: How to Calculate and Solve for Standard Normal Variable | Probability
3. Developing Skills in Algebraic Expressions
Introduction to Algebra
Algebra is a crucial branch of mathematics that deals with symbols and the rules for manipulating those symbols.
It forms the foundation for higher-level math and is essential for many competitive exams.
Understanding algebra helps students develop logical thinking and problem-solving skills.
What are Variables, Constants, and Expressions?
In algebra, variables are symbols that represent unknown numbers.
For example, in the equation x + 5 = 10, x is a variable.
Constants are fixed values that do not change, such as 5 and 10 in this example.
Expressions combine variables and constants using mathematical operations.
• Variables: Symbols (like x, y, z) that represent unknown values.
• Constants: Fixed values (like 1, 2, 3) that do not change.
• Expressions: Combinations of variables and constants (e.g., 2x + 3).
Basic Operations with Algebraic Expressions and Equations
Algebra involves various operations that allow students to manipulate algebraic expressions and equations.
The basic operations include addition, subtraction, multiplication, and division.
These operations follow specific rules that make simplifying and solving expressions easier.
Addition and Subtraction
When adding or subtracting algebraic expressions, combine like terms.
Like terms share the same variable and exponent.
For example, in the expression 3x + 4x, you can add the coefficients: 3 + 4 = 7, resulting in 7x.
When multiplying algebraic expressions, you can apply the distributive property.
For instance, in the expression (x + 2)(x + 3), distribute each term: x*x + x*3 + 2*x + 2*3, which simplifies to x² + 5x + 6.
Dividing algebraic expressions often involves factoring.
Factor both the numerator and denominator and then cancel out common factors.
For example, in (x² – x) / x, factor the numerator: x(x – 1) / x. Canceling gives x – 1.
Tips for Simplifying Algebraic Equations Quickly
Simplifying algebraic equations can seem challenging, but with practice, students can improve their speed and accuracy.
Here are several tips to help streamline this process:
• Organize Work: Write each step clearly.
This helps you track your progress.
• Know the Rules: Familiarize yourself with algebraic rules and properties.
Apply them confidently.
• Combine Like Terms Early: At every step, look for like terms to combine.
This keeps expressions manageable.
• Use Substitution: If an equation feels complex, substitute values for variables to simplify the process.
• Practice Factoring: Master different factoring techniques, such as factoring by grouping and using the quadratic formula.
Understanding Algebraic Equations
Algebraic equations set two expressions equal to each other and often involve variables.
Solving these equations provides students with unknown values.
This process typically requires isolating the variable on one side of the equation.
Linear Equations
Linear equations, such as 2x + 5 = 15, have variables raised to the first power.
To solve linear equations, perform inverse operations.
Subtract 5, yielding 2x = 10. Then divide by 2 to find x = 5.
Quadratic Equations
Quadratic equations, like x² + 3x – 4 = 0, involve variables raised to the second power.
Students can solve these equations using various methods, such as factoring, completing the square, or applying the quadratic formula.
Factoring yields (x + 4)(x – 1) = 0. Therefore, x = -4 or x = 1.
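For comparison (a worked illustration added here): applying the quadratic formula to the same equation gives \( x = \frac{-3 \pm \sqrt{3^2 - 4(1)(-4)}}{2(1)} = \frac{-3 \pm 5}{2} \), so x = 1 or x = -4, matching the factored result.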
Other Types of Equations
Students may encounter exponential equations, rational equations, and more.
Each type requires specific strategies for solving.
Understanding the distinctions between these equations is vital for success in competitive exams.
For example, solving an exponential equation may involve taking logarithms, while rational equations may require finding a common denominator.
Practice Regularly
Consistent practice plays a crucial role in mastering algebraic expressions and equations.
Students should engage in various exercises, ranging from basic problems to more complex challenges.
Use textbooks, online resources, and problem sets to diversify practice opportunities.
Algebra forms the backbone of many competitive exams.
Developing skills in manipulating algebraic expressions is crucial for success.
By mastering basic operations, simplifying equations, and developing strong problem-solving techniques, students can confidently approach various math challenges.
Encourage students to practice regularly and seek help when needed.
With determination and the right strategies, anyone can excel in algebra, paving the way for success in competitive exams.
Read: How to Calculate and Solve for Standard Deviation | Probability
4. Geometry Basics: Understanding Shapes and Formulas
Overview of Key Geometric Concepts
Geometry forms the backbone of many competitive exams.
Understanding its core concepts is crucial for success.
Students often encounter various geometric shapes, angles, and formulas during their study sessions.
This knowledge paves the way for tackling complex problems effectively.
Let’s explore some fundamental geometric concepts that every student should master.
Angles measure the rotation between two intersecting lines.
They are vital in constructing and understanding shapes.
Key types of angles include:
• Acute angles (less than 90°)
• Right angles (exactly 90°)
• Obtuse angles (greater than 90° but less than 180°)
• Straight angles (exactly 180°)
Familiarity with basic shapes helps in recognizing and classifying more complex figures.
Key shapes include:
• Quadrilaterals (like squares and rectangles)
• Circles
• Polygons (like pentagons and hexagons)
Area represents the space within a shape.
Memorizing formulas for calculating area is essential.
Common formulas include:
• Rectangle: Area = length × width
• Triangle: Area = 0.5 × base × height
• Circle: Area = π × radius²
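For instance (an illustrative example added here): a triangle with base 10 cm and height 6 cm has area \( 0.5 \times 10 \times 6 = 30 \, \text{cm}^2 \).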
Volume measures the space occupied by a solid.
Understanding how to calculate volume is critical, especially for three-dimensional problems.
Essential volume formulas include:
• Cube: Volume = side³
• Rectangular Prism: Volume = length × width × height
• Cylinder: Volume = π × radius² × height
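For instance (an illustrative example added here): a cylinder with radius 3 cm and height 5 cm has volume \( \pi \times 3^2 \times 5 = 45\pi \approx 141.4 \, \text{cm}^3 \).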
Importance of Memorizing Main Geometric Formulas
Memorization plays a significant role in mastering geometry.
Quick reference to geometric formulas can save valuable time during exams.
Here are some compelling reasons to memorize these formulas:
• Time Efficiency: Quick recall of formulas prevents wasting time flipping through notes.
Each second counts during competitive exams, so time management is crucial.
• Confidence Building: Knowing formulas by heart boosts confidence levels.
With fewer uncertainties, students can focus on solving problems rather than searching for solutions.
• Complex Problem Solving: Many exam questions intertwine various concepts.
Memorizing formulas allows students to construct complex solutions efficiently.
• Better Comprehension: Repeated exposure to key formulas deepens understanding.
This comprehension assists in visualizing problems, which enhances overall problem-solving capabilities.
Real-life Applications of Geometry in Competition Settings
Geometry is not solely an academic subject; it has numerous real-life applications, especially in competitive environments.
Understanding its relevance helps students appreciate its significance.
Here are some ways geometry applies in real-life scenarios relevant to competitions:
• Architecture and Engineering: These fields rely heavily on geometry.
Architects and engineers use geometric principles to design structures.
Precision in calculating angles and areas is crucial for safety and aesthetics.
• Graphic Design: Graphic designers utilize geometry to create visual elements.
Understanding shapes and their properties aids in creating balanced and harmonious designs.
• Sports Strategy: Athletes and coaches apply geometric reasoning for better performance.
For example, understanding an optimal angle for a jump can improve an athlete’s result.
• Computer Graphics: In the tech industry, geometry plays a vital role.
Developers use geometric models in creating video games and simulations, emphasizing the significance of precise calculations.
• Navigation: Geometry also impacts navigation and mapping software.
Systems like GPS depend on angular calculations and geometric algorithms to provide accurate location services.
Basically, mastering geometry is vital for students aiming for success in competitive exams.
Understanding fundamental concepts like angles, shapes, area, and volume lays a strong foundation for tackling complex problems.
Memorizing essential geometric formulas enhances time efficiency, boosts confidence, and aids in complex problem-solving.
Moreover, recognizing the real-life applications of geometry underscores its importance and relevance.
Students who embrace these concepts will position themselves for success in their academic and competitive journeys.
Read: How to Calculate and Solve for Mean | Probability
5. Data Interpretation and Probability
Understanding Data Interpretation
Data interpretation is a crucial skill for competitive exams.
It involves analyzing information presented in various formats.
Students often encounter charts, graphs, and tables in their exams.
Mastering these formats can enhance comprehension and analysis skills.
Types of Data Representation
• Charts: Charts provide a visual way to present data.
They can be pie charts, bar charts, or line charts.
Each type helps represent different kinds of data effectively.
• Graphs: Graphs show the relationship between variables.
A line graph displays data trends over time.
A scatter plot helps visualize correlations between two variables.
• Tables: Tables organize data in rows and columns.
They allow quick comparisons across multiple data points.
Tables are efficient for presenting extensive datasets.
Benefits of Data Interpretation Skills
• Improved decision-making capabilities based on analytical reasoning.
• Enhanced ability to identify trends and patterns in data.
• Development of critical thinking skills necessary for problem-solving.
Basics of Probability
Probability is the study of uncertainty and chance.
It helps students understand how likely events are to occur.
Knowing the key concepts and formulas can significantly enhance problem-solving abilities in exams.
Key Concepts in Probability
• Probability of an Event: The probability of an event A is calculated as P(A) = Number of favorable outcomes / Total number of outcomes.
• Complementary Events: The probability of the complement of an event A is P(A’) = 1 – P(A).
• Independent Events: For two independent events A and B, the probability of both occurring is P(A and B) = P(A) * P(B).
• Dependent Events: For dependent events, P(A and B) = P(A) * P(B|A), where P(B|A) is the probability of B occurring after A has occurred.
• Mutually Exclusive Events: Mutually exclusive events cannot happen at the same time. Thus, P(A or B) = P(A) + P(B).
Important Probability Formulas
• Simple Probability: P(A) = Number of favorable outcomes / Total outcomes.
• Joint Probability: P(A and B) = P(A) × P(B | A) for dependent events.
• Conditional Probability: P(A | B) = P(A and B) / P(B).
• Bayes’ Theorem: P(A | B) = [P(B | A) * P(A)] / P(B).
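As a quick worked illustration (the numbers here are invented for this example): suppose 1% of items are defective, a test flags 90% of defective items, and it also flags 5% of good items. By Bayes’ Theorem, \( P(\text{defective} \mid \text{flagged}) = \frac{0.9 \times 0.01}{0.9 \times 0.01 + 0.05 \times 0.99} = \frac{0.009}{0.0585} \approx 0.154 \), i.e. only about a 15% chance, despite the test’s apparent accuracy.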
Application of Data Interpretation and Probability in Competitive Exams
Competitive exams often incorporate data interpretation and probability questions.
Understanding how these questions are framed is essential for students.
Framing of Data Interpretation Questions
• Questions may provide a pie chart depicting market shares.
Students analyze the chart to determine the largest share holder.
• Exams might feature a bar graph comparing sales figures across several years.
Students might need to identify trends or calculate percentages.
• Some questions involve tables showing test scores for various subjects.
Students may be asked to find averages or rank performances.
Framing of Probability Questions
• Questions may ask the probability of drawing a specific card from a deck.
Students calculate based on favorable and total outcomes.
• Exams may present scenarios to determine the likelihood of events occurring.
For example, calculating the probability of rolling a certain number on a die.
• Some probability questions may involve real-world applications.
For instance, predicting the weather based on given probabilities.
Tips for Mastering Data Interpretation and Probability
Students looking to excel in these areas should focus on several strategies.
These tips will help improve competency in questions during competitive exams.
Effective Study Techniques
• Practice Regularly: Consistent practice familiarizes students with different types of charts, graphs, and tables.
• Use Real Data: Analyze real-world data sets.
Websites offer access to various data for practice.
• Work with Previous Exams: Review questions from prior exams.
For example, practicing past competitive exam papers can build confidence.
• Understand Key Concepts: Ensure a solid grasp of fundamental concepts of probability, as they form the basis of all calculations.
• Discuss with Peers: Collaborate with peers to solve complex problems.
Group study can enhance understanding.
Utilizing Resources
• Online Courses: Various platforms offer courses specifically on data interpretation and probability.
• Books and Guides: Invest in math guides focusing on competitive exams.
These resources often include valuable practice questions.
• Simulations and Tools: Use software and online tools that simulate competitive exams.
These tools can provide timed practice.
Mastering data interpretation and probability is vital for success in competitive exams.
Students equipped with these skills can analyze information critically.
They can also make informed decisions based on their assessments.
Through regular practice and a strategic approach, students can enhance their performance dramatically.
This increased proficiency not only aids in exams but also proves useful in academic pursuits and real-life situations.
6. Practice Strategies for Mastery
Importance of Regular Practice and Revision
Regular practice and revision form the cornerstone of mastering math calculations.
They help students solidify their understanding of concepts and improve retention.
Over time, consistent practice enhances speed and accuracy in problem-solving.
Students who practice regularly develop a better intuition for numbers.
This intuition allows them to recognize patterns and apply appropriate strategies effectively.
Moreover, frequent revision helps counteract the forgetting curve, ensuring knowledge remains fresh and accessible during exams.
Setting aside dedicated time for practice allows students to focus deeply.
It encourages them to tackle challenging concepts without the pressure of an upcoming exam.
When students engage in regular review sessions, they can identify and rectify their weaknesses more promptly.
Resources for Practicing Math Calculations
A plethora of resources exists to help students practice their math calculations effectively.
Utilizing these resources can significantly bolster a student’s proficiency and confidence.
Here are some valuable resources that students can leverage:
• Textbooks: Standard textbooks often contain a rich variety of problems.
Students can select exercises that target specific areas of difficulty.
• Online Learning Platforms: Websites like Khan Academy and Coursera provide interactive lessons.
They offer tailored quizzes and assignments to assess understanding.
• Mobile Apps: Math apps such as Photomath and Mathway enable students to practice on the go.
These apps provide step-by-step solutions to problems and interactive practice tasks.
• Mock Tests: Taking mock tests prepares students for the exam environment.
Timed practice helps improve speed and builds exam stamina.
• YouTube Channels: Various educational YouTube channels offer video tutorials.
Visual aids can simplify complex concepts and provide different viewpoints.
Incorporating a mix of these resources can keep practice sessions engaging.
Diverse materials prevent monotony and foster a deeper understanding of mathematical concepts.
Students should evaluate which resources resonate most with their learning style.
Encouragement to Join Study Groups
Joining study groups can drastically change the dynamics of math learning.
Collaborative learning harnesses the power of peer interaction, leading to enhanced comprehension.
Students can explain concepts to each other and fill gaps in their understanding.
Study groups also provide motivation and accountability.
When students gather regularly, they encourage each other to stay focused on their goals.
The social aspect of learning often makes studying more enjoyable and less isolating.
Furthermore, group discussions can unveil different problem-solving methods.
A student might discover a new technique by observing peers.
It fosters critical thinking and encourages students to approach math from multiple perspectives.
Finally, study groups can serve as a support system during stressful exam preparations.
Having a network of peers boosts confidence and reduces anxiety.
Emphasizing collaboration leads to a more comprehensive learning experience, aiding understanding and retention.
Strategies for Effective Practice and Mastery
To maximize the effectiveness of practice, students should consider various strategies.
Implementing these approaches can enhance learning and help solidify mathematical concepts.
• Set Specific Goals: Establish clear, measurable goals for each practice session.
This helps track progress efficiently and keeps motivation high.
• Use a Timer: Practice math problems within a set time frame.
Timed sessions simulate exam conditions and improve speed.
• Review Mistakes: Analyze errors to understand why they occurred.
This reflective practice transforms mistakes into valuable learning opportunities.
• Mix Practice Problems: Combine different types of problems into practice sessions.
This approach promotes versatility and prepares students for exam variety.
• Incorporate Real-Life Applications: Relate math problems to everyday situations.
Connecting abstract concepts to real-world applications deepens understanding.
Implementing these strategies can result in exponential learning gains.
Discipline and consistency in practice yield remarkable results over time.
In fact, mastering math calculations requires dedication and an effective approach to practice.
Regular revision, leveraging diverse resources, and engaging in collaborative learning significantly enhance understanding.
Students must take proactive steps towards their math learning journey.
By adopting targeted practice strategies, they can develop the confidence necessary to excel in competitive exams.
Ultimately, success in math builds a strong foundation for academic and professional pursuits.
In this discussion, we explored six crucial math calculations.
These calculations play a vital role in helping students excel in competitive exams.
Understanding percentages, ratios, algebraic expressions, geometry, arithmetic operations, and data interpretation forms the bedrock of mathematical proficiency.
Firstly, mastering percentages enables students to quickly calculate changes and comparisons.
This skill helps in tackling questions related to profit and loss effectively.
Secondly, knowing ratios aids in simplifying complex problems and offers a clear understanding of relationships between quantities.
Algebraic expressions are essential for solving equations efficiently.
Students who grasp this concept can tackle higher-level math problems with greater ease.
In geometry, visualizing shapes and understanding properties helps students answer spatial problems accurately.
Moreover, mastering arithmetic operations strengthens a student’s foundation.
Quick addition, subtraction, multiplication, and division enhance accuracy and speed during exams.
Lastly, data interpretation equips students to analyze graphs, charts, and tables, a skill necessary for many competitive assessments.
Each of these calculations contributes significantly to problem-solving abilities.
They also improve time management, a crucial skill in high-pressure exam situations.
By honing these calculations, students build confidence in their mathematical capabilities.
In general, mastering these essential math calculations can greatly enhance exam performance.
Students should prioritize practice in these areas to gain a competitive edge.
Each calculation discussed empowers students to solve problems more effectively and efficiently.
As you embark on your preparation journey, remember that proficiency in these calculations opens doors to success.
Dedication to mastering these skills will set you apart in any competitive environment.
Embrace the challenge, practice diligently, and watch your performance soar in your future endeavors!
Application of Coarse Grained Drag Law In Computational Fluid Dynamics Simulations of Fluidized Beds
Scale-up of fluidized beds for industrial applications has typically been carried out using dimensional analysis similar to single-phase fluid flow. However, it is impossible to keep all
dimensionless groups constant, and therefore impossible to accurately scale up industrial fluidized solids processes using traditional scaling arguments. Because of this inability to fully understand
factors affecting scale-up, designs based on small cold-flow experiments or even larger pilot units may overlook critical factors influencing operability of commercial units.
Computational fluid dynamics (CFD) has the potential to directly simulate behavior of reactive commercial scale units, thus overcoming gaps in scale-up methodology. However, CFD technology itself is
not without limitations. The popular Eulerian-Eulerian approach (which treats gas and solids phases as interpenetrating continua) suffers from the dependence of predicted dynamics on the
computational mesh resolution. Because of this grid dependence, scale-up of the CFD simulations themselves can be difficult or impractical since the grid resolution must be held constant between a
typically small validation experiment and a large commercial scale simulation.
The dependence of predicted dynamics on grid resolution arises from ignoring small-scale flow structures which are not adequately resolved with coarse grids. The use of highly resolved computational
grids which fully resolve small-scale flow structures can overcome this hurdle (analogous to DNS for single-phase flows), but such fine grid resolutions are impractical for commercial scale systems.
Another more computationally efficient method uses so-called coarse grained models which adjust parameters such as interphase drag with changing grid size to capture effects of the small-scale flow
structures on overall bed dynamics. Thus, a commercial gas fluidized solids unit can be simulated at relatively low mesh resolution while capturing most features of the overall dynamics.
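To make the idea concrete, here is a minimal, hypothetical Python sketch of a coarse grained drag law: a drag coefficient from a homogeneous (fine-grid) correlation is scaled by a cell-size-dependent correction factor, so that unresolved sub-grid structures reduce the effective interphase drag on coarse meshes. The function name, the functional form, and all constants are illustrative placeholders, not the specific model used in this work.

```python
def effective_drag(beta_micro, cell_size, length_scale=0.01, a=1.0, b=1.0):
    """Coarse grained drag coefficient (illustrative placeholder only).

    beta_micro         -- drag coefficient from a homogeneous (fine-grid) correlation
    cell_size          -- edge length of the computational cell [m]
    length_scale, a, b -- assumed correction parameters for this sketch
    """
    # Correction factor H -> 1 as the grid is refined (cell_size -> 0),
    # recovering the microscopic drag; H < 1 on coarse grids.
    H = 1.0 / (1.0 + a * (cell_size / length_scale) ** b)
    return H * beta_micro

print(effective_drag(1000.0, 0.001))  # fine grid: ~909, close to the microscopic value
print(effective_drag(1000.0, 0.05))   # coarse grid: ~167, strongly reduced effective drag
```

In an actual Eulerian-Eulerian solver, a correction of this kind would be evaluated cell by cell inside the gas-solids momentum-exchange term.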
In this paper, we use such a coarse grained Eulerian-Eulerian model to simulate dynamics of a small-scale cold-flow model of a fluid coker. By comparing our CFD results with experimental
measurements, we demonstrate the utility of coarse grained models for fluid-solids flow and also the accuracy of our simulations. We are able to accurately predict gross flow structures in the
fluidized bed as well as local voidage and velocity profiles, thus building confidence in the simulation of commercial scale cokers.
Deep dive: Cancellation rate in SaaS business models
I wanted to expand on the practical and mathematical implementations of the cancellation rate I referred to in last week’s post.
Why cancellation rate is so important
As a preamble to the metrics, it’s useful to know what you’re measuring and why it’s vital.
[Cancellation rate] = [product utility] + [service quality] + [acceptable price]
I put in these particular elements because I did a study of the reasons people cancel at WP Engine, and these are the main reasons for cancellation. We log every cancellation — spending time running
after folks to wring out the cause — so we can deduce exactly what we can do to prevent it in future. (Of course you should do this too and get your own data.)
These three factors are, of course, critical to a healthy, growing startup, and yet individually they’re impossible to measure as precisely and easily as cancellation rate. (I’ve never seen a graph
of “usefulness of the product.”) So although it’s a single number combining several factors, and we know that averaging can obfuscate, I think cancellation rate is a good overall measure of how well
the company is servicing its customers, and with tools like our detailed log you can still break apart the single number into concrete, actionable influences.
Beyond the analytical breakdown, I have an emotional attachment to this number, because whenever someone cancels I think about what had to happen to get them to this point, and it kills me. Of
course people cancel only after they’re already a customer, which means they’ve already gotten through the barriers preventing them from buying: finding your website, not bouncing off the home page,
understanding what you offer, deciding it’s something they want, researching the competition, signing up, configuring settings, entering a credit card number, rolling through tech support, and maybe
even announcing to some Facebook “friends” that they just found something cool.
Barely anyone on Earth will ever power through this gauntlet. I turn myself inside out just to get a thousand people to bounce off the home page, praying that one makes it through to the end, like a
frog laying ten thousand eggs hoping three survive long enough to do the same.
And then, after all that… they cancel! Son of a bitch! I have to know why and I have to do something about it!
So we’re going to measure this bastard, and we’re going to compute another useful thing from it, but it turns out to be harder than it first appears.
The many kinds of cancellation rate
A “rate” is a ratio of “something divided by time,” and it’s unclear what the something is and over how much time.
Unfortunately, like the many moods of Binky, there are many kinds of so-called “cancellation rate,” each subtly but critically different.
All of the following definitions are meaningful, but each measures something different. It’s useful to describe each so you can decide which (or several!) make sense in your case:
• Percentage of current customers who canceled in a given day/week/month. If this spikes, something just changed in how you’re behaving across the board. We had a spike in this metric in February
at WP Engine when our Internet provider themselves had a datacenter-wide catastrophe which brought us down for twelve hours; of course not all spikes will have such obvious causes. We’ve also
rolled out new initiatives designed to reduce this rate, and for the most part we’ve been successful. When selecting the duration component, it has to be long enough to get a stable number, but
not so long that sharp changes in reality take weeks to register on the chart; I recommend using the smallest possible time period without seeing the metric hit “0%.”
• Percentage of new customers in a given month which end up canceling at any later date. This is often called “cohort analysis,” where the new customers in January are tracked together as one
cohort, new customers from February as another, etc.. The idea is to compare the behavior of each cohort against subsequent cohorts, but comparing similar periods in those customers’ lifespans.
For example, how many who started in Jan cancelled in Feb compared to how many who started in Feb cancelled in Mar, and so on. This determines not whether your service is improving across the
board, but whether new customers are getting a better new experience. This is especially useful if — like most companies — you have a higher cancellation rate with new customers than with old
ones, and you (think you) have taken steps to improve that situation, and want to measure that progress.
• Absolute number of cancellations per week. This metric is supposed to increase over time at a growing SaaS company simply because there are more and more customers available to cancel. Still, you
should also be getting better at preventing cancellations, or at least certain types of cancellations, such as crappy tech support or lacking a feature. Our cancellation log implicitly represents
this metric because we review it weekly to look for trends. It’s not something we feel is also useful to graph because changes in the number aren’t necessarily actionable.
• Percentage of all customers who’ve cancelled over the lifetime of the company to-date. This is another way of measuring an “in/out” ratio — plotting relative number of new customers arriving
versus customers exiting. For a company laser-focussed on accelerating the number of active users, it might be actually worth having high cancellation rates if it meant an even higher acquisition
rate. This metric measures this relative change, so long it’s decreasing (or not increasing), you might be happy even if you’re not watching or optimizing for the cancellation rate in isolation.
For a quality-service company like WP Engine a high cancellation rate is a sign of terminal cancer even if acquisition rates are also increasing, so this metric isn’t useful to us. It’s also not
particularly useful when, again like WP Engine, the monthly cancellation rate is nice and low while new customer acquisition is healthy, because in this case — by definition — this metric
diminishes asymptotically. Whenever you know for certain what a metric will do, it’s not useful or actionable to measure it!
• Cancellations as deactivations. For paid services like WP Engine, a “cancellation” is literally “the customer called to cancel, or clicked the ‘stop charging my card’ button.” For many
consumer-Internet companies, most of your users aren’t paying, and therefore almost none will bother to take a cancel “action” even if they’re effectively cancelled. Rather, a lack of activity
signifies an effective cancellation. In this case, you need to have a clear definition of an “active user” (e.g. “has logged in at least once in the past week”) and consider the user “cancelled”
if they were active before but now are not.
• Revisionist history. What happens when a customer cancels but then returns? This is relatively rare in a company like WP Engine but is common for those consumer-Internet companies where
cancellation is identical to passive deactivation, because a user might be reinvigorated by a newsletter or a tweet. In that case, you can decide that user didn’t cancel after all and update
historical data in your charts. That’s perfectly fine, just as it’s perfectly fine that a customer today might become a cancellation tomorrow but that’s not yet in the chart either.
Of all these techniques, the one that is perhaps most important and useful for us at WP Engine — and likely you too — is cancellation rate by age. I call this a “continuous version of cohort analysis.”
You take the number of cancellations in a given time period (we use months), broken out by age (e.g. younger than 30 days old versus older), and then compute the “cancellation rate” as a percentage
of all customers in that age group who cancelled. Like many companies, we’ve found that people who cancel soon after sign-up do so for very different reasons than those who cancel after a year, and
we care about those two groups differently, and we act on them differently.
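As a concrete illustration (a sketch, not WP Engine's actual tooling), the following Python snippet computes a cancellation rate by age for one month: count cancellations and active customers in each age bucket and divide. The customer records and the one-month cutoff for the "young" bucket are made-up assumptions.

```python
def churn_by_age(customers, month, young_max_age=1):
    """Return a dict of cancellation rates per age bucket for one month.

    customers     -- list of (signup_month, cancel_month or None) pairs
    month         -- the month being measured (integer index)
    young_max_age -- customers at most this many months old count as "young"
    """
    counts = {"young": [0, 0], "old": [0, 0]}  # [cancellations, active customers]
    for signup, cancel in customers:
        if signup > month or (cancel is not None and cancel < month):
            continue  # not yet a customer, or already gone before this month
        bucket = "young" if month - signup <= young_max_age else "old"
        counts[bucket][1] += 1
        if cancel == month:
            counts[bucket][0] += 1
    return {b: (c / n if n else 0.0) for b, (c, n) in counts.items()}

customers = [(0, 1), (0, None), (1, 2), (2, None), (3, 3), (3, None)]
print(churn_by_age(customers, month=3))  # {'young': 0.33..., 'old': 0.0}
```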
For example, quick cancels are often due to situations like “I decided I didn't want a blog after all” or even “My son used my credit card to set up my website but I'm not going to pay for
hosting.” We probably can’t affect these circumstances much, or at least it’s not worth our time to try. So we’ll never get the short-term cancellation rate lower than a certain number — a number we
can actually compute by marking these in the cancellation log and totting them up separately.
This is useful, not only because it sets a sensible target “floor” for our activities that do reduce short-term cancellation, but because it calibrates our expectations on how much front-end sign-ups
we need to achieve our growth targets. It’s an “automatic drag” that we can just factor in to our projections.
On the other end of the time spectrum, every time we lose a long-term customer — which is thankfully almost never, knock wood — it’s cause for us to sit up and do a post-mortem. That rate had better
be very close to 0%. If it ever pops it would be a “tools-down, everyone get on that right now” sort of problem.
I do worry about this happening as we grow, as should any company. It’s common knowledge that expanding companies have a hard time maintaining the level of service which earned them that growth. You
sign up customers faster than your ability to hire quality people, so either existing people are stretched thin or you make the more fatal error of lowering standards to fill chairs. It’s harder to
train people and keep your culture going, and those founders who were brilliant at seeking market-fit and constructing the foundation of a startup might be ill-suited for scaling that organization.
It appears some of our WP Engine competitors are experiencing exactly this, right now. An increase in our long-term cancellation rate will be our first warning sign that we’re not handling the growth
properly, so we watch it like a hawk.
LTV, my way
The other useful thing we do with cancellation rate is to compute LTV (customer LifeTime Value), but I don’t use the simplistic technique espoused by many others.
“LTV” means “the total revenue you’ll get from a customer over its lifetime.” For a simple subscription business model the formula is easy to write but hard to compute:
[LTV] = [monthly revenue] × [number of months in lifetime]
The hard part is “number of months,” because of course you don’t know how many months you’ll keep a customer until that customer leaves, and hopefully most haven’t left yet. Worse, if you’re like WP
Engine you haven’t been around long enough to know how long most customers will stick around. For example, WP Engine is 15 months old and we’ve retained 95% of our customers, but the longest anyone
has stayed is by definition 15 months, and because we’re growing, the average is probably around 6! But how long will all these folks stick around if we keep at it for another five or ten years?
P.S. Who cares about LTV? It’s the main way to determine whether your company is profitable and how much money you can (should?) spend on marketing, but that subject is covered well elsewhere so
I’ll skip this and get back to geeking out over math.
It turns out you can compute the elusive “expected number of months” from your cancellation rate, even if you only have a few months of data to go on.
The typical formula is derived like so:
• Let p be the percentage of current customers who cancel in a given month. For example, if in March you had 200 paying customers and 10 cancelled, p = 0.05.
• In any month of N customers, Np will cancel, leaving N(1-p).
• Assuming you didn’t add any new customers, you would bleed customers according to that formula, every month. The customers remaining would be N(1-p) after the first month, then N(1-p)(1-p) after
the second, then N(1-p)(1-p)(1-p) after the third, and so forth.
• In each month we get $R revenue per customer, so that means the first month we get $RN, then $RN(1-p), then $RN(1-p)(1-p), and so forth.
• To compute the “lifetime” amount of revenue these customers provide, you add up this infinite series, so: $RN + $RN(1-p) + $RN(1-p)(1-p) + $RN(1-p)(1-p)(1-p) + …
• Factor out the $RN, so you get: $RN × [1 + (1-p) + (1-p)(1-p) + … ]
• That bracketed infinite series can be rewritten as just 1/p.
• So total expected revenue is $RN/p.
• And the average expected revenue per customer is $R/p.
• And the average number of months per customer is 1/p.
So with our example of a 5% monthly cancellation rate, p = 0.05 so expected months is 20. If it’s a base-level WP Engine customer, that’s 20 months at $50 per month, so $1000 total LTV.
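A quick sanity check of that arithmetic, as a minimal sketch assuming a constant monthly cancellation rate p and flat monthly revenue R:

```python
def simple_ltv(monthly_revenue, monthly_churn):
    expected_months = 1.0 / monthly_churn   # the 1/p result from the series above
    return expected_months, monthly_revenue * expected_months

months, ltv = simple_ltv(monthly_revenue=50, monthly_churn=0.05)
print(months, ltv)  # 20.0 months and 1000.0 dollars, matching the example above
```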
It’s nice when the math turns out simple, but that doesn’t mean it’s right.
It is right if your cancellation rate is indeed p every month, for every customer. But I just got through explaining that this isn’t at all the case. Newer customers tend to have high cancellation
rates; older ones much smaller. In fact it’s not unusual for there to be a 10x difference between the short-term and long-term rates.
Therefore, I suggest a hybrid approach: First compute expected survivors over the short-term cancellation period, then use the “infinite sum” technique for the long-term customers.
Running an example will make this clear. Using the “cancellation rate by age” metric, suppose for the first 3 months of life your customers' cancellation rate is 15%/mo, but after that the rate is 3%/mo.
So you retain 85% of your customers after the first month, 85% of those after the second, etc., for three months, for a retention after three months of (0.85)^3 = 0.61.
I then completely ignore the revenue received by those 39% of customers who stuck around for only a few months. Sure they gave us a little money, but surely it’s negated by the time-cost of messing
with them over tech support and processing cancellations and refunds. I want LTV to be a conservative metric, so I ignore this revenue.
Now, with 61% of my original customers remaining, it makes sense to use the formula above to predict they’ll stick around for another 1/0.03 = 33 months, and since they’ve already made it through 3
months, that’s a grand total of 36 months.
So, for a brand new customer, there’s a 61% chance they’ll deliver 36 months’ of revenue, and 39% chance I get nothing (significant), for an expected 0.61 × 36 = 22 months on average.
To demonstrate why this method, while more tedious, is superior to the simplistic one, observe that if you ignore age groups, this same company would appear to have a 6% cancellation rate (which
conceals the interesting customer behavior) and by the usual formula would have expected months of 1/0.06 = 16 months, which is incorrect by almost 40%. That’s a lot of error!
For those of you who like formulas, we can compile the example down to variables:
• r = short-term cancellation rate (e.g. 0.15)
• p = long-term cancellation rate (e.g. 0.03)
• s = number of months in the “short-term” age group (e.g. 3)
• (1-r)^s × (s + 1/p) = expected months
Yes, this is how I compute LTV at WP Engine, except worse! Because we've decided there's actually three distinct age-groups, plus we treat coupon-based customers separately because they have a
significantly higher cancellation rate, often for reasons that have nothing to do with our behavior.
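Here is a minimal sketch of the age-segmented calculation described above, using the same illustrative numbers (15%/mo churn for the first 3 months, 3%/mo afterwards, $50/mo revenue). As in the text, revenue from customers who churn inside the short-term window is conservatively ignored.

```python
def hybrid_ltv(monthly_revenue, short_churn, long_churn, short_months):
    survive_short = (1 - short_churn) ** short_months          # e.g. 0.85^3 ~ 0.61
    expected_months = survive_short * (short_months + 1.0 / long_churn)
    return expected_months, monthly_revenue * expected_months

months, ltv = hybrid_ltv(50, short_churn=0.15, long_churn=0.03, short_months=3)
print(round(months, 1), round(ltv))  # about 22.3 months and about $1116, close to the 22 months above
```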
As a final note on LTV, I personally don’t care much about lifetime revenue, and prefer to compute lifetime operational profit, meaning the net revenue after taking out known costs of service. In the
case of WP Engine, we knock off 2% for credit card fees and a certain amount for hosting and bandwidth costs.
Once you get down to that, you can easily answer questions like “how much money can we spend to acquire a customer.” For example, if LTV (net) is $500, it’s a pretty easy decision to spend $50 or
even $150 to acquire a customer. That might mean AdWords, tradeshows, give-aways, coupons, affiliate programs, or anything! Extremely useful for confidently measuring marketing campaigns against
profitable customer growth.
I guess that last sentence sounded like abstract business-speak drivel. But it’s true, and getting a solid handle on your cancellation rate is the key.
Besides, mathematical formulae aside, it’s perhaps the single best and easiest measure of whether you’re actually delivering on your promises to customers.
And what’s more important than that?
Did you make it all the way to here? Let’s continue the tips and tricks in the comments section.
39 responses to “Deep dive: Cancellation rate in SaaS business models”
1. I’d be interested to see the comparison between _enterprise_ SaaS and normal $39.95/month SaaS.
The churn on enterprise SaaS would presumably be incredibly low, because once a company invests in integration, workflows, training, etc. for a particular system, it’s going to be very hard to
□ Great point, but not necessarily. For HubSpot, for example, they have plenty of (100s, if not 1000s) enterprise-sized customers, and yet measuring CHI (which is their prediction of customer
happiness, defined as the likelihood that they’ll cancel) still drives almost everything that every employee does.
If the contract is yearly instead of monthly, you could make the argument that churn is necessarily over a longer timeframe, yes. But that doesn’t mean the churn is immaterial! It means that
straight up “cancels” — i.e. customer stops paying — isn’t the right metric for you. Rather, you need something more like CHI — something you can measure monthly or even daily which tells you
essentially the same thing, so you can react faster and keep those customers.
Because the flip-side to your argument is that enterprise customer acquisition is also extremely expensive, so you HAVE to get more than a year of service out of them, so cancellation is
still vital.
2. Great Post!
3. You might want to check out my posts on Renewal failures in subscription SAAS business at:
4. Didn’t you say that cancellation rates were not the most important thing in “The full story of “the one important thing” for startups”?
BTW, your blog’s home page takes *forever* to load. Can’t you get the guys at WP Engine to improve that…? :-)
□ Yes, I said that for us it’s not the #1 thing. But it’s still important!
☆ Glad to see your response. I agree.
Actually when I read your other post I saw the logic of it but it left me with a pit in my stomach that made me feel bad. While numerically more Google AdWords seems to make more logical
sense, working on cancellation rates makes more intuitive sense to me. Both are important but I think the latter is critical. Glad to see you didn’t toss it out with the bathwater.
○ Here’s a better way to put it:
While cancellation rate stays low, signups are the most important thing.
If we see a rise in cancellations, that means the company is literally broken, which means growth is no longer important.
5. I’m going to find a way to apply this to my semiconductor business IF IT KILLS ME.
We measure things by the socket. (specific customer, specific pn#.) Our cancellations are usually driven by customer’s product lifecycles and/or end market success, plus our $R is usually a
percentage of the total. (A commodity split btw 2-4 suppliers.)
And we’ve got a significant leadtime between “signup” (awareness, samples, price negotiation, ….) and 1st production shipments. Yes, I know. The measurements are more complicated.
My point? The model works – if you do a decent job of segmenting the pretenders from the winners aka “cancellation by age”.
Very nice post. Special props for the use of Binky.
□ Good for you! Check out Dharmesh Shah’s Business of Software talk from 2010 about “CHI.” They have a way of trying to predict “customer happiness,” meaning the % chance that they’re going to
cancel, based on correlating various factors.
The result is that far before they actually technically “cancel” you can tell it’s not looking good, and either take an action to fix or at least plan around it, both for forecasting and the
amount of time you spend with them.
6. You need to look into a great statistical simulation method called Monte Carlo. I won’t get into the details here as you can just search and come up with thousands of pages of info, but it’s
greatest benefit (in my mind) is getting out of the precise number mindset.
At the end of the analysis it will provide you with a probability for each number instead of an exact number. For example, instead of saying that LTV will be $1,000 it will say:
10% probability of $10
20% probability of $200
50% probability of $1,000
20% probability of $1,200
10% probability of $2,000
…or something like that. Much more useful in my mind. Let me know if you’re interested in this concept and I could whip up an example for you based on your real numbers.
□ Yup, I’m familiar. It’s a great idea — would be neat to have a web tool where you enter in a few parameters and you get the distribution instead of a number.
Note that it’s ALSO useful to have a single number when you’re combining with other things, to be simpler.
7. Have you performed cohort analysis by “why the customer signed up with us?” If you do decide you can afford give-aways, it may impact how long they stick around. The phone companies know this,
as they lock in customers to pay for the give-away (discounted new phone, etc.).
□ Terrific idea. We do segment “coupon” from “non-coupon” but your idea is better.
8. Big business may have the cash, but small businesses have more votes.
It is important that small business owners counter these financial
contributions to Senators and Representatives by making sure that their
voices are heard. Write your Congressman or Senator and tell them to
support small businesses.
9. Thanks for the great post!
You've mentioned 3 stages of churn rate, but I was wondering, if you have 15 months of data, couldn't you use 15 stages? Your churn rate will probably converge after some months anyway, and you
could use the last rate as the long term churn.
Does it sounds correct? Too complicated?
□ You could but it didn’t change the numbers much.
Usually there’s not that many forces dictating cancellation, and your model shouldn’t be more complex than reality. That is, people cancel after 8 months pretty much at the same rate and
reason as after 9.
Jason Cohen
10. How many month’s data should one take to compute an accurate picture of LTV and LT? 1, 3, 6 or more?
□ I would plot it at least monthly, more often if you have more transactions, and watch it over time.
The trend is often more telling than the absolute number.
11. Thank you for the very comprehensive post! I truly think it’s important to understand the cancellation rate metric and how to calculate it!
Creating a cohort analysis based on “time to cancel” can indeed allow focusing on long-term users, however I believe cohort can also be measured based on engagement level.
I’ve quoted your post and added my thoughts in Totango’s blog: http://blog.totango.com/2011/10/3-ways-to-do-cohort-analysis-on-saas-churn/
12. Excellent article, thanks for sharing all this with the community! I had a tough time finding how others compute LTV.
13. Grant
Thornton Georgia is a team of experienced public accountants and auditors,
specialist advisers in finance, business and management, as well as tax and
legal advisers working at Tbilisi office.
14. An amazing panel discussion on Cloud taking place in Toronto next month: http://www.eventbrite.com/event/2218230788/efblike
15. Jason, great post, very comprehensive.
Question: Have you trended your findings/results over time? Are the numbers getting worse or better, as you take appropriate action(s)?
Thinking from a future protectionist viewpoint, I would want to know the underlying root causes of ‘why’ my customers were cancelling, and build a top 10 list with an appropriate prevention and
recovery list of tactics for each.
Applying the 80/20 rule to these lists, I would then build in tactical ways for the customer to experience these actions before cancelling (albeit in a different tense since they wouldn’t have
actually cancelled yet!)
One final consideration. Gym Memberships. Check out search archives of Harvard Business Review. They did an awesome study on why people cancel and one key point was ongoing usage of the service,
caused by psychological barriers that ‘creep’ over customers, over time. So, Gyms focussed on the enjoyment factor of the experience: free coffee, newspapers, handy health tips, humorous posters,
attractive assistants etc etc.
What can SaaS providers do?
Splash pages with cartoons on logout? Free Apps, Special reports via email, customer ezine tackling other challenges, links to value add… The list is huge. Surprise and fun is also important.
Built on tip top support and reliability.
Any thoughts on these and your original post??
□ On trending over time, you *must* do that. The trend is more important than the actual number. In fact different businesses will naturally have different numbers — it's whether it's not growing
(or, if you’re actively trying to affect it, if it’s shrinking) that’s most interesting.
16. Jason, I agree with this thoughtful and thorough analysis, thank you for your insights.
I would like to contribute one additional layer of analysis for determining how much to invest in customer acquisition. When acquiring a customer to a SaaS offering, you’re paying the full cost
of acquisition upfront (the Google CPC, the trade show fee, etc.) only to receive revenues from the customer in monthly increments over their entire lifespan.
To determine an appropriate acquisition cost we must look at the present value of the future revenue. Using your simple example of $50 per month over 20 months yields $1,000 nominally. But the
present value of the $1,000 may only be $900 or $950. This present value of customer revenue is the number you should use to determine the appropriate investment in customer acquisition.
□ That’s true. Although nowadays interest rates are such that you probably can ignore NPV.
I think you need CAC very much less than LTV, so a small change in LTV shouldn’t change your mind.
Alternately, this is a good argument for getting annual prepay, even providing a discount for it. The amount of the discount is of course your NPV calculation!
Jason Cohen
17. A smart bear indeed! Thank you for this great post.
I understand cancellation rates. Can you share your thoughts on how life time value can be computed if the product is not a periodic payment? We at http://graduatetutor.com/ provide private
tutoring for MBAs. So by definition it is personalized and students can use an hour a week, or a few hours a day and stay for just a day or a few weeks or months/years too.
□ No matter the payment system, you’re answering this question:
What is the expected value for total revenue?
“Expected value” is a statistical concept which could be loosely interpreted as “weighted average.” It’s not the “most likely value” but rather the average, weighted by the probability of
each value.
So a rough way to do it would be to *convert* your complex thing into an average monthly rate. For *each* customer you compute:
equiv_monthly_rate == total_revenue / total_months
Then you could average those rates to get your standard monthly rate, then use the logic from this article.
Of course using averages like this covers up important data, like how spiky it might be, or how certain customers might take hours consistently and others don't, and all sorts of other things.
But it could at least get you a ballpark answer, and something you could track over time.
18. Any opinion about billing contributing/causing some cancellations? What I am asking is, have you analyzed, say, the model where a credit card is on file and monthly monies are just charged as
opposed to a quarterly or annual large invoice, where customers have to be notified that payment is due and they then realize now is the time to cancel if I plan to? They might just keep paying
if a credit card or direct debit were automatically taking the money without any red-alerts sounding ?
To bring it back to cancellation rate calculations, I am curious if anyone has analyzed how billing models affect cancellation rates, and if one model works better than another for retention. My
company now requires annual commitments for its SaaS service, but you can pay monthly with a credit card on file, quarterly via credit card, wire transfer, or invoiced payment, or annually (same
options, with a deep discount for this annual prepayment). My suspicion is that when we give a company 30 days warning that their annual payment is due, they start to really look at the service
hard and decide at times that this is the time to get out from under. My last company only did credit card or direct debit, unless the customer was very large, and then we were willing to manually bill them.
□ That’s a great question, and no I don’t have data of my own.
It’s probably true that the smaller the payment and the less fuss is made over it, the less likely the customer is to cancel.
From a financial perspective that’s important because that’s money in your pocket.
From a “healthy business” perspective that’s NOT good because it’s masking the truth, which is that people don’t really need your stuff, which is a more fundamental problem.
☆ I agree that customers cancelling yearly prepayments/commitments brings a very valuable signal to the company. Did they not renew because price is too high in comparison with competitors?
A lack of new features compared to competitors? Customer service quality has gone down? Etc.
In other words, you’re gaining valuable information that you would otherwise not receive from customers who don’t think twice about a perceived lower cost (because of lower monthly
payments). Customers will do work for you that you should be doing anyway (reviewing price and value compared with competitive options), so make sure to capture the reason for these
cancellations very diligently.
□ It depends. Annual commitments definitely reduce churn when you have an average lifetime less than 12 months. However, this can and does affect signups…some just want to use the service for a
short time not a long time. That’s where it gets difficult to continue to show value. There are so many ways to manipulate these numbers. Starting from how and how often you communicate with
them. This is typically a big trigger because you may be reminding them to quit. A better strategy is to automatically renew and require an opt-out to unsubscribe.
19. Hi Jason,
Some interesting stuff here. I’ve done some similar work on retention and the marginal advantage of keeping SaaS customers longer than average. Take a look and let me know what you think.
20. Thanks Jason. Will be something I refer to when I launch my own SAAS :)
21. Hi Jason, thanks for the great article! You mentioned that at WP Engine, you use three distinct age groups. I too am trying to calculate tenure for cohorts with three different “stages”, and
wanted to check that my math is right. Would the formula for lifespan of a cohort with three different age groups be (1-r)^s * (1-x)^n * (1 + 1/p), where n = number of months in the “medium
term" age group and x is the churn rate of that group?
Would really appreciate any guidance! Thanks!
□ Yes, except the last is just 1/p without adding one.
☆ Hey Jason, thanks so much for the quick reply. I thought about it more and after doing some math, I think it should actually be (prob reaching month s)*(s) + (prob reaching month n)*(n) +
(prob reaching month n+1)(expected lifetime for month n+1 onwards). This translates to s*(1-r)^s + n*(1-r)^s * (1-x)^(n-s) + (1/d)*(1-r)^s * (1-x)^(n-s) * (1-d), where n = number of
months in the medium-age group, x is that group’s churn rate, and d is the churn rate of the oldest group. An example equation would be (1-0.3)^1 + 3(1-0.3)(1-0.2)^2 + (1/0.1)*(1-0.3)
(1-0.2)^2(1-0.1)…does this make sense? Thanks so much again!
○ It's hard to follow. I think you might be skipping the fact that "chance to get to n" is really "chance to get to s" times "chance to get to n-s" in the first part, though maybe it's
right in the second example.
Best might be to just model every month in a spreadsheet, one per line. Easier to think through what is happening each month, and tweak parameters or the model.
09 Elementary Row Operations, Gauss Elimination, Inverse Matrices, Inverse Theorem
Lecture from 16.10.2024 | Video: Videos ETHZ
Elementary Row Operations
Why Elementary Row Operations Preserve the Solution Set
Each elementary row operation corresponds to an invertible transformation of the system of equations. Since each operation is invertible, we can always reverse the operation to recover the original
system, implying that the two systems (before and after applying operations) are equivalent.
Formally: (Lemma 3.3)
If we obtain $(\tilde{A}, \tilde{b})$ from $(A, b)$ by an elementary row operation, then there exists an invertible matrix $E$ such that $\tilde{A} = EA$ and $\tilde{b} = Eb$. Hence, every solution of $Ax = b$ is a solution of $\tilde{A}x = \tilde{b}$ (and, since $E$ is invertible, vice versa).
Intuitively, if we have an $x$ which works out for $Ax = b$ and we simply multiply both sides by the same matrix $E$, then $EAx = Eb$ must be true too.
Gauss Elimination
Theorem 3.5
Let $Ax = b$ be a system of $n$ equations in $n$ variables. The following statements are equivalent:
1. Gauss Elimination works out
2. The columns of $A$ are linearly independent
We'll show (i) ⇒ (ii) and not (i) ⇒ not (ii) (contrapositive).
(i) ⇒ (ii):
If Gauss elimination succeeded, then we transformed $A$ into $U$, an upper triangular matrix with $u_{ii} \neq 0$ for all $i$. $U$ has linearly independent columns (i.e. no column is a linear combination of the previous ones). It becomes trivial to see that this is the case, since for every pivot the previous columns all have zeros in that position. And since $U = EA$ with $E$ invertible, the columns of $A$ are linearly independent as well.
not (i) ⇒ not (ii)
If Gauss elimination fails, then at some column $j$ the pivot is zero (no non-zero pivot can be found in that column).

The part before column $j$ is alright, because its pivots are non-zero, but at column $j$ we have a zero pivot. Now we need to show that the columns of this matrix are linearly dependent.

To do that we'll construct coefficients $\lambda_1, \dots, \lambda_j$, not all zero, such that $\lambda_1 u^{(1)} + \dots + \lambda_j u^{(j)} = 0$, where $u^{(1)}, \dots, u^{(j)}$ are the first $j$ columns (note that all of them vanish from row $j$ downwards).

Let us set $\lambda_j = 1$ and choose the other $\lambda_i$ such that:

$$\lambda_1 u^{(1)} + \dots + \lambda_{j-1} u^{(j-1)} = -u^{(j)}$$

Now this equation will have a solution, since restricted to the first $j-1$ rows the first $j-1$ columns form an upper triangular system with non-zero pivots, which means it can be easily solved by back substitution (see: Gauss Elimination). We'll then get a non-zero vector of coefficients, implying linear dependence. Since row operations preserve linear (in)dependence of the columns, the columns of $A$ are linearly dependent as well.
The runtime of Gauss Elimination is $O(n^3)$.
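For concreteness, here is a minimal Python sketch (not part of the original notes) of Gauss elimination without row exchanges, assuming every pivot encountered is non-zero as in Theorem 3.5; the nested loops of the forward elimination are where the $O(n^3)$ cost comes from.

```python
import numpy as np

def gauss_solve(A, b):
    """Solve Ax = b by forward elimination and back substitution (no pivoting)."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination, O(n^3): eliminate the entries below each pivot.
    for j in range(n):
        if A[j, j] == 0:
            raise ValueError("zero pivot: Gauss elimination fails here")
        for i in range(j + 1, n):
            factor = A[i, j] / A[j, j]
            A[i, j:] -= factor * A[j, j:]
            b[i] -= factor * b[j]
    # Back substitution on the upper triangular system, O(n^2).
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(gauss_solve(A, b), np.linalg.solve(A, b))  # both approximately [0.8, 1.4]
```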
Inverse Matrices
In simpler terms, the inverse of a matrix “undoes” the effect of that matrix, similar to how division undoes multiplication for real numbers. For example, if you apply a matrix transformation to a
vector, applying the inverse matrix transformation will return the vector to its original state.
Invertible Matrices
Not all square matrices are invertible (i.e. not all linear transformations are undo-able; for example, the zero matrix sends every vector to $0$, and that cannot be undone).
Case 1x1: for $A = (a)$ we have $A^{-1} = (1/a)$, provided that $a \neq 0$.
Case 2x2: take $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$.
1. Let's first make an augmented matrix:
$$\left(\begin{array}{cc|cc} a & b & 1 & 0 \\ c & d & 0 & 1 \end{array}\right)$$
2. We'll make the pivot in the first row $= 1$. If $a \neq 0$, divide the first row by $a$:
$$\left(\begin{array}{cc|cc} 1 & \tfrac{b}{a} & \tfrac{1}{a} & 0 \\ c & d & 0 & 1 \end{array}\right)$$
3. We'll now eliminate the first element in the second row by subtracting $c$ times the first row:
$$\left(\begin{array}{cc|cc} 1 & \tfrac{b}{a} & \tfrac{1}{a} & 0 \\ 0 & d - \tfrac{cb}{a} & -\tfrac{c}{a} & 1 \end{array}\right)$$
which simplifies to (using $d - \tfrac{cb}{a} = \tfrac{ad - bc}{a}$):
$$\left(\begin{array}{cc|cc} 1 & \tfrac{b}{a} & \tfrac{1}{a} & 0 \\ 0 & \tfrac{ad-bc}{a} & -\tfrac{c}{a} & 1 \end{array}\right)$$
4. Make the pivot in the second row and second column 1
Next, we need to make the pivot in the second row and second column equal to 1. If $ad - bc \neq 0$, we can divide the second row by $\tfrac{ad-bc}{a}$. The resulting matrix is:
$$\left(\begin{array}{cc|cc} 1 & \tfrac{b}{a} & \tfrac{1}{a} & 0 \\ 0 & 1 & -\tfrac{c}{ad-bc} & \tfrac{a}{ad-bc} \end{array}\right)$$
5. Eliminate the second element in the first row
We subtract $\tfrac{b}{a}$ times the second row from the first row. The resulting matrix becomes:
$$\left(\begin{array}{cc|cc} 1 & 0 & \tfrac{d}{ad-bc} & -\tfrac{b}{ad-bc} \\ 0 & 1 & -\tfrac{c}{ad-bc} & \tfrac{a}{ad-bc} \end{array}\right)$$
so
$$A^{-1} = \frac{1}{ad - bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix},$$
which holds provided that $ad - bc \neq 0$ (the determinant of $A$ is non-zero).
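A quick numerical check of the closed-form 2x2 inverse above (assuming NumPy; the matrix entries are made up):

```python
import numpy as np

A = np.array([[2.0, 1.0], [5.0, 3.0]])                # here ad - bc = 1
a, b, c, d = A.ravel()
A_inv_formula = np.array([[d, -b], [-c, a]]) / (a * d - b * c)
print(np.allclose(A_inv_formula, np.linalg.inv(A)))   # True
```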
The inverse is unique
If we have a matrix $A$ with two inverses $B$ and $C$, then $B = B(AC) = (BA)C = C$, so the inverse is unique.
Inverse of a product
If $A$ and $B$ are invertible matrices, then $AB$ is also invertible and $(AB)^{-1} = B^{-1}A^{-1}$.
Inverse of a Transposition
If we have a matrix $A$ which is invertible, then: $(A^T)^{-1} = (A^{-1})^T$.
Inverse Theorem (Theorem 3.11)
Let $A$ be an $n \times n$ matrix. The following are equivalent.
1. $A$ is invertible
2. For every $b \in \mathbb{R}^n$, $Ax = b$ has a unique solution
3. The columns of $A$ are linearly independent
We’ll prove the following implications:
1. (i) ⇒ (ii)
If $A$ is invertible, then $x = A^{-1}b$ is a solution because $A(A^{-1}b) = (AA^{-1})b = b$.
And it's also unique because for any solution $x$, $x = A^{-1}Ax = A^{-1}b$.
(ii) ⇒ (iii)
If $Ax = b$ always has a unique solution, then also for $b = 0$. And we've proven that if $Ax = 0$ has only the unique (trivial) solution $x = 0$, then the columns of $A$ are linearly independent (see Gauss Elimination).
(iii) ⇒ (ii)
Let $x_1, x_2$ be solutions of $Ax = b$.
If Gauss elimination succeeds, which it must since the columns are independent, then we'll find a solution to our equation.
But if both $x_1$ and $x_2$ solve it, then $A(x_1 - x_2) = 0$, and since the columns of $A$ are linearly independent there is only the trivial solution, so $x_1 = x_2$: the solution is unique.
(ii) ⇒ (i)
If $Ax = b$ has a unique solution for all $b$, then also for $b = e_1, \dots, e_n$ (the unit vectors). This implies that there are vectors $b_1, \dots, b_n$ such that:
$$A b_i = e_i \quad \Longrightarrow \quad A \underbrace{(b_1 \mid \dots \mid b_n)}_{=:B} = I$$
But now we also need to show $BA = I$.
Continue here: 10 Calculating the Inverse, LU and LUP Decomposition
TY - JOUR
T1 - Neural Networks with Local Converging Inputs (NNLCI) for Solving Conservation Laws, Part II: 2D Problems
AU - Huang, Haoxiang
AU - Yang, Vigor
AU - Liu, Yingjie
JO - Communications in Computational Physics
VL - 4
SP - 907
EP - 933
PY - 2023
DA - 2023/11
SN - 34
DO - http://doi.org/10.4208/cicp.OA-2023-0026
UR - https://global-sci.org/intro/article_detail/cicp/22126.html
KW - Neural network, neural networks with local converging inputs, physics informed machine learning, conservation laws, differential equation, multi-fidelity optimization.
AB -
In our prior work [10], neural networks with local converging inputs (NNLCI) were introduced for solving one-dimensional conservation equations. Two solutions of a conservation law in a converging
sequence, computed from low-cost numerical schemes, and in a local domain of dependence of the space-time location, were used as the input to a neural network in order to predict a high-fidelity
solution at a given space-time location. In the present work, we extend the method to two-dimensional conservation systems and introduce different solution techniques. Numerical results demonstrate
the validity and effectiveness of the NNLCI method for application to multi-dimensional problems. In spite of low-cost smeared input data, the NNLCI method is capable of accurately predicting shocks,
contact discontinuities, and the smooth region of the entire field. The NNLCI method is relatively easy to train because of the use of local solvers. The computing time saving is between one and two
orders of magnitude compared with the corresponding high-fidelity schemes for two-dimensional Riemann problems. The relative efficiency of the NNLCI method is expected to be substantially greater for
problems with higher spatial dimensions or smooth solutions.
Return the unique elements of x sorted in ascending order.
If the input x is a column vector then return a column vector; otherwise, return a row vector. x may also be a cell array of strings.
If the optional argument "rows" is given then return the unique rows of x sorted in ascending order. The input must be a 2-D matrix to use this option.
If requested, return index vectors i and j such that y = x(i) and x = y(j).
Additionally, if i is a requested output then one of "first" or "last" may be given as an input. If "last" is specified, return the highest possible indices in i, otherwise, if "first" is
specified, return the lowest. The default is "last".
See also: union, intersect, setdiff, setxor, ismember.
14 Best Algebra Books for Students
Algebra is a part of elementary mathematics education. However, academic curricula in most colleges in the US have algebra. Hence, a chunk of academic grade depends on the algebra scores. However,
going by the textbook provided by the educational institution is not always enough to get the best grades. Therefore, most students look for reference books or workbooks based on the subject, but
finding one that is useful, simple to understand, concise, objective, and logical is difficult. If you are in search of the best algebra books, read this blog. Here, we have presented a list of books
on algebra.
What are the Branches of Algebra?
Algebra is divided into several branches based on the complexity of numeric equations. It includes:
• Pre-algebra: It deals with the elemental ways of representing unknown values as variables to create mathematical expressions.
• Elementary algebra: Elementary algebra aims to solve algebraic expressions to get an operable answer. Here, you can symbolize simple variables like x and y in the shape of an equation. Based on the degree of the variables, the equations take the following forms (a short worked sketch follows this list):
• Linear equation (degree 1): presented in the format: ax + b = c or, with two variables, ax + by + c = 0.
• Quadratic equation (degree 2): presented in the format: ax² + bx + c = 0.
• Polynomial equations (degree n): represented in the format: axⁿ + bxⁿ⁻¹ + cxⁿ⁻² + … + k = 0.
Elementary algebra is thus based on the degree of the variables and branches into quadratic and polynomial equations.
• Abstract Algebra: Abstract Algebra is concerned with the utilization of abstract concepts like groups, rings, and vectors. The combined properties of addition and multiplication are used to study these structures at increasing degrees of abstraction. Group theory and ring theory are other fundamental concepts in abstract algebra.
• Universal Algebra: All other mathematical functions performed in trigonometry, calculus, and coordinate geometry that have algebraic expressions are considered universal algebra. However,
universal algebra does not study any algebraic models.
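To make the equation formats above concrete, here is a small illustrative sketch that solves one linear and one quadratic equation symbolically. It assumes the SymPy library is available, and the specific coefficients are made up for demonstration.

```python
from sympy import symbols, solve

x = symbols('x')
print(solve(3*x + 4 - 10, x))     # linear: 3x + 4 = 10  ->  [2]
print(solve(x**2 - 5*x + 6, x))   # quadratic: x^2 - 5x + 6 = 0  ->  [2, 3]
```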
Why Should You Study Algebra?
Find the best algebra books and study the subject in detail to enjoy the following benefits:
· Make your life easier
Studying algebra is not all about learning the fundamentals of mathematics, the subject helps to recognize problems in life and solve them quickly. For example, in simple mathematics, adding 2 twelve
times makes 24. However, if algebraic tools are applied, the solution can be derived in seconds. Similarly, in practical life, graphing problems become impossible without algebra.
· Master Statistics and Calculus
Intricate branches of mathematics like statistics and calculus are based on the basic knowledge of algebra. Statisticians predict occurrences or ways to achieve specific outcomes using various
statistical tools and the vital concepts of arithmetic. Similarly, scientists, technological designers, and medical professionals need basic arithmetic knowledge and their professional skills to
define the complex processes of calculus, design new technologies, or develop new treatment methods.
· Make accurate financial decisions
Apart from helping with academic and professional purposes, reading the best books on algebra is useful for making the best financial decisions. For example, one can choose the investment funds and
healthcare plans that offer the greatest returns with help of two-variable equations. Besides that, you can choose suitable cell phone plans, custom-order furniture, or make crafts with the help of
basic algebraic equations.
· Algebra Reinforces Logical Thinking
Most importantly, algebra is helpful to instill and enhance logical thinking. It helps you analyze both sides of a situation and come to a useful conclusion.
Also read: What Is A Term In Math, And How To Solve A Mathematical Term?
The Best Algebra Books for Beginners
Following are the best algebra books for new learners:
1. Algebra Essentials Practice Workbook with Answers by Chris McMullen
This book is part of the Improve Your Math Fluency series. You can vouch for it as a resource for working on and enhancing fluency in fundamental algebra. The entire book is divided into 7 chapters
to help students acquire fluency in one algebraic method at a time. Here you can learn to solve the following:
• The standard equation for multiple unknown variables
• Quadratic equations using the quadratic formula
• Factorize quadratic equations
• Cross-multiply equations
• Systems of linear equations
Each chapter begins with a few pages of instructions for finding accurate solutions to algebraic problems. They are followed by useful examples.
• One of the best algebra books for: Getting a detailed review of algebra
2. No-Nonsense Algebra, 2nd edition by Fisher, Richard W
This book is based on pre-algebra. Every lesson comes with a complete review of the pre-algebraic chapter and online video tutorials. These chapters are short and have a smooth flow of words. They
smoothly transition from the easiest to the most complicated sections. Each section has the following items:
• A review
• Constructive hints
• Problem-solving exercises for putting knowledge into practical situations
The book also includes a complete review of pre-algebra and several award-winning online video tutorials, one for each lesson in the book. The latest edition also features an additional chapter on
quizzes and a useful glossary and resource center.
• One of the best algebra books for: Mastering the essential skills of algebra
3. The Humongous Book of Algebra Problems by W. Michael Kelley
This book is a simplified edition of the author’s previously published book, The Humongous Book of Calculus Problems. However, here, he has discussed on algebra. This book is best for those who find
algebraic workbooks complicated. You can recognize that Kelley has studied various academic programs offered in and across the USA in detail and understands the framework of all workbooks and the
issues students face with them. In this book, he has simplified the concepts and solutions available in any regular workbook and included missing steps and notes in the margins. It helps students
learn to interpret and solve algebraic problems in the way it is usually taught in their academic programs. Moreover, pertinent problems in algebraic courses that are rarely discussed in classrooms
are presented in this book. You will also come across annotations clarifying each problem and skull and crossbones symbols on the complicated problems to revisit later.
• One of the best algebra books for: Solving workbook problems
Also read: What is a Variable in Math, its Types and Uses?
4. Saxon Algebra 1/2, 3rd edition, by John H. Saxon Jr.
Algebra 1/2 by John H. Saxon Jr. deals with pre-algebra mathematics and presents all the topics covered in pre-algebra. The book also details some additional topics in geometry and discrete
mathematics for readers who find them interesting. This book can also be very helpful for seventh and eighth graders who want to take up first-year algebra in the next grade. Like all other books
written by Saxon, here he uses the “spiral method” to expand upon previously taught concepts.
With the assistance of this book, students can strengthen their understanding of pre-algebra topics like decimals, fractions, mixed numbers, percents, order of operations, signed numbers, solutions
for linear equations in one unknown, and evaluation of algebraic expressions. The author has arranged the problems in increasing order of difficulty, introducing new angles on old topics. As a
result, students will be able to test their knowledge and skills as they go along.
• One of the best algebra books for: Improving knowledge on almost all pre-algebra topics
5. Intermediate Algebra: Concepts and Applications by Marvin Bittinger, David Ellenbogen, and Barbara Johnson
Intermediate Algebra is a constituent of the Bittinger Concepts and Applications series. It aims to assist new students of algebra in learning and retaining mathematical concepts. This book is
concerned with a proven program that sets you up for the change from skills-oriented elementary algebra courses to more concept-oriented college-level mathematics courses. It supports students'
efforts to expand their critical thinking and mathematical reasoning skills and detect and find solutions to mathematical problems.
This new edition includes a crisply integrated MyLab Math course and strongly emphasizes problem-solving, concepts, and practical world applications. You can call it an amalgamation of the workbook
and objective-based video programs. It also offers a more systematic review and preparation for practice.
• One of the best algebra books for: Revising and practicing algebra
6. Algebra 1 Workbook for Dummies by Mary Jane Sterling
If you ever come across a brain block while learning about algebra, the Algebra I Workbook for Dummies can be the best solution to relieve your problems. This book features numerous practice problems
and examples to meet the requirements of typical high school students. Each problem offers an exhaustive explanation to help students observe what they did correctly or incorrectly in every step.
This guide covers all the elemental concepts of algebra that you need to use in every other math class you may take.
The new third edition of the book offers the reader a vast online test bank with additional chapter quizzes. These will help you check your knowledge and ascertain weaker areas that you must review.
It also features a separate chapter that is concerned with graphs, formulas, and quadratic equations. This workbook can help you master the subject and offer comprehensive aid for study.
• One of the best algebra books for: Exam preparation
Also read: Best Math Books To Augment Your Skills
The Best Algebra Books for High School Students
7. Practical Algebra: A Self-Teaching Guide, Second Edition by Slavin, Steve
Are you in search of a book that offers an excellent introduction to algebra? If so, then Practical Algebra is just the book for you. The user-friendly and straightforward workout program
presented in this book will enable you to easily grasp algebra’s fundamental concepts and instruments.
The book has sensible, practical-life examples and applications. It helps you learn various fundamental and advanced but highly useful concepts, including the following:
• The fundamental approach and application of algebra to solving problems
• A comprehensive understanding of the number system
• Factoring algebraic expressions
• Monomials and polynomials
• Handling algebraic fractions
• Linear and fractional equations
• Roots and radicals
• Exponents
• Functions and graphs
• Ratio, proportion, and variation
• Solving word problems
• Quadratic equations
The authors emphasize practical algebra. They detail various techniques for solving problems in multiple disciplines, including the following: physical sciences, life sciences, psychology,
and even business administration and sociology.
Practical Algebra guides you through solving algebraic problems and gives you the confidence to deal with comparable problems by yourself later. In addition, you can examine your progress with the
assistance of self-tests provided at the closing stages of each chapter.
• One of the best algebra books for: Learning to solve interdisciplinary algebra problems
8. High School Algebra II Unlocked: Your Key to Mastering Algebra II by The Princeton Review
The Princeton Review’s High School Unlocked series offers readers an assortment of principal techniques for dealing with various subjects. This book emphasizes Algebra II problems and will help you
recognize how abstract concepts are used in real-life situations. In addition, it offers a lot of scope for practice and permits you to develop your confidence as you go through the chapters.
Like other books in the series, High School Algebra II offers several methods to solve a specific problem or concept. If one method does not seem suitable for you, you can try an alternate one to
better understand the concept. The book also incorporates a variety of completely guided examples and independent practice problems after each chapter. Some of the most significant topics included in
this book are:
• Statistical modeling
• Trigonometric equations
• Logarithmic functions and operations
• Graphing and solving systems of equations
• Complex numbers and polynomials
• Radical and rational expressions and inequalities
• One of the best algebra books for: Learning to solve Algebra II problems through multiple techniques
9. Math for the Ages! SAT and High School Math by Mishra, Kishore, and Mishra Binapani
Intensification of one’s mental math skills significantly helps in setting down a concrete foundation for their later career. With this thought in mind, Math for the Ages concentrates on assisting
students in solving math problems quickly and precisely without scribbling pen to paper. It offers useful tips and ideating processes to deal with calculations instinctively.
The book focuses on 40 fundamental mathematical concepts with exceptional coverage, problem-solving techniques, and practice problems for enhancing students’ speed and correctness. Apart from
algebra, Mishra, Kishore, and Mishra Binapani have also incorporated the following branches of mathematics:
• probability
• word problems
• statistics
• trigonometry
• data interpretation
The authors have used typical problems to make algebra easy to understand and fun to deal with. It also helps the reader develop an interest in analytical problems.
Although many books are available on this topic, almost all of them lack a good presentation. However, it is not a problem for this book. Here, the authors appear to have researched everything
methodically and presented a book that is helpful for students of all academic levels, as its name rightfully suggests.
• One of the best algebra books for: Learning to solve algebra through mental math techniques
10. Pre-Algebra Concepts (Mastering Essential Math Skills) by Richard W
The book offers instructions in a user-friendly format for everyone to easily understand. Each chapter is short and specific. They flow logically and effortlessly into the next one. The author has
also incorporated a review of every lesson to help students remember what they have learned. There are lots of examples with detailed steps and solutions as well. Topics discussed in the book are:
• Solving algebraic equations
• Algebraic word problems
• Graphing equations
• Statistics
• Probability
• The slope of a line
• Scientific notation
• Order of operations
This book is an exceptional choice for high school students and is also useful for SAT and PSAT preparation.
• One of the best algebra books for: Preparing for the SAT and PSAT examinations
Also read: How to Solve Math Assignment Problems Faster?
11. Algebra 2 Workbook: Essential Practice for Advanced Math Topics by Carson Dellosa
The Algebra 2 Workbook is a division of the 100+ Series. It offers a decent overall review of complicated math topics. It mainly includes:
• Quadratic equations
• Trigonometric functions
• Factoring
• Polynomials
In addition, every page has an additional activity section for expanding students’ knowledge of the subject. Thus, this book can be one of the finest choices for daily revision of arithmetic problems
at home or in the classroom.
In line with the Common Core State Standards, the workbook incorporates quality diagrams and over 128 pages of specific activities to give students confidence to practice in all areas of algebra and
offer standards-based instruction. The problems incorporated here feature an excellent amalgamation of simple and complicated algebra. It helps to sharpen students’ skills while making them ready for
higher levels of math study.
• One of the best algebra books for: Revising advanced high school algebra topics
The Best Algebra Books for College Students
12. College Algebra by Stewart, James, Lothar Redlin, and Saleem Watson
The 7th edition of College Algebra helps the reader learn the techniques to consider things mathematically and cultivate vital problem-solving skills. It is simple, user-friendly, and permits the
student to recognize the basics of algebra in a variety of practical ways. The book also includes unique tools for the benefit of students, for example, learning objectives before each section to
practice them for the real lessons.
The book also has a useful list of formulae and main concepts discussed in a chapter at the end of it. It helps students to revise the things they have learnt. It also offers a wide range of
intriguing illustrations to showcase the ways one can use mathematics for modeling in physics, chemistry, biology, engineering, and business.
• One of the best algebra books for: Understanding the fundamentals of college algebra
13. Essentials of College Algebra by Margaret Lial, John Hornsby, and David Schneider
In the 12th edition of Essentials of College Algebra, these experienced teachers meet together to help students expand the analytical skills and theoretical understanding needed to succeed in
mathematics. In addition, this revised text provides a unique set of resources to uplift contemporary students and instructors. It is based on the updated college algebra courses.
The book is available with a complete set of instructional texts and videos that incorporate updates to MyLab Math and MathXL. It adopts a systematic approach to connecting with readers in the
learning process. The book endorses conceptual recognition and disapproves of rote memorization with the help of a wide variety of exercises. Apart from that, it also incorporates lots of scope for
revision throughout the book and at the end of chapters.
• One of the best algebra books for: Understanding the fundamentals of college algebra
14. College Algebra with Intermediate Algebra: A Blended Course by Judith Beecher, Judith Penna, and Barbara Johnson
College Algebra with Intermediate Algebra: A Blended Course is an innovative and unique presentation from Judith Beecher, Judith Penna, and Barbara Johnson. The authors have developed the book to
deal with students’ changing requirements in Intermediate Algebra and College Algebra courses. It gets rid of the repetition in topic coverage all through the traditional two-course sequence. As a
result, you get a streamlined course experience that makes the most efficient use of time and resources.
The topics in the book have been carefully arranged, the chapters are crisp, and there is no redundant text in the book. It encourages the reader and builds a firm foundation of knowledge. This
inspirational, sleek, and modernized approach is augmented by the authors’ efforts to achieve the following:
• Visualizing the math through a strong emphasis on conceptual thinking
• An early introduction to functions and graphing
• Indicating relations between mathematical concepts and the practical world
• One of the best algebra books for: Experiencing a seamless algebra learning experience.
Also read : Learn About the 12 Popular Applications of Linear Algebra
Bookstores and online portals have a pool of books based on algebra. Choosing the best algebra books can be difficult. Therefore, we have listed different algebra books based on various algebra
branches. Go with the one that suits your requirements the best. However, if you struggle to solve the algebraic expressions, immediately contact us. | {"url":"https://us.greatassignmenthelp.com/blog/best-algebra-books/","timestamp":"2024-11-08T21:57:42Z","content_type":"text/html","content_length":"345693","record_id":"<urn:uuid:103b3441-4e3d-46ab-9d1f-1445d8237cf2>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00415.warc.gz"} |
Tropical semiring
In idempotent analysis, the tropical semiring is a semiring of extended real numbers with the operations of minimum (or maximum) and addition replacing the usual ("classical") operations of addition
and multiplication.
The tropical semiring has various applications (see tropical analysis), and forms the basis of tropical geometry.
The min tropical semiring (or min-plus semiring or min-plus algebra) is the semiring (ℝ ∪ {+∞}, ⊕, ⊗), with the operations:
${\displaystyle x\oplus y=\min\{x,y\},}$
${\displaystyle x\otimes y=x+y.}$
The operations ⊕ and ⊗ are referred to as tropical addition and tropical multiplication respectively. The unit for ⊕ is +∞, and the unit for ⊗ is 0.
Similarly, the max tropical semiring (or max-plus semiring or max-plus algebra) is the semiring (ℝ ∪ {−∞}, ⊕, ⊗), with operations:
${\displaystyle x\oplus y=\max\{x,y\},}$
${\displaystyle x\otimes y=x+y.}$
The unit for ⊕ is −∞, and the unit for ⊗ is 0.
These semirings are isomorphic, under negation ${\displaystyle x\mapsto -x}$, and generally one of these is chosen and referred to simply as the tropical semiring. Conventions differ between authors
and subfields: some use the min convention, some use the max convention.
A tropical semiring is also referred to as a tropical algebra,^[1] though this should not be confused with an associative algebra over a tropical semiring.
Tropical exponentiation is defined in the usual way as iterated tropical products (see Exponentiation § In abstract algebra).
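For concreteness, the two operations in the min convention can be sketched in a few lines of Python (an illustration of the definitions above, not part of any standard library; the max convention is obtained by negating all values):

import math

# Min-plus (tropical) semiring on R ∪ {+inf}:
# "addition" is min, "multiplication" is ordinary +.
T_ZERO = math.inf   # unit for tropical addition (⊕)
T_ONE = 0.0         # unit for tropical multiplication (⊗)

def t_add(x, y):
    """Tropical addition: x ⊕ y = min(x, y)."""
    return min(x, y)

def t_mul(x, y):
    """Tropical multiplication: x ⊗ y = x + y."""
    return x + y

def t_pow(x, n):
    """Tropical exponentiation: the n-fold tropical product of x, i.e. n * x."""
    return n * x

assert t_add(3.0, T_ZERO) == 3.0   # +inf is the unit for ⊕
assert t_mul(3.0, T_ONE) == 3.0    # 0 is the unit for ⊗
assert t_pow(3.0, 4) == 12.0       # 3 ⊗ 3 ⊗ 3 ⊗ 3 = 3 + 3 + 3 + 3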
Valued fields
The tropical semiring operations model how valuations behave under addition and multiplication in a valued field. A real-valued field K is a field equipped with a function
${\displaystyle v\colon K\to \mathbb {R} \cup \{\infty \}}$
which satisfies the following properties for all a, b in K:
${\displaystyle v(a)=\infty }$ if and only if ${\displaystyle a=0,}$
${\displaystyle v(ab)=v(a)+v(b)=v(a)\otimes v(b),}$
${\displaystyle v(a+b)\geq \min\{v(a),v(b)\}=v(a)\oplus v(b),}$ with equality if ${\displaystyle v(a)\neq v(b).}$
Therefore the valuation v is almost a semiring homomorphism from K to the tropical semiring, except that the homomorphism property can fail when two elements with the same valuation are added together.
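As an illustration of these properties (an ad hoc Python sketch, not from the article), the 2-adic valuation of integers can be computed directly and both rules checked on small examples:

def v2(n):
    """2-adic valuation of an integer: the exponent of 2 dividing n (inf for 0)."""
    if n == 0:
        return float("inf")
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

# v(ab) = v(a) + v(b): tropical multiplication of the valuations.
assert v2(12 * 10) == v2(12) + v2(10)      # 3 == 2 + 1

# v(a + b) >= min(v(a), v(b)), with equality when the valuations differ...
assert v2(12 + 10) == min(v2(12), v2(10))  # v2(22) = 1
# ...and possibly a strict inequality when they coincide:
assert v2(2 + 6) > min(v2(2), v2(6))       # v2(8) = 3 > 1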
Some common valued fields:
• Q or C with the trivial valuation, v(a) = 0 for all a ≠ 0,
• Q or its extensions with the p-adic valuation, v(p^n a/b) = n for a and b coprime to p,
• the field of formal Laurent series K((t)) (integer powers), or the field of Puiseux series K{{t}}, or the field of Hahn series, with valuation returning the smallest exponent of t appearing in
the series.
• Litvinov, G. L. (2005). "The Maslov dequantization, idempotent and tropical mathematics: A brief introduction". arXiv:math/0507014v1. | {"url":"https://static.hlt.bme.hu/semantics/external/pages/v%C3%A9ges_%C3%A1llapot%C3%BA_transzducereket_(FST)/en.wikipedia.org/wiki/Tropical_semiring.html","timestamp":"2024-11-02T17:48:02Z","content_type":"text/html","content_length":"46986","record_id":"<urn:uuid:bd2bcfd0-5ce1-4f05-a4b1-e4302dc7b8e4>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00583.warc.gz"} |
Artin-Schreier sequence
| {"url":"https://nforum.ncatlab.org/discussion/5506/artinschreier-sequence/","timestamp":"2024-11-12T02:56:27Z","content_type":"application/xhtml+xml","content_length":"12024","record_id":"<urn:uuid:fd6541f4-aaf7-45fe-b3c6-dcbde9b8fae3>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00451.warc.gz"}
It must be saved in designated areas and you’ll be notified whenever you are allowed to get access to it. The wall of the home bounds one particular side. We would like to minimize the whole travel
time. All the points and lines that lie on the very same plane are said to be coplanar. For every three points in space, a unique plane exists.
The elimination approach to solving systems of equations is also known as the addition technique. This result is called Cantor’s theorem. The most
popular fractional and decimal equivalents are given below. To go past the observations is fraught with peril and is called extrapolation. Be aware that the
denominator of a fraction cannot be 0, as that would make the fraction undefined. It is only with the denominator that you can end up with a value that’s undefined.
What is Actually Going on with Undefined in Mathematics
A point is a thing, but it’s not considered a spot in math. This wasn’t obvious to the humans before. The team’s findings today support the concept that black holes are, in reality, hairless. It’s
that latter view that is accepted by mathematicians and many others. Slope is a significant concept, so we’ll review some vital facts here. The slope is also known as the rise
over run.
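A minimal Python sketch (added here purely for illustration) shows how rise over run becomes undefined when the run is zero, that is, for a vertical line:

def slope(p1, p2):
    """Rise over run between two points (x1, y1) and (x2, y2)."""
    (x1, y1), (x2, y2) = p1, p2
    run = x2 - x1
    if run == 0:
        return None   # vertical line: dividing by zero, so the slope is undefined
    return (y2 - y1) / run

print(slope((1, 2), (3, 6)))  # 2.0  (rise 4 over run 2)
print(slope((1, 2), (1, 6)))  # None (undefined: the run is zero)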
The True Meaning of Undefined in Mathematics
You can have points be collinear; in other words, they share exactly the same line. A set doesn’t need to be ordered, like an array. In addition, it makes it possible for us to iterate through an
array. You’ll frequently be directed to evaluate a specific function for a specific value of x. It’s a two-dimensional object.
Finding Undefined in Mathematics
However, it’s not essential to earn 80% within the initial three quiz attempts. Accuracy of the floating-point methods is measured with respect to ulps, units in the previous spot. Since omitting the
factor will underestimate the conventional error, it ought to be included for smaller samples. It can have a very long time to debug such errors. In the event the outcomes will probably have occurred
under the claim, then you don’t reject H0 (such as a jury decides not guilty).
To begin with, find the value of the term that contains x, then find the value of the whole expression. This informative article explains the fundamental commands to display
equations. 2x is an expression with a single term. Its value will depend on the value of x. Even though the value 5 is repeated, it’s still one and only one value.
Algebrator improved my son’s grades in merely a day or two! You make this decision by making up a number, known as a p-value. All of them work and the decision is a matter of taste. Let’s look at an
example to learn how this is done. I would like to just note another important point.
The Debate Over Undefined in Mathematics
Points are used a good deal in geometry. JavaScript has just one kind of number. This is the reason why it isn’t allowed. Checking that the number is composed of 4, 5 and 6 will take some time. It
consists of an endless number of planes. Completely accurate, but not so helpful!
The Undefined in Mathematics Pitfall
The procedure for matrix multiplication will become clearer when working an issue with real numbers. Normally, a variable representing a constant is among the very first letters in the alphabet.
There’s no answer to a problem whenever you have division by zero. I will show you an issue and you type in the answer as quickly as possible. It follows that a positive shift in y is related
to a positive shift in x. Our prior example demonstrated that this isn’t always the case.
Characteristics of Undefined in Mathematics
Since x and y form a right triangle, it’s possible to calculate d using the Pythagorean theorem. This still doesn’t mean that y is caused by x. It would not be possible to find the midpoint
of a line or ray which never ends! In the same way, the 30-60-90 triangle has to be memorized, somehow. The lines a transversal crosses might or might not be parallel.
Below are a few of the critical concepts and terms that you’ll need to understand so as to start your study of geometry. Be aware that the subroutine clobbers A and G, so you need to save them should
you need to use them again. They’re tailored to assist you comprehend the lesson better. But it really is essential that you learn these math facts. By knowing the definitions of algebra vocabulary
inside this list, you will have the ability to construct and solve algebra problems considerably more easily. | {"url":"http://newportswimmingclub.co.uk/dirty-facts-about-undefined-in-mathematics-uncovered-2/","timestamp":"2024-11-10T15:54:55Z","content_type":"application/xhtml+xml","content_length":"43494","record_id":"<urn:uuid:a7a11b9d-a56c-4c59-907e-fd020652764a>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00036.warc.gz"} |
004.9:621.7 Mathematical modeling of the metal deformation process on a casting and forging module with a modified drive of the side strikers
doi: 10.18698/2309-3684-2021-3-323
This paper presents the mathematical formulation and the results of calculations of the problem of metal deformation on a casting-forging module with modified side strikers’ drive. A complex
spatial problem of determining the stress-strain state of the flow region under loading with an external load that changes over time is considered. The fundamental equations are based on flow
theory. In solving the problem, a proven numerical method is used, as well as numerical schemes and the software package used earlier for similar problems. The software package implements
a step-by-step loading algorithm considering the history of the process and the changing geometry of the flow region. A small time step is associated with a 10° rotation of the eccentric shaft.
The deformation area is divided into elements by an orthogonal system of surfaces (elements have an orthogonal shape). For each element, the formulated system of equations is written in a
difference form, which is solved according to the developed numerical schemes and algorithms, which take the initial and boundary conditions into account. The result of the solution is the fields of
stresses and displacement velocities in the spatial region. An analysis of the obtained results is given. A comparison with the results for the current module design has been made.
Lead is taken as a deformed material, the physical properties of which are approximated by an analytical dependence according to the available experimental data. The physical nonlinearity of the
system of equations is realized during solving by the iterative method. Local calculations of the solution of the problem were carried out on three variants of division of the area into elements.
The choice of the mesh density imposed on the considered deformation region is substantiated. The solution results are presented graphically. The efficiency of the deformation process according
to the improved method on a new design of the casting and forging module is shown.
Odinokov V.I., Dmitriev E.A., Evstigneev A.I., Potyanikhin D.A., Kvashnin A.E. Mathematical modeling of the metal deformation process on a casting and forging module with a modified drive of the
side strikers. Mathematical Modeling and Numerical Methods, 2021, no. 3, pp. 3–23.
519.63:536.4 Numerical modelling of processes of formation, growth, and decomposition of agglomerates in a porous medium under different modes of heating
doi: 10.18698/2309-3684-2021-3-2441
The paper considers a numerical model of flow in a porous medium containing particles of a melting component (polymer). When heated, these particles swell, deform and fill the pore spaces, as a
result of which the permeability is significantly reduced. The relationship between porosity and permeability is described by a simple Kozeny-Carman formula. Then, near the lower (inlet)
boundary, a region with low permeability (i.e. an agglomerate) is formed, the growth of which is determined by the conditions at the side wall and inlet boundaries. As a result of calculations,
typical scenarios of porous-medium blocking at different heating temperatures were obtained. It is shown that when heated through the wall, the polymer may decompose, so the porous medium
partially restores its permeability. When heated by the inlet gas, the agglomerate is much more stable, since it blocks the heating source.
Donskoy I.G. Numerical modelling of processes of formation, growth, and decomposition of agglomerates in a porous medium under different modes of heating. Mathematical Modeling and Numerical
Methods, 2021, no. 3, pp. 24–41.
539.3 Coupled modeling of high-speed aerothermodynamics and internal heat and mass transfer in composite aerospace structures
doi: 10.18698/2309-3684-2021-3-4261
A coupled problem of high-speed aerothermodynamics and internal heat and mass transfer in heat-shielding structures of reentry spacecraft made of ablative polymer composite materials is
considered. To determine the heat fluxes in the shock layer of the reentry vehicle, the chemical composition of the atmosphere is taken into account. The mathematical statement of the conjugate
problem is formulated and an algorithm for its numerical solution is proposed. An example of the numerical solution of the problem for the reentry spacecraft Stardust is presented. It is shown
that taking into account chemical reactions in the gas flow around the surface of the reentry vehicle is essential for the correct determination of the gas temperature in the boundary layer. It
is also shown that the developed numerical method for solving the problem makes it possible to determine the parameters of phase transformations in a heat-shielding structure depending on the
heating time, in particular, it allows calculating the pore pressure field of gaseous products of thermal decomposition of a polymer composite, which, under certain conditions, can lead to
material destruction.
Dimitrienko Yu.I., Koryakov M.N., Yurin Yu.V., Zakharov A.A., Sborschikov S.V., Bogdanov I.O. Coupled modeling of high-speed aerothermodynamics and internal heat and mass transfer in composite
aerospace structures. Mathematical Modeling and Numerical Methods, 2021, no. 3, pp. 42–61.
519.6:629.7.02 Application of a genetic algorithm in the problem of modeling and optimization of hydraulic systems for synchronous movement of actuators
doi: 10.18698/2309-3684-2021-3-6273
A model of a genetic algorithm with binary coding and independent Schaeffer selection is constructed, which allows one to search for a global optimum with respect to several criteria without their
scalarization. The calculations take into account the range of all possible motions of the actuators under uncertain external influences within some predetermined range. An algorithm has been developed
that allows storing intermediate results to eliminate the problem of a large number of repeated calculations in the course of the evolutionary algorithm, which reduced the computation time. The
effectiveness of the optimization algorithm is demonstrated on the example of solving a model problem.
Bushuev A.Yu., Reznikov A.O. Application of a genetic algorithm to the problem of modeling and optimization of a pneumohydraulic system for synchronization of actuators. Mathematical Modeling
and Numerical Methods, 2021, no. 3, pp. 62–73.
519.6 Modeling and optimization of low–mass satellite control when flying from Earth orbit to Mars orbit under a solar sail
doi: 10.18698/2309-3684-2021-3-7487
In this paper, the optimization of the transfer of a low-mass satellite from Earth orbit to Mars orbit under a solar sail is considered. Optimization of the control of the pitch angle
of the solar sail is carried out using the Pontryagin maximum principle while minimizing the flight time. In contrast to previous works on this topic, the solution of the boundary value problem,
to which the maximum principle reduces, was obtained by the false position method. The calculation program is written in the C++ programming language. Despite the computational
difficulties arising when using the false position method, it was possible to achieve good convergence of the Newton method underlying the algorithm. The analysis of the accuracy of the results
obtained is carried out and the possibility of using the false position method in solving such problems is shown. A comparison is made with the data of previously published works. Despite some
assumptions used in the development of the calculation algorithm, the work has its value in terms of assessing the possibility of using the false position method, which gives the most accurate
numerical optimization results.
Mozzhorina T.Yu., Rakhmankulov D.A. Modeling and optimization of the control of a low-mass satellite during a flight from Earth orbit to Mars orbit under a solar sail. Mathematical Modeling and
Numerical Methods, 2021, no. 3, pp. 74–87.
004.85:551.5051 Methods of data mining in the nowcasting model of dangerous phenomena
doi: 10.18698/2309-3684-2021-3-88104
This work is devoted to the study and application of data mining methods for the implementation of a nowcasting scheme for dangerous phenomena. In the course of the work,
data sets were formed that differ in the methods of information processing used to prepare them. For each set, a number of mathematical models were constructed for classifying cloud cells
according to the degree of danger of tornadoes forming from them. The Python programming language has been chosen as the main development language. The work is of great practical importance in
the field of forecasting weather events. Its novelty lies in the use of modern machine learning methodology, instead of the traditional approach to data extrapolation, widely used in various
schemes of nowcasting.
Shershakova A.O., Parkhomenko V.P. Methods of data mining in a nowcasting model of dangerous phenomena. Mathematical Modeling and Numerical Methods, 2021, no. 3, pp. 88–104.
519.6 Agent-based model of cultural interactions on non-metrizable Hausdorff spaces
doi: 10.18698/2309-3684-2021-3-105119
Developing formalized, computer-oriented approaches to interdisciplinary research on intercultural interactions is an urgent task. The article describes an approach to the
development of agent models of intercultural interactions based on the use of non-metrizable Hausdorff spaces, using genetic algorithms to introduce dynamic changes in the structure of the cultural
agents under consideration. The article considers a prototype of an agent model in which the state of agents is described in Hausdorff spaces. Using the choice of reference points for each agent,
a Urysohn function is built, which makes it possible to introduce the preferences of agents. Further, using genetic algorithms, it is possible to obtain the stepwise dynamics of changes in
the entire system of agents. The article describes some simulation experiments. Possible prospects for the development of this approach are discussed.
Belotelov N.V., Pavlov S.A. Agent-based model of cultural interactions on non-metrizable Hausdorff spaces. Mathematical Modeling and Numerical Methods, 2021, no. 3, pp. 105–119. | {"url":"https://mmcm.bmstu.ru/archive/32/","timestamp":"2024-11-05T20:04:37Z","content_type":"text/html","content_length":"26670","record_id":"<urn:uuid:ff9c3691-ae24-4247-a3a4-00896bbc0e12>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00880.warc.gz"}
different results for Hamiltonian written in different form
Here I have a Bose-Hubbard model (with self-defined siteset file) and the density-density interaction can be written in two identical forms as follows
ampo += U, "Nb", i, "Nb", i;
ampo += -U, "Nb", i;
ampo += U, "Adagb", i, "Adagb", i, "Ab", i, "Ab", i;
these two forms of "ampo" should be the same because @@N_b=A^{\dagger}A@@. However, the final results of dmrg are quite different for these two forms. In fact, the first form gives the correct answer
while the second one does not, and it just converges very slowly.
Here I presents the siteset file for Bose Hubbard model
#ifndef __ITENSOR_BH_H
#define __ITENSOR_BH_H
#include "itensor/mps/siteset.h"
#include <cmath>
namespace itensor
class BoseHubbard;
using BH=BasicSiteSet<BoseHubbard>;
auto Sqrt3=1.7320508075688772;
class BoseHubbard
IQIndex s;
BoseHubbard() {}
BoseHubbard(IQIndex I): s(I) {}
BoseHubbard(int n, Args const& args=Args::global())
// spinless boson (3 boson/site at most)
IQIndex index() const {return s;}
IQIndexVal state(std::string const& state)
if (state=="Emp")
return s(1);
else if (state=="b1")
return s(2);
else if (state=="b2")
return s(3);
else if (state=="b3")
return s(4);
IQTensor op(std::string const& opname, Args const& args) const
auto sP=prime(s);
IQTensor Op(dag(s),sP);
// boson single-site operator
if (opname=="Nb")
else if (opname=="Ab")
else if (opname=="Adagb")
else if (opname=="Id")
Error("Operator " + opname + " name not recognized !");
return Op;
Here I restrict the model so that at most 3 bosons can occupy the same site. The main file is presented as follows
// this file tests spinless Bose-Hubbard model
#include "itensor/all.h"
#include <typeinfo>
#include <iostream>
#include <fstream>
#include <vector>
#include <stdlib.h>
#include <cmath>
using namespace itensor;
using namespace std;
int main()
auto N = 20;
auto J = 1.0;
auto U = 0.5;
auto V = 1.0;
auto sites = BH(N);
auto ampo = AutoMPO(sites);
// site index must start from 1
for (int i = 1; i < N; i++) {
// hopping term
ampo += -J, "Adagb", i, "Ab", i+1;
ampo += -J, "Adagb", i+1, "Ab", i;
// onsite interaction
// ampo += U/2.0, "Nb", i, "Nb", i;
// ampo += -U/2.0, "Nb", i;
ampo += U/2.0, "Adagb", i, "Ab", i, "Adagb", i, "Ab", i;
ampo += -U/2.0, "Adagb", i, "Ab", i;
// NN interaction
// ampo += V, "Nb", i, "Nb", i+1;
ampo += V, "Adagb", i, "Ab", i, "Adagb", i+1, "Ab", i+1;
// ampo += U/2.0, "Nb", N, "Nb", N;
// ampo += -U/2.0, "Nb", N;
ampo += U/2.0, "Adagb", N, "Ab", N, "Adagb", N, "Ab", N;
ampo += -U/2.0, "Adagb", N, "Ab", N;
auto Hamil = IQMPO(ampo);
auto state = InitState(sites);
for (int i = 1; i <= N; i++)
if (i%2 == 0){
state.set(i, "b2");
else {
state.set(i, "Emp");
auto psi=IQMPS(state);
// DMRG parameter
auto sweeps = Sweeps(10);
sweeps.maxm() = 160;
sweeps.cutoff() = 1E-12;
// perform DMRG algorithm
auto energy=dmrg(psi,Hamil,sweeps,{"Quiet",true});
println("Ground state energy = ",energy);
Hi. I was just looking at it and, maybe it is because for the first form the single-particle state is an eigenstate, while for the second it is not?
@@H_{1}\left\vert 1\right\rangle =0\left\vert 1\right\rangle @@
@@H_{2}\left\vert 1\right\rangle =0 @@
Hi Junjie,
I have a similar question to Yixuan above. While your two definitions look ok to me for the case of 0, 1, and 2 particles on a single site, are they still the same for 3 or more particles on a single
site? Does your Hilbert space allow more than 2 particles on the same site?
I have posted my siteset file for the Bose-Hubbard model. I have checked it and it should be ok. Here I restricted the model so that at a single site there should be no more than 3 bosons. As suggested by yixuan, I
changed the order of the creation and annihilation operators to ensure that the single-particle state is an eigenstate for both forms. However, the problem remains.
Hi Junjie,
I think if you rearranged the order of creation and annihilation then the eigenvalues would be different, like this (if you meant switching the middle two operators)
$$H_{1}\left\vert 1\right\rangle=(N_{b}N_{b}-N_{b})\left\vert 1\right\rangle =0\left\vert
1\right\rangle $$
$$H_{2}\left\vert 1\right\rangle=A_{b}^{\dagger }A_{b}A_{b}^{\dagger }A_{b}\left\vert 1\right\rangle
=1\left\vert 1\right\rangle $$
Even if the single particle state is an eigenstate for both forms, they are still different Hamiltonians.
hi,yixuan. As you see in my main file, I have subtracted Nb so the total Hamiltonian should be the same.
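For what it's worth, the operator identity itself can be checked outside ITensor; the following is just an illustrative numpy sketch (standard truncated boson matrices, nothing ITensor-specific, and it says nothing about how AutoMPO assembles the MPO):

import numpy as np

# Illustrative check only: with the standard creation/annihilation matrices
# truncated at nmax = 3 bosons per site, the single-site identity
#   adag·adag·a·a = N(N-1) = N^2 - N
# holds exactly, so the two ways of writing the on-site interaction agree
# at the level of operator algebra.
nmax = 3
a = np.zeros((nmax + 1, nmax + 1))
for n in range(1, nmax + 1):
    a[n - 1, n] = np.sqrt(n)   # a|n> = sqrt(n)|n-1>
adag = a.T
N = adag @ a                    # number operator, diag(0, 1, 2, 3)

print(np.allclose(adag @ adag @ a @ a, N @ N - N))  # True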
Hi Junjie,
That's helpful to see more of your code. What about the following: it looks like you might have defined "Ab" and "Adagb" backwards in your site set file. The convention about operators in ITensor is
that the index with primelevel=0 is the one acting on the initial state, and the one with primelevel=1 is the index corresponding to the final state.
Can anyone elaborate on this a bit please? What exactly needs to change in order to correct the site set file for the BH model?
Adding onto this comment, Junjie - would you be willing to share your boson site set? I would be happy to add it to ITensor officially. | {"url":"http://itensor.org/support/762/different-results-for-hamiltonian-written-different-form?show=1069","timestamp":"2024-11-03T22:00:14Z","content_type":"text/html","content_length":"43388","record_id":"<urn:uuid:1ab139f4-0174-41c7-9111-2a0720105304>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00103.warc.gz"} |
Python Contest
There is a python contest at http://www.pycontest.net/
The task is writing the shortest program to drive a seven-segment LCD thingy.
I have no hope of winning, but here's a helpful hint:
If your code is any longer than this (191 chars), it will not win ;-)
a=' _ '
c=' '
d=' |'
e=' _|'
f='|_ '
g='| |'
def seven_seg(x):
 return '\n'.join([eval('+'.join([v[int(l)*3+i]for l in x]))for i in 0,1,2])+'\n'
Note: I edited this item way too many times already ;-)
And yes, I can save two characters moving the return up.
A much uglier, yet much shorter (151) version:
def seven_seg(x):return''.join([''.join(['| ||__ __ || | |'[int('a302ho6nqyp9vxvpeow',36)/10**(int(l)*3+u)%10::7]for l in x])+'\n'for u in 0,1,2])
And yes, that pretty much proves you can write ugly python. And I am giving up. A shorter version probably involves a different algorithm, and I can't find any.
I am particularly proud of saving one character by writing 104004334054154302114514332064 as int('a302ho6nqyp9vxvpeow',36).
Also interesting is that the number I was using there originally started with 4 and was exactly the same length written both ways ;-)
Since that number is pretty arbitrary (it's an index table into the "graphics" array), I just shuffled the 1 and the 4. The 0 would have been better but then it didn't work, of course | {"url":"https://home.ralsina.me/tr/es/weblog/posts/P332.html","timestamp":"2024-11-03T22:06:59Z","content_type":"text/html","content_length":"17110","record_id":"<urn:uuid:3345c0cc-9ebc-4365-a51f-5af749853730>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00284.warc.gz"} |
Kiloyards to Planck length Converter
How to use this Kiloyards to Planck length Converter
Follow these steps to convert given length from the units of Kiloyards to the units of Planck length.
1. Enter the input Kiloyards value in the text field.
2. The calculator converts the given Kiloyards into Planck length in real time using the conversion formula, and displays the result under the Planck length label. You do not need to click any button. If
the input changes, the Planck length value is recalculated, just like that.
3. You may copy the resulting Planck length value using the Copy button.
4. To view a detailed step by step calculation of the conversion, click on the View Calculation button.
5. You can also reset the input by clicking on the button present below the input field.
What is the Formula to convert Kiloyards to Planck length?
The formula to convert given length from Kiloyards to Planck length is:
Length[(Planck length)] = Length[(Kiloyards)] × 5.658240762982675e+37
Substitute the given value of length in kiloyards, i.e., Length[(Kiloyards)] in the above formula and simplify the right-hand side value. The resulting value is the length in planck length, i.e.,
Length[(Planck length)].
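If you prefer to script the conversion rather than use the calculator, a minimal Python sketch using the factor quoted above looks like this (the function and constant names are illustrative):

KYD_TO_PLANCK = 5.658240762982675e+37  # Planck lengths per kiloyard (factor used on this page)

def kiloyards_to_planck(kyd):
    """Convert a length from kiloyards to Planck lengths."""
    return kyd * KYD_TO_PLANCK

print(kiloyards_to_planck(2))    # ~1.1316e+38, matching Example 1 below
print(kiloyards_to_planck(1.5))  # ~8.4874e+37, matching Example 2 below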
Consider that a race track is 2 kiloyards long.
Convert this distance from kiloyards to Planck length.
The length in kiloyards is:
Length[(Kiloyards)] = 2
The formula to convert length from kiloyards to planck length is:
Length[(Planck length)] = Length[(Kiloyards)] × 5.658240762982675e+37
Substitute given weight Length[(Kiloyards)] = 2 in the above formula.
Length[(Planck length)] = 2 × 5.658240762982675e+37
Length[(Planck length)] = 1.131648152596535e+38
Final Answer:
Therefore, 2 kyd is equal to 1.131648152596535e+38 Planck length.
The length is 1.131648152596535e+38 Planck length, in planck length.
Consider that a golf course has a fairway measuring 1.5 kiloyards.
Convert this distance from kiloyards to Planck length.
The length in kiloyards is:
Length[(Kiloyards)] = 1.5
The formula to convert length from kiloyards to planck length is:
Length[(Planck length)] = Length[(Kiloyards)] × 5.658240762982675e+37
Substitute given weight Length[(Kiloyards)] = 1.5 in the above formula.
Length[(Planck length)] = 1.5 × 5.658240762982675e+37
Length[(Planck length)] = 8.4873611444740125e+37
Final Answer:
Therefore, 1.5 kyd is equal to 8.4873611444740125e+37 Planck length.
The length is 8.4873611444740125e+37 Planck length, in planck length.
Kiloyards to Planck length Conversion Table
The following table gives some of the most used conversions from Kiloyards to Planck length.
Kiloyards (kyd) Planck length (Planck length)
0 kyd 0 Planck length
1 kyd 5.658240762982675e+37 Planck length
2 kyd 1.131648152596535e+38 Planck length
3 kyd 1.6974722288948025e+38 Planck length
4 kyd 2.26329630519307e+38 Planck length
5 kyd 2.8291203814913373e+38 Planck length
6 kyd 3.394944457789605e+38 Planck length
7 kyd 3.9607685340878724e+38 Planck length
8 kyd 4.52659261038614e+38 Planck length
9 kyd 5.092416686684407e+38 Planck length
10 kyd 5.6582407629826745e+38 Planck length
20 kyd 1.1316481525965349e+39 Planck length
50 kyd 2.8291203814913376e+39 Planck length
100 kyd 5.658240762982675e+39 Planck length
1000 kyd 5.658240762982674e+40 Planck length
10000 kyd 5.658240762982675e+41 Planck length
100000 kyd 5.658240762982675e+42 Planck length
A kiloyard (ky) is a unit of length equal to 1,000 yards or approximately 914.4 meters.
The kiloyard is defined as one thousand yards, providing a convenient measurement for longer distances that are not as extensive as miles but larger than typical yard measurements.
Kiloyards are used in various fields to measure length and distance where a scale between yards and miles is appropriate. They offer a practical unit for certain applications, such as in land
measurement and engineering.
Planck length
The Planck length is a fundamental unit of length in physics, representing the smallest measurable distance in the universe. One Planck length is approximately 1.616 × 10^(-35) meters.
The Planck length is defined based on fundamental physical constants, including the speed of light, the gravitational constant, and Planck's constant. It represents a theoretical limit below which
the concept of distance may not have any physical meaning due to quantum fluctuations and the effects of gravity.
The Planck length is used in theoretical physics to explore the limits of our understanding of space and time, particularly in quantum gravity and theories of quantum mechanics. It provides a scale
for studying the fundamental structure of the universe and the interplay between quantum mechanics and gravity.
Frequently Asked Questions (FAQs)
1. What is the formula for converting Kiloyards to Planck length in Length?
The formula to convert Kiloyards to Planck length in Length is:
Kiloyards * 5.658240762982675e+37
2. Is this tool free or paid?
This Length conversion tool, which converts Kiloyards to Planck length, is completely free to use.
3. How do I convert Length from Kiloyards to Planck length?
To convert Length from Kiloyards to Planck length, you can use the following formula:
Kiloyards * 5.658240762982675e+37
For example, if you have a value in Kiloyards, you substitute that value in place of Kiloyards in the above formula, and solve the mathematical expression to get the equivalent value in Planck | {"url":"https://convertonline.org/unit/?convert=kiloyards-planck_length","timestamp":"2024-11-15T03:11:19Z","content_type":"text/html","content_length":"92534","record_id":"<urn:uuid:00a208be-bd3e-4578-9355-4bbb3a87b26c>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00150.warc.gz"} |
Visualization Variables
Define variables
At the beginning of the genericGWTVisualization you can define variables. A variable can later be used to:
1. create another variable that depends on this variable
2. be displayed in the canvas (does not apply for number variables)
3. be displayed in text-part as a formula, or number
\begin{variables}
\end{variables}
Random numbers
Supported are random integers, doubles, and rationals.
(!) Unlike the generic problem, the random variables only exist at runtime. They will be regenerated every time the page is reloaded. (!)
1. randint
\randint{name}{min}{max} %creates a random integer from [min, max] and store it as name
\randint[Z]{name}{min}{max} %same as above, but avoiding zero
2. randdouble
\randdouble{name}{min}{max} %creates a random real number between [min, max]
3. randrat
\randrat{name}{minNumerator}{maxNumerator}{minDenominator}{maxDenominator} %creates a random rational number with numerator from [minNumerator, maxNumerator] and denominator from [minDenominator, maxDenominator]
Only numbers are allowed to be used as the min and max values.
You can use \randadjustIf also in the variables environment of a generic visualization.
The syntax is the same as \randadjustIf in generic problems, but only random variables can be used in
the variables argument.
\randadjustIf{variables}{condition}
\randint[Z]{x1}{-5}{5}
\randint[Z]{x2}{-5}{5}
\randadjustIf{x1, x2}{x1 >= x2} % regenerate both x1 and x2 if x2 is not greater than x1.
Other variables
This group consists of numbers, functions and geometry variables. Variables of this type can also be editable by the user and can depend on other variables.
The common syntax of the variables is:
\type[editable]{name}{field}{value}
• editable is an optional argument that enables user interactivity. If you use an editable variable later in
a \text{}, then the user can edit it. If you put editable points, lines and vectors on a \plot{}, users are able to drag and move them around.
• name is the name of the variable
• field defines the number class. You can choose from integer, real, rational, complex, and operation
• value defines the value of the variable. Here expressions are expected. Depending on the type it may
have one or more comma separated expressions. You can use other variables by using the syntax var(name)
1. Numbers
%simply a number
\number{simple}{real}{0.5} % creates a real number with the value of 0.5
\number{simple2}{real}{1/2} % also creates 0.5
\number{simpleint}{integer}{0.5} % creates an integer with value 0
%complex
\number{c1}{complex}{1,-1} % creates a complex number with the value 1 - i
%operation
\number{op}{operation}{sin(pi)} % creates an operation sin(\pi)
%using other variables
\randint{randomA}{-5}{5} %create random value for a
\randint{randomB}{-5}{5} %create random value for b
\number[editable]{a}{integer}{var(randomA)} %creates an editable variable a with a random default value
\number[editable]{b}{integer}{var(randomB)} %creates an editable variable b with a random default value
\number{a+b}{operation}{var(a) + var(b)} %creates an operation a + b, where a and b will be replaced by the current value of a and b.
\number{result}{integer}{var(a)+var(b)} %creates an integer with the value of a+b. The value of result will be updated every time the user edits either a or b.
Add a slider to manipulate your number variable
Numbers which are added to a graph can also be adjusted by a slider. To achieve this, define your number
as is done above and then use the following command inside the variables environment:
\slider[<step_size>]{<var_name>}{<num_var>}{<left_bound>, editable}{<right_bound>, editable}
• var_name the variable name of the slider.
• num_var the number that the slider will influence, an existing number variable must be used here and can be
of type integer or real. If the variable number is editable, it will remain directly editable by the user. If the variable number is not editable, the number is only editable via the slider.
• left_bound : the left bound value of the slider. Provide the extra parameter editable in order to make the bound editable by the user. The value should be a number and not a variable name.
• right_bound : similar to left_bound but then regarding the right_bound value for the slider.
Optional parameters (displayed in square brackets)
• step_size the step size with which the slider will adjust the number. The default value for numbers of type real
is based on 100 steps between the left and right bound, so a range of [0, 10] would give a step size of 0.1. The default for numbers of type integer is 1.
It is up to the author to make sure that step sizes and boundary values match up. When this is not the case, e.g. taking a left boundary of -1, a right boundary of 9, a step size of two and an initial
number variable of 4, the slider might behave differently than expected.
The slider must then be added to the graph. The syntax for this is as follows:
\begin{canvas}
\slider{<slider_var1>, <slider_var2>, ...}
\end{canvas}
• Inside the canvas environment the \slider command is used to display the sliders defined in the variables environment.
• Sliders can only be displayed in a canvas environment and cannot be used in a normal text environment.
A simple example of the syntax
\slider[0.2]{n_slider}{n}{-5}{5, editable}
\label{h}{$\textcolor{BLACK}{b =}$}
Results in a slider labeled $b$ with step sizes of 0.2 (screenshot omitted).
Note that a slider can only be displayed in combination with a graph, see example below.
A complete example with a graph containing sliders
This example creates a graph which plots two points, (1) real number n and (2) absolute of number n.
Two sliders are created, both adjusting number n.
• Slider 1 fixed range from -5 to 5 and default step size.
• Slider 2 editable left (default -10) and right bound (default 10), with a step size of 1.
The points and the slider are then added to the canvas.
\title{Absolute value of real numbers}
\text{Choose real numbers $a$ between $-5$ and $5$ and
observe on the number line the number itself and its absolute value $|a|$.}
\begin{variables}
\number[editable]{n}{real}{-1}
\slider{n_s1}{n}{-5}{5}
\slider[1]{n_s2}{n}{-10, editable}{10, editable}
\number{absn}{real}{absn(var(n))}
\point{p1}{real}{var(n),0}
\point{p2}{real}{var(absn),0}
\end{variables}
\label{n}{$n_1 =$}
\begin{canvas}
\plotSize{500,80}
\plotLeft{-6}
\plotRight{6}
\plot[numberLine]{p1,p2}
\slider{n_s1, n_s2}
\end{canvas}
The rendered canvas shows the number line with the two points and both sliders (screenshot omitted).
When adding a slider to the canvas the slider will be placed either to the left of the canvas (if there is enough room on the side), or at the top of the canvas (when there is not enough room). For
this reason you might want to put content that is related to the slider number above the canvas.
In most cases you would probably want to add a label to the number variable (and not the slider variable), as was done in both examples above. For more information on labels and their colors, see
Adding color and label to variables.
2. Function
\function{f}{real}{sin(x)}
3. Point
\point[editable]{p0}{real}{0,0} % creates an editable point at 0,0
\randint[Z]{a}{-3}{3}
\randint[Z]{b}{-3}{3}
\point[editable]{p1}{real}{var(a), var(b)} % creates an editable point with random default coordinate
You can extract the x and y coordinates of a point and use them as the *value* for other variables by using var(p1)[x] and var(p1)[y].
4. Line and Line Segment
Line and line segment expect you to use 2 point variables as the value.
\line{g}{real}{var(p0), var(p1)} % creates a line that runs through p0 and p1
\segment{g}{real}{var(p0), var(p1)} % creates a line segment between p0 and p1
5. Vector and Affine Vector
Vector is an arrow starting from the origin and ending at a point, the command expects only one point as value.
Affine vector expects as input two vectors. It is an arrow starting at the endpoint of the first vector, and being parallel to the second vector.
\point{p1}{real}{1,1}
\point{p2}{real}{3,0}
\vector{v1}{real}{var(p1)} % creates an arrow starting from origin to (1,1)
\vector{v2}{real}{var(p2)} % creates an arrow starting from origin to (3,0)
\affine{aff1}{real}{var(v1),var(v2)} % creates an arrow starting from (1,1) with coordinate (3,0)
Similar to points, you can extract the x and y coordinates of a vector and use them as the *value* for other variables by using var(v1)[x] and var(v1)[y].
6. Circle
\point{center}{real}{1,1}
\circle{c1}{real}{var(center), 2} %creates a circle on center with radius 2
7. Angle
\point{center}{real}{1,1}
\angle{c1}{real}{var(center), 2, 0, pi/2} %creates an angle on center with radius 2 from 0 to pi/2
8. Set
With set you can display section(s) of the 2d coordinate system (basically a set of (x,y)-tuples) that fulfill the given relation.
\set{set2}{real}{|var(p)x+var(q)y+var(c)|>var(d)}
Note that the value is a relation and you can combine relations with AND and OR. For example, to create a set equivalent to the above example, you can also use:
\set{set2}{real}{var(p)x+var(q)y+var(c)>var(d) OR -(var(p)x+var(q)y+var(c))>var(d)}
You can use the square brackets to group a part of the relation
\set{set2}{real}{y > 0 AND [x < -3 OR x > 3]}
9. Parametric Function in 2D
\parametricFunction plots a function with one parameter (t). The value of \parametricFunction has the following arguments:
1. fx - the expression defining the x component
2. fy - the expression defining the y component
3. min - the lower bound of the parameter t
4. max - the upper bound of the parameter t
5. steps - number of vertices between the bounds
\parametricFunction{f}{real}{2abs(cos(2t))*cos(t)-3, 5*abs(sin(t))*sin(t)-3, 0, 2*pi, 1000}
10. Point on Curve
You can add a point to the graph of a function or to the curve of a parametric function by using the commands
\pointOnCurve or \pointOnParametricCurve.
\pointOnCurve has the following arguments:
1. (optional) limits for the function variable, default: -inf,inf
2. name of the point
3. number class
4. function (explicit or the variable of a function)
5. initial value of the function variable for the point
\function[editable]{f}{real}{sin(x)}
\pointOnCurve{p}{real}{var(f)}{1}
\pointOnCurve[0,2*pi]{q}{real}{cos(x)}{0}
\pointOnParametricCurve has the following arguments:
1. name of the point
2. number class
3. parametric function (explicit or the variable of a parametric function)
(see above for details on the arguments of a parametric function)
4. initial value of the function parameter t for the point
\parametricFunction{f}{real}{2abs(cos(2t))*cos(t), 5*abs(sin(t))*sin(t), 0, 2*pi, 1000}
\pointOnParametricCurve{p}{real}{var(f)}{0}
\pointOnParametricCurve{q}{real}{cos(t), sin(t), 0, 2*pi, 1000}{pi}
The point in the canvas will always be editable, but non-editable in the text area above or below the canvas if added by \text{\$\var{<identifier of a point on curve>}\$}.
3D Variables
This feature is currently not supported. We intend to enhance it again in the new framework that we are currently developing. | {"url":"https://wiki.mumie.net/wiki/Visualization-Variables.md","timestamp":"2024-11-12T03:49:33Z","content_type":"text/html","content_length":"895989","record_id":"<urn:uuid:fbb622a0-dcd3-46b9-a94f-febe99fb4fc8>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00861.warc.gz"} |